The latest in AI from Mar. 21 to Mar. 27, 2024


This week's top AI news and research stories featured an agent for many environments, an AI system that identifies animal cell types from gene sequences, a system that analyzes satellite and geolocation data and has been used to identify targets in real-world conflicts, and an agent that plays one-on-one football in a simulated environment. But first:

Nvidia unveils LATTE3D, a model that turns text prompts into detailed 3D shapes
This model can produce high-quality 3D representations of objects and animals almost instantly, with applications for virtual environments, video games, advertising, and training modules. The model not only speeds up the design process by offering multiple shape options from a single prompt but also supports optimization for higher quality outputs. (Read more at Nvidia’s blog)

Lighthouz AI and Hugging Face launch chatbot guardrails arena to test AI privacy
The arena allows participants to interact with two anonymous chatbots, challenging them to reveal sensitive financial information protected by advanced guardrails. The initiative aims to identify the most secure AI models through community voting, establishing a trusted benchmark for chatbot security and privacy. Participants can engage with chatbots, explore different guardrail technologies, and contribute to a public leaderboard that ranks the models based on their privacy-preserving capabilities. (Find all the details at Hugging Face’s blog)
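Community leaderboards built on head-to-head votes are often computed with an Elo-style rating system (this is an assumption for illustration; the arena's actual ranking method isn't specified in the story). A minimal sketch of how one pairwise vote would update two chatbots' ratings:

```python
def elo_update(rating_a, rating_b, a_won, k=32):
    """Update two ratings after a head-to-head vote.

    expected_a is the probability model A wins given the current
    rating gap; the winner gains rating in proportion to how
    surprising the result was. Rating points are conserved.
    """
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    score_a = 1.0 if a_won else 0.0
    rating_a += k * (score_a - expected_a)
    rating_b += k * ((1 - score_a) - (1 - expected_a))
    return rating_a, rating_b


# Two equally rated bots: the winner of one vote gains 16 points.
a, b = elo_update(1000.0, 1000.0, a_won=True)
```

Aggregating many such votes across anonymous matchups yields a ranking without any bot ever being evaluated in isolation.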

Google DeepMind’s TacticAI, an AI assistant for football coaches
This tool optimizes corner kicks in football (soccer). TacticAI uses a geometric deep learning approach to generate high-quality tactical setups despite the scarcity of gold-standard data. By analyzing past plays and suggesting adjustments, it combines predictive accuracy with practical tactical recommendations, advancing sports AI and potentially impacting other fields like computer games, robotics, and traffic coordination. (Learn more at Google DeepMind’s blog)

Nvidia introduced the Blackwell B200 GPU, touted as the world's most powerful chip for AI
The new flagship B200 chip delivers up to 20 petaflops of computing power (in the FP4 format), compared to the H200’s four petaflops in FP8. The new GB200 superchip, which combines two B200 GPUs and a Grace CPU, offers up to 30 times the performance for LLM inference tasks compared to its predecessor, while reducing energy consumption and cost by up to 25 times. Each B200 processor is expected to cost between $30,000 and $40,000. (Learn more at Ars Technica)

Stability AI introduces Stable Video 3D, a model that turns single images into detailed 3D videos
Built on Stable Video Diffusion, SV3D comes in two variants: SV3D_u, for generating orbital videos from single images without camera conditioning, and SV3D_p, which creates 3D videos along specified camera paths from single images and orbital views. Stable Video 3D is now available for commercial use through Stability AI Membership, with model weights accessible for non-commercial purposes on Hugging Face. (Read Stability AI’s blog for more details)

United Nations General Assembly adopts resolution on responsible AI 
The resolution, led by the U.S. and backed by over 120 nations, promotes uses of AI that respect human rights and aid sustainable development. It also seeks to bridge the digital divide, particularly in developing countries, and complement ongoing initiatives within the UN system for governing AI technology. (Learn more at the United Nations’ blog)

Researchers trained a neural network that can linguistically instruct another AI
Initially trained on basic tasks, the AI described these tasks to a 'sister' AI, which could then execute the tasks independently. This achievement promises substantial benefits for the field of robotics, signaling a move towards more autonomous and collaborative humanoid robots. (Read the news at Science Daily)

Biden administration grants Intel $20 billion to ramp up U.S. chip production
U.S. President Joe Biden announced nearly $20 billion in grants and loans to Intel, marking the largest U.S. government subsidy for semiconductor manufacturing. This funding aims to increase the U.S. share of advanced chip production from 0% to 20% by 2030. This investment also intends to reduce the U.S.'s reliance on chip imports, particularly from Taiwan, amid concerns over geopolitical tensions with China. (Read the news at Reuters)

AI tool “Mia” identifies breast cancer missed by doctors in trial
Developed for the U.K. National Health Service (NHS), Mia demonstrated the ability to detect early signs of breast cancer in mammogram scans. Tested on 10,000 women, Mia flagged all symptomatic patients, including some not initially spotted by clinicians, and identified 11 cases of smaller cancers that human doctors had overlooked. (Read the story at BBC)

GPT store overwhelmed by spam 
OpenAI's marketplace for custom chatbots, the GPT Store, is facing challenges with spam and potentially copyright-infringing content. A review by TechCrunch revealed issues with moderation, as the store is flooded with GPTs that misuse properties from Disney, Marvel, and other franchises, and even offer services that promise to bypass AI content detection tools. The GPT Store's rapid growth to 3 million GPTs has seemingly come at the expense of quality and adherence to OpenAI’s policies. (Read the full report at TechCrunch)

Global experts convene in Beijing to set safety "Red Lines" for AI development
At the second International Dialogue on AI Safety in Beijing, top AI scientists, including Yoshua Bengio, Andrew Yao, and Geoffrey Hinton, collaborated with governance experts to address AI safety and propose international cooperation guidelines. The consensus statement from the event recommends prohibiting AI systems capable of autonomous replication, power seeking, assisting in weapon development, executing cyberattacks, or deceiving creators. (Read the statement at IDAIS)

Researchers developed an AI that can identify COVID-19 in lung ultrasound images
Beginning as a tool for quick patient assessment during the pandemic, the technology now also offers potential for at-home monitoring devices for illnesses like congestive heart failure. The AI, a deep neural network, has been trained using a combination of real patient data and simulated images to recognize features known as B-lines, which are indicative of inflammation and infection in the lungs. (Read more at Science Daily)

Tennessee enacts first U.S. law to shield artists from unauthorized AI use
The Ensuring Likeness Voice and Image Security (ELVIS) Act updates the state's personal rights protection laws to safeguard the voices of songwriters, performers, and music industry professionals against misuse by AI. The law addresses the music industry's growing concerns over generative AI's potential for creating unauthorized content that mimics human artists. (Read more at Reuters)

Google's ScreenAI breaks down UI and infographic interaction
ScreenAI, a new vision-language model, specializes in identifying UI elements and generating descriptive annotations. Trained on a novel Screen Annotation task among others, this 5B parameter model outperforms similar-sized and larger models across various benchmarks in tasks like question answering and UI navigation. (Read all about the model at Google’s blog)

Sakana AI launches open source models with evolution-inspired technique
The Tokyo-based startup unveiled new models developed through a "model merging" process inspired by evolution. This method involves combining existing models to create advanced model generations, with the most successful models advancing as "parents" for future iterations. The company is releasing three Japanese language models, two of which are open source. (Read the story at Reuters)
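The story describes the process only at a high level, but an evolution-inspired merge can be pictured as interpolating the weights of two "parent" models and keeping the fittest offspring. The sketch below is a toy illustration under that assumption (the function names and the simple linear merge are hypothetical, not Sakana AI's actual recipe):

```python
import random


def merge_models(parent_a, parent_b, alpha=0.5):
    """Weight-space merge: child = alpha * a + (1 - alpha) * b.

    Each 'model' is a dict mapping parameter names to weights
    (scalars here, tensors in practice).
    """
    assert parent_a.keys() == parent_b.keys()
    return {name: alpha * parent_a[name] + (1 - alpha) * parent_b[name]
            for name in parent_a}


def evolve(population, fitness, generations=10):
    """Toy evolutionary loop: the two fittest models survive as
    'parents'; the rest of the next generation is produced by
    merging them at random mixing ratios."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:2]
        population = parents + [
            merge_models(parents[0], parents[1], alpha=random.random())
            for _ in range(len(population) - 2)
        ]
    return max(population, key=fitness)
```

Because the parents are carried over unchanged, the best fitness in the population never decreases from one generation to the next, which is what lets successful models keep "advancing" as ancestors of later iterations.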

Google fined €250 million by French regulator over AI copyright breaches
France's competition authority imposed a €250 million fine on Google for violating EU intellectual property rules, particularly in its use of media publisher content for training Gemini. The sanction addresses complaints from major news organizations about unauthorized use of their content. Google agreed to the settlement without contesting the facts, expressing a desire to focus on sustainable content distribution and collaboration with French publishers. (Read more at Reuters)

Google advances AI flood forecasting to boost global preparedness
This initiative is part of Google's broader effort since 2017 to develop a real-time operational flood forecasting system, integrating Google Search, Maps, and Android notifications. The system covers river forecasts in over 80 countries and can predict flooding in areas where historical data is scarce, marking a step toward using AI for climate resilience. (Read the report at Google Research blog)

Chatbots emerge as a mental health aid for Gen Z
Amidst a growing mental health crisis among teens and young adults, chatbots are stepping in to offer support. These chatbots employ therapeutic techniques to provide users with coping strategies and emotional support, although creators are cautious to differentiate these services from professional therapy. With the surge in generative AI, such apps have gained traction, offering 24/7 availability without the stigma of seeking therapy. However, their effectiveness and regulatory status remain in question due to limited data on long-term impacts and the absence of FDA approval for treating specific conditions. (Read the report at AP News)

U.S. Department of Homeland Security integrates AI to enhance operations 
In collaboration with leading companies like OpenAI, Anthropic, and Meta, the federal agency aims to leverage chatbots and other AI tools for a wide range of applications, including combating drug and human trafficking, training immigration officials, and enhancing emergency management. Homeland Security Secretary Alejandro Mayorkas emphasized the urgent need to adopt AI to harness its benefits and mitigate potential risks. With an initial investment of $5 million in pilot programs, the department plans to employ AI in investigating crimes, securing the nation’s critical infrastructure, and developing disaster relief plans. (Read more at The New York Times)

Subscribe to Data Points

Your accelerated guide to AI news and research