The latest in AI from January 11 to January 17, 2024

This week's top AI news and research stories featured highlights of the 2024 Consumer Electronics Show (CES), OpenAI’s GPT Store, a new standard for media watermarks, a new training method for large language models (LLMs), and a tool that generates instrumental music for unaccompanied input vocals. But first:

AI-led misinformation tops World Economic Forum's list of threats to the global economy
The 2024 Global Risks Report expresses concern about the misuse of sophisticated synthetic content to manipulate public opinion and potentially erode democratic processes. The report also highlights increased risks of cyberattacks on AI models and of bias within them. (Learn more at AP and download the full report here)

Nvidia NeMo launches Parakeet, a family of speech recognition models
Developed in collaboration with Suno.ai, Parakeet's four models use RNN Transducer and Connectionist Temporal Classification decoders and range from 0.6 billion to 1.1 billion parameters. Trained on a diverse 64,000-hour dataset, these open source models claim state-of-the-art accuracy. (Read all the details at Nvidia’s blog)
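For readers who want to try a Parakeet model, the sketch below shows one way to load and run a checkpoint with the NeMo toolkit. The checkpoint name "nvidia/parakeet-rnnt-1.1b" and the audio filename are assumptions for illustration; substitute whichever published Parakeet variant and audio file you have on hand.

```python
# Minimal sketch: transcribing an audio file with a Parakeet checkpoint via NVIDIA NeMo.
# Requires the ASR extras, e.g. pip install "nemo_toolkit[asr]".
import nemo.collections.asr as nemo_asr

# Download a pretrained Parakeet model (RNN Transducer decoder, ~1.1B parameters).
# The checkpoint name here is an assumed example of a published Parakeet variant.
asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="nvidia/parakeet-rnnt-1.1b")

# Transcribe a local audio file (16 kHz mono WAV works best). The exact return
# type (plain strings vs. hypothesis objects) varies across NeMo versions.
transcripts = asr_model.transcribe(["sample_audio.wav"])
print(transcripts)
```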

AMD announces new processors to improve AI performance on desktop PCs
The Ryzen 8000G Series boasts up to eight cores, 16 threads, and AI technology, including the first-ever Neural Processing Unit (NPU) on a desktop PC processor. DIY customers can access the processors from January 31, 2024, with OEM systems arriving in Q2 2024. (Read AMD’s press release)

AI-generated replicas of Taylor Swift's voice exploited in scam ads 
The singer’s synthetic voice was paired with manipulated video footage to convince viewers that Swift endorsed a fraudulent giveaway of Le Creuset cookware. The ads, which appeared on Meta platforms and TikTok, directed users to fake websites that collected payments under the guise of a shipping fee without delivering the promised cookware. (Read the story at The New York Times)

Research: A web agent to simplify internet accessibility for people with disabilities
The agent, called Mind2Web, uses large language models and is trained on a diverse dataset of over 2,000 tasks from 137 real-world websites, enabling it to perform complex online actions using language commands. The technology aims to make web navigation less challenging, streamlining internet tasks and addressing other barriers faced by individuals with disabilities. (Find more details at Science Daily)
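As a rough illustration of the training data behind such agents, the sketch below loads the Mind2Web task dataset with the Hugging Face datasets library; the dataset ID "osunlp/Mind2Web" and the split name are assumptions, so check the project's official release before relying on them.

```python
# Minimal sketch: inspecting the Mind2Web dataset of real-world web tasks.
# The dataset ID and split below are assumed; verify against the official release.
from datasets import load_dataset

ds = load_dataset("osunlp/Mind2Web", split="train")

# Print the available fields and one example rather than assuming field names;
# each record pairs a natural-language task with the website actions needed to complete it.
print(ds.column_names)
print(ds[0])
```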

Toyota’s robots use generative AI to learn household chores through human demonstration
The robots leverage data from demonstrations to autonomously perform tasks such as sweeping. Toyota aims to integrate language models to enhance robot learning through video observation, potentially utilizing platforms like YouTube as training resources. The research aligns with Toyota's goal to create robots that support independent living, particularly for the aging population in Japan and other developed nations. (Read the news at Wired)

SAG-AFTRA secures AI use agreement with Replica Studios for video game voiceovers
The agreement could influence ongoing negotiations with major video game studios, where a strike authorization vote has been secured. It outlines informed consent for creating digital voice replicas (not synthetic performances) through AI and mandates secure storage of digital assets. Replica Studios specializes in AI voices and previously introduced "Smart NPCs" that use language models for interactive gaming experiences. (Read more at Variety)

Steam introduces content guidelines to accommodate games using AI 
Developers submitting games to Steam will now need to disclose details about their AI usage through an updated content survey. The disclosure distinguishes between pre-generated AI content created during development and live-generated AI content produced while the game is running. Valve, the company behind Steam, will assess AI-generated content to ensure it is legal and doesn't infringe on existing work. (Read the official statement at Steam)

Chinese companies turn to repurposed Nvidia gaming chips for AI amid export controls
The graphics cards, stripped of core components, are being used as a workaround to address the lack of high-end processors in China after the Biden administration tightened export controls on cutting-edge AI chips. While these gaming-focused chips offer substantial raw computing power, they are less suited to the high-precision calculations some large language models require. (Read more at Financial Times)

AI is helping heavy industries reduce carbon emissions
The intersection of AI and industries like cement, steel, and chemicals is becoming increasingly important in addressing the challenge of reducing CO2 emissions. AI is assisting in innovations such as carbon capture, advanced biofuels, clean hydrogen production, and synthetic fuels, making these technologies more commercially viable. Companies like Carbon Re are leveraging AI to accelerate the decarbonization of foundational materials such as cement, aiming to significantly cut industrial carbon emissions. (Read the article at Reuters)

Cloud giants offer limited copyright protection for AI tools, leaving businesses exposed
Tech companies like Amazon, Microsoft, and Google are pushing generative AI tools, but worries about copyright infringement are holding some businesses back. While these companies offer to defend customers from lawsuits, their legal protection is narrow. It only covers AI models they developed or closely oversee, not third-party tools or models customized by businesses themselves. Legal experts advise businesses to be aware of these limitations and potentially negotiate for stronger protections in contracts. (Full story available at Financial Times)

AI-generated ‘George Carlin’ comedy special faces criticism from his daughter 
Produced by Dudesy, a podcast blending AI and human curation, the special attempts to emulate George Carlin's distinctive humor by imitating his voice, cadence, and style. Despite Dudesy's disclaimer that it is not Carlin (who died in 2008), the simulated special covers contemporary issues, including social media and AI itself. Carlin’s daughter responded on social media, asserting that her father's genius cannot be replicated by machines and emphasizing the uniqueness of human creativity in contrast to AI-generated attempts to recreate an irreplaceable mind. (Read the article at Rolling Stone)
