The latest in AI from Feb. 15 to Feb. 21, 2024

This week's top AI news and research stories featured OpenAI's Sora, Huawei's AI chips, an AI system to double-check judges' decisions in competitive gymnastics, Würstchen, and a way to reduce memory requirements when fine-tuning large language models. But first:

Gemini 1.5 boasts superior data handling and efficiency
Gemini 1.5 Pro, the latest upgrade to Google’s Gemini model, can process large volumes of data across various formats, including video, text, and images. Gemini 1.5 Pro can manage inputs of up to 128,000 tokens, matching the capabilities of GPT-4 Turbo, while an exclusive version available to a limited group of developers can reliably process up to 1 million tokens. In tests, Google claims Gemini 1.5 handled a context window of 10 million tokens, the equivalent of roughly 7 million words or 10 hours of video. This new iteration, currently in preview only for select developers and business customers, also employs a Mixture-of-Experts architecture that’s new to Gemini. Its broader release date remains unspecified. (Read more at MIT Technology Review)
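To put those context-window figures in perspective, here is a minimal sketch that converts token budgets into approximate word counts. The words-per-token ratio is an assumption derived solely from the article's stated equivalence (10 million tokens ≈ 7 million words), not an official tokenizer statistic.

```python
# Rough conversions for context-window sizes, using the article's
# stated equivalence: 10M tokens ~ 7M words, i.e. ~0.7 words per token.
# This ratio is an assumption for illustration only.

WORDS_PER_TOKEN = 7_000_000 / 10_000_000  # ~0.7, derived from the article

def tokens_to_words(tokens: int) -> int:
    """Approximate English word count for a given token budget."""
    return round(tokens * WORDS_PER_TOKEN)

# The three context-window sizes mentioned in the story
for window in (128_000, 1_000_000, 10_000_000):
    print(f"{window:>10,} tokens = roughly {tokens_to_words(window):,} words")
```

By this rough measure, the standard 128,000-token window corresponds to on the order of 90,000 words, roughly a full-length novel.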

Global hackers use OpenAI for cyber operations, report says
Research jointly released by OpenAI and Microsoft claims to show that hacking groups with ties to China, Russia, and North Korea have used OpenAI’s technology for routine tasks such as drafting emails, translating documents, and debugging code rather than for creating advanced cyberattacks. The two companies documented the use by five specific hacking groups and have since revoked the groups’ access to the technology. (Read the news at The New York Times)

Chicago to discontinue use of gunshot detection technology amid criticism
Chicago announced plans to not renew its contract with SoundThinking for the ShotSpotter gunshot detection system, citing concerns over the technology's accuracy, racial bias, and misuse by law enforcement. The decision comes after a $49 million investment in the system since 2018 and an Associated Press investigation highlighting its problematic use in legal cases. (Read the news at AP)

OpenAI enhances ChatGPT with advanced memory features for personalized conversations
The upgrade makes the chatbot capable of recalling details from previous conversations, such as personal preferences and specific instructions. For example, if a user shares information about a family member, ChatGPT can incorporate these details into relevant tasks, like crafting personalized birthday cards. The update also introduces "temporary chats," where conversations and memories are not stored, addressing potential privacy concerns. (Read more at The New York Times)

Tech giants unite to combat AI-driven election interference
In an initiative announced at the Munich Security Conference, 20 leading technology companies, including OpenAI, Meta, Microsoft, Adobe, and TikTok, pledged to collaborate to thwart election interference. Initiatives include the development of detection tools for deceptive AI-generated content, public awareness campaigns to educate voters about misinformation, and measures to eliminate harmful content from the companies’ platforms. The accord lacks specific timelines or implementation details. (Learn more at Reuters)

FTC moves to ban AI impersonation of individuals 
The Federal Trade Commission (FTC) proposed rules that prohibit the use of AI to impersonate individuals. This proposal aims to broaden an existing rule that already forbids the impersonation of businesses and government agencies, extending similar protections to individuals. (Find the details at The Wall Street Journal)

Google to establish AI hub in France, according to the French Finance Ministry
Situated in Paris, the hub would accommodate approximately 300 researchers and engineers, underscoring France's ambition to rival traditional tech powerhouses like the U.S. and the UK. The announcement was made by the French Finance Ministry. (Read more at Reuters)

AI revives voices of children lost to gun violence in emotional campaign
The initiative aims to influence lawmakers on gun safety laws. The campaign features automated calls to legislators that use AI-generated recreations of the deceased children's voices, created with ElevenLabs' voice generator. These calls began on the sixth anniversary of the Parkland school shooting as part of a broader effort to advocate for gun control. (Read the story at The Wall Street Journal)

AI Pioneer Andrej Karpathy leaves OpenAI 
In a recent social media post, Karpathy revealed that he left OpenAI to focus on his personal projects. Karpathy’s exit is notable, given his significant contributions to the development of advanced AI technologies both at OpenAI and during his tenure as a senior director for AI at Tesla. (Read more at Reuters)

Khan Academy’s chatbot faces challenges in solving basic math problems 
Sal Khan, founder of Khan Academy, shared an ambitious vision of AI transforming education in a TED Talk. But in a test by a Wall Street Journal reporter, Khanmigo, the academy's chatbot powered by ChatGPT, frequently made errors in simple arithmetic and struggled with mathematical concepts like rounding and calculating square roots. (Learn more at The Wall Street Journal)

Romantic chatbots raise privacy and security concerns, Mozilla Foundation warns
These applications, which have amassed over 100 million downloads on Android alone, are collecting vast amounts of personal data from users, employing trackers that send information to third parties like Google and Facebook, and lack robust password protection measures. Mozilla’s investigation scrutinized 11 romance and companion chatbots, revealing their use of weak security practices, vague data usage policies, and general opacity about their operational and ownership details. (Read the news at Wired)

US Patent and Trademark Office (USPTO) clarifies AI cannot be listed as inventor on patents
This decision comes after public consultations and reinforces the stance that only "natural persons" can be listed as inventors on patents. However, the USPTO acknowledges that AI can play a role in the invention process, stipulating that human inventors must disclose any AI assistance in their patent applications. This policy update follows legal precedents, including a 2020 ruling against researcher Stephen Thaler, who sought to name his AI system, DABUS, as an inventor. (Full article available at The Verge)

A deep dive into the energy consumption of AI models
AI models are known to consume vast amounts of electricity, yet calculating their exact energy footprint remains elusive. Training a model is particularly energy-intensive, potentially using as much electricity as 130 US homes annually. The broader impact of AI on global electricity consumption is also a concern, with estimates suggesting AI could account for a substantial portion of worldwide energy demand by 2027, comparable to the annual energy usage of entire countries. (Read the report at The Verge)

Subscribe to Data Points

Your accelerated guide to AI news and research