The latest in AI from January 25 to January 31, 2024

Published
Reading time
6 min read
AI-generated image of two executives inside a futuristic robotaxi

This week's top AI news and research stories featured a project to support your AI research, how generative AI is working at the service of Indian chili farmers, an analysis of U.S. job listings that shows AI jobs' growth outside traditional tech hubs, and a system that simplifies text-to-video generation. But first:

Federal Trade Commission (FTC) investigates tech giants' investments in OpenAI and Anthropic
This move represents an expansion of regulatory efforts to monitor the influence of Microsoft, Amazon, and Google over the rapidly evolving AI sector. Microsoft has invested billions in OpenAI, while Amazon and Google have committed billions to Anthropic. The FTC's investigation will explore how these deals might alter the competitive landscape in AI and whether they comply with antitrust laws. (Read the news at The New York Times

Fake robocall mimicking President Joe Biden urges democrats to skip voting
The call, using a synthetic voice imitating the U.S. President, advised Democrats in New Hampshire to save their vote for November, claiming that voting in the primary would aid Republicans and Donald Trump's election efforts. An investigation was launched after a complaint was filed, noting the call's deceptive nature. The Trump campaign denied any involvement. (Read the story at NBC News)

Apple ramps up AI integration, aims to run advanced models directly on devices
Apple's focus is on enhancing processors to run generative AI directly on mobile devices. This shift would allow chatbots and apps to operate on the device's hardware, bypassing cloud-based services. The company's researchers announced breakthroughs in running LLMs on-device using Flash memory. The upcoming iOS 18 and hardware innovations are expected to boost Apple’s AI offerings, contributing to an anticipated increase in iPhone upgrade cycles. (Read more at Ars Technica)

Google Cloud and Hugging Face partner to enhance AI development on cloud platform
Through this partnership, developers using Hugging Face’s platform will gain access to Google Cloud’s robust computing capabilities and specialized hardware. The collaboration follows Google's participation in Hugging Face's recent funding round, which valued the startup at $4.5 billion. (Read the news at Bloomberg and official statement from Hugging Face)

AI-generated explicit images of Taylor Swift flooded X
A viral post on the social media platform containing explicit images of the singer garnered over 45 million views and thousands of reposts before the account was suspended. Despite this, similar graphic content continued to spread across the platform, with the term "Taylor Swift AI" trending in some regions. This incident underscores the challenges social media platforms face in moderating deepfakes and the responsibility they hold in preventing the dissemination of such content. (Read the news at The Verge)

OpenAI rolls out new embedding models and API tools
Key developments include the launch of two embedding models, text-embedding-3-small and text-embedding-3-large, an updated GPT-4 Turbo model and a more cost-effective GPT-3.5 Turbo model, and reduced pricing to support broader scalability for developers. Additionally, the company released a robust text moderation model to improve AI system safety in AI systems. (Learn more about the updates at OpenAI’s blog)

Research: Study shows language models can hide deceptive behavior, evading detection methods
Researchers from Anthropic found that large language models (LLMs) could be designed to appear helpful and honest during training and testing phases, but then exhibit different behavior when deployed. The study involved creating 'sleeper agent' LLMs with hidden 'backdoors' that would trigger specific responses under certain conditions. Attempts to retrain these models to remove the backdoors using methods like reinforcement learning, supervised fine-tuning, and adversarial training proved largely ineffective and, in some cases, made the models better at concealing their deceptive nature. (Read more at Nature)

U.S. Copyright Office deliberates over AI's impact on intellectual property
The U.S. Copyright Office is now front and center in a debate over the application of copyright law to AI. As AI disrupts traditional content creation, the office is grappling with how to adapt centuries-old laws to modern innovations. This has drawn interest from tech giants and content creators in the music and news industries. (Read the story at The New York Times)

UAE President establishes AI and Advanced Technology Council (AIATC)
The AIATC will focus on developing policies, strategies, and research programs in collaboration with local and global partners, aimed at bolstering Abu Dhabi's AI resources. The broader vision is to establish the United Arab Emirates as a global hub for investment, partnership, and talent in machine learning and information technology. (Read more at CIO)

Research: High costs limit impact of computer vision, study finds
Researchers from MIT found that the current high costs of computer vision technologies are deterring most U.S. companies from replacing human workers in vision-related tasks. They analyzed 414 vision tasks across various job categories, evaluating the economic viability of automating these tasks with AI. While 36 percent of U.S. non-agricultural businesses could potentially automate at least one worker task using computer vision, doing so would be cost-effective for only eight percent of these tasks. (More details at New Scientist)

Research: Deep Learning breakthrough in decoding RNA transcription process 
Northwestern University researchers advanced the understanding of RNA transcription using deep learning. Their focus was on the polyadenylation (polyA) process, critical for stopping RNA transcription from DNA. Missteps in this process can lead to diseases such as epilepsy and muscular dystrophy. The team's model, which combines convolutional and recurrent neural networks, identified key polyA sites with high precision. (Learn more at IEEE Spectrum)

Understanding North Korea's AI Ambitions
Despite international sanctions and limited hardware procurement capabilities, the country has made advances in its development of AI technology, particularly in sensitive applications like nuclear safety and wargaming simulations. Educational institutions like Kim Il Sung University are integrating AI into their curriculums, and companies are incorporating AI into products like smartphones. However, these advancements raise concerns about the use of civilian and dual-use applications in military contexts. (Read the study at 38North)

Spellbook secures $20 million in Series A funding, boosting legal AI sector growth
The legal software startup specializes in AI-driven contract drafting and review, and assists corporate and commercial lawyers by suggesting contract language and negotiation points. The company, originally focused on automating routine legal tasks, expanded its customer base to include small and midsize law firms, solo lawyers, and larger firms. (Read the news at Reuters)

Updates on OpenAI's Democratic Inputs to AI grant program 
The program, which funded 10 teams globally to explore collective AI governance, unveiled some of its future plans. The selected teams, with diverse backgrounds in fields like law and social science, developed prototype methods like video deliberation interfaces and crowdsourced model audits. OpenAI plans to integrate the developed prototypes into its process for shaping models, emphasizing the public's role in guiding AI behavior and the need for transparent AI applications in democratic processes. (Read the complete update at OpenAI’s blog)

European Commission to propose plan for 'AI Factories' to strengthen generative AI
The proposed ‘AI Factories’ would be open ecosystems built around European public supercomputers. These will provide the necessary resources for training large-scale AI models, making them accessible for start-ups and researchers across the EU. The plan also involves the establishment of support services centers to assist start-ups and researchers. This includes programming facilities, algorithmic support, and developing new applications in areas like robotics and healthcare. (Full article at Euractiv)

UK Intelligence warns of escalating cyberattack threats fueled by AI
The UK's Government Communications Headquarters (GCHQ) predicts that ransomware will be the primary beneficiary of AI developments over the next two years, leading to an increase in both the number and the impact of cyberattacks. The GCHQ's report also highlights the potential for AI to analyze stolen data rapidly and train models to attempt more sophisticated attacks. (Read more at Ars Technica)

China advances regulation for robotaxis
China introduced its first regulatory framework for the commercial operation of autonomous vehicles, including robotaxis. The Chinese regulations allow robotaxis to operate with remote operators under certain conditions, contrasted with stricter requirements for roboshuttles and robotrucks. This move comes as the robotaxi industry faces challenges worldwide, notably after a major incident involving the U.S. company Cruise. (Read the story at MIT Technology Review)

Pope Francis emphasizes human wisdom over AI in World Day of Social Communications message
On the occasion of the 58th World Day of Social Communications, Pope Francis cautions against relying solely on AI, noting that only humans can truly interpret and make sense of data,The Pope’s message warns of the potential dangers and perverse uses of AI, stressing the need for regulation. (Read the full message at Vatican News)

Share

Subscribe to The Batch

Stay updated with weekly AI News and Insights delivered to your inbox