In today’s edition of Data Points, you’ll learn more about:
- Maine votes to pause construction of data centers
- Journalists at major U.S. newspapers push back on publisher’s demand to use AI
- AI-generated talking-head videos that support President Trump flood social media
- Humanoid robots outrun human world record in a half-marathon race
But first:
Unauthorized users find a back door to Claude Mythos
A small group of unauthorized users gained access to Anthropic’s Claude Mythos model. Claude Mythos is currently restricted to security personnel at a small number of companies because Anthropic believes it can expose previously unknown security vulnerabilities. The hackers, who discussed potential back doors to the closely held technology on the Discord social network, took advantage of public documentation and insider knowledge. They did not use the model to attack vulnerable software. The breach highlights the practical difficulty of securing high-risk AI systems and raises questions about whether other intruders have gained access to Claude Mythos. (Bloomberg)
OpenAI launches cybersecurity model to rival Claude Mythos
OpenAI introduced GPT-5.4-Cyber, a version of GPT-5.4 tailored for defensive cybersecurity, shortly after Anthropic unveiled Claude Mythos, a large language model that detected vulnerabilities in popular software. GPT-5.4-Cyber, which is designed for tasks like detecting vulnerabilities, analyzing malware, and reverse engineering compiled files, is available only to vetted security professionals. The latest generation of large language models appears to be exceptionally skilled at finding vulnerabilities in code, spurring not only competition among leading AI companies but also defensive measures and, in the hands of malefactors, potential attacks. (Reuters)
Maine leads pushback against AI data center expansion
Lawmakers in the state of Maine passed a moratorium on building large AI-related data centers — the first in the United States. It awaits the governor’s signature before it can come into force. The pause targets energy-intensive facilities that would draw over 20 megawatts of electricity, enough to power roughly 16,500 U.S. homes. It would allow time to study impacts on the power grid, electricity costs, water use, and local infrastructure. Rising political and community resistance to such facilities signals a potential constraint on large-scale implementation of AI. (The Washington Post)
Newspaper chain McClatchy faces newsroom backlash over AI productivity tool
McClatchy, which owns U.S. newspapers including the Miami Herald, The Sacramento Bee, and The Kansas City Star, is facing criticism from journalists and unions over a new “content scaling agent” that revises existing articles into variants for social media, video scripts, and other alternative formats. The system, which uses Anthropic’s Claude via API to generate text that editors can review and modify, has triggered disputes over bylines, disclosure of AI’s role, and whether the company can use reporters’ work or bylines without consent. The tension between McClatchy and its newsrooms raises issues about the impact of AI on creativity, attribution standards, labor agreements, and the division of responsibility among writers, editors, and automated systems. (TheWrap)
AI-generated influencers flood social media with pro-Trump videos
Hundreds of AI-generated videos showing talking heads that deliver messages in support of President Trump have appeared on social media ahead of the midterm elections. Researchers found at least 300 such accounts across platforms, often using similar avatars, language, and tactics, but could not identify their owners. Some accounts have tens of thousands of followers but don’t disclose their AI origins. The videos illustrate how inexpensive, scalable AI-generated personas can simulate grassroots political support and potentially influence public opinion by creating the illusion of widespread consensus. (The New York Times)
Humanoid robots surpass human record in Beijing half marathon
Two humanoid robots outperformed human runners at a Beijing half marathon. The winner, which operated autonomously, ran the 13-mile (21-kilometer) race in 50 minutes and 26 seconds — significantly faster than the human world record of 57 minutes. A faster robot, which did not win because it was controlled remotely, finished in 48 minutes and 19 seconds. The event featured over 100 robot contestants that generally showed major improvements from the previous year, when few robots finished and the fastest one completed the course in 2 hours and 40 minutes. Humanoid robotics is advancing rapidly and may soon support widespread industrial and commercial applications. (TechCrunch)
Still want to know more about what matters in AI right now?
Read the latest issue of The Batch for in-depth analysis of news and research.
Last week, Andrew Ng wrote about bottlenecks that occur as AI-native software engineering teams accelerate product development, the importance of engineers and product managers expanding their roles to take advantage of faster development, and the additional speed to be gained by working in the same physical location.
“Looking beyond the product-management bottleneck, I also see bottlenecks in design, marketing, legal compliance, and much more. When we speed up coding 10x or 100x, everything else becomes slow in comparison. For example, some of my teams have built great features so quickly that the marketing organization was left scrambling to figure out how to communicate them to users — a marketing bottleneck.”
Read Andrew’s letter here.
Other top AI news and research stories covered in depth:
- Meta pivoted away from its open-weights Llama strategy with Muse Spark, which signals a shift in its approach to AI development.
- Pharmaceutical giant Eli Lilly committed to investing up to $2.75 billion in Insilico for AI-driven drug development, which marks a significant bet on AI’s potential to transform the industry.
- Despite opposition from President Trump, most U.S. states moved forward with AI regulations, highlighting a growing trend toward state-level governance of AI.
- Researchers advanced AI’s ability to simulate human diversity with persona generation, enabling the creation of virtual personas across a wide range of perspectives.
Last chance!
An important event for our community
Andrew Ng and DeepLearning.AI are hosting AI Dev 26 × San Francisco, a two-day conference for AI developers taking place April 28–29 at Pier 48.
Join 3,000+ engineers, researchers, and builders working on AI systems.
The program includes top speakers, developer-relations experts, and engineers from companies including Google, AMD, Oracle, Neo4j, and Snowflake (and of course DeepLearning.AI), all sharing their latest technologies and explaining how they’re building and deploying AI systems today.
At AI Dev 26, you’ll find:
- Technical talks by engineers building AI systems in production
- Hands-on workshops exploring new tools and techniques
- Live demos from startups and AI builders
- Opportunities to meet other developers and companies
Get your ticket with a special discount!
Data Points is produced by human editors with AI assistance.