OpenAI unveils new model suite for developers, Meta’s crawlers return to Europe

Published Apr 14, 2025 · 4 min read

Twice a week, Data Points brings you the latest AI news, tools, models, and research in brief. In today’s edition, you’ll find:

  • New vibe coding tools for Gemini
  • ChatGPT can remember all your conversations
  • Google’s new TPU is built for agents, inference
  • How college students use chatbots

But first:

OpenAI launches GPT-4.1 model family

OpenAI released three new models in its API: GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, all outperforming their predecessors with significant gains in coding and instruction following. GPT-4.1 scores 54.6 percent on SWE-bench Verified coding tasks, an improvement of 21.4 percentage points over GPT-4o. The models feature expanded context windows of up to 1 million tokens. GPT-4.1 mini offers performance comparable to GPT-4o at half the latency and 83 percent lower cost, while GPT-4.1 nano performs best at low-latency tasks like classification and autocompletion. The new models are available immediately to all developers at reduced pricing, with GPT-4.1 costing 26 percent less than GPT-4o for typical queries. (OpenAI)
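For developers who want to try the new family, a minimal sketch using the OpenAI Python SDK might look like the following. The model names come from OpenAI’s announcement; the example prompt and the workload-based selection logic in the comments are illustrative assumptions, not official guidance.

```python
# Minimal sketch: calling the new GPT-4.1 models via the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Illustrative model choice by workload (assumption, not official guidance):
# gpt-4.1 for hard coding tasks, gpt-4.1-mini for cheaper general use,
# gpt-4.1-nano for low-latency jobs like classification and autocompletion.
model = "gpt-4.1-mini"

response = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "user", "content": "Write a Python function that parses an ISO 8601 date."}
    ],
)
print(response.choices[0].message.content)
```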

Meta resumes training on EU users’ posts and comments

Meta announced it will restart training its AI models using publicly available content from European users, a process it paused last year following privacy concerns. The company plans to use public posts and comments from adult EU users on Facebook and Instagram, along with user questions and queries directed to Meta AI. Meta said EU privacy regulators affirmed in December that the company’s original approach complied with legal obligations. The announcement also noted that competitors Google and OpenAI train their models on public data in the EU. Meta emphasized it won’t use private messages for AI training and will allow EU users to opt out of AI data collection through an objection form. (Facebook)

Gemini Code Assist adds AI agents

Google unveiled new agentic capabilities for Gemini Code Assist, enabling the coding assistant to handle multi-step programming tasks. These agents can generate applications from product specifications, transform code between languages, implement new features, conduct code reviews, and create tests and documentation. Google also expanded Code Assist availability to Android Studio. The new capabilities respond to similar features offered by GitHub Copilot, Cursor, Windsurf, and Cognition Labs’ Devin in the increasingly competitive AI coding assistant market. (TechCrunch)

ChatGPT gets long-term memory upgrade

OpenAI updated ChatGPT with enhanced memory capabilities that allow the model to reference past conversations without users explicitly saving them. The upgrade expands on last year’s more limited Memory feature by combining manually saved memories with automatic insights gathered from chat history. The feature is rolling out first to Pro subscribers ($200 per month), with Plus subscribers ($20 per month) gaining access soon, followed by Team, Enterprise, and Edu users in the coming weeks. However, it is not available in the EU, UK, and several other European countries, likely due to regulatory concerns. (OpenAI)

Google announces Ironwood, its new and improved AI processor

Google introduced Ironwood, its seventh-generation AI accelerator chip designed for inference on Gemini models. The processor operates in massive clusters of up to 9,216 liquid-cooled chips, delivering 42.5 exaflops of computing power. Each Ironwood chip is significantly more powerful than its predecessors, with six times the memory and twice the power efficiency of Google’s previous TPU. Google Cloud will offer AI developers access to the hardware in either 256-chip configurations or full-size clusters. The company sees Ironwood as key computing infrastructure to enable AI reasoning models and to power agents that can independently gather information and complete tasks for users. (Ars Technica)
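As a rough back-of-the-envelope check on the reported figures (assuming the 42.5-exaflop number describes the full 9,216-chip cluster), per-chip throughput works out to roughly 4.6 petaflops:

```python
# Back-of-the-envelope arithmetic from the figures above; assumes the
# 42.5-exaflop number refers to the full 9,216-chip cluster.
cluster_flops = 42.5e18   # 42.5 exaflops
chips = 9_216

per_chip_flops = cluster_flops / chips
print(f"{per_chip_flops / 1e15:.2f} petaflops per chip")  # ~4.61 petaflops
```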

Study reveals how college students use Claude

Anthropic researchers analyzed one million anonymized student conversations with Claude.ai in one of the first large-scale studies of real-world AI usage in higher education. STEM students, particularly those in Computer Science, emerged as early adopters, with CS students accounting for 36.8 percent of conversations despite representing only 5.4 percent of U.S. degrees. The study suggests students primarily use AI for higher-order cognitive functions like creating and analyzing rather than simpler tasks. With AI increasingly embedded in educational settings, these findings raise important questions about how its use affects skill development, assessment methods, and academic integrity. (Anthropic)


Still want to know more about what matters in AI right now?

Read last week’s issue of The Batch for in-depth analysis of news and research.

Last week, Andrew Ng reflected on the impact of new U.S. tariffs, expressing concern over how they threaten international collaboration, inflate costs, and slow down AI progress. He also encouraged the global AI community to stay united despite these concerns.

“Let’s all of us in AI keep nurturing our international friendships, keep up the digital flow of ideas — including specifically open source software — and keep supporting each other.”

Read Andrew’s full letter here.

Other top AI news and research stories we covered in depth: Anthropic’s latest experiment revealed that Claude can take reasoning steps even without explicit prompting; Meta released its new Llama 4 models with a mixture-of-experts architecture, claiming performance gains over major competitors; Qwen2.5-Omni 7B raised the bar for small multimodal models, achieving strong results across text, image, audio, and video with just seven billion parameters; and new research showed that transformers can outperform decision trees in predicting missing values in tabular data, such as spreadsheet cells.


Subscribe to Data Points

Your accelerated guide to AI news and research