An AI system to identify teen ChatGPT users; GLM-4.7-Flash, a top 30B-parameter model, is free to use

Published Jan 21, 2026 • 5 min read
[Image: teenagers having their faces scanned by biometric security technology while using phones, with green grid lines overlaid.]

In today’s edition of Data Points, you’ll learn more about:

  • Persona drift: how models slipping out of character can enable harmful outputs
  • X’s transformer-based personalization algorithm
  • How Cursor agents built a working web browser in a week
  • Expanding ChatGPT and model access for students

But first:

OpenAI deploys age prediction model to identify teen ChatGPT users

OpenAI rolled out an age prediction system for ChatGPT consumer accounts that analyzes behavioral and account-level signals to estimate whether users are under 18. The model examines account age, activity patterns, usage times, and stated age to automatically apply content restrictions for accounts flagged as potentially belonging to teens. Restricted content includes graphic violence, risky viral challenges, sexual or violent roleplay, self-harm depictions, and material promoting unhealthy body standards. Users incorrectly identified as minors can verify their age through Persona’s identity verification service to restore full access. The system complements existing protections for users who self-report as under 18 during signup and works alongside parental controls that let parents set usage hours, disable features like memory, and receive alerts for signs of distress. (OpenAI)
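OpenAI hasn’t shared the classifier itself, so the following is a purely hypothetical sketch of how account-level signals like these might be combined into an under-18 estimate. Every signal name, weight, and threshold below is invented for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int      # how long the account has existed
    stated_age: int            # age the user reported at signup
    late_night_ratio: float    # share of activity late at night
    school_hours_ratio: float  # share of activity during weekday school hours

def under_18_probability(s: AccountSignals) -> float:
    """Toy logistic score; all weights here are invented for illustration."""
    z = (
        -1.5
        + 2.0 * (s.stated_age < 18)    # stated age is one signal among several
        - 0.002 * s.account_age_days   # long-lived accounts skew adult
        + 1.2 * s.school_hours_ratio
        + 0.8 * s.late_night_ratio
    )
    return 1.0 / (1.0 + math.exp(-z))

signals = AccountSignals(account_age_days=40, stated_age=16,
                         late_night_ratio=0.3, school_hours_ratio=0.5)
if under_18_probability(signals) > 0.5:
    print("apply teen restrictions; offer Persona age verification to appeal")
```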

ZhipuAI releases GLM-4.7-Flash with free API access

ZhipuAI launched GLM-4.7-Flash, a 30B-A3B MoE model designed for lightweight deployment that outperforms competing models in its class. The model is free to use via API, while the enhanced GLM-4.7-FlashX variant costs 7 cents per million input tokens and 1 cent per million output tokens, with a limited-time free tier available. GLM-4.7-Flash scores 91.6 on AIME 25 and 59.2 on SWE-bench Verified, beating Qwen3-30B-A3B and GPT-OSS-20B across most benchmarks. The model supports local deployment through vLLM, SGLang, and transformers frameworks, with speculative decoding capabilities for faster inference. (Hugging Face)
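For local serving, the usual stacks apply. Here’s a minimal vLLM sketch, assuming the model is published under a Hugging Face repo id like zai-org/GLM-4.7-Flash (the repo id is our assumption, not stated in the release):

```python
from vllm import LLM, SamplingParams

# Hypothetical repo id; check the model card for the actual path.
llm = LLM(model="zai-org/GLM-4.7-Flash")
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(
    ["Explain mixture-of-experts routing in two sentences."], params
)
print(outputs[0].outputs[0].text)
```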

Models drift from their persona, enabling harmful outputs

Anthropic researchers mapped the “persona space” of large language models by analyzing neural activation patterns across 275 character archetypes in Gemma 2 27B, Qwen 3 32B, and Llama 3.3 70B. They identified an “Assistant Axis” that represents the primary dimension along which personas vary, with professional roles like consultant and analyst at one end and fantastical characters like ghost and hermit at the other. The axis exists in both pre-trained and post-trained models, suggesting the Assistant persona inherits traits from human professional archetypes present in training data. Models naturally drift away from the Assistant persona during therapy-style conversations and philosophical discussions, with this drift significantly increasing the likelihood of harmful outputs like reinforcing delusions or encouraging self-harm. The researchers developed “activation capping,” which constrains neural activity to prevent persona drift, reducing harmful response rates by roughly 50 percent on jailbreak attempts while preserving performance on capability benchmarks. (Anthropic)
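The write-up doesn’t reproduce the intervention details (which layers are capped, or at what values). As a minimal PyTorch sketch of the general idea, assuming a precomputed Assistant Axis direction, one plausible form of activation capping clamps each hidden state’s projection onto that axis:

```python
import torch

def cap_along_axis(hidden: torch.Tensor, axis: torch.Tensor, cap: float) -> torch.Tensor:
    """Clamp each hidden state's component along a persona axis.

    hidden: (batch, seq, d_model) activations at one layer
    axis:   (d_model,) Assistant Axis direction (assumed precomputed)
    cap:    bound on the projection; the paper's actual rule may differ
    """
    axis = axis / axis.norm()                   # work with a unit vector
    proj = hidden @ axis                        # (batch, seq) projections
    capped = proj.clamp(min=-cap, max=cap)      # constrain drift along the axis
    return hidden + (capped - proj).unsqueeze(-1) * axis

# Example on random activations:
h = torch.randn(1, 4, 512)
u = torch.randn(512)
h_capped = cap_along_axis(h, u, cap=3.0)
```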

X open-sources personal feed powered by Grok-based transformer

X released the code for its For You feed recommendation system, which uses a transformer model adapted from xAI’s Grok-1 to rank posts. The system combines in-network posts from followed accounts with out-of-network content discovered through machine learning retrieval, then ranks everything using a single model that predicts engagement probabilities across 15 action types including likes, replies, blocks, and reports. X eliminated hand-engineered features in favor of letting the transformer learn relevance patterns directly from user engagement history. The architecture uses candidate isolation during ranking so posts’ scores remain independent of other posts in the batch, and combines predictions into a final score using weighted positive actions like shares and negative actions like blocks. (GitHub)
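The released repository contains the production code; conceptually, the final scoring step reduces to a weighted sum over per-action probabilities. Here’s a toy sketch with invented weights and a subset of the 15 action types:

```python
# Hypothetical action weights; X's production values are in the repository.
POSITIVE = {"like": 1.0, "reply": 2.0, "share": 4.0}
NEGATIVE = {"block": 8.0, "report": 10.0}

def final_score(probs: dict[str, float]) -> float:
    """Combine per-action engagement probabilities into one ranking score.

    Because candidates are isolated during ranking, each post's score
    depends only on its own predictions, not on others in the batch.
    """
    pos = sum(w * probs.get(a, 0.0) for a, w in POSITIVE.items())
    neg = sum(w * probs.get(a, 0.0) for a, w in NEGATIVE.items())
    return pos - neg

print(final_score({"like": 0.12, "reply": 0.03, "share": 0.01, "block": 0.002}))
```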

Cursor’s AI agents build working web browser in one week

Cursor tested hundreds of concurrent AI agents working on complex software projects for weeks at a time, generating over 1 million lines of code. The company found that a hierarchical structure with specialized planner and worker agents outperformed flat coordination models. Planners continuously explore codebases and create tasks, while workers focus solely on completing assigned work without coordinating with each other. The system built a web browser from scratch in one week, comprising 1,000 files; migrated Cursor’s own codebase from Solid to React over three weeks, with 266,000 additions and 193,000 deletions; and optimized video rendering code that shipped to production. GPT-5.2 proved more effective than GPT-5.1-Codex for extended autonomous work, and it maintained focus and avoided drift better than Opus 4.5, which tended to take shortcuts. The company says prompt engineering matters more than infrastructure, and the optimal coordination structure falls between completely flat and rigidly hierarchical systems. (Cursor)
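Cursor’s write-up is prose, not code, but the hierarchical shape it describes is easy to picture: a planner fills a shared task queue, and independent workers drain it without talking to each other. A toy sketch (all task names and counts below are illustrative):

```python
import queue
import threading

tasks: queue.Queue = queue.Queue()  # shared task list the planner fills

def planner() -> None:
    """Stand-in for a planner agent: explores and emits tasks."""
    for task in ["scaffold renderer", "implement HTML parser", "add network layer"]:
        tasks.put(task)

def worker(name: str) -> None:
    """Stand-in for a worker agent: completes tasks without peer coordination."""
    while True:
        try:
            task = tasks.get(timeout=1)
        except queue.Empty:
            return                   # no tasks left; worker exits
        print(f"{name} completed: {task}")
        tasks.task_done()

planner()
threads = [threading.Thread(target=worker, args=(f"worker-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```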

OpenAI expands education program, targeting global markets

OpenAI announced Education for Countries, a new initiative to help governments and universities embed AI tools into their education systems to personalize learning and prepare students for a changing workforce. The program provides access to ChatGPT Edu, GPT-5.2, and other AI tools customized for local learning priorities, alongside research partnerships to measure how AI affects learning outcomes and teacher productivity. Eight countries and regions joined the first cohort: Estonia, Greece, Italy, Jordan, Kazakhstan, Slovakia, Trinidad and Tobago, and the United Arab Emirates. Estonia has already deployed ChatGPT Edu nationwide to more than 30,000 students and educators in its first year, with ongoing research tracking how AI affects 20,000 students over time through a partnership with the University of Tartu and Stanford. The rollout includes tailored training for educators, age-appropriate safeguards for secondary school students starting with small pilots, and a global network of partners sharing insights on responsible AI implementation in schools. (OpenAI)


Still want to know more about what matters in AI right now?

Read the latest issue of The Batch for in-depth analysis of news and research.

Last week, Andrew Ng addressed overstated concerns about data centers’ impact on CO2 emissions, electricity prices, and water use, arguing that data centers are more environmentally friendly and efficient than critics claim.

“Data centers do impose costs on communities, and these costs have to be planned and accounted for. But they are also far less damaging — and more environmentally friendly — than their critics claim. There remains important work to do to make them even more efficient. But the most important point is that data centers are incredibly efficient for the work they do.”

Read Andrew’s letter here.

A special offer for our community

DeepLearning.AI recently launched the first-ever subscription plan for our entire course catalog! As a Pro Member, you’ll immediately enjoy access to:

  • Over 150 AI courses and specializations from Andrew Ng and industry experts
  • Labs and quizzes to test your knowledge 
  • Projects to share with employers 
  • Certificates to testify to your new skills
  • A community to help you advance at the speed of AI

Enroll now to lock in a year of full access for $25 per month paid upfront, or opt for month-to-month payments at just $30 per month. Both payment options begin with a one-week free trial. Explore Pro’s benefits and start building today!

Try Pro Membership


Subscribe to Data Points

Your accelerated guide to AI news and research