Stanford University

53 Posts


What We Know — and Don’t Know — About Foundation Models: A new Stanford index to assess the transparency of leading AI models

A new index from Stanford's Center for Research on Foundation Models ranks popular AI models according to how much information their developers disclose about training, architecture, and usage. Few score well.

LLMs Get a Life: The generative agents that mimic human behavior in a simulated town

Large language models increasingly reply to prompts with a believably human response. Can they also mimic human behavior?

ChatGPT Ain’t What It Used to Be: ChatGPT's behavior change over time

It wasn’t your imagination: OpenAI’s large language models have changed. Researchers at Stanford and UC Berkeley found that the performance of GPT-4 and GPT-3.5 has drifted in recent months. On a limited selection of tasks, some prompts yielded better results than before, others worse.

Bug Finder: A system that provides feedback with near human-level accuracy

One challenge to making online education available worldwide is evaluating an immense volume of student work. Especially difficult is evaluating interactive computer programming assignments such as coding a game.

Three Methods for Detecting Generated Text: Techniques to tell when you're reading AI-generated text

How can you tell when you’re reading machine-generated text? Three recent papers proposed solutions: watermarking, classification, and a statistical method.
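To give a flavor of the statistical approach (this sketch is not code from any of the papers; the model choice, scoring function, and threshold are my own assumptions), one simple heuristic scores a passage by how probable a reference language model finds it, on the intuition that machine-generated text tends to be less surprising than human writing:

```python
# Minimal sketch of a statistical detector: score text by its average per-token
# log-likelihood under a reference language model. All choices here are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(text: str) -> float:
    """Mean per-token log-probability of `text` under the reference model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy loss,
        # i.e. the negative average log-likelihood per token.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return -loss.item()

# Hypothetical decision rule: higher (less negative) scores suggest machine generation.
THRESHOLD = -3.0  # illustrative value; it would need calibration on known human and machine text
sample = "The quick brown fox jumps over the lazy dog."
label = "likely machine-generated" if avg_log_likelihood(sample) > THRESHOLD else "likely human-written"
print(label)
```

In practice the threshold must be calibrated against examples of known human and machine text, and watermarking goes further by deliberately embedding a statistical signature during generation for a detector to find.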

AI Trends Tracked: 2023's AI trends from Stanford's AI Index

Stanford’s sixth annual AI Index takes stock of a rapidly growing field. The sprawling, 386-page report from the Institute for Human-Centered AI presents the past year’s developments in AI based on a wide variety of sources including benchmarks, papers, market research, job listings, and polls.

Unsupervised Data Pruning: New method removes useless machine learning data.

Large datasets often contain overly similar examples that consume training cycles without contributing to learning. A new method identifies such redundant examples even when they’re unlabeled.
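As a rough illustration of the idea (not the paper's exact procedure; the clustering choice, function name, and parameters are hypothetical), one way to prune without labels is to embed each example, cluster the embeddings, and drop the examples closest to their cluster centroid, since those are the most redundant:

```python
# Sketch of unsupervised data pruning: k-means over example embeddings, then keep
# the examples farthest from their centroids (the least prototypical, most informative ones).
import numpy as np
from sklearn.cluster import KMeans

def prune(embeddings: np.ndarray, keep_fraction: float = 0.8, n_clusters: int = 100) -> np.ndarray:
    """Return indices of examples to keep, given an (n_examples, dim) embedding matrix."""
    kmeans = KMeans(n_clusters=n_clusters, n_init="auto", random_state=0).fit(embeddings)
    centroids = kmeans.cluster_centers_[kmeans.labels_]         # each example's assigned centroid
    distances = np.linalg.norm(embeddings - centroids, axis=1)  # proxy for redundancy (small = redundant)
    n_keep = int(len(embeddings) * keep_fraction)
    return np.argsort(distances)[-n_keep:]                      # keep the least redundant examples

# Illustrative usage with random vectors standing in for real embeddings.
fake_embeddings = np.random.randn(10_000, 256)
kept = prune(fake_embeddings, keep_fraction=0.8)
print(f"kept {len(kept)} of {len(fake_embeddings)} examples")
```

Keeping the examples farthest from their centroids preserves the harder, more varied portion of the dataset while cutting the training cycles spent on near-duplicates.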

Generated Code Generates Overconfident Coders: Copilot AI tool encourages programmers to write buggy code.

Tools that automatically write computer code may make their human users overconfident that the programs are bug-free. Stanford University researchers found that programmers who used OpenAI’s Codex, a model that generates computer code, were more likely...

Douwe Kiela: Natural language processing researcher Douwe Kiela calls for less hype, more caution.

This year we really started to see the mainstreaming of AI. Systems like Stable Diffusion and ChatGPT captured the public imagination to an extent we haven’t seen before in our field.

Seeing What Comes Next: Transformers predict future video frames.

If a robot can predict what it’s likely to see next, it may have a better basis for choosing an appropriate action — but it has to predict quickly. Transformers, for all their utility in computer vision, aren’t well suited to this because of their steep computational and memory requirements...

Taming Spurious Correlations: New Technique Helps AI Avoid Classification Mistakes

When a neural network learns image labels, it may mistake a background item for the labeled object. New research helps models avoid such mistakes.

Bad Machine Learning Makes Bad Science: Is Machine Learning Driving a Scientific Reproducibility Crisis?

A recent workshop highlighted the impact of poorly designed AI models in medicine, security, software engineering, and other disciplines.

Decision Trees: From Root to Leaves — Decision Trees for Machine Learning Explained

What kind of beast was Aristotle? The philosopher's follower Porphyry, who lived in Syria during the third century, came up with a logical way to answer the question...

AI Progress Report: Stanford University's fifth annual AI Index report for 2022

A new study showcases AI’s growing importance worldwide. What’s new: The fifth annual AI Index from Stanford University’s Institute for Human-Centered AI documents rises in funding, regulation, and performance.

Multimodal AI Takes Off: Multimodal models such as CLIP and DALL·E are taking over AI.

While models like GPT-3 and EfficientNet, which work on text and images respectively, are responsible for some of deep learning’s highest-profile successes, approaches that find relationships between text and images made impressive...