Letters
The Easiest Way to Achieve Artificial General Intelligence: Coming up with scientific definitions of ambiguous terms like consciousness and sentience can spur progress but mislead the public.
As I wrote in an earlier letter, whether AI is sentient or conscious is a philosophical question rather than a scientific one, since there is no widely agreed-upon definition and test for these terms.
Outstanding Research Without Massive Compute: Researchers at Stanford and Chan Zuckerberg Biohub Network dramatically simplified a key algorithm for training large language models.
It is only rarely that, after reading a research paper, I feel like giving the authors a standing ovation. But I felt that way after finishing Direct Preference Optimization (DPO) by...
The New York Times versus OpenAI and Microsoft: The New York Times sued OpenAI and Microsoft for copyright infringement, but the real issues and harms are not clear.
Last week, the New York Times (NYT) filed a lawsuit against OpenAI and Microsoft, alleging massive copyright infringements. The suit claims, among other things, that OpenAI and Microsoft used millions of copyrighted NYT articles to train their models...
What Will Change — And What Will Stay the Same: Change is hard to predict, so invest your time and energy in things that aren't likely to change.
AI is progressing faster than ever. This is thrilling, yet rapid change can be disorienting. In such times, it’s useful to follow Jeff Bezos’ advice to think about not only what is changing but also what will stay the same.
The View From NeurIPS 2023: NeurIPS 2023 was chockablock with generative AI, large multimodal models, autonomous agents — and anxiety about the pace of development.
Last week, I attended the NeurIPS conference in New Orleans. It was fun to catch up with old friends, make new ones, and also get a wide scan of current AI research.
AI Doomsday Scenarios and How to Guard Against Them: AI could help an evildoer perpetrate a bioweapon attack. Here's what we can do about it.
Last week, I participated in the United States Senate’s Insight Forum on Artificial Intelligence to discuss “Risk, Alignment, & Guarding Against Doomsday Scenarios.”
Making Large Vision Models Work for Business: Large language models can learn what they need to know from the internet, but large vision models need training on proprietary data.
Large language models, or LLMs, have transformed how we process text. Large vision models, or LVMs, are starting to change how we process images as well. But there is an important difference between LLMs and LVMs.
An Expanding Universe of Large Language Models: From ChatGPT to the open source GPT4All, the bounty of large language models means opportunities for users and developers alike.
One year since the launch of ChatGPT on November 30, 2022, it’s amazing how many large language models are available.
A year ago, ChatGPT was pretty much the only game in town for consumers (using a web user interface) who wanted to use a large language model (LLM)...
What's Next for OpenAI: The abrupt firing and reinstatement of OpenAI CEO Sam Altman leaves impacts both hopeful and worrisome.
I’m delighted that the crisis at OpenAI, which you can read about below, seems to have been resolved with an agreement in principle for Sam Altman to return as CEO after his sudden firing last week.
Keep Open Source Free!: Regulators threaten to restrict open source development. That would be a huge mistake.
This week, I’m speaking at the World Economic Forum (WEF) and Asia-Pacific Economic Cooperation (APEC) meetings in San Francisco, where leaders in business and government have convened to discuss AI and other topics.
Everyone Can Benefit From Generative AI Skills: Announcing “Generative AI For Everyone,” a new course that requires no background in coding or AI.
I’ve always believed in democratizing access to the latest advances in artificial intelligence. As a step in this direction, we just launched “Generative AI for Everyone” on Coursera.
Exaggerated Fear of AI Is Causing Real Harm: AI isn't likely to cause human extinction, but worry that it might is scaring away young people from the field.
Welcome to the Halloween special issue of The Batch, where we take a look at fears associated with AI. In that spirit, I’d like to address a fear of mine: Sensationalist claims that AI could bring about human extinction will cause serious harm.
Why AI Will Move to Edge Devices: AI will continue to run in data centers, but technology and economics are pushing it to the edge as well.
I wrote earlier about how my team at AI Fund saw that GPT-3 set a new direction for building language applications, two years before ChatGPT was released. I’ll go out on a limb to make another prediction...