Letters

Generative AI - What’s Legal Versus What’s Fair: Should AI be allowed to learn from data that's freely available to humans?

As you can read in this issue of The Batch, generative AI companies are being sued over their use of data (specifically images and code) scraped from the web to train their models.

The Unexpected Power of Large Language Models: Training on massive amounts of text partly offsets lack of exposure to other data types.

Recent successes with large language models have brought to the surface a long-running debate within the AI community: What kinds of information do learning algorithms need in order to gain intelligence?

Don't Worry About Math. Master It!: Unlock the power of machine learning by mastering the mathematics that makes it work.

Understanding the math behind machine learning algorithms improves your ability to debug algorithms when they aren’t working, tune them so they work better, and perhaps even invent new ones. Today DeepLearning.AI is launching the Mathematics for Machine Learning and Data Science Specialization.

Do Large Language Models Threaten Google?: ChatGPT and other large language models could disrupt Google's business, but hurdles stand in the way.

In late December, Google reportedly issued a “code red” to raise the alarm internally about the threat that large language models like OpenAI’s ChatGPT pose to its business. Do large language models (LLMs) endanger Google's search engine business?

Who Will Control Cutting-Edge Language Models?: Why the future is likely to bring more large language models like ChatGPT

Will the future of large language models limit users to cutting-edge models from a handful of companies, or will users be able to choose among powerful models from a large number of developers?

What the AI Community Wants in 2023

In last week’s issue of The Batch, Yoshua Bengio, Alon Halevy, Douwe Kiela, Been Kim, and Reza Zadeh shared their hopes for AI in 2023. I also asked people on LinkedIn and Twitter about their hopes for AI this year. Rather than...

How to Achieve Your Long-Term Goals: Make your projects add up to achievement by charting a path and gathering advice from mentors.

As we enter the new year, let’s view 2023 not as a single year, but as the first of many in which we will accomplish our long-term goals. Some results take a long time to achieve, and even though...

Generative Models Are AI's Next Pillar of Value Creation: Models like DALL·E and Stable Diffusion are creating a new paradigm for AI applications.

As the winter holiday approaches, it occurs to me that, instead of facing AI winter, we are in a boiling-hot summer of AI.

Should AI Moderate Social Media? Deciding which posts to show or hide is a human problem, not a tech problem.

What should be AI’s role in moderating the millions of messages posted on social media every day? The volume of messages means that automation is required. But the question of what is appropriate moderation versus inappropriate censorship lingers.

When Models are Confident — and Wrong: Language models like ChatGPT need a way to express degrees of confidence.

One of the dangers of large language models (LLMs) is that they can confidently make assertions that are blatantly false. This raises worries that they will flood the world with misinformation. If they could moderate their degree of confidence appropriately, they would be less likely to mislead.

AI, Privacy, and the Cloud: How one cloud provider monitors AI performance remotely without risking exposure of private data.

On Monday, the European Union fined Meta roughly $275 million for breaking its data privacy law. Even though Meta’s violation was not AI-specific, the EU’s response is a reminder that we need to build AI systems that preserve user privacy...

What the AI Community Can Learn from the Galactica Incident: Meta released and quickly withdrew a demo of its Galactica language model. Here's what went wrong and how we can avoid it.

Last week, Facebook’s parent company Meta released a demo of Galactica, a large language model trained on 48 million scientific articles. Two days later, amid controversy regarding the model’s potential to generate false or misleading articles, the company withdrew it.

Why 8 Billion People on Earth Are Not Too Many: The growing global population brings more opportunities to make the world a better place.

The population of Earth officially reached 8 billion this week. Hooray! It’s hard to imagine what so many people are up to. While I hope that humanity can learn how to leave only gentle footprints on the planet, I’m excited about the creativity and inventiveness that a growing human population...

What to Do in a Tough Economy: How to survive and thrive amid economic uncertainty.

The economic downturn of the past six months has hit many individuals and companies hard, and I’ve written about the impact of rising interest rates on AI. The effects of high inflation, the Russian war in Ukraine, and an economic slowdown in China are rippling across the globe...

How AI Can Help Counteract Climate Change: It's time to consider cooling the Earth by atmospheric aerosol injection.

A new report from UN Climate Change says that the world might be on track for 2.5 °C of warming by the end of the century, a potentially catastrophic level that’s far above the 1.5 °C target set by the 2015 Paris Agreement.

Subscribe to The Batch

Stay updated with weekly AI News and Insights delivered to your inbox