Technical Insights

Building AI Systems No Longer Requires Much Data: Pretrained models make it possible to build AI systems using very little additional data.

It’s time to move beyond the stereotype that machine learning systems need a lot of data. While having more data is helpful, large pretrained models make it practical to build viable systems using a very small labeled training set — perhaps just a handful of examples specific to your application.
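As a rough illustration of this idea (not from the article), here is a minimal sketch: a pretrained encoder supplies the representation, and a handful of labeled examples per class is enough to define a classifier via nearest class centroid. The `embed` function below is a crude bag-of-characters stand-in for a real pretrained text encoder; in practice you would substitute embeddings from an actual pretrained model.

```python
# Few-shot classification on top of a (stand-in) pretrained encoder.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a pretrained text encoder: a normalized
    # bag-of-characters vector. Replace with real model embeddings.
    v = np.zeros(256)
    for ch in text.lower():
        v[ord(ch) % 256] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def fit_centroids(examples):
    """examples: list of (text, label). Returns label -> mean embedding."""
    by_label = {}
    for text, label in examples:
        by_label.setdefault(label, []).append(embed(text))
    return {label: np.mean(vecs, axis=0) for label, vecs in by_label.items()}

def classify(text, centroids):
    # Assign the label whose centroid has the largest dot product
    # with the input's embedding.
    v = embed(text)
    return max(centroids, key=lambda lbl: float(v @ centroids[lbl]))

# A handful of labeled examples specific to the application.
train = [
    ("the part has a scratch on its surface", "defect"),
    ("deep dent near the left edge", "defect"),
    ("surface is clean and smooth", "ok"),
    ("no visible damage on the unit", "ok"),
]
centroids = fit_centroids(train)
print(classify("a long scratch near the edge", centroids))
```

With a strong pretrained encoder in place of the stand-in, this nearest-centroid scheme can work surprisingly well from just a few examples, because most of the "learning" already happened during pretraining.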

Beyond Test Sets: How prompting is changing machine learning development

A few weeks ago, I wrote about my team at Landing AI’s work on visual prompting. With the speed of building machine learning applications through text prompting and visual prompting, I’m...

“Visual Prompting” Builds Vision Models in Seconds: A new approach applies ideas from text prompting to computer vision

My team at Landing AI just announced a new tool for quickly building computer vision models, using a technique we call Visual Prompting. It’s a lot of fun! I invite you to try it.
Figure showing how researchers obtained the Alpaca model

When One Machine Learning Model Learns From Another: Was Google’s Bard trained on output from OpenAI's ChatGPT? The technique is legit, but it raises thorny questions.

Last week, the tech news site The Information reported an internal controversy at Google. Engineers were concerned that Google’s Bard large language model was trained in part on output from OpenAI’s ChatGPT, which would have violated OpenAI’s terms of use.
Emad Mostaque, Alexandr Wang, Andrew Ng, and Peter Diamandis at Abundance 360, March 20, 2023

Catching AI's Next Wave: Generative AI will drive tremendous value and growth.

Generative AI is taking off, and along with it excitement and hype about the technology’s potential. I encourage you to think of it as a general-purpose technology (GPT, not to be confused with the other GPT: generative pretrained transformer).
IBM Watson wins the television game show Jeopardy!, February 16, 2011

AGI Progress Report: The latest AI models are exciting, but they're far from artificial general intelligence

Here’s a quiz for you. Which company said this? “It’s always been a challenge to create computers that can actually communicate with and operate at anything like the level of a human mind...
Landing AI's computer vision platform, LandingLens

Computer Vision Made Easy: LandingLens enables anyone to build in minutes models that used to take months.

Landing AI, a sister company of DeepLearning.AI, just released its computer vision platform, LandingLens, for everyone to start using for free. You can try it now.

Bad Bot, Good Bot: What Bing's unruly chatbot means for the future of search.

As you can read in this issue of The Batch, Microsoft’s effort to reinvent web search by adding a large language model hit a snag when its chatbot went off the rails. Both Bing chat and Google’s Bard...

The Unexpected Power of Large Language Models: Training on massive amounts of text partly offsets lack of exposure to other data types.

Recent successes with large language models have brought to the surface a long-running debate within the AI community: What kinds of information do learning algorithms need in order to gain intelligence?

Do Large Language Models Threaten Google?: ChatGPT and other large language models could disrupt Google's business, but hurdles stand in the way.

In late December, Google reportedly issued a “code red” to raise the alarm internally about the threat that large language models like OpenAI’s ChatGPT pose to its business. Do large language models (LLMs) endanger Google's search engine business?

How to Achieve Your Long-Term Goals: Make your projects add up to achievement by charting a path and gathering advice from mentors.

As we enter the new year, let’s view 2023 not as a single year, but as the first of more in which we will accomplish our long-term goals. Some results take a long time to achieve, and even though...

Should AI Moderate Social Media?: Deciding which posts to show or hide is a human problem, not a tech problem.

What should be AI’s role in moderating the millions of messages posted on social media every day? The volume of messages means that automation is required. But the question of what is appropriate moderation versus inappropriate censorship lingers.
Question asked by Andrew Ng and answered by the latest version of ChatGPT

When Models are Confident — and Wrong: Language models like ChatGPT need a way to express degrees of confidence.

One of the dangers of large language models (LLMs) is that they can confidently make assertions that are blatantly false. This raises worries that they will flood the world with misinformation. If they could moderate their degree of confidence appropriately, they would be less likely to mislead.
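One common, if imperfect, starting point for expressing confidence is to read a score off the model's own output distribution. The sketch below is an illustrative toy: the logits are made up, standing in for a real LM's scores over candidate answers, and raw softmax probabilities are known to be poorly calibrated in practice.

```python
# Toy sketch: report the softmax probability of the chosen answer
# as a (rough) confidence score. Logits here are hypothetical.
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def answer_with_confidence(logits, answers):
    """Pick the highest-probability candidate answer and return it
    along with its probability under the softmax distribution."""
    p = softmax(np.asarray(logits, dtype=float))
    i = int(np.argmax(p))
    return answers[i], float(p[i])

# Hypothetical logits over three candidate answers.
ans, conf = answer_with_confidence([2.0, 0.5, -1.0], ["Paris", "Lyon", "Nice"])
print(f"{ans} (confidence {conf:.2f})")  # -> Paris (confidence 0.79)
```

A system built this way could decline to answer, or flag its answer, whenever the score falls below a threshold; making such scores actually trustworthy (calibration) is the harder open problem the article points to.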
Screen capture of WhyLabs functioning

AI, Privacy, and the Cloud: How one cloud provider monitors AI performance remotely without risking exposure of private data.

On Monday, the European Union fined Meta roughly $275 million for breaking its data privacy law. Even though Meta’s violation was not AI specific, the EU’s response is a reminder that we need to build AI systems that preserve user privacy...
Screen capture of Galactica demo

What the AI Community Can Learn from the Galactica Incident: Meta released and quickly withdrew a demo of its Galactica language model. Here's what went wrong and how we can avoid it.

Last week, Facebook’s parent company Meta released a demo of Galactica, a large language model trained on 48 million scientific articles. Two days later, amid controversy regarding the model’s potential to generate false or misleading articles, the company withdrew it.
