Dec 07, 2022 (6 posts)

Question asked by Andrew Ng and answered by the latest version of ChatGPT

ChatGPT Mania, Crypto Fiasco Defunds AI Safety, Alexa Makes Up Stories, Vision Model Looks Into the Future

The Batch - AI News & Insights: One of the dangers of large language models (LLMs) is that they can confidently make assertions that are blatantly false. This raises worries that they will flood the world with misinformation. If they could moderate their degree of confidence appropriately...
Ground truth video of a road on the left and predicted video with MaskViT on the right

Seeing What Comes Next: Transformers predict future video frames.

If a robot can predict what it’s likely to see next, it may have a better basis for choosing an appropriate action — but it has to predict quickly. Transformers, for all their utility in computer vision, aren’t well suited to this because of their steep computational and memory requirements...
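
The cost argument is easy to see with back-of-the-envelope arithmetic. The sketch below uses illustrative sizes (not MaskViT's actual configuration) to show how the full self-attention score matrix grows quadratically with the number of video tokens:

```python
def attention_matrix_entries(num_frames: int, tokens_per_frame: int) -> int:
    """Entries in one full self-attention score matrix over all video tokens."""
    seq_len = num_frames * tokens_per_frame
    return seq_len * seq_len  # quadratic in sequence length

# Illustrative sizes: a 16-frame clip, each frame a 16x16 grid of patch tokens.
frames, tokens = 16, 16 * 16
print(f"sequence length: {frames * tokens:,} tokens")                     # 4,096
print(f"attention entries per head: "
      f"{attention_matrix_entries(frames, tokens):,}")                    # 16,777,216
# Doubling the clip length quadruples the attention cost:
print(f"with 2x frames: {attention_matrix_entries(2 * frames, tokens):,}")  # 67,108,864
```

That quadratic blow-up is why naive full attention over raw video tokens is a poor fit for a robot that must predict upcoming frames in real time.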
Different screenshots of Create with Alexa feature displayed on a tablet

How Alexa Says Goodnight: Amazon Echo uses generative AI to create bedtime stories.

Too exhausted (or unimaginative) to tell your child a bedtime story? Amazon’s smart displays can spin bespoke tales on demand. A feature called Create with Alexa generates children’s stories complete with illustrations, music, and sound effects on the Amazon Echo Show device.
FTX logo drowning in a sea full of dollars

Cryptocurrency Unsafe for AI: How FTX's collapse threatens funding for AI safety.

The demise of cryptocurrency exchange FTX threatens funding for some teams devoted to AI safety. FTX, the $32 billion exchange that plunged into bankruptcy last month amid allegations of fraud, had given or promised more than $530 million to over 70 AI-related organizations.
List of ChatGPT's examples, capabilities and limitations

More Plausible Text, Familiar Failings: ChatGPT hasn’t overcome the weaknesses of other large language models.

Members of the AI community tested the limits of the ChatGPT chatbot, unleashing an avalanche of tweets that made for sometimes-great, sometimes-troubling entertainment.
Question asked by Andrew Ng and answered by the latest version of ChatGPT

When Models are Confident — and Wrong: Language models like ChatGPT need a way to express degrees of confidence.

One of the dangers of large language models (LLMs) is that they can confidently make assertions that are blatantly false. This raises worries that they will flood the world with misinformation. If they could moderate their degree of confidence appropriately, they would be less likely to mislead.
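
One simple proxy for a model's confidence (not a mechanism the article attributes to ChatGPT) is the probability the model assigned to its own output. The sketch below, assuming access to per-token log-probabilities, turns them into a single score:

```python
import math

def sequence_confidence(token_logprobs: list[float]) -> float:
    """Geometric-mean probability of the generated tokens, between 0 and 1."""
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(mean_logprob)

# Hypothetical per-token log-probabilities for two generated answers.
confident_answer = [-0.05, -0.10, -0.02, -0.08]
uncertain_answer = [-1.20, -2.30, -0.90, -1.70]
print(f"confident: {sequence_confidence(confident_answer):.2f}")  # ~0.94
print(f"uncertain: {sequence_confidence(uncertain_answer):.2f}")  # ~0.22
```

The catch, and the article's point, is that such scores are often poorly calibrated: an LLM can assign high probability to a statement that is flatly false, so raw token probabilities alone don't solve the misinformation problem.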
