Long Short-Term Memory (LSTM)

25 Posts

Technical components of No Language Left Behind and how they fit together

The Net Speaks in Many Tongues: NLP Model Translates 200 Different Languages

Sentence pairs that have equivalent meanings in different languages — typically used to train machine translation systems — have been available in sufficient quantities for only around 100 languages. New work doubled that number and produced a more capable model.
Skeletal formula of the (S) enantiomer of VX

AI Designs Chemical Weapons: Drug Design AI Creates Poisons

It’s surprisingly easy to turn a well-intentioned machine learning model to the dark side. In an experiment, Fabio Urbina and colleagues at Collaborations Pharmaceuticals, who had built a drug-discovery model to design useful compounds and avoid toxic ones, retrained it to generate poisons.
Tax planning model AI Economist

Tax Relief the AI Way: AI Economist creates optimal tax rates.

Nothing is certain except death and taxes, the saying goes — but how to make taxes fair and beneficial remains an open question. New research aims to answer it.
Animated graphics from Google demonstrate Project Relate, a tool for recognizing impaired speech.

Everyone Has a Voice: Project Relate Offers Synthesized Speech that Works in Real Time

An Android app offers speech recognition for people whose speech is impaired by cerebral palsy, Down syndrome, Parkinson’s disease, stroke, or traumatic brain injury.
Illustration showing a witch cooking a copy of the Mona Lisa wearing a witch hat

Artistry Is Obsolete: Is AI Making Human Artists Obsolete?

Is human creativity being replaced by the synthetic equivalent? The fear: AI is cranking out increasingly sophisticated visual, musical, and literary works. AI-generated media will flood the market, squeezing out human artists and depriving the world of their creativity.
Animated video showing a system to interpret electrical impulses from the brain as words

Listening to the Brain: NLP System Translates a Man's Brain Activity Into Words

Neural networks translated a paralyzed man’s brainwaves into conversational phrases. Researchers trained a system to interpret electrical impulses from the brain of a man who had lost the ability to speak 15 years ago, and displayed them as words on a video screen.
Automated player learning by watching recorded gameplay

Behavioral Cloning Shootout: AI learns to play Counter-Strike: Global Offensive.

Neural networks have learned to play video games like Dota 2 via reinforcement learning by playing for the equivalent of thousands of years (compressed into far less time). In new work, an automated player learned not by playing for millennia but by watching a few days’ worth of recorded gameplay.
Fire-spotting cameras alerting a fire engine

Where There’s Smoke, There’s AI: Computer vision system can see wildfire smoke.

An automated early warning system is alerting firefighters to emerging blazes.
Surgical robots performing different actions

Medical AI Gets a Grip: An AI system controlled da Vinci surgical robots.

Surgical robots perform millions of delicate operations annually under human control. Now they’re getting ready to operate on their own.
Diagram showing how Project Debater works

Up for Debate: IBM's NLP-powered debate bot mines LexisNexis.

IBM’s Watson question-answering system stunned the world in 2011 when it bested human champions of the TV trivia game show Jeopardy! Although the Watson brand has fallen on hard times, the company’s language-processing prowess continues to develop.
Art pieces with subjective commentary regarding their emotional impact

How Art Makes AI Feel: How an AI model feels about art.

An automated art critic spells out the emotional impact of images. Led by Panos Achlioptas, researchers at Ecole Polytechnique, King Abdullah University, and Stanford University trained a deep learning system to generate subjective interpretations of art.
Graphs and data related to recurrent neural nets (RNNs)

Performance Guaranteed: How deep learning networks can become Bayes-optimal.

Bayes-optimal algorithms always make the best decisions given their training and input, if certain assumptions hold true. New work shows that some neural networks can approach this kind of performance.
Information related to a deep learning system developed by Sandia National Laboratories

Materials Science Gets a Boost: How AI can speed up materials science.

Neural nets could speed up development of new materials. A deep learning system from Sandia National Laboratories dramatically accelerated simulations that help scientists understand how changes to the design or fabrication of a material change its properties.
Data related to a language model that predicts mutations that would enable infectious viruses

The Language of Viruses: Researchers trained a neural net to predict dangerous viral mutations.

A neural network learned to read the genes of viruses as though they were text. That could enable researchers to page ahead for potentially dangerous mutations. Researchers at MIT trained a language model to predict mutations that would enable infectious viruses to become even more virulent.
Graphs related to world models

It’s a Small World Model After All: More efficient world models for reinforcement learning

World models, which learn a compressed representation of a dynamic environment like, say, a video game, have delivered top results in reinforcement learning. A new method makes them much smaller.
