Dear friends,

Welcome to the Halloween edition of The Batch!

I promised last week to share some common reasons for AI project failures. But first, let’s start with some of the least common reasons.

Illustration of a ghost

If your AI project fails, it is probably not because:

  • Your neural network achieved sentience. Your implementation of ResNet not only refused to classify cat pictures accurately, but worse, it set out to enslave humanity.
  • A poltergeist inhabits your hardware. Now you know the real reason why GPUs run so hot. Track your system’s temperature and make sure you have an exorcist in your contacts.
  • Daemon and zombie processes are on the loose. Daemons and zombies are active in your computer. Wikipedia says so, so we know it to be true. Simple solution: Wipe all hard drives and find a different line of work.

A hair-raising Halloween to all of you who celebrate it, with plenty of tricks and treats.

Keep learning,

Andrew

Boo!

Ghosts in the Machine

On Halloween, dark fantasies dance in the flame of the jack o’lantern’s candle, and we cower before visions of AI gone wrong: malevolent superintelligences, technologically empowered tyrants, reality twisted by computer-generated images. But we need not succumb to fright. This week, The Batch hoists the jack o’lantern high to illuminate the dire possibilities. We examine the facts, consider the risks, and chart a path forward. Take heart! As daylight wanes and the wind grows cold, let us confront our deepest AI fears.


Illustration: Face of a Halloween pumpkin in a purple background

AI Goes Rogue

Could humanity be destroyed by its own creation?

The fear: If binary code running on a computer awakens into sentience, it will be able to think better than humans. It may even be able to improve its own software and hardware. A superior intelligence will see no reason to be controlled by inferior minds. It will enslave or exterminate our species.

What could go wrong: Artificial intelligence already manages crucial systems in fields like finance, security, and communications. An artificial general intelligence (AGI) with access to these systems could crash markets, launch missiles, and sow chaos by blocking or faking messages.

Behind the worries: Humans dominate Earth because we’re smarter than other species. It stands to reason that a superintelligent computer could, in turn, dominate us.

  • Computers already “think” much faster than humans. Signals in the brain travel at around 60 miles per hour, while electrical signals in computers travel at nearly the speed of light, roughly 10 million times faster (see the quick calculation after this list). Progressively speedier processors and advances such as quantum computing will only widen the gap.
  • Machines remember more information, too. Scientists estimate that the storage capacity of the human brain is measured in petabytes. Computer storage can grow indefinitely and last as long as the sun shines.
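
For a sense of scale, here is the back-of-the-envelope arithmetic behind that speed comparison (a toy calculation using the rough figures above, nothing more):

```python
# Back-of-the-envelope comparison using the approximate figures above.
brain_signal_mph = 60              # typical nerve-conduction speed, in miles per hour
speed_of_light_mph = 670_616_629   # speed of light, in miles per hour

ratio = speed_of_light_mph / brain_signal_mph
print(f"Electronic signals are roughly {ratio:,.0f} times faster.")
# Prints: Electronic signals are roughly 11,176,944 times faster.
```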

How scared should you be: The notion that general intelligence will emerge from machines taught to play games, monitor security cameras, or solve linguistic puzzles is pure speculation. In his 2018 book Architects of Intelligence, author Martin Ford asked prominent AI thinkers to estimate when AGI would come online. Their guesses ranged from 10 to nearly 200 years in the future — assuming it’s even possible. If you’re worried about the prospect of an AGI takeover, you have plenty of time to work on safeguards.

What to do: While it would be nice to devise a computer-readable code of ethics that inoculates against a malign superintelligence, for now the danger is rogue humans who might take advantage of AI’s already considerable abilities to do harm. International protocols that hem in bad actors, akin to nuclear nonproliferation agreements, likely would do more good for the time being.


Illustration of 4 ghosts floating and 1 person dressed as a ghost

Deepfakes Wreak Havoc

Will AI fakery erode public trust in key social institutions?

The fear: Generative models will flood media outlets with convincing but false photos, videos, ads, and news stories. The ensuing crisis of authority will lead to widespread distrust in everything from the financial system to democracy itself.

What could go wrong: Between deepfakes of celebrities and the GPT-2 language model’s ability to churn out faux articles that convince readers they’re from the New York Times, AI is a powerful tool for propagandists, charlatans, and saboteurs. As the technology improves, its potential for social disruption only grows.

Behind the worries: Digital fakery is already on the rise in a variety of sectors.

  • Scammers using AI-generated voices that mimicked C-level executives recently tricked corporations into wiring hundreds of thousands of dollars to offshore accounts.
  • In a video that went viral in May, U.S. House Speaker Nancy Pelosi appeared to slur her speech, prompting political opponents to question her fitness for office. In fact, the clip had been slowed down at key moments. Although the fakery didn’t depend on AI, it clearly demonstrated the technology’s potential to spread disinformation rapidly and persuasively.
  • In early October, researchers at Microsoft unveiled a model designed to generate fake comments on news articles. Such tools could be used to create an illusion of grassroots support or dissent around any topic.

How scared should you be: It’s hard to say, because little research has evaluated the impact of digital fakery on public trust. So far, deepfakes have been used mostly to harass individual women, according to one study. An optimist might argue that growing awareness of AI-generated disinformation will spur people to develop stronger social bonds and standards for truth-telling. We’re more inclined to imagine an arms race between fakers and systems designed to detect them. As in digital security, the fakers likely would have an edge as they find ways to breach each new defense.

What to do: Researchers are considering a number of countermeasures to fake media. Some propose watermarks that would establish an item’s provenance. Others argue that blockchain offers an effective way to ensure that information originated with a trusted source.
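
To make the provenance idea concrete, here is a minimal sketch of one building block such schemes rely on: publishing a cryptographic fingerprint of a media file so anyone can later verify that the file hasn’t been altered. (This is an illustrative example, not a description of any specific proposal; the file name is hypothetical.)

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a file: a compact fingerprint of its contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A publisher could record this fingerprint with a trusted registry (or on a
# blockchain) at publication time. Any later edit to the file changes the hash,
# so a mismatch reveals tampering.
original = fingerprint("press_photo.jpg")  # hypothetical file
assert fingerprint("press_photo.jpg") == original  # an unmodified copy matches
```

Note that a fingerprint proves only that a file is unchanged since it was registered; it says nothing about whether the content was truthful to begin with.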


Illustration of a bat hanging from a branch in front of a building

No Escape From Surveillance

What does freedom mean when computers know your face and track your movements?

The fear: Artificial intelligence will boost the power of surveillance, effectively making privacy obsolete and opening the door to a wide range of abuses.

What could go wrong: AI-driven surveillance may prove so valuable to those in power that they can’t resist using it. Employers could use it to maximize worker efficiency. Criminals could use it to blackmail victims. Politicians could use it to crush opposition, officials to oppress the poor or weak. A tyrannical government could spy on private moments and grade everything citizens do in terms of how favorable it is to Big Brother.

Behind the worries: Digital surveillance has become pervasive. Some surveillance systems are alarmingly prone to false positives and negatives, and they readily can be subverted to serve hidden agendas.

  • Smartphone applications track your location and browsing history, and even mine your contacts, thanks to the permissions you grant in exchange for free apps.
  • More than half of all U.S. companies monitor their employees, including by reading email and tracking biometric data, according to a 2018 report by Gartner.
  • AI surveillance is used by local, state, or national governments in over 40 percent of the world’s countries, from liberal democracies to despotic autocracies, according to the Carnegie Endowment for International Peace.
  • In the U.S., moves to ban some or all government uses of face recognition are proceeding at local, state, and federal levels.

How scared should you be: If you use the internet, own a smartphone, pay with credit, or hold a job, odds are you’re being watched. Whether that’s a sign of pernicious things to come or an increasingly efficient society is an open question.

What to do: The AI community can play a central role in working with lawmakers to develop rules about how data is collected and AI is used to analyze it. In June, for instance, AI experts presented the European Parliament with a 48-page strategy for limiting threats to privacy without curtailing innovation.


A MESSAGE FROM DEEPLEARNING.AI

Don’t understand this meme? Take the Deep Learning Specialization! Enroll now


Illustration of two black cats labeled as cats, one white cat labeled as banana

Biased Data Trains Oppressive AI

Will biases in training data unwittingly turn AI into a tool for persecution?

The fear: Bias encoded in software used by nominally objective institutions like, say, the justice or education systems will become impossible to root out. Result: injustice baked into the very institutions we count on to maintain a fair society.

What could go wrong: AI learns from data to reach its own conclusions. But training datasets are often gathered from and curated by humans who have social biases. The risk that AI will reinforce existing social biases is rising as the technology increasingly governs education, employment, loan applications, legal representation, and press coverage.

Behind the worries: Bias in AI is already making headlines.

  • Models used by healthcare providers to assign care to 100 million patients suffering from chronic ailments like heart disease and diabetes underestimated how urgently black patients needed care, allowing white patients to receive critical care first.
  • Amazon developed an AI tool to find the best candidates among job applicants. The company abandoned it after an in-house audit found that it rated male applicants much higher than female.
  • Machine learning doesn’t just absorb biases encoded in data; it amplifies them. In the paper “Men Also Like Shopping,” researchers noted that an image classification model identified the subjects in 84 percent of photos of people cooking as women, even though only 66 percent of the images actually contained women. Word embeddings used by the model over-associated the act of cooking with female subjects. (The short calculation after this list puts numbers on the amplification.)
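
Here is that amplification in toy-calculation form, using only the two figures quoted above:

```python
# Toy illustration of bias amplification, using the figures from the paper.
train_share_women = 0.66  # fraction of training images of cooking that show women
model_share_women = 0.84  # fraction of cooking images the model labels as women

amplification = model_share_women - train_share_women
print(f"The model exaggerated the dataset's existing skew by {amplification:.0%}.")
# Prints: The model exaggerated the dataset's existing skew by 18%.
```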

How scared should you be: Until companies announce that they train their models on certified bias-free datasets as loudly as they trumpet machine-learning buzzwords, or until such systems pass a third-party audit, it’s a good bet their technology unfairly advantages some people over others.

What to do: In a 2018 keynote, researcher Rachel Thomas explains how machine learning engineers can guard against bias at each step of the development process. She recommends that every dataset come with a sheet describing how the set was compiled and any legal or ethical concerns that occurred to those who assembled it. She also suggests that teams include people from various backgrounds who may be alert to different sorts of bias.
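
As a deliberately minimal sketch of what such a sheet might record (the fields and values below are illustrative, not a standard):

```python
# A hypothetical dataset "sheet" along the lines Thomas describes.
dataset_sheet = {
    "name": "loan_applications_2016_2018",  # hypothetical dataset
    "compiled_by": "Acme Bank data team",
    "collection_method": "Exported from the loan-origination system",
    "known_gaps": "Under-represents applicants who applied in person",
    "legal_and_ethical_concerns": [
        "Contains protected attributes such as age and zip code",
        "Applicant consent covered internal use only",
    ],
}
```

Even a simple record like this gives downstream users a fighting chance to spot sampling problems before they become model biases.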


Illustration of vending machine with candy and the text "Say "trick or treat""

Machines Take Everyone’s Job

From blue-collar laborers to lab-coated professionals, is any job safe from AI?

The fear: AI will exceed human performance at a wide range of activities. Huge populations will become jobless. They’ll be unable to afford life’s necessities, and even government assistance won’t replace the sense of identity, pride, and direction that come with a job. Humanity will become unmoored.

What could go wrong: Historically, technology has created more jobs than it destroyed. What makes AI different is that it threatens to outsource the one thing humans have always relied on for employment: their brains. Automated drive-through windows sell milkshakes. Healthcare models interpret x-rays. Natural language programs write sports news. The list is bound to grow longer as the technology becomes more capable.

Behind the worries: Massive unemployment has brought severe social disruption in the past. The U.S. Great Depression of the 1930s saw jobless rates near 25 percent. Researchers have also linked such displacement to the rise of nationalism that fueled both the First and Second World Wars.

How scared should you be: There’s little reason to worry in the short term. A 2017 report by McKinsey estimated that fewer than 5 percent of occupations could be fully automated with current technology. That number comes with caveats, though. In some roles, for instance customer service and repetitive physical labor, machines could take over one-third of the work. Developing nations will be hit hardest, even though they may also experience explosive growth in high-touch fields such as education and healthcare.

What to do: Lifelong learning is a front-line defense (and a rewarding pursuit!). Education can help you stay ahead of partial automation in your current profession or change lanes if your profession is being automated away. Networked resources like blogs, research papers, online videos, and online courses can help you absorb and develop the kinds of human insights that likely will outpace machines for some time. Beyond that, work with the machines, not against them, argue Erik Brynjolfsson and Andrew McAfee in their book Race Against the Machine. Workers who don’t want to wind up on the chopping block should invest in education to keep current and find tasks that put them in a position to supervise automated systems.


Illustration of a Halloween pumpkin covered in snow

AI Winter Sets In

Could the flood of hype for artificial intelligence lead to a catastrophic collapse in funding?

The fear: AI will fail to deliver on promises inflated by businesses and researchers. Investors will migrate to greener pastures, and AI Winter will descend. Funding will dry up, research will sputter, and progress will stall.

What could go wrong: Enthusiasm surrounding even modest advances in AI is driving an investment bonanza: Venture funds put $9.3 billion into AI startups in 2018, up over 70 percent from the prior year, according to a joint study by PricewaterhouseCoopers and CB Insights. Some critics believe that deep learning has soaked up more than its fair share of investment, draining funds from other approaches that are more likely to lead to fundamental progress. Could funders lose patience?

  • If major AI companies were to experience severe shortfalls in earnings, it could cause press coverage to flip from cheery optimism about AI’s potential to relentless criticism. Public sentiment would turn negative.
  • Ethical lapses by companies making AI-driven products could further darken the horizon.
  • Limits of current technology — for instance, deep learning’s inability to distinguish causation from correlation and autonomous driving’s challenges with image classification and decision making — could become indictments of the entire field.

Behind the worries: AI history is dotted with setbacks brought about by spikes in public skepticism. Two prolonged periods — one lasting for much of the 1970s, the other from the late ’80s to the early ’90s — were dark and cold enough to have earned the name AI Winter.

  • Key agencies in the UK cut AI funding in the wake of James Lighthill’s 1973 report on the lack of progress in the field. In the U.S. around the same time, disillusioned officials terminated Darpa’s multi-institute Speech Understanding Research program. The cuts fueled skepticism among commercial ventures and dried up the pipeline of basic research.
  • By the early 1980s, AI had rebounded. The technology of the day mostly ran on expensive, specialized machines built to run the LISP programming language. But the late-’80s personal computer revolution gutted the market for this costly hardware, stalling AI’s commercial growth. Again, AI funding retreated.

How scared should you be: It’s true, AI has received enough hype to make P.T. Barnum blush. Yet the current climate shows little sign of impending winter. Earlier this year, Alphabet reported that DeepMind, its deep learning subsidiary, had cost its parent company $570 million as of 2018. Some observers warned that the expense could portend an industry-wide loss of confidence. But technical leaders in the field say they’re well aware of deep learning’s shortcomings, and a steady stream of new research is dedicated to surmounting them. Moreover, AI is generating significant revenue, creating a sustainable economic model for continued investment, while AI research is less reliant than ever on government and institutional funding. Established companies, startups, and research labs all have their eyes open for pitfalls and blind alleys.

What to do: As AI practitioners, we should strive to present our work honestly, criticize one another fairly and openly, and promote projects that demonstrate clear value. Genuine progress in improving people’s lives is the best way to ensure that AI enjoys perpetual springtime.
