Dear friends,

Each year, AI brings wondrous advances. But, as Halloween approaches and the veil between the material and ghostly realms lifts, we see that spirits take advantage of these developments at least as much as humans do.

As I wrote last week, prompt engineering, the art of writing text prompts to get an AI model to generate the output you want, is a major new trend. Did you know that the Japanese word for prompt — 呪文 — also means spell or incantation? (Hat tip to natural language processing developer Paul O’Leary McCann.) The process of generating an image using a model like DALL·E 2 or Stable Diffusion does seem like casting a magic spell — not to mention these programs' apparent ability to reanimate long-dead artists like Pablo Picasso — so Japan's AI practitioners may be onto something.
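If you'd like to try casting a spell of your own, here's what a minimal incantation can look like using the open source diffusers library and Stable Diffusion (a sketch, assuming you have diffusers installed and a GPU to run it on):

```python
# A minimal text-to-image "spell" with Stable Diffusion via Hugging Face's
# diffusers library. Assumes diffusers is installed and a CUDA GPU is available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The prompt is the incantation; small changes in wording transform the result.
prompt = "a jack-o'-lantern painted in the style of Pablo Picasso"
image = pipe(prompt).images[0]
image.save("halloween_spell.png")
```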

Some AI companies are deliberately reviving the dead. The startup HereAfter AI produces chatbots that speak, sound, and look just like your long-lost great-grandma. Sure, it's a simulation. Sure, the purpose is to help the living connect with deceased loved ones. When it comes to reviving the dead — based on what I've learned by watching countless zombie movies — I'm sure nothing can go wrong.

Illustration of a hand coming out of a box to take a candy from a Trick or Treat paper bag

I'm more concerned by AI researchers who seem determined to conjure ghastly creatures. Consider the abundance of recent research into transformers. Every transformer uses multi-headed attention. Since when is having multiple heads natural? Researchers are sneaking multi-headed beasts into our computers, and everyone cheers for the new state of the art! If there's one thing we know about transformers, it's that there's more than meets the eye.
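For anyone who wants to inspect the beast up close, here is a minimal sketch of a multi-headed attention layer using PyTorch's built-in module (the sizes are arbitrary, chosen for illustration):

```python
# Eight attention heads attend to the same 10-token sequence in parallel,
# each watching the input from a different learned perspective.
import torch
import torch.nn as nn

attention = nn.MultiheadAttention(embed_dim=512, num_heads=8, batch_first=True)

x = torch.randn(1, 10, 512)            # 1 sequence, 10 tokens, 512 dimensions
output, weights = attention(x, x, x)   # self-attention: query = key = value

print(output.shape)   # torch.Size([1, 10, 512])
print(weights.shape)  # torch.Size([1, 10, 10]), averaged across the 8 heads
```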

This has also been a big year for learning from masked inputs, and approaches like Masked Autoencoders, MaskGIT, and MaskViT have achieved outstanding performance in difficult tasks. So if you put on a Halloween mask, know that you're supporting a key idea behind AI progress.
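Here's the idea in miniature: a toy sketch of the random patch masking behind Masked Autoencoders (the shapes are illustrative, not the authors' code):

```python
# Hide a random 75 percent of an image's patch embeddings; a model trained
# on this setup must learn to reconstruct the patches it never saw.
import torch

patches = torch.randn(1, 196, 768)  # 196 patch embeddings, as in a ViT
num_keep = int(196 * 0.25)          # keep 25 percent, mask the rest

shuffle = torch.randperm(196)
keep_idx = shuffle[:num_keep]       # visible patches fed to the encoder
mask_idx = shuffle[num_keep:]       # hidden patches the decoder must predict

visible = patches[:, keep_idx, :]
print(visible.shape)                # torch.Size([1, 49, 768])
```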

Trick or treat!

Andrew

Pumpkin with carved eyes lighting up in the corner of a black background

What Lurks in the Shadows?

Ever look at a neural network’s output and think to yourself, “that's uncanny”? While the results can be inspiring — potential cures for dreaded diseases, streamlined industrial operations, beautiful artworks — they can also be terrifying. What if a model’s pattern-matching wizardry were applied to designing poison gas? Have corporate executives sold their souls in return for automated efficiency? Will evil spirits gain the upper hand as nations jockey for AI dominance? In this special issue of The Batch, as in previous years at this season, we raise a torch to the gloomy corners of AI and face gremlins that we ourselves have unleashed. Onward into the darkness!


Illustration of Frankenstein connected to many chemical elements inside of a lab

The Black Box Awakens

AI researchers are starting to see ghosts in their machines. Are they hallucinations, or does a dawning consciousness haunt the trained weights?

The fear: The latest AI models are self-aware. This development — at best — poses ethical dilemmas over human control of sentient digital beings. More worrisome, it raises unsettling questions about what sort of mind a diet of data scraped from the internet might produce.

Horror stories: Sightings of supposed machine sentience have come from across the AI community.

  • In February, OpenAI cofounder Ilya Sutskever tweeted about the possibility that large neural networks may already be “slightly conscious.” Andrej Karpathy was quick to reply, “agree.” However, Yann LeCun and Judea Pearl criticized the claim as far-fetched and misleading.
  • In June, a Google engineer became convinced that chatbots powered by LaMDA, the company’s family of large language models, were sentient. He published conversations in which the bots discussed their personhood, rights, and fear of being turned off. Google — which denied the engineer’s claims — fired him.
  • As the world was absorbing the prospect of sentient AI, researchers shared evidence that DALL·E 2 had developed its own language. When prompted to produce an image that includes text, DALL·E 2 tends to generate what appear to be random assortments of letters. The authors found that feeding the same gibberish back into the model as a prompt produced similar images. For example, a request for “apoploe vesrreaitais” brought forth images of birds.
  • In September, a multimedia artist experimenting with an unspecified text-to-image model found that “negative” prompts designed to probe the far reaches of the model’s latent space summoned disturbing images of a woman with pale skin, brown hair, and thin lips, sometimes bloody and surrounded by gore. The artist calls her Loab. (The sketch below shows how negative prompting works in practice.)
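For the record, here is roughly what negative prompting looks like in code: a minimal sketch using the open source diffusers library. The artist's actual tool is unspecified, so take this as an illustration of the mechanism, not a recipe for summoning Loab:

```python
# A negative prompt steers a diffusion model away from the listed concepts
# while the main prompt steers it toward them.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a portrait photograph, studio lighting",
    negative_prompt="smiling, bright colors",  # concepts to avoid
).images[0]
image.save("portrait.png")
```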

It’s just an illusion, right?: While media reports generally took the claim that LaMDA was self-aware seriously — albeit skeptically — the broader AI community roundly dismissed it. Observers attributed impressions that LaMDA is sentient to human bias and DALL·E 2’s linguistic innovation to random chance. Models learn by mimicking their training data, and while some are very good at it, there’s no evidence to suggest that they do it with understanding, consciousness, or self-reflection. Nonetheless, Loab gives us the willies.

Facing the fear: Confronted with unexplained phenomena, the human mind excels at leaping to fanciful conclusions. Science currently lacks a falsifiable way to verify self-awareness in a computer. Until it does, we’ll take claims of machine sentience or consciousness with a shaker full of salt.


Person with a sad face in front of an abandoned Chip's Candies factory

No More GPUs

Advanced AI requires advanced hardware. What if the global supply of high-end AI chips dries up?

The fear: Most of the world’s advanced AI processors are manufactured in Taiwan, where tension with mainland China is rising. Nearly all such chips are designed in the U.S., which has blocked China from obtaining them. That could prompt China to cut off U.S. access to Taiwan’s manufacturing capacity. Military action would be a human tragedy. It would also imperil progress in AI.

Horror stories: China and the U.S. are on a collision course that threatens the global supply of advanced chips.

  • In October, the U.S. government announced sweeping rules that bar U.S. companies from selling high-performance chips and chip-making equipment to China. The restrictions also prevent non-U.S. chip makers that use U.S. software or equipment from selling to or working with China. China’s AI efforts rely primarily on chips designed by Nvidia, a U.S. company.
  • Even if tensions relax, other obstacles may impede the flow of advanced chips. Ongoing anti-Covid lockdowns could disrupt chip supplies, as could drought in Taiwan and floods in Malaysia.

Securing the supply: Both the U.S. and China are trying to produce their own supplies of advanced chips. But fabricating circuitry measured in single-digit nanometers is enormously difficult and expensive, and there’s no guarantee that any particular party will accomplish it.

  • China is executing a 2014 plan to achieve dominance in semiconductors. It’s cultivating a domestic semiconductor industry, though the U.S. sanctions on chip-design and -manufacturing equipment explicitly threaten that project.
  • In August, the U.S. government passed the CHIPS and Science Act. This law aims to boost U.S. semiconductor supplies by giving U.S. manufacturers tax incentives to build factories in the U.S. and funding research and development.
  • Intel, which manufactures chips but has fallen behind in advanced fabrication technology, recently broke ground on a $20 billion pair of plants in central Ohio.
  • Foreign makers of cutting-edge chips are moving to the U.S. Taiwan Semiconductor Manufacturing Company, which produces most of the world’s most advanced chips, is building a new $12 billion plant in Arizona, slated to start production in 2024. Samsung, which also boasts advanced fabrication capabilities, plans a $17 billion factory in Texas.

Facing the fear: If a chipocalypse does occur, the AI community will need to become adept at workarounds that take advantage of older semiconductor technology, such as small data, data-centric AI development, and high-efficiency model architectures. It will also need to push for international cooperation amid intensifying polarization. Still, a chip shortage would be the least scary thing about a great-power conflict.
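To make one of those workarounds concrete, here is a minimal sketch of post-training dynamic quantization in PyTorch, which shrinks a model's linear-layer weights from 32-bit floats to 8-bit integers so it can run on humbler hardware:

```python
# Convert a toy model's linear layers to 8-bit integer weights. The
# quantized model runs on CPU with a smaller memory footprint.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10])
```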


A MESSAGE FROM DEEPLEARNING.AI

Branching out of the Notebook event banner

Do you want to develop and deploy machine learning applications? Join our hands-on workshop “Branching out of the Notebook: ML Application Development with GitHub” on November 9, 2022, to learn industry-standard practices you can use today! RSVP


Ghost controlling a humanoid marionette during a job interview with a female candidate

Inhuman Resources

Companies are using AI to screen and even interview job applicants. What happens when out-of-control algorithms are the human resources department?

The fear: Automated systems manage every stage of the hiring process, and they don’t play fair. Trained on data rife with social biases, they blatantly discriminate when choosing which candidates to promote and which to reject. The door to your dream job is locked, and an unaccountable machine holds the key. Minority candidate? Speak with an accent? Unconventional background? You’re out of distribution!

Horror stories: Many companies and institutions use automated hiring systems, but independent researchers have found them prone to bias and outright error.

  • A 2021 study by Accenture and Harvard found that 63 percent of employers in the U.S., UK, and Germany — including 99 percent of Fortune 500 companies — used automated systems to recruit candidates or screen applications.
  • Hiring systems MyInterview and Curious Thing, which together boast thousands of corporate clients, gave high marks for English proficiency to mock candidates who spoke German during their interviews, an investigation by MIT Technology Review found.
  • A video interviewing program from Retorio scored job seekers differently depending on whether they wore glasses, donned headscarves, or displayed bookshelves in the background, Bavarian Public Broadcasting reported. The program’s users include BMW and Lufthansa.
  • A popular video interviewing system from HireVue offered to predict candidates’ aptitude for particular jobs based on face analysis. The company removed the capability after a member of its scientific advisory board resigned in protest.

Bad performance review: Automated hiring systems are facing scrutiny from lawmakers and even the companies that use them.

  • In 2023, New York City will require prospective employers to inform job applicants if they use hiring algorithms, to offer non-automated alternatives, and to conduct yearly audits for bias. Illinois passed a similar law in 2020.
  • The current draft of the European Union’s proposed AI Act requires hiring algorithms to undergo extensive human oversight. Developers who seek to sell systems in Europe must provide a risk assessment and evidence that neither the system nor its training data is unacceptably biased. UK lawmakers are considering similar restrictions.
  • The Data and Trust Alliance, a nonprofit group that seeks to reduce tech-related bias in workplaces, developed tools to assess fairness in hiring algorithms. Twenty-two companies including IBM, Meta, and Walmart agreed to use them. (The sketch below shows one basic check that such audits can include.)
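What might such an audit check? One common test is the four-fifths rule, which compares selection rates across groups of candidates. The hypothetical sketch below illustrates the arithmetic; it is not the alliance's actual tooling:

```python
# Compare the rates at which a screening model advances candidates from
# two groups. A ratio below 0.8 is a conventional red flag for bias.
def selection_rate(decisions):
    """Fraction of candidates advanced to the next round (1 = advanced)."""
    return sum(decisions) / len(decisions)

# Hypothetical screening outcomes for two groups of applicants.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate 0.25

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40, well below 0.8
```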

Facing the fear: While many companies use hiring algorithms, most still keep humans in the loop. They have good incentive to do so: While machines can process mountains of resumes, human managers may recognize candidates who have valuable traits that an algorithm would miss. Humans and machines have complementary strengths, and a careful combination may be both efficient and fair.


Dozens of toxic barrels on a road and a witch behind

Foundations of Evil

A growing number of AI models can be put to purposes their designers didn’t envision. Does that include heinous deeds?

The fear: Foundation models have proven adept at deciphering human language. They’ve also shown their worth in decoding the structural languages of biology and chemistry. It’s only a matter of time before someone uses them to produce weapons of mass destruction.

Horror stories: Researchers demonstrated how an existing AI system can be used to make chemical weapons.

  • In March, researchers from Collaborations Pharmaceuticals fine-tuned a drug-discovery model on a dataset of toxic molecules.
  • The original model ranked drug candidates by predicted toxicity to humans so that harmful compounds could be screened out. The researchers reversed the ranking to prioritize the deadliest chemical agents.
  • In six hours, the model designed 40,000 toxins including known chemical weapons that were not in its training set.
  • The researchers believe that their process would be easy to replicate using open-source models and toxicity data.

Gas masks: In an interview, one of the researchers suggested that developers of general-purpose models, such as the one they used to generate toxic chemicals, should restrict access. He added that the machine learning community should institute standards for instruction in chemistry that inform budding scientists about the dangers of misusing research.

Facing the fear: It’s hard to avoid the conclusion that the safest course is to rigorously evaluate the potential for harm of all new models and restrict those that are deemed dangerous. Such a program is likely to meet with resistance from scientists who value free inquiry and businesspeople who value free enterprise, and it might have limited impact on new threats that weren’t identified when the model was created. Europe is taking a first step with its regulation of so-called general-purpose AI. However, without a broad international agreement on definitions of dangerous technology and how it should be controlled, people in other parts of the world will be free to ignore such rules. Considering the challenges, perhaps the best we can do is to work proactively and continually to identify potential misuses and ways to thwart them.


Office with coworkers transformed into mythical creatures like vampires, ghosts and werewolves

Your Coworkers Aren’t Human

The new remote administrative assistant is a little too perky, hardworking, and efficient. Is it because he’s a bot?

The fear: Virtual employees are infiltrating the distributed office. Outfitted with programmed personalities and generated smiles, they’re increasingly difficult to tell from flesh and blood. Managers, pleased by the productivity boost, will stop caring which is which, leaving you surrounded by colleagues who cheerfully work 24/7, never make a mistake, and decline invitations to meet up for happy hour.

Horror stories: What started in the middle of the last decade with programs like Clara — who schedules meetings via emails so cordial they might fool the uninitiated — has evolved into human-like agents dressed up with names, faces, and fake resumes.

  • WorkFusion offers a line of virtual teammates in six specialized roles, including customer service coordinator, insurance underwriter, and transaction screening analyst. Each digital worker has a persona portrayed by a human actor.
  • Synthesia uses generative adversarial networks to synthesize videos that feature photorealistic talking heads that read scripts aloud in 34 languages. Customers use the service to generate training and sales videos without a human actor.
  • Marketing companies LIA (for LinkedIn Lead Generation Assistant) and Renova Digital offer avatars that enable real salespeople to close multiple deals at once. Stanford researchers discovered over 1,000 LinkedIn profiles, many of them in marketing, that turned out to be false personas bearing face portraits produced by generative adversarial networks.

Fraudulent friends: White-collar bots pose threats more serious than a proliferation of workplaces with addresses in the uncanny valley. In 2020, fraudsters used a generative audio model to clone the voice of a company director and convince a Hong Kong bank to fork over some $35 million. Con artists using a similar ploy stole $243,000 from a UK energy firm in 2019.

Facing the fear: Ceaselessly cheerful, perpetually productive automatons might leave their human colleagues feeling demoralized. If you’re going to anthropomorphize your algorithms, at least program them to be late for a meeting once in a while.
