Robots designed to assist people with disabilities have become more capable, but they’ve also become harder to control. New research offers a way to operate such complex mechanical systems more intuitively.

What’s new: Researchers at Stanford enabled a joystick to control a seven-jointed mechanical arm in a way that adapts automatically to different tasks. Their work could make it easier for people with limited mobility to perform a variety of everyday activities.

Key insight: An intuitive mapping of joystick motions to arm movements depends on context. Pushing a joystick downward to control a robot arm that holds a glass of water may be an instruction to pour, while the same motion applied to an empty hand may be a command to sweep the arm downward. Dylan P. Losey and colleagues used a conditional variational autoencoder to learn a vector, controllable by a joystick, that depends on the arm’s current position.

How it works: An autoencoder is a two-part network, consisting of an encoder and decoder, that learns a simplified representation of its input. The encoder maps an input vector to a smaller output vector. The decoder network tries to recreate the input from the encoder’s output. A variational autoencoder creates a distribution of latent vectors for a given input, and a conditional variational autoencoder changes that distribution depending on state information.
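The structure described above can be sketched as a single forward pass. This is a minimal, untrained illustration (randomly initialized weights, made-up dimensions: a 7-dimensional joint command, a 2-dimensional latent space matching a joystick's two axes), not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 7 joint values, 2-D latent (joystick) space.
STATE_DIM, INPUT_DIM, LATENT_DIM = 7, 7, 2

def layer(in_dim, out_dim):
    """Random weights for one linear layer (untrained, for illustration only)."""
    return rng.normal(0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

def linear(x, wb):
    w, b = wb
    return x @ w + b

enc = layer(INPUT_DIM + STATE_DIM, 2 * LATENT_DIM)  # outputs mean and log-variance
dec = layer(LATENT_DIM + STATE_DIM, INPUT_DIM)

def encode(x, state):
    """Encoder: compress (input, state) into a latent distribution."""
    h = linear(np.concatenate([x, state]), enc)
    return h[:LATENT_DIM], h[LATENT_DIM:]  # mean, log-variance

def reparameterize(mean, log_var):
    """Sample a latent vector via the reparameterization trick."""
    return mean + np.exp(0.5 * log_var) * rng.normal(size=mean.shape)

def decode(z, state):
    """Decoder: reconstruct the joint command from latent vector plus state."""
    return linear(np.concatenate([z, state]), dec)

x = rng.normal(size=INPUT_DIM)      # example joint command to reconstruct
state = rng.normal(size=STATE_DIM)  # current arm configuration (the condition)

mean, log_var = encode(x, state)
z = reparameterize(mean, log_var)
x_hat = decode(z, state)
print(z.shape, x_hat.shape)  # (2,) (7,)
```

Because both encoder and decoder receive the arm's state as an extra input, the same latent vector can decode to different motions in different contexts, which is the "conditional" part of the model.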

  • The model learns a simplified control representation from examples of the robotic arm achieving a task; for example, reaching to grab an item.
  • A joystick captures user input in the form of this simplified control representation. The decoder translates this input into motor controls that maneuver the arm. For instance, for reaching, up and down may control arm extension, while left and right open and close the hand’s grasp.
  • To prevent erratic behavior, such as large motor changes resulting from small joystick movements, the encoder is penalized for high variance in its simplified representations. However, those representations don’t specify the exact movement of each joint, so they sacrifice some precision.
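At teleoperation time, only the decoder is needed: the joystick's two axes are treated as the latent vector, and the decoder, conditioned on the arm's current configuration, turns them into joint motions. The sketch below uses hypothetical names and randomly initialized weights in place of a decoder learned from demonstrations:

```python
import numpy as np

rng = np.random.default_rng(1)

N_JOINTS = 7    # the seven-jointed arm from the article
LATENT_DIM = 2  # two joystick axes

# Stand-in decoder weights; a real system would load the decoder trained
# on task demonstrations.
W = rng.normal(0, 0.1, (LATENT_DIM + N_JOINTS, N_JOINTS))
b = np.zeros(N_JOINTS)

def decode(joystick, joint_angles):
    """Map a 2-D joystick reading plus the current arm state to joint velocities."""
    z = np.concatenate([joystick, joint_angles])
    return np.tanh(z @ W + b)  # tanh bounds the velocities

joint_angles = np.zeros(N_JOINTS)
dt = 0.05  # 20 Hz control loop

# Simulate a few control steps: hold the joystick "up" and integrate.
for _ in range(3):
    velocities = decode(np.array([0.0, 1.0]), joint_angles)
    joint_angles = joint_angles + dt * velocities

print(joint_angles.shape)  # (7,)
```

Because the decoder sees the current joint angles at every step, the same joystick deflection can produce different joint-level motions as the arm moves, matching the context-dependent mapping described above.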

Results: Among other experiments, the researchers had users control the arm to make an apple pie by mixing ingredients and disposing of containers. Participants used either the simplified controls or common controls that define the movement of each joint. Users of the new method produced their pies in half the time, on average, and reported much greater ease.

Why it matters: Nearly a million Americans face disabilities requiring robotic assistance in everyday tasks. A simple, intuitive control method could give such people greater autonomy, rather than requiring them to delegate tasks to a caregiver.

We’re thinking: In this case, a conditional variational autoencoder made it easier to use a mechanical arm, but these networks could help simplify a plethora of human interactions with machines and computers.
