The incessant rhythm of a heartbeat could be the key to distinguishing real videos from deepfakes.

What’s new: DeepRhythm detects deepfakes using an approach inspired by the science of measuring minute changes on the skin’s surface due to blood circulation. Hua Qi led teammates at Kyushu University in Japan, Nanyang Technological University in Singapore, Alibaba Group in the U.S., and Tianjin University in China.

Key insight: Current neural generative models don’t pick up on subtle variations in skin color caused by blood pulsing beneath the surface. Consequently, manipulated videos lack these rhythms. A model trained to spot them can detect fake videos.
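The underlying signal is the one used in remote photoplethysmography: averaged over skin pixels, color intensity oscillates faintly at the heart rate, and that periodicity is recoverable from ordinary video. A minimal sketch (illustrative only, not the authors’ method) of pulling a pulse frequency out of per-frame skin-color averages:

```python
import numpy as np

def estimate_pulse_hz(frame_means, fps):
    """Estimate the dominant pulse frequency (Hz) from a series of
    per-frame mean skin-pixel intensities (e.g., the green channel)."""
    signal = np.asarray(frame_means, dtype=float)
    signal = signal - signal.mean()              # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to plausible human heart rates (~0.7-4 Hz, i.e. 42-240 bpm)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic example: 10 s of 30 fps video with a 1.2 Hz (72 bpm) pulse
fps = 30
t = np.arange(10 * fps) / fps
frame_means = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
print(round(estimate_pulse_hz(frame_means, fps), 1))  # → 1.2
```

A deepfake generator that synthesizes faces frame by frame leaves no such coherent peak, which is the regularity DeepRhythm’s classifier learns to check for.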

How it works: DeepRhythm comprises two systems. The first consists of pretrained components that isolate faces in video frames and highlight areas affected by blood circulation. The second system examines the faces and classifies the video. DeepRhythm was trained and validated on FaceForensics++, a video dataset that collects output from deepfake models.

  • The first system cropped and centered faces based on earlier research into estimating heart rates from videos.
  • The authors drew on two motion magnification techniques to enhance subtle changes in face color.
  • The second system accepted motion-magnified face images mapped to a grid. A convolutional neural network learned to weight grid regions according to how strongly environmental variations, such as lighting, affected face color. Then an LSTM and a Meso-4 model worked together to weight the entire grid according to its degree of fakeness.
  • The authors fed the weighted frames into a ResNet-18 to classify videos as real or fake.
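The region-weighting step above can be pictured as a per-cell attention mask applied to the grid of motion-magnified face patches before classification. The sketch below is purely illustrative: in DeepRhythm the weights come from a trained CNN, while here they are set by hand.

```python
import numpy as np

def weight_grid_regions(face_grid, region_weights):
    """Scale each patch in a (rows, cols, h, w) grid of face patches
    by a learned per-region weight before it reaches the classifier."""
    rows, cols = region_weights.shape
    assert face_grid.shape[:2] == (rows, cols)
    # Broadcast each region's scalar weight over its patch pixels
    return face_grid * region_weights[:, :, None, None]

# Toy example: a 4x4 grid of 8x8 patches; down-weight one grid cell
# whose color signal is unreliable (e.g., washed out by lighting)
grid = np.ones((4, 4, 8, 8))
weights = np.ones((4, 4))
weights[0, 0] = 0.2
weighted = weight_grid_regions(grid, weights)
print(weighted[0, 0, 0, 0], weighted[1, 1, 0, 0])  # → 0.2 1.0
```

In the full model, the weighted grid then flows through the LSTM/Meso-4 stage and the ResNet-18 classifier described above.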
