MobileNet

3 Posts


ImageNet Performance, No Panacea: ImageNet pretraining won't always improve computer vision.

It’s commonly assumed that models pretrained to achieve high performance on ImageNet will perform better on other visual tasks after fine-tuning. But is it always true? A new study reached surprising conclusions.

Build Once, Run Anywhere: The Once-For-All technique adapts AI models to edge devices.

From server to smartphone, devices with less processing speed and memory require smaller networks. Instead of building and training separate models to run on a variety of hardware, a new approach trains a single network that can be adapted to any device.

Preserving Detail in Image Inputs: Better image compression for computer vision datasets.

Given real-world constraints on memory and processing time, images are often downsampled before they're fed into a neural network. But downsampling removes fine details, which degrades accuracy. A new technique compresses images with less compromise.
