ResNet

53 Posts

Masked Pretraining for CNNs: ConvNeXt V2, the new model family that boosts ConvNet performance

Vision transformers have bested convolutional neural networks (CNNs) in a number of key vision tasks. Have CNNs hit their limit? New research suggests otherwise.
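
The title names the technique: masked pretraining, in which most of an image is hidden and the network learns by reconstructing it. Below is a minimal dense sketch of that idea for a ConvNet; the `encoder`/`decoder` modules, patch size, and mask ratio are illustrative assumptions, and ConvNeXt V2's actual recipe (which uses sparse convolutions) differs.

```python
import torch
from torch import nn

def masked_pretraining_step(encoder: nn.Module, decoder: nn.Module,
                            images: torch.Tensor, patch: int = 32,
                            mask_ratio: float = 0.6) -> torch.Tensor:
    """Hide most patches, reconstruct pixels, and score the loss on the
    hidden patches only. Assumes decoder(encoder(x)) returns an
    image-shaped tensor; both modules are placeholders."""
    b, c, h, w = images.shape
    grid = (torch.rand(b, 1, h // patch, w // patch, device=images.device)
            < mask_ratio).float()
    mask = grid.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    pred = decoder(encoder(images * (1 - mask)))  # encode visible pixels only
    denom = mask.sum().clamp(min=1) * c           # count of masked values
    return (((pred - images) ** 2) * mask).sum() / denom
```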

Unsupervised Data Pruning: New method removes useless machine learning data.

Large datasets often contain overly similar examples that consume training cycles without contributing to learning. A new method identifies and removes such redundant examples, even when they're unlabeled.
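
One plausible way to implement this, sketched below under assumptions: examples are first embedded with a self-supervised model, then clustered, and the ones closest to their cluster centroid (the easiest, most redundant ones) are pruned first. The function name and hyperparameters are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def prune_redundant(embeddings: np.ndarray, keep_fraction: float = 0.8,
                    n_clusters: int = 100, seed: int = 0) -> np.ndarray:
    """Return indices of examples to keep. Examples near their cluster
    centroid are treated as redundant "easy" examples and dropped first."""
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(embeddings)
    centroids = km.cluster_centers_[km.labels_]            # each row's own centroid
    dist = np.linalg.norm(embeddings - centroids, axis=1)  # distance to centroid
    n_keep = int(len(embeddings) * keep_fraction)
    return np.argsort(dist)[-n_keep:]                      # keep the farthest (hardest)

# Usage: keep = prune_redundant(features, keep_fraction=0.7); train on data[keep]
```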

What the Missing Frames Showed: Machine Learning Describes Masked Video Events

Neural networks can describe in words what's happening in pictures and videos, but can they make sensible guesses about events that happened before or will happen afterward? Researchers probed this ability with a model called Reasoner.

Taming Spurious Correlations: New Technique Helps AI Avoid Classification Mistakes

When a neural network learns image labels, it may mistake a background item for the labeled object. A new technique helps models avoid such mistakes.

Protein Families Deciphered: Machine Learning Categorizes Proteins Based on Their Functions

A convolutional neural network called ProtCNN separates proteins into functional families without considering their shapes.
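
A hedged sketch of what such a classifier can look like: a small residual 1D ConvNet over amino-acid identifiers. The class name, depth, and widths here are illustrative, not ProtCNN's published configuration.

```python
import torch
from torch import nn

class ProteinFamilyCNN(nn.Module):
    """Illustrative 1D ResNet-style classifier over amino-acid sequences."""

    def __init__(self, n_families: int, vocab: int = 25, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)   # one ID per amino acid
        self.block = nn.Sequential(             # residual conv block
            nn.Conv1d(dim, dim, 9, padding=4), nn.ReLU(),
            nn.Conv1d(dim, dim, 9, padding=8, dilation=2),
        )
        self.head = nn.Linear(dim, n_families)

    def forward(self, seq_ids: torch.Tensor) -> torch.Tensor:  # (B, L) int ids
        h = self.embed(seq_ids).transpose(1, 2)  # (B, dim, L)
        h = torch.relu(h + self.block(h))        # residual, length-preserving
        return self.head(h.mean(dim=2))          # pool over sequence length
```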

Learning From Metadata: Descriptive Text Improves Performance for AI Image Classification Systems

Images in the wild may not come with labels, but they often include metadata. A new training method takes advantage of this information to improve contrastive learning.
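
A minimal sketch of the general idea, assuming each image carries an integer metadata tag: images that share a tag are treated as positive pairs in a supervised-contrastive-style loss. The function and its temperature are illustrative, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def metadata_contrastive_loss(features: torch.Tensor, meta_ids: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss where two images count as a positive pair whenever
    they share a metadata tag (meta_ids), instead of a class label."""
    z = F.normalize(features, dim=1)                  # (N, D) unit-norm embeddings
    sim = z @ z.t() / temperature                     # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (meta_ids.unsqueeze(0) == meta_ids.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    pos_counts = pos.sum(1).clamp(min=1)              # avoid divide-by-zero
    return -(log_prob * pos).sum(1).div(pos_counts).mean()
```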

Tradeoffs for Higher Accuracy: Data Augmentation Plus Weight Decay Can Boost Some AI Models

Vision models can be improved by training them on several altered versions of the same image (data augmentation) and by encouraging their weights to stay close to zero (weight decay). Recent research showed that both techniques can have adverse effects that are difficult to detect.
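
For reference, both techniques take only a few lines in PyTorch; the augmentations and hyperparameter values below are typical choices, not the paper's settings.

```python
import torch
from torchvision import models, transforms

# "Several altered versions of the same image": random augmentations
# applied each time an example is drawn.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.ToTensor(),
])

# "Encouraging weights to stay close to zero": L2 weight decay in the optimizer.
model = models.resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
```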

Right-Sizing Confidence: Object Detector Lowers Confidence for Unfamiliar Inputs

An object detector trained exclusively on urban images might mistake a moose for a pedestrian and express high confidence in its poor judgment. New work, Virtual Outlier Synthesis (VOS), enables object detectors, and potentially other neural networks, to lower their confidence when they encounter unfamiliar inputs.
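
VOS trains the detector against virtual outliers synthesized in feature space. The sketch below captures only the simpler underlying intuition, fitting per-class Gaussians to features and treating low likelihood under every class as unfamiliar; it is an illustration, not the paper's method.

```python
import numpy as np
from scipy.stats import multivariate_normal

class FamiliarityScorer:
    """Illustrative stand-in: per-class Gaussians in feature space, with a
    shared covariance; low likelihood under every class reads as unfamiliar."""

    def fit(self, feats: np.ndarray, labels: np.ndarray):
        cov = np.cov(feats, rowvar=False) + 1e-4 * np.eye(feats.shape[1])
        self.dists = [multivariate_normal(feats[labels == c].mean(axis=0), cov)
                      for c in np.unique(labels)]
        return self

    def max_log_likelihood(self, feats: np.ndarray) -> np.ndarray:
        return np.max([d.logpdf(feats) for d in self.dists], axis=0)

# Downweight the detector's confidence when this score falls below a
# threshold chosen on validation data (e.g., the 5th percentile of
# in-distribution scores).
```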

Less Data for Vision Transformers: Boosting Vision Transformer Performance with Less Data

Vision Transformer (ViT) outperformed convolutional neural networks in image classification but required more training data. New work, combining Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA), enabled ViT and its variants to outperform other architectures with less training data.
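
As a flavor of one of the two changes, here is a single-head sketch of Locality Self-Attention: a learnable temperature replaces the fixed 1/sqrt(d) scale, and each token's similarity to itself is masked out, which sharpens attention over other tokens. The module is simplified (no multi-head split, projections, or dropout).

```python
import torch
from torch import nn

class LocalitySelfAttention(nn.Module):
    """Single-head sketch of LSA: learnable temperature + diagonal masking."""

    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.temperature = nn.Parameter(torch.tensor(dim ** -0.5))  # learnable scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, D)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        eye = torch.eye(x.size(1), dtype=torch.bool, device=x.device)
        attn = attn.masked_fill(eye, float("-inf"))       # mask self-similarity
        return attn.softmax(dim=-1) @ v
```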

The Limits of Pretraining: More pretraining doesn't guarantee a better fine-tuned AI.

The higher the accuracy of a pretrained model, the better its performance after fine-tuning, right? Not necessarily. Researchers conducted a meta-analysis of image-recognition experiments and performed some of their own.

Transformers See in 3D: Using transformers to estimate depth in 2D images.

Visual robots typically perceive the three-dimensional world through sequences of two-dimensional images, but they don’t always know what they’re looking at. For instance, Tesla’s self-driving system has been known to mistake a full moon for a traffic light.

Richer Video Representations: Pretraining Method Improves AI's Ability to Understand Video

To understand a movie scene, viewers often must remember or infer previous events and extrapolate potential consequences. New work called MERLOT improved a model's ability to do the same.

Oddball Recognition: New Method Identifies Outliers in AI Training Data

Models trained using supervised learning struggle to classify inputs that differ substantially from most of their training data. A new method, Hierarchical Outlier Detection (HOD), helps them recognize such outliers.

More Thinking Solves Harder Problems: AI Can Learn From Simple Tasks to Solve Hard Problems

In machine learning, an easy task and a more difficult version of the same task (say, a maze that covers a smaller or larger area) are often learned separately.
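
The title's "more thinking" suggests the mechanism: a weight-tied block that can simply be iterated longer at test time on harder instances. A hedged sketch under that assumption follows; the class and its sizes are illustrative, not the paper's architecture.

```python
import torch
from torch import nn

class RecurrentReasoner(nn.Module):
    """One weight-tied residual block applied repeatedly, so the same
    network can run for more iterations on harder problem instances."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.block = nn.Sequential(              # shared weights across iterations
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.head = nn.Conv2d(channels, 2, 3, padding=1)  # e.g., per-pixel path/no-path

    def forward(self, x: torch.Tensor, iterations: int = 20) -> torch.Tensor:
        h = self.stem(x)
        for _ in range(iterations):              # more iterations = more "thinking"
            h = torch.relu(h + self.block(h))    # residual, weight-tied step
        return self.head(h)

# Train with, say, iterations=20 on small mazes; evaluate with iterations=100
# on larger ones.
```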

More Reliable Pretraining: Pretraining Method Helps AI Learn Useful Representations

Pretraining methods generate basic representations for later fine-tuning, but they're prone to issues that can throw them off-kilter. New work called VICReg proposes a solution.
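
VICReg's loss combines three terms: invariance between two augmented views, a variance hinge that keeps each embedding dimension's standard deviation above a target, and a covariance penalty that decorrelates dimensions. A sketch follows; the coefficients are the commonly reported defaults, and the exact values may differ from a given implementation.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z1: torch.Tensor, z2: torch.Tensor,
                sim_w: float = 25.0, var_w: float = 25.0,
                cov_w: float = 1.0) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same batch."""
    n, d = z1.shape
    inv = F.mse_loss(z1, z2)                     # invariance term

    def variance(z):                             # keep each dim's std above 1
        std = torch.sqrt(z.var(dim=0) + 1e-4)
        return F.relu(1.0 - std).mean()

    def covariance(z):                           # decorrelate dimensions
        z = z - z.mean(dim=0)
        cov = (z.t() @ z) / (n - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return (off_diag ** 2).sum() / d

    return (sim_w * inv
            + var_w * (variance(z1) + variance(z2))
            + cov_w * (covariance(z1) + covariance(z2)))
```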