How useful is pre-training on ImageNet, actually?
Pre-training on ImageNet has been a common first step in developing computer-vision models. But how useful is it, actually?
Thanks to deeplearning.ai and Andrew Ng for illustrating the concept of transfer learning; I came to understand this topic while taking the Deep Learning Specialization.
Transfer learning saves a lot of time, especially when we start from an available pre-trained model. Strictly speaking, ImageNet is a dataset rather than an architecture, but the models pre-trained on it (such as VGG and ResNet) are well structured into convolutional, pooling, and fully connected layers, and reusing them as feature extractors is tremendously helpful when training deep neural networks on different tasks.
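To make the idea concrete, here is a minimal NumPy sketch of the transfer-learning recipe: keep a "backbone" frozen and train only a new classification head on the target task. The frozen random projection below is just a stand-in for a real pre-trained feature extractor (in practice you would load, say, a torchvision ResNet with ImageNet weights); the toy data and all parameter names are illustrative assumptions, not anything from a specific library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: a FROZEN feature extractor.
# In a real workflow this would be a network with ImageNet weights;
# a fixed random projection is enough to illustrate the mechanics.
W_backbone = rng.normal(size=(10, 8))  # frozen, never updated below

def features(x):
    # Frozen "conv-like" features with a ReLU nonlinearity.
    return np.maximum(0.0, x @ W_backbone)

# New task head: the ONLY trainable parameters.
w_head = np.zeros(8)
b_head = 0.0

# Toy binary classification task (hypothetical data).
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

lr = 0.1
for _ in range(500):
    F = features(X)                                # backbone forward pass
    p = 1.0 / (1.0 + np.exp(-(F @ w_head + b_head)))  # sigmoid head
    grad = p - y                                   # logistic-loss gradient
    w_head -= lr * F.T @ grad / len(y)             # update head only...
    b_head -= lr * grad.mean()                     # ...backbone stays frozen

p_final = 1.0 / (1.0 + np.exp(-(features(X) @ w_head + b_head)))
acc = float(((p_final > 0.5) == y).mean())
print(f"train accuracy with frozen backbone: {acc:.2f}")
```

The design choice this sketch highlights is the split between frozen and trainable parameters: because only the small head is optimized, training is fast and needs far less task-specific data than learning the whole network from scratch.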