What courses do you want to see the deeplearning.ai team build next?
Fully working code, in any modern toolkit, that actually performs Andrew's Basic Recipe for Deep Learning. It sounds easy, but coding this in TF is horribly time-consuming and very hard to get right. Note: the homework assignments do not do it. I completed the courses, and they used only train and test sets.
The full DL Recipe requires at least three dataset partitions: train, dev, and test (possibly more, but three is the minimum). First, train and search for the hyperparameters (e.g., learning rate) that give low bias on the training set. Next, holding those low-bias hyperparameters fixed, train and search for a second, different set of hyperparameters (e.g., L2 regularization and dropout) that give low variance, evaluating performance and cost on the DEV set, not the TRAIN set, while still learning the model (the optimizer is still running). Remember that the cost is affected by the regularization, so the optimizer needs to keep changing the weights during low-variance development. Finally, run evaluations on the TEST set with the full set of hyperparameters found during the low-bias and low-variance phases, with the optimizer NOT running this time. Always use random search in both the low-bias and low-variance phases, varying the relevant hyperparameters in each. Save the best (lowest-cost) models found during the random searches, then load the final winning model and test it on the test set for an unbiased error estimate at the very end.
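To make this concrete, here is a minimal sketch of the two-phase random search described above, written against the Keras API; the synthetic data, layer sizes, search ranges, and the best_model.h5 filename are illustrative assumptions, not a definitive implementation:

```python
import numpy as np
import tensorflow as tf

def build_model(lr, l2=0.0, dropout=0.0):
    """Small binary classifier; l2 and dropout are the low-variance knobs."""
    reg = tf.keras.regularizers.l2(l2) if l2 else None
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Synthetic stand-in data; a real course would stream this from disk.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")
X_train, y_train = X[:2000], y[:2000]        # train partition
X_dev, y_dev = X[2000:2500], y[2000:2500]    # dev partition
X_test, y_test = X[2500:], y[2500:]          # test partition

# Phase 1: random search over learning rate for low bias (fit the train set).
best_lr, best_train_loss = None, np.inf
for _ in range(5):
    lr = 10 ** rng.uniform(-4, -1)           # sample lr on a log scale
    hist = build_model(lr).fit(X_train, y_train, epochs=5, verbose=0)
    if hist.history["loss"][-1] < best_train_loss:
        best_train_loss, best_lr = hist.history["loss"][-1], lr

# Phase 2: hold lr fixed; random-search the regularizers for low variance.
# The optimizer still trains the weights; selection is by dev-set loss.
best_dev_loss = np.inf
for _ in range(5):
    model = build_model(best_lr,
                        l2=10 ** rng.uniform(-5, -2),
                        dropout=rng.uniform(0.0, 0.5))
    model.fit(X_train, y_train, epochs=5, verbose=0)
    dev_loss, _ = model.evaluate(X_dev, y_dev, verbose=0)
    if dev_loss < best_dev_loss:
        best_dev_loss = dev_loss
        model.save("best_model.h5")          # keep the lowest-cost model

# Final: load the winner; test-set evaluation only, optimizer not running.
best = tf.keras.models.load_model("best_model.h5")
print(best.evaluate(X_test, y_test, verbose=0))
```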
It's much harder to do the above than the common tutorial code that cuts corners and skips steps. I'm talking about a course that does the whole thing I described above in a notebook. The TF code to load and save models is particularly hard to debug. And please do not try to pre-load entire datasets into RAM; nobody ever uses small data in the real world for deep learning. Don't use MNIST, my goodness. MNIST fits in RAM; real image datasets don't fit in RAM all at once. Instead, please use a batch-at-a-time tf.data.Dataset to read from the data files efficiently. In modern TF code, feed_dict is only for varying the hyperparameters, not for reading input files, please. Let us study realistic, real-world code design in TF or PyTorch or CNTK or Julia, and not toy code any more. Thanks so much!
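For reference, a minimal tf.data sketch of the batch-at-a-time pattern being requested (the file glob and image size are placeholders, and tf.data.AUTOTUNE assumes TF 2.4+; labels would be parsed the same way):

```python
import tensorflow as tf

# Hypothetical file layout: a directory of JPEGs too large to fit in RAM.
files = tf.data.Dataset.list_files("data/images/*.jpg", shuffle=True)

def load_example(path):
    # Read and decode one image file on demand, never the whole dataset.
    image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    return tf.image.resize(image, [64, 64]) / 255.0

dataset = (files
           .map(load_example, num_parallel_calls=tf.data.AUTOTUNE)
           .batch(32)                    # one batch in memory at a time
           .prefetch(tf.data.AUTOTUNE))  # overlap file I/O with training

for batch in dataset.take(1):
    print(batch.shape)                   # (32, 64, 64, 3)
```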
Please also use Eager mode exclusively if TF is the toolkit, as the TF team indicated in 2018 that all TF users should move away from graph code. TF is not necessary; just pick one modern toolkit and show us how, if you can. CNTK works well enough, but Microsoft seemingly stopped development in 2017 for unknown reasons, and besides, CNTK product management inexplicably refuses to support CSV input files effectively, which is horrendous for data science, since most open datasets are CSV or image files.
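For anyone unfamiliar with the difference, a toy eager-mode training step looks like this in TF 2.x, where eager execution is the default (the scalar model and data here are placeholders):

```python
import tensorflow as tf

w = tf.Variable(0.0)                     # single trainable parameter
opt = tf.keras.optimizers.SGD(0.1)
x, y = tf.constant(2.0), tf.constant(4.0)

with tf.GradientTape() as tape:
    loss = (w * x - y) ** 2              # executes immediately, no session/graph
grads = tape.gradient(loss, [w])
opt.apply_gradients(zip(grads, [w]))
print(w.numpy())                         # weight updated in place
```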
I finished the specialization and I must say, Andrew Ng is an outstanding teacher. You can tell he is very passionate about the subject, and wants others to get it. That's great. I also liked learning so much of the theory. I know fast.ai takes the opposite approach (practice then theory) but I believe both approaches can complement each other.
That being said, what was frustrating for me was that some of the later programming assignments did very little to drive the material home. I felt that many of them were more a test of how well I can use Google. I can't tell you how many times I went through an entire week of material, only to be pulling my hair out because I understood the theory but couldn't figure out the exact syntax of a line of code (and the official documentation is often quite dismal). As an experiment, I completed the programming assignments for one of the weeks without watching a single minute of the videos (though of course I did go back and finish the videos afterward). It was exactly the same level of challenge, and I felt it was the wrong kind of challenge. So I'd like to see fewer of these puzzles in future assignments, and more opportunities to explore and experiment with the material, what it means, and how the hyperparameters change things.
I've just begun the fast.ai courses, and based on what I'm seeing, I'd like to see deeplearning.ai move into PyTorch, which seems to be getting more and more important these days. Lastly, while real-world applications are great, I also appreciate fun assignments like Dino Island and Happy House. Some of my favorite Twitter accounts are bots that use deep learning to generate nonsense sentences, news headlines, pictures of imaginary creatures, and so forth. So please keep those up, and keep up the great work! Thanks.
Hi! Thank you very much for offering the previous ML course and the DL specialization. They helped me in my research in the field of wireless communications. Next, it would be great to see more on the following topics:
- Algorithms used in AlphaGo (Zero), such as reinforcement learning and Monte Carlo tree search.
Also, I would suggest introducing a full-fledged specialization on AI and machine learning itself, covering much more than the initial ML course.
Xi'an Jiaotong University, P. R. China.
- Model stacking & ensembling
- Feature gain measurement and pruning
- Statistical feature engineering: sparse matrices, etc.
- Organization of production systems: ML framework for large-scale learning
I am interested in learning more about ensemble methods such as bagging, boosting, and decision trees. Several problems on Kaggle are solved using these methods. Other courses on machine learning touch on these concepts, but I would really like to understand these methods the way we are taught other fundamental concepts in the Deep Learning course.
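As a flavor of what such a course could cover, here is a small scikit-learn sketch contrasting bagging and boosting on a toy dataset (the dataset and settings are illustrative only):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging: many deep trees trained on bootstrap samples, predictions averaged.
bagged = RandomForestClassifier(n_estimators=200, random_state=0)

# Boosting: shallow trees fit sequentially, each correcting its predecessors.
boosted = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                     random_state=0)

for name, model in [("bagging", bagged), ("boosting", boosted)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```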
The next specialization should cover the following topics:
1) Reinforcement learning (including deep reinforcement learning). Should include multiple Jupyter notebook assignments using the OpenAI Gym (see the sketch after this list).
2) Advanced computer vision. This should include GANs, DCGANs, etc., and go into far more depth on object detection and face recognition.
3) Advanced natural language processing using deep learning. This should cover all the topics in http://cs224d.stanford.edu/.
4) Bayesian deep learning. This should include the topics in the workshop given here: http://bayesiandeeplearning.org/.
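For item 1, the basic OpenAI Gym interaction loop that such notebook assignments would build on looks roughly like this (the classic pre-0.26 gym API is assumed; the environment name and random policy are placeholders for a learned agent):

```python
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()   # random policy stands in for an agent
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
env.close()
```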