What courses do you want to see the deeplearning.ai team build next?  


GeoffreyA
(@geoffreya)
Active Member
Joined: 1 year ago
Posts: 10
01/10/2018 10:42 am  

Writing fully working code, using any modern toolkit, that actually performs Andrew's Basic Recipe for Deep Learning. It sounds easy, but coding this in TF is horribly time-consuming and nearly impossible. Note: the homework assignments do not do it; I completed the courses, and they used only train and test sets.

 

The fuller DL Recipe requires at least three dataset partitions: TRAIN, DEV, and TEST (maybe more, but three is the minimum).

1. Low bias: first, train and search for the hyperparameters (e.g., learning rate) that give low bias on the training set.
2. Low variance: next, holding the low-bias hyperparameters fixed, train and search for the best regularization hyperparameters (e.g., L2 regularization and dropout), evaluating performance and cost on the DEV set, not the TRAIN set, while still learning the model (the optimizer is still running). Remember that the cost is affected by the regularization, so the optimizer needs to keep updating the weights during low-variance development.
3. Final evaluation: run on the TEST set with the full set of hyperparameters found during the low-bias and low-variance stages, with the optimizer NOT running.

Always use random grid search in the low-bias and low-variance stages, varying the relevant hyperparameters in each. Save the best (lowest-cost) models found during the random searches, then load the final winning model and evaluate it on the TEST set for an unbiased error estimate at the very end.
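To make this concrete, here is a minimal sketch of the two-stage search I have in mind. It uses TF 2.x Keras with made-up hyperparameter ranges and synthetic in-memory data purely for illustration (real code would stream batches from disk, as sketched further down):

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data, for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")
X_tr, y_tr = X[:2000], y[:2000]              # TRAIN
X_dev, y_dev = X[2000:2500], y[2000:2500]    # DEV
X_te, y_te = X[2500:], y[2500:]              # TEST

def build(lr, l2=0.0, drop=0.0):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(
            64, activation="relu",
            kernel_regularizer=tf.keras.regularizers.l2(l2)),
        tf.keras.layers.Dropout(drop),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Stage 1 (low bias): random-search the learning rate, judged on TRAIN loss.
best_lr, best_tr_loss = None, np.inf
for _ in range(5):
    lr = 10 ** rng.uniform(-4, -2)
    m = build(lr)
    m.fit(X_tr, y_tr, epochs=5, batch_size=64, verbose=0)
    tr_loss = m.evaluate(X_tr, y_tr, verbose=0)[0]
    if tr_loss < best_tr_loss:
        best_lr, best_tr_loss = lr, tr_loss

# Stage 2 (low variance): hold lr fixed, search the regularization
# hyperparameters, judged on DEV loss while the optimizer still runs.
best_dev_loss = np.inf
for _ in range(5):
    l2, drop = 10 ** rng.uniform(-5, -2), rng.uniform(0.0, 0.5)
    m = build(best_lr, l2, drop)
    m.fit(X_tr, y_tr, epochs=5, batch_size=64, verbose=0)
    dev_loss = m.evaluate(X_dev, y_dev, verbose=0)[0]
    if dev_loss < best_dev_loss:
        best_dev_loss = dev_loss
        m.save("best_model.h5")              # keep the current winner on disk

# Only once, at the very end: unbiased error estimate on TEST, no training.
winner = tf.keras.models.load_model("best_model.h5")
print(winner.evaluate(X_te, y_te, verbose=0))
```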

 

It's much harder to do all of the above than to write the common tutorial code that cuts corners and skips steps. I'm talking about a course that does the whole thing I described above in a notebook. The TF code to load and save models is particularly hard to debug. And please do not pre-load entire datasets into RAM; nobody ever uses small data in the real world for deep learning. Don't use MNIST, my goodness. MNIST fits in RAM; real image datasets don't fit in RAM all at once. Instead, please use a batch-at-a-time tf.data.Dataset to read from the data files efficiently. In modern TF code, feed_dict is only for varying hyperparameters, not for reading input files. Let us study realistic, real-world code design in TF or PyTorch or CNTK or Julia, not toy code any more. Thanks so much!
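To be explicit about what I mean by batch-at-a-time input, something along these lines; the file pattern and feature spec are hypothetical, and this assumes sharded TFRecord files of JPEG-encoded images and TF 2.x:

```python
import tensorflow as tf

# Hypothetical on-disk dataset: sharded TFRecord files of encoded images.
FILE_PATTERN = "data/train-*.tfrecord"       # placeholder path
FEATURES = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse(record):
    ex = tf.io.parse_single_example(record, FEATURES)
    img = tf.io.decode_jpeg(ex["image"], channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)
    img = tf.image.resize(img, [224, 224])
    return img, ex["label"]

# Only a buffer's worth of examples ever lives in RAM; files are read
# shard by shard, and batches are assembled on the fly.
ds = (tf.data.Dataset.list_files(FILE_PATTERN)
      .interleave(tf.data.TFRecordDataset, cycle_length=4)
      .map(parse, num_parallel_calls=tf.data.experimental.AUTOTUNE)
      .shuffle(buffer_size=1000)
      .batch(32)
      .prefetch(1))

for images, labels in ds.take(1):            # eager iteration, no feed_dict
    print(images.shape, labels.shape)
```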

Please also use exclusively Eager mode if TF is the toolkit, as the TF team indicated in 2018 that all TF users should move away from graph code. TF is not necessary; just pick one modern toolkit and show us how, if you can. CNTK works well enough, but Microsoft seemingly stopped development in 2017 for unknown reasons; besides, CNTK product management inexplicably refuses to support CSV input files effectively, which is horrendous for data science, since most open datasets are CSV or image files.
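(For reference, a tiny illustration of eager execution, assuming TF 2.x where it is the default:)

```python
import tensorflow as tf

# Eager mode: ops execute immediately and return concrete values,
# with no graph construction or Session.run().
print(tf.executing_eagerly())    # True
x = tf.constant([1.0, 2.0, 3.0])
print(tf.reduce_sum(x))          # tf.Tensor(6.0, shape=(), dtype=float32)
```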


nithanaroy
(@nithanaroy)
New Member
Joined: 1 year ago
Posts: 3
03/10/2018 8:40 pm  
  1. Reinforcement Learning
  2. GAN
  3. Deep Learning for structured data (e.g. Graph Convolutional Networks)

nithanaroy
(@nithanaroy)
New Member
Joined: 1 year ago
Posts: 3
04/10/2018 8:37 am  
  1. Deep Learning for non-classification problems (e.g., autoencoders)
  2. The class of problems solved by TensorFlow Probability (deep learning with less data)

ramin
(@ramin)
New Member
Joined: 1 year ago
Posts: 1
09/10/2018 3:12 pm  

Any skills that we would need as ML engineers: teach us more about algorithms and their connection to ML, how to optimize our code and GPU usage, and how to do multiprocessing and multithreading for ML.


Richmond.umagat
(@richmond-umagat)
New Member
Joined: 1 year ago
Posts: 1
09/10/2018 5:35 pm  

I’m about to finish the specialization at Coursera. I’ve checked Udacity’s offerings, and their curriculum includes GANs and deep RL. They also offer a Self-Driving Car nanodegree.

It would be great if deeplearning.ai could offer similar courses.


crayoneater
(@crayoneater)
New Member
Joined: 1 year ago
Posts: 2
09/10/2018 5:48 pm  

I finished the specialization and I must say, Andrew Ng is an outstanding teacher. You can tell he is very passionate about the subject, and wants others to get it. That's great. I also liked learning so much of the theory. I know fast.ai takes the opposite approach (practice then theory) but I believe both approaches can complement each other.

That being said, what was frustrating for me was that some of the later programming assignments did very little to drive the material home. Many of them felt more like a test of how well I can use Google. I can't tell you how many times I went through an entire week of material, only to end up pulling my hair out because I understood the theory but couldn't figure out the exact syntax of a line of code (and the official documentation is often quite dismal). As an experiment, I completed the programming assignments for one of the weeks without watching a single minute of the videos (though of course I did go back and finish the videos afterward). It was exactly the same level of challenge, and I felt it was the wrong kind of challenge. So I'd like to see fewer of these puzzles in future assignments, and more opportunities to explore and experiment with the material, what it means, and how the hyperparameters change things.

I've just begun the fast.ai courses, and based on what I'm seeing, I'd like to see deeplearning.ai move into PyTorch, which seems to be getting more and more important these days. Lastly, while real-world applications are great, I also appreciate fun assignments like Dino Island and Happy House. Some of my favorite Twitter accounts are bots that use deep learning to generate nonsense sentences, news headlines, pictures of imaginary creatures, and so forth. So please keep those up, and keep up the great work! Thanks.


Orange liked
peacebilal
(@peacebilal)
New Member
Joined: 1 year ago
Posts: 1
09/10/2018 6:18 pm  

Hi! Thank you very much for offering the previous courses on ML and DL (specialization). They helped me in my research in the field of wireless communications. Next, it would be great to see more on the following topics:

- Algorithms used in AlphaGo (Zero), such as reinforcement learning and Monte Carlo tree search.

Also, I would suggest introducing a full-fledged specialization on AI and machine learning itself, covering much more than the initial ML course.

Regards, 

Bilal Hussain

Xi'an Jiaotong University, P. R. China. 

 


Orange
(@sgdread)
New Member
Joined: 1 year ago
Posts: 1
10/10/2018 12:45 am  
  • Model stacking & ensembling
  • Feature gain measurement and pruning
  • Statistical feature engineering: sparse matrices, etc.
  • Organization of production systems: ML framework for large-scale learning

dimitreOliveira
(@dimitreoliveira)
New Member
Joined: 1 year ago
Posts: 1
10/10/2018 5:23 am  

I would like to see something on reinforcement learning; maybe combining that with deep learning would be good too. Or maybe something about the whole process of machine learning, from collecting, preparing, and feeding the data to evaluating and serving the model.


varsha.n.bhat
(@varsha-n-bhat)
New Member
Joined: 1 year ago
Posts: 2
10/10/2018 3:04 pm  

I am interested in learning more about ensemble methods such as bagging, boosting, and decision trees. Several problems on Kaggle are solved using these methods. Other courses on machine learning touch on these concepts, but I would really like to understand them the way we are taught other fundamental concepts in the Deep Learning course.

 


fretchen1
(@fretchen1)
New Member
Joined: 1 year ago
Posts: 1
16/10/2018 4:51 am  

Certainly reinforcement learning. This could be very neat for control, and it is hard to find a way through the literature.


big1 liked
itrat.rahman
(@itrat-rahman)
New Member
Joined: 1 year ago
Posts: 1
17/10/2018 8:05 pm  

The next specialization should cover the following topics:

1) Reinforcement Learning (including deep reinforcement learning). Should include multiple Jupyter notebook assignments using the OpenAI Gym.

2) Advanced Computer Vision. This should include GANs, DCGANs, etc., and also go into far more depth on object detection and face recognition.

3) Advanced natural language processing using deep learning. This should have all the topics in http://cs224d.stanford.edu/.

4) Bayesian deep learning. This should include the topics in the workshop given here: http://bayesiandeeplearning.org/.


big1 liked
daniel.kohlsdorf
(@daniel-kohlsdorf)
New Member
Joined: 1 year ago
Posts: 3
24/10/2018 5:59 am  

I would second reinforcement learning, and maybe a more detailed course on unsupervised feature learning.

Daniel 

 

 


big1 liked
amarjeet__
(@amarjeet__)
New Member
Joined: 1 year ago
Posts: 1
29/10/2018 2:07 am  

Reinforcement learning for sure!


big1 liked
georgesh
(@georgesh)
New Member
Joined: 1 year ago
Posts: 1
31/10/2018 8:44 am  

Reinforcement learning :))


big1 liked