Community

What courses do you want to see the deeplearning.ai team build next?  


Andrea Lim
(@andrea)
Active Member Admin
Joined: 3 months ago
Posts: 16
18/09/2018 7:00 pm  

What future courses do you want the deeplearning.ai team to build? What do you want to see more of from deeplearning.ai?


Andrea


Jan Zawadzki
(@janmzawa)
Active Member
Joined: 3 months ago
Posts: 6
19/09/2018 2:15 am  

Reinforcement Learning, please! I believe it offers tremendous benefits and very cool applications, which makes it an ideal candidate for the Deep Learning Specialization. Unsupervised learning with neural networks is also really interesting, since labeling data is expensive. But I agree with Andrew when he says that the biggest industry impact will come from supervised learning.


Njinchen
(@njinchen)
New Member
Joined: 3 months ago
Posts: 2
19/09/2018 10:52 am  

^^ To add to that: maybe help build structure around the advancements in deep RL. It's difficult to navigate all the different types of RL alone, and understanding the strengths of particular algorithms (for example, a policy gradient method vs. a Q-learning variant) would really help us understand the possible benefits.

 

However, I think the problem with RL is that it's still very empirical, and the field moves so fast that a course might also be outdated after a couple of years. But I do think there is real value in understanding how to work with the environments OpenAI provides, and how to establish baselines for your performance.
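
For example, even just scoring a random policy gives you a baseline that any learned agent has to beat (a minimal sketch using the classic Gym API; the environment is only an example):

    import gym

    # A random-policy baseline: any learned agent should at least beat this.
    env = gym.make("CartPole-v1")          # example environment
    returns = []
    for _ in range(100):
        obs = env.reset()
        done, total = False, 0.0
        while not done:
            obs, reward, done, info = env.step(env.action_space.sample())
            total += reward
        returns.append(total)
    print("random baseline return:", sum(returns) / len(returns))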


boukary
(@boukary)
New Member
Joined: 3 months ago
Posts: 2
19/09/2018 1:15 pm  

Me too, Reinforcement Learning plzzz 😊  😊  😊 


atul
(@atul)
New Member
Joined: 3 months ago
Posts: 3
19/09/2018 3:38 pm  

I would love to see a few of these courses:

1. Reinforcement Learning (echoing a lot of other people here!). This could be structured as two courses: (a) basic RL and (b) advanced topics in deep RL. Ideally, both would have several projects (3-5) that showcase practical usage and applications beyond just simulated games. The course could start with OpenAI's Gym environments and proceed to more advanced robotics projects and capabilities.

2. Advanced Computer Vision with Deep Learning. This would go deeper into the latest research and topics in computer vision, building on the topics taught in the CNN course. A few areas of interest:

  • Video scene analysis / Scene Understanding
  • Pose estimation, Action analysis
  • Semantic segmentation, etc.
  • Harmful content detection and classification

This could include the latest research from FAIR, Google Brain, Uber AI Labs, etc.

3. Advanced NLP with Deep Learning. This course would go deeper than the RNN course, covering advancements in:

  • Neural Machine Translation
  • Speech Recognition (near-field, far-field, etc.)
  • New language models (i.e., beyond English, Spanish, French, Chinese, etc.)

 

4. Edge AI. This would specifically cover running AI/ML on low-powered devices such as smartphones (iOS/Android), Raspberry Pi, and Arduino, and perhaps newer devices/platforms like AWS DeepLens, NVIDIA's Jetson TX2, etc. The course should cover the specific challenges of deploying ML models on low-powered/low-compute devices, with tutorials/projects on how to create and deploy models using the latest frameworks such as TensorFlow Lite, PyTorch, and Caffe2. This could be combined with cloud platforms such as AWS, GCP, and Azure. It could cover models such as MobileNet, XNOR-Net, ShuffleNet, etc., including new research and applications on iOS/Android.
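
To give a flavor of the deployment step: converting a trained Keras model for on-device inference looks roughly like this (a minimal sketch using the TF 2.x TFLite converter; the tiny model here is a stand-in for, say, a trained MobileNet):

    import tensorflow as tf

    # Stand-in model; in practice this would be a trained network.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax", input_shape=(224,)),
    ])

    # Convert to a TensorFlow Lite flatbuffer for low-powered devices.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    with open("model.tflite", "wb") as f:
        f.write(tflite_model)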

Hope this is some food for thought! 


Tue Ngo
(@tue-ngo)
New Member
Joined: 3 months ago
Posts: 1
28/09/2018 4:40 pm  

Hi,

I am interested in learning more about TensorFlow and Keras.

I've really enjoyed the ML series taught by Prof. Ng.

Thank you.


curt.dodds
(@curt-dodds)
New Member
Joined: 3 months ago
Posts: 1
28/09/2018 5:13 pm  

A course on hyperparameter and architecture search would be interesting enough to take. In particular: implementing a genetic algorithm (GA) for both hyperparameter and architecture search, and using multiple GPUs to distribute the search. Advanced HPC concepts for using GPU clusters to distribute training, using a parameter server, and performing distributed search are also interesting (but understandably expensive). However, a toy problem that runs quickly would still give hands-on experience and build confidence for learners who want to use these resources.
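
For what it's worth, the core of a (mutation-only) GA for hyperparameter search fits in a toy sketch like this (train_and_score is a hypothetical function that trains a model and returns validation accuracy):

    import random

    def mutate(hp):
        # Randomly perturb one hyperparameter to create a child.
        child = dict(hp)
        key = random.choice(list(child))
        child[key] *= random.uniform(0.5, 2.0)
        return child

    def ga_search(train_and_score, pop_size=10, generations=5):
        # Initial population: random learning rates and L2 strengths.
        pop = [{"lr": 10 ** random.uniform(-5, -1),
                "l2": 10 ** random.uniform(-6, -2)} for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the fittest half as parents...
            parents = sorted(pop, key=train_and_score, reverse=True)[:pop_size // 2]
            # ...and refill the population with mutated children.
            pop = parents + [mutate(random.choice(parents))
                             for _ in range(pop_size - len(parents))]
        return max(pop, key=train_and_score)

Each call to train_and_score is independent, which is exactly why this kind of search distributes so naturally across multiple GPUs.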


airwoz
(@airwoz)
New Member
Joined: 3 months ago
Posts: 1
28/09/2018 6:02 pm  

A sixth course, on generative adversarial networks (GANs), would be interesting. There have been numerous advances in this direction, and Andrew Ng does an impeccable job of condensing a subject and suggesting new routes for exploration. I also believe a course on reinforcement learning would garner intense interest.

Another possibility is a course on deep learning with complex analysis, i.e., models with complex weights, a complex ReLU, etc. This course could build on the material from the fourth and fifth courses.
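
For reference, one definition used in the literature (e.g., the "Deep Complex Networks" paper) applies ReLU to the real and imaginary parts separately; a minimal numpy sketch:

    import numpy as np

    def complex_relu(z):
        # CReLU: ReLU applied independently to real and imaginary parts.
        return np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)

    print(complex_relu(np.array([1 - 2j, -3 + 4j])))  # [1.+0.j  0.+4.j]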


Arun
(@arun)
New Member
Joined: 3 months ago
Posts: 1
28/09/2018 7:23 pm  

Hi, I would love to see courses on Applied Artificial Intelligence and Advanced Computer Vision with Deep Learning.


mirko
(@mirko)
New Member
Joined: 3 months ago
Posts: 1
28/09/2018 8:59 pm  

Me too: Reinforcement Learning, and Meta-Learning as well. Thanks!


adalmia
(@adalmia)
New Member
Joined: 3 months ago
Posts: 3
29/09/2018 1:02 am  

RL is definitely a good choice, but it would also be exciting to have a few courses on the theoretical understanding of deep learning, explainability, fairness, etc.


rdebbe
(@rdebbe)
New Member
Joined: 2 months ago
Posts: 1
29/09/2018 6:40 am  

The courses I have taken so far have allowed me to tackle several projects, but in every case I used existing models or networks. I wonder whether, by now, there is a systematic process that the authors of these models follow to achieve their goals.

I looked for courses or books that summarize model construction, without much success. I imagine such training is acquired in a full CS graduate program, but it would be great to have a course on model design and construction. It would also be a great opportunity to consolidate the knowledge acquired so far.


GeoffreyA
(@geoffreya)
Active Member
Joined: 2 months ago
Posts: 10
01/10/2018 10:18 am  

Please show us your best deep learning super-powers in three areas:

1. Recommender systems. Not the plain old ones, though; better ones. There is plenty of room for improvement if my current YouTube feed is any indication. Why doesn't it remove the videos I never click to watch? And why doesn't it adjust for time of day? What I want in the morning is completely different from what I want after work. I think of it this way, and generalize it: is your preferred breakfast food and drink the same as your preferred dinner food? Of course not!

2. Product and staffing demand estimation. Examples: predicting the number of staff needed at customer-facing businesses (retail, doctors' offices, etc.) and predicting inventory. In 2017-2018 Amazon Research produced the only deep learning paper I have found that takes a good, modern deep learning approach to retail demand prediction. Sadly, I cannot yet understand the paper well enough to implement the Amazon model. Consider questions like: how many cashiers does a grocery store need to staff for the next 7 days, given our knowledge of local, regional, and national customs, sporting events like the Super Bowl or the soccer/football World Cup, school year vs. summer vacation, national holidays, summer vs. winter, and so on? I think LSTMs are part of a solution, and it will require very large models with multiple, potentially correlated time series (see the sketch after this list).

If I recall, a non-deep-learning model using XGBoost (boosted trees) was one of the winners of a 2017 Kaggle competition on a grocery store dataset. Surely we can do better! Finding open datasets seems particularly difficult, since modern retail operations don't share much of their customer-traffic (automatic doors) and cash-register activity. That one Kaggle grocery competition is about the only dataset I've been able to get. Sure, there is a competitive nature to business. Maybe we can get similar, if not identical, datasets from nonprofit retailers like Goodwill or city soup kitchens?

3. Water pollution and air pollution datasets and predictions.
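
To make item 2 concrete, here is the shape of model I have in mind (a toy Keras sketch; the 28-day history window, 12 features, and 7-day horizon are made-up numbers):

    import numpy as np
    import tensorflow as tf

    # Toy data: 28 days of history with 12 features per day (sales, weather,
    # holiday flags, ...), predicting staffing needs for the next 7 days.
    X = np.random.rand(1000, 28, 12).astype("float32")
    y = np.random.rand(1000, 7).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(28, 12)),
        tf.keras.layers.Dense(7),   # one output per forecast day
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=2, batch_size=32)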

What do you think?


GeoffreyA
(@geoffreya)
Active Member
Joined: 2 months ago
Posts: 10
01/10/2018 10:35 am  

Writing fully working code, using any modern toolkit, that actually performs Andrew's Basic Recipe for Deep Learning. It sounds easy, but coding this in TF is horribly time-consuming and nearly impossible. Note: the homeworks do not do it; I completed the courses, and they only used train and test sets.

 

The fuller DL Recipe requires 3+ dataset partitions: train, dev, and test (maybe more, but 3 is the minimum).

1. First, train and find the hyperparameters (e.g., learning rate) that give low bias on the TRAIN set.

2. Next, holding the low-bias hyperparameters fixed, train and find the different hyperparameters that give low variance (e.g., L2 regularization and dropout), evaluating performance and cost on the DEV set, not the TRAIN set, while still learning the model (the optimizer is still running). Remember that the cost is affected by the regularization, so the optimizer needs to run and change the weights during low-variance development.

3. Finally, run evaluations on the TEST set with the full set of hyperparameters found during low-bias and low-variance training, with the optimizer NOT running now.

Always use random grid search in low-bias and low-variance development, varying the relevant HPs accordingly. Save the best models found (lowest cost) during the random searches. Load the final winning model and evaluate on the test set for an unbiased error estimate at the very end.
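
In rough Python, the skeleton I am asking for looks something like this (train_error, dev_error, and test_error are hypothetical helpers that would each build, train, save, and score a model):

    import random

    def random_search(score_fn, space, trials=20):
        # Random grid search: sample hyperparameter combinations, keep the best.
        best_hp, best = None, float("inf")
        for _ in range(trials):
            hp = {k: random.choice(v) for k, v in space.items()}
            s = score_fn(hp)
            if s < best:
                best_hp, best = hp, s
        return best_hp

    # Step 1: fight bias -- tune e.g. the learning rate against TRAIN error.
    # (train_error is a hypothetical helper.)
    bias_hp = random_search(lambda hp: train_error(hp),
                            {"lr": [1e-4, 1e-3, 1e-2]})

    # Step 2: fight variance -- hold bias_hp fixed, tune regularization
    # against DEV error (the optimizer still runs and updates weights here).
    var_hp = random_search(lambda hp: dev_error({**bias_hp, **hp}),
                           {"l2": [1e-5, 1e-4, 1e-3], "dropout": [0.2, 0.5]})

    # Step 3: load the best saved model and report TEST error exactly once,
    # with the optimizer NOT running, for an unbiased estimate.
    final = test_error({**bias_hp, **var_hp})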

 

It's much harder to do the above than the common tutorial code that cuts corners and skips steps. I'm talking about a course that does the whole thing described above in a notebook. The TF code to load and save models is particularly hard to debug. And please do not try to pre-load entire datasets into RAM; nobody ever uses small data in the real world for deep learning. Don't use MNIST, my goodness. MNIST fits in RAM; real images don't all fit in RAM at once. Instead, please use a batch-at-a-time tf.data.Dataset to read from the data files efficiently. In modern TF code, feed_dict is only for varying hyperparameters, not for reading input files, please. Let us study realistic, real-world code design in TF or PyTorch or CNTK or Julia, not toy code any more. Thanks so much! Please also use Eager mode exclusively if TF is the toolkit, as the TF team has indicated all users should move away from graph code. TF is not necessary; just pick one modern toolkit and show us how, if you can.
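
Concretely, the input pipeline I mean is something like this (the TFRecord file pattern is just an example):

    import tensorflow as tf

    # Stream training examples a batch at a time -- nothing is pre-loaded
    # into RAM, and no feed_dict is used for input data.
    files = tf.data.Dataset.list_files("train-*.tfrecord")
    dataset = (tf.data.TFRecordDataset(files)
               .shuffle(10000)
               .batch(64)
               .prefetch(tf.data.experimental.AUTOTUNE))

    for batch in dataset:   # Eager mode: iterate over batches directly
        pass                # parse each serialized batch and run a train step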
