
Should we make the optimizer algorithm active when evaluating hyperparameters for low-variance error against the dev set?


GeoffreyA
(@geoffreya)
Active Member
Joined: 2 months ago
Posts: 10
03/10/2018 9:49 am  

Should we make the optimizer algorithm active when evaluating hyperparameters for low-variance error against the dev set? Say we have a model that requires a Lambda value to adjust the relative influence of the L2 regularization term in the cost computation. Further, say we are doing a hyperparameter search in which candidate values of Lambda are evaluated against a dev set.

Consider the following:

1. "Training set — which you run your learning algorithm on. Dev (development) set — which you use to tune parameters, select features, and make other decisions regarding the learning algorithm. Sometimes also called the hold-out cross-validation set." (Source: Andrew's book Machine Learning Yearning.)

2. The cost computation is affected by the Lambda value, which is the coefficient on the L2 regularization term of the cost (see the formula after this list).

3. The model's learned weights are modified in response to the cost only when the optimizer is active.
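
For reference, point 2 refers to the standard L2-regularized cost; the notation below is my own, not a quote from the course:

J(w, b) = \frac{1}{m} \sum_{i=1}^{m} \mathcal{L}\big(\hat{y}^{(i)}, y^{(i)}\big) + \frac{\lambda}{2m} \lVert w \rVert_2^2

The first term is the average data loss over the m examples; the second is the L2 penalty, whose influence Lambda controls.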

 

I expect that the answer is yes: the optimizer needs to run if we hope to correctly recompute the cost, which in turn drives updates to the learned parameters, while searching for the Lambda that gives the lowest-variance error against the dev set.
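
To make the scenario concrete, here is a rough NumPy sketch of the search loop I have in mind. The toy data and function names (l2_cost, train) are made up for illustration, not taken from any library or from the course; the comment on the dev-set line marks the exact step my question is about.

    import numpy as np

    # Toy data: 5 features, linear ground truth plus noise.
    rng = np.random.default_rng(0)
    true_w = np.ones(5)
    X_train = rng.normal(size=(80, 5))
    y_train = X_train @ true_w + 0.1 * rng.normal(size=80)
    X_dev = rng.normal(size=(20, 5))
    y_dev = X_dev @ true_w + 0.1 * rng.normal(size=20)

    def l2_cost(w, X, y, lam):
        """Squared-error cost plus (lam / (2m)) * ||w||^2 (the L2 term from point 2)."""
        m = X.shape[0]
        data_loss = np.mean((X @ w - y) ** 2) / 2
        return data_loss + (lam / (2 * m)) * np.sum(w ** 2)

    def train(X, y, lam, lr=0.05, steps=2000):
        """Optimizer ACTIVE: gradient descent updates w against the regularized cost."""
        m, n = X.shape
        w = np.zeros(n)
        for _ in range(steps):
            grad = (X.T @ (X @ w - y)) / m + (lam / m) * w  # data gradient + L2 gradient
            w -= lr * grad  # the only place the learned weights change
        return w

    # Hyperparameter search over Lambda.
    best_lam, best_dev_cost = None, np.inf
    for lam in [0.0, 0.01, 0.1, 1.0]:
        w = train(X_train, y_train, lam)          # optimizer active, training set only
        dev_cost = l2_cost(w, X_dev, y_dev, lam)  # <-- my question: should this step also
                                                  #     run the optimizer, or just score w?
        if dev_cost < best_dev_cost:
            best_lam, best_dev_cost = lam, dev_cost

    print("best Lambda:", best_lam, "dev cost:", best_dev_cost)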

What do you think? And what do you think Andrew would say if he could answer here in the forum?

Thanks for reading.

 

