Regularization

Welcome to the second assignment of this week. Deep learning models have so much flexibility and capacity that overfitting can be a serious problem if the training dataset is not big enough. Such a model may do well on the training set, but the learned network doesn't generalize to new examples it has never seen!

You will learn to: Use regularization in your deep learning models.

Let's first import the packages you are going to use.
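As a rough idea of what that import cell looks like, here is a minimal sketch assuming NumPy, Matplotlib, and scikit-learn; the notebook's own helper modules (test cases, plotting utilities, and dataset loaders) are assignment-specific and omitted here.

```python
# Minimal sketch of a typical import cell for this assignment
# (assumed packages only; the notebook's helper modules are not shown).
import numpy as np                 # array math for forward/backward propagation
import matplotlib.pyplot as plt    # plotting datasets and decision boundaries
import sklearn.datasets            # toy datasets for experimenting with overfitting
```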

Updates to Assignment

If you were working on a previous version

  • The current notebook filename is version "2a".
  • You can find your work in the file directory as version "2".
  • To see the file directory, click on the Coursera logo at the top left of the notebook.

List of Updates

  • Clarified explanation of 'keep_prob' in the text description.
  • Fixed a comment so that keep_prob and 1-keep_prob add up to 100%.
  • Updated print statements and 'expected output' for easier visual comparisons.