
Intro to optimization in deep learning: Gradient Descent #98

Open

ngun7 opened this issue Sep 24, 2020 · 1 comment

Comments


ngun7 commented Sep 24, 2020

TL;DR

This article gives a basic but detailed introduction to optimization strategies for neural networks, illustrated with beautiful visualizations.

Article Link

https://blog.paperspace.com/intro-to-optimization-in-deep-learning-gradient-descent/

Author

Ayoosh Kathuria

Key Takeaways

  • Loss functions
  • High-level overview of local & global minima
  • Gradient descent & its challenges
  • Different variants of gradient descent (see the sketch below)
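
To make the core idea concrete, here is a minimal sketch of plain (batch) gradient descent on a toy quadratic loss. The loss function, learning rate, and step count are illustrative assumptions, not code from the article.

```python
# Minimal sketch of vanilla gradient descent on a toy quadratic loss.
# L(w) = (w - 3)^2, the learning rate, and the step count are
# illustrative assumptions, not taken from the article.

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)  # analytic derivative of the loss

w = 0.0    # initial parameter guess
lr = 0.1   # learning rate (step size)
for step in range(50):
    w -= lr * grad(w)  # move opposite the gradient

print(f"w = {w:.4f}, loss = {loss(w):.6f}")  # w approaches the minimum at 3
```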
khuyentran1401 (Owner) commented

I like the visualizations! From the pictures, it is easier to see the disadvantages of gradient descent, such as the risk of getting stuck in local minima, and how Stochastic Gradient Descent can mitigate the problem by introducing randomness.
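
As a concrete illustration of that point, here is a minimal sketch (not from the article or this thread) of Stochastic Gradient Descent on a hypothetical one-dimensional regression problem: each update uses the gradient of a single randomly chosen example, and that per-example noise is the randomness mentioned above.

```python
import random

# Minimal SGD sketch on a hypothetical 1-D linear regression:
# minimize the mean of (w * x - y)^2 over the data points below.
# Each step uses the gradient of ONE randomly chosen example,
# which is the source of the randomness discussed above.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # made-up (x, y) pairs

w, lr = 0.0, 0.01
for epoch in range(200):
    random.shuffle(data)            # visit examples in a random order
    for x, y in data:
        g = 2.0 * (w * x - y) * x   # gradient of (w*x - y)^2 for this example
        w -= lr * g                 # noisy single-example update

print(f"fitted slope w = {w:.3f}")  # hovers near the least-squares slope (about 1.99)
```

With a constant learning rate the iterate never settles exactly; it jitters around the optimum, and on non-convex losses that same jitter is what can shake the parameters out of shallow local minima.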
