The Eighty Five Percent Rule for optimal learning

Event Details:

Wednesday, December 11, 2019
6:00pm to 6:00pm PST
Event Sponsor
Stanford Center for Mind, Brain, Computation and Technology
Presented by Jay Bhasin and Max Gagnon. Graduate students and postdocs are welcome and encouraged to attend.

A common principle when learning new knowledge or acquiring a new skill is to start with something easy and progressively challenge yourself with harder material. While this makes intuitive sense, in a recent paper Wilson et al. show mathematically that, for classifier models learning by gradient descent, learning proceeds at an optimal rate when the task difficulty is such that the classifier is approximately 85% accurate. In this week's CNJC we will go through the approach used in the paper and discuss this result.
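As a small numerical companion to the paper's headline number: under the Gaussian-noise analysis in Wilson et al., the optimal training error rate works out to Φ(-1), where Φ is the standard normal CDF, which gives an accuracy of roughly 84%, rounded to the "85% rule." The sketch below (an illustrative computation, not code from the paper) evaluates that quantity using only the Python standard library.

```python
import math

def normal_cdf(x):
    # Standard normal CDF expressed via the error function,
    # so no SciPy dependency is needed.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Per Wilson et al.'s Gaussian-noise model, the learning gradient
# is maximized at an error rate of Phi(-1); the corresponding
# optimal training accuracy is 1 - Phi(-1).
optimal_error = normal_cdf(-1.0)        # about 0.1587
optimal_accuracy = 1.0 - optimal_error  # about 0.8413

print(f"optimal error rate ~ {optimal_error:.4f}")
print(f"optimal accuracy   ~ {optimal_accuracy:.4f}")
```

Running this prints an optimal accuracy of about 0.8413, which is the precise value behind the paper's "approximately 85%" framing.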