Presented by Jay Bhasin and Max Gagnon. Graduate students and postdocs are welcome and encouraged to attend.
A common principle when learning new material or acquiring a new skill is to start with something easy and progressively challenge yourself with harder material. While this makes intuitive sense, in a recent paper Wilson et al. show mathematically that for classifier models trained by gradient descent, learning proceeds at an optimal rate when the task difficulty is set so that the classifier is approximately 85% accurate. In this week's CNJC we will go through the approach used in the paper and discuss this result.
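For the curious, the headline number can be reproduced in a few lines. Under the paper's Gaussian-noise model, the optimal training error rate works out to Φ(−1) ≈ 15.87%, i.e. an accuracy of about 84.13% (the "eighty-five percent rule"). This sketch is our own illustration of that value, not code from the paper:

```python
import math

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Under a Gaussian-noise model of the decision variable, the optimal
# training error rate comes out to Phi(-1): the difficulty at which
# the signal equals one standard deviation of the noise.
# (Our reading of the result; see the paper for the full derivation.)
optimal_error_rate = normal_cdf(-1.0)
optimal_accuracy = 1.0 - optimal_error_rate

print(f"optimal error rate: {optimal_error_rate:.4f}")  # about 0.1587
print(f"optimal accuracy:   {optimal_accuracy:.4f}")    # about 0.8413
```

Hence the roughly 85% figure: training at an accuracy much above or below this value slows the rate of learning in the model.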