Figure caption: the blue loss curves show early memorization of the training set (overfitting), while the red curves show the later generalization phase, in which a modular-addition algorithm that works on unseen inputs is learned.
In machine learning (ML), grokking, or delayed generalization, is a phenomenon observed in some settings where a model abruptly transitions from overfitting (performing well only on training data) to generalizing (performing well on both training and test data), after many training iterations with little or no improvement on the held-out data.[2] This contrasts with what is typically observed in machine learning, where generalization occurs gradually alongside improved performance on training data.[3][4]
The term was introduced in January 2022 by OpenAI researchers studying generalization on small algorithmic datasets. It derives from the word grok, coined by Robert Heinlein in his novel Stranger in a Strange Land.[1] In ML research, "grokking" is not a synonym for "generalization"; rather, it names a sometimes-observed delayed-generalization training phenomenon in which training and held-out performance do not improve in tandem, with held-out performance rising abruptly only much later. Authors also analyze the "grokking time", the epoch or training step at which this transition occurs.[5]
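For illustration, a minimal sketch of how a grokking time could be read off recorded accuracy curves is given below. The helper name grokking_time, the 95% threshold, and the toy accuracy histories are assumptions made for this example, not values from the cited papers.

```python
# Hypothetical helper: estimate the "grokking time" as the first training step
# at which held-out (validation) accuracy crosses a chosen threshold, long
# after training accuracy has already saturated. The threshold and the toy
# accuracy histories below are illustrative assumptions only.

def grokking_time(accuracy_history, threshold=0.95):
    """Return the first step index where accuracy >= threshold,
    or None if the threshold is never reached in the recorded run."""
    for step, acc in enumerate(accuracy_history):
        if acc >= threshold:
            return step
    return None

# Toy curves: training accuracy saturates early (memorization), while
# validation accuracy stays near chance and only jumps much later.
train_acc = [0.5, 0.9, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
val_acc   = [0.1, 0.1, 0.1, 0.1, 0.1, 0.2, 0.9, 1.0]

print(grokking_time(train_acc))  # 2 -> the training set is fit early
print(grokking_time(val_acc))    # 7 -> delayed generalization ("grokking")
```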
Interpretations
Grokking can be understood as a phase transition during the training process.[6] In particular, recent work has suggested that grokking may be due to a complexity phase transition in the model during training.[7] While grokking was initially thought of as largely a phenomenon of relatively shallow models, it has since been observed in deep neural networks and in non-neural models, and it remains the subject of active research.[8][9][10][11]
One potential explanation is that weight decay (a component of the loss function that penalizes large parameter values, a form of regularization) slightly favors the general solution, which involves smaller weights but is also harder to find. According to Neel Nanda, the general solution may be learned gradually, even though the transition to generalization appears suddenly much later in training.[1]
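A minimal sketch of the kind of setup in which this effect is typically studied is shown below: a small network trained on modular addition with weight decay applied through PyTorch's AdamW optimizer. The architecture, modulus, train/test split, and hyperparameter values here are illustrative assumptions, not the configuration of the cited work.

```python
# Illustrative sketch (assumed hyperparameters, not from the cited papers):
# train a small MLP on modular addition (a + b) mod p with weight decay.
# The weight-decay term penalizes large parameters, which is conjectured to
# slowly favor the low-norm, generalizing solution over pure memorization.
import torch
import torch.nn as nn

p = 97                                                  # modulus of the toy task
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
labels = (pairs[:, 0] + pairs[:, 1]) % p
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(pairs) // 2], perm[len(pairs) // 2 :]

model = nn.Sequential(
    nn.Embedding(p, 64),                                # shared embedding for both operands
    nn.Flatten(start_dim=1),
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, p),
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(10_000):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            preds = model(pairs[test_idx]).argmax(-1)
            test_acc = (preds == labels[test_idx]).float().mean().item()
        print(f"step {step}: train loss {loss.item():.3f}, test acc {test_acc:.3f}")
```

In runs of this kind, training loss typically falls quickly while test accuracy stays near chance for a long stretch before rising; whether and when that happens depends strongly on the weight-decay strength and the train/test split.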
Recent theories[12][13] have hypothesized that grokking occurs when neural networks transition from a "lazy training"[14] regime, in which the weights do not deviate far from their initialization, to a "rich" regime, in which the weights abruptly begin to move in task-relevant directions. Follow-up empirical and theoretical work[15] has accumulated evidence supporting this perspective, which also offers a unifying view of earlier work, since the transition from lazy to rich training dynamics is known to arise from properties of adaptive optimizers,[16] weight decay,[17] initial parameter weight norm,[10] and more.
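One simple diagnostic used in this line of work is how far the parameters move from their initialization: in the lazy regime this distance stays small, while a switch to rich dynamics shows up as the weights drifting away from their initial values. The sketch below is an assumed illustration of such a measurement, not code from the cited papers; the model, optimizer, and training step it refers to are placeholders.

```python
# Sketch of a lazy-vs-rich diagnostic: record the relative distance of the
# parameters from their initial values during training. A value that stays
# near zero is characteristic of "lazy" (near-linearized) training; a later
# rise suggests the weights have begun moving in task-relevant ("rich")
# directions. `model` is any torch.nn.Module being trained.
import torch

def param_drift(model, init_params):
    """Relative L2 distance ||theta_t - theta_0|| / ||theta_0||."""
    sq_dist, sq_norm = 0.0, 0.0
    for param, param0 in zip(model.parameters(), init_params):
        sq_dist += (param.detach() - param0).pow(2).sum().item()
        sq_norm += param0.pow(2).sum().item()
    return (sq_dist / sq_norm) ** 0.5

# Usage, given a `model` and a `train_step()` that performs one optimization
# step (both assumed to exist):
# init_params = [p.detach().clone() for p in model.parameters()]
# drift_history = []
# for step in range(num_steps):
#     train_step()
#     drift_history.append(param_drift(model, init_params))
```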
References

Power, Alethea; Burda, Yuri; Edwards, Harri; Babuschkin, Igor; Misra, Vedant (2022-01-06). "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets". arXiv:2201.02177 [cs.LG]. "long after severely overfitting, validation accuracy sometimes suddenly begins to increase from chance level toward perfect generalization. We call this phenomenon 'grokking'"

Minegishi, Gouki; Iwasawa, Yusuke; Matsuo, Yutaka (2024-05-09). "Bridging Lottery Ticket and Grokking: Is Weight Norm Sufficient to Explain Delayed Generalization?". arXiv:2310.19470 [cs.LG].

Power, Alethea; Burda, Yuri; Edwards, Harri; Babuschkin, Igor; Misra, Vedant (2022-01-06). "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets". arXiv:2201.02177 [cs.LG]. "This is suggestive that grokking may only happen after the network's parameters are in flatter regions of the loss landscape"

Liu, Ziming; Kitouni, Ouail; Nolte, Niklas; Michaud, Eric J.; Tegmark, Max; Williams, Mike (2022). "Towards Understanding Grokking: An Effective Theory of Representation Learning". In Koyejo, Sanmi; Mohamed, S.; Agarwal, A.; Belgrave, Danielle; Cho, K.; Oh, A. (eds.). Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 – December 9, 2022. arXiv:2205.10343.

Kumar, Tanishq; Bordelon, Blake; Gershman, Samuel J.; Pehlevan, Cengiz (2023). "Grokking as the Transition from Lazy to Rich Training Dynamics". arXiv:2310.06110 [stat.ML].

Lyu, Kaifeng; Jin, Jikai; Li, Zhiyuan; Du, Simon S.; Lee, Jason D.; Hu, Wei (2023). "Dichotomy of Early and Late Phase Implicit Biases Can Provably Induce Grokking". arXiv:2311.18817 [cs.LG].

Chizat, Lenaic; Oyallon, Edouard; Bach, Francis (2018). "On Lazy Training in Differentiable Programming". arXiv:1812.07956 [math.OC].

Mohamadi, Mohamad Amin; Li, Zhiyuan; Wu, Lei; Sutherland, Danica J. (2024). "Why do You Grok? A Theoretical Analysis of Grokking Modular Addition". arXiv:2407.12332 [cs.LG].

Thilak, Vimal; Littwin, Etai; Zhai, Shuangfei; Saremi, Omid; Paiss, Roni; Susskind, Joshua (2022). "The Slingshot Mechanism: An Empirical Study of Adaptive Optimizers and the Grokking Phenomenon". arXiv:2206.04817 [cs.LG].