Towards Stability of Parameter-free Optimization
May 8, 2024, 4:42 a.m. | Yijiang Pang, Shuyang Yu, Bao Hoang, Jiayu Zhou
cs.LG updates on arXiv.org
Abstract: Hyperparameter tuning, particularly the selection of an appropriate learning rate in adaptive gradient training methods, remains a challenge. To tackle it, we propose a novel parameter-free optimizer, AdamG (Adam with the golden step size), designed to adapt automatically to diverse optimization problems without manual tuning. The core technique underlying AdamG is our golden step size, derived for the AdaGrad-Norm algorithm, which is expected to help AdaGrad-Norm preserve the tuning-free convergence and …
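For context, AdaGrad-Norm (the baseline the golden step size is derived for) keeps a single scalar accumulator of squared gradient norms, so the effective learning rate eta / b_t shrinks automatically as training progresses. Below is a minimal sketch of that baseline in NumPy; the function names and the toy quadratic objective are illustrative choices, not from the paper, and the paper's golden step size itself is not reproduced here.

```python
import numpy as np

def adagrad_norm(grad_fn, x0, eta=1.0, b0=1e-8, steps=500):
    """AdaGrad-Norm sketch: a single global accumulator b_t of squared
    gradient norms scales the step, so no learning-rate schedule is tuned."""
    x = np.asarray(x0, dtype=float)
    b_sq = b0 ** 2
    for _ in range(steps):
        g = grad_fn(x)
        b_sq += np.dot(g, g)                 # b_t^2 = b_{t-1}^2 + ||g_t||^2
        x = x - (eta / np.sqrt(b_sq)) * g    # x_{t+1} = x_t - (eta / b_t) g_t
    return x

# Toy example: minimize f(x) = ||x||^2, whose gradient is 2x.
x_star = adagrad_norm(lambda x: 2.0 * x, np.array([5.0, -3.0]))
```

On this convex quadratic the iterate drifts toward the origin without any manually scheduled learning rate, which is the tuning-free behavior the abstract refers to.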