On the Fragility of Active Learners
March 26, 2024, 4:41 a.m. | Abhishek Ghose, Emma Nguyen
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Active learning (AL) techniques aim to make the most of a labeling budget by iteratively selecting the instances most likely to improve prediction accuracy. However, their benefit over random sampling has not been consistent across setups, e.g., across different datasets and classifiers. In this empirical study, we examine how a combination of different factors might obscure any gains from an AL technique.
Focusing on text classification, we rigorously evaluate AL techniques over around 1000 experiments that …
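The pool-based loop the abstract describes (train, query the most informative unlabeled instance, label it, repeat) can be sketched as follows. This is a minimal illustration with uncertainty sampling on 1-D toy data, not code from the paper; all names and the toy threshold classifier are hypothetical.

```python
import random

# Hypothetical sketch of pool-based active learning with uncertainty
# sampling. Toy task: points x in [0, 1], label = 1 iff x > true_t.
# The "classifier" fits a threshold as the midpoint between the
# largest labeled negative and the smallest labeled positive.

def fit_threshold(labeled):
    neg = [x for x, y in labeled if y == 0]
    pos = [x for x, y in labeled if y == 1]
    return (max(neg) + min(pos)) / 2.0

def uncertainty_query(pool, threshold):
    # Query the unlabeled point closest to the decision boundary,
    # i.e., the one the current model is least certain about.
    return min(pool, key=lambda x: abs(x - threshold))

def active_learn(pool, oracle, seed_labeled, budget):
    labeled = list(seed_labeled)
    pool = list(pool)
    for _ in range(budget):
        t = fit_threshold(labeled)
        x = uncertainty_query(pool, t)
        pool.remove(x)
        labeled.append((x, oracle(x)))  # spend one label of the budget
    return fit_threshold(labeled)

true_t = 0.37
oracle = lambda x: int(x > true_t)
random.seed(0)
pool = [random.random() for _ in range(200)]
seed = [(0.0, 0), (1.0, 1)]  # one labeled example per class
est = active_learn(pool, oracle, seed, budget=10)
print(abs(est - true_t))  # uncertainty sampling homes in on the boundary
```

Replacing `uncertainty_query` with `random.choice(pool)` gives the random-sampling baseline the paper compares against; the study's point is that the gap between the two is often smaller than this clean toy setting suggests.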