Live Feed

AI Research Paper Feed

Discover cutting-edge research papers in AI and machine learning. Stay ahead with the latest breakthroughs, insights, and discoveries from top researchers worldwide.

22,178 Research Papers
Multi-Domain Coverage
Real-time Updates
arXiv · Feb 5, 2026

Fairness Under Group-Conditional Prior Probability Shift: Invariance, Drift, and Target-Aware Post-Processing

Amir Asiaee, Kaveh Aryan

TLDR: The paper addresses fairness in machine learning under group-conditional prior probability shift and introduces a method to maintain fairness when label prevalences change across demographic groups between training and deployment.
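
For intuition, label-prior shift is usually handled by reweighting the classifier's posterior with the ratio of deployment to training priors; the sketch below applies that standard (Saerens-style) correction separately per demographic group. The group priors are illustrative, and this is not necessarily the paper's exact post-processing rule.

```python
import numpy as np

def prior_shift_correct(p_pos, train_prior, target_prior):
    """Reweight a classifier's positive-class posterior p(y=1|x) for a new
    label prevalence (standard prior-shift correction via Bayes' rule)."""
    w_pos = target_prior / train_prior
    w_neg = (1.0 - target_prior) / (1.0 - train_prior)
    num = p_pos * w_pos
    return num / (num + (1.0 - p_pos) * w_neg)

# Group-conditional version: each group gets its own prior pair.
# These priors and scores are illustrative, not from the paper.
scores = np.array([0.2, 0.7, 0.9, 0.4])
groups = np.array([0, 0, 1, 1])
train_priors = {0: 0.30, 1: 0.50}
target_priors = {0: 0.45, 1: 0.40}
adjusted = np.array([
    prior_shift_correct(s, train_priors[g], target_priors[g])
    for s, g in zip(scores, groups)
])
print(adjusted)
```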

arXiv · Feb 5, 2026

Path Sampling for Rare Events Boosted by Machine Learning

Porhouy Minh, Sapna Sarupria

TLDR: AIMMD is a new algorithm that uses machine learning to improve the efficiency of transition path sampling for studying molecular processes.

arXiv · Feb 5, 2026

PACE: Defying the Scaling Hypothesis of Exploration in Iterative Alignment for Mathematical Reasoning

Jun Rao, Zixiong Yu et al.

TLDR: PACE introduces a more efficient method for mathematical reasoning in language models by using minimal exploration, outperforming traditional methods with less computational cost.

arXiv · Feb 5, 2026

Private Prediction via Shrinkage

Chao Yan

TLDR: The paper presents a method to achieve differentially private prediction with reduced dependence on the number of queries, improving efficiency in streaming settings.

arXiv · Feb 5, 2026

Logarithmic-time Schedules for Scaling Language Models with Momentum

Damien Ferbach, Courtney Paquette et al.

TLDR: ADANA, an optimizer with time-varying schedules for hyperparameters, improves large-scale language model training efficiency by up to 40% compared to AdamW.
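
ADANA's actual schedules are in the paper and are not reproduced here; purely as an illustration of what a "logarithmic-time" schedule can look like, the sketch below drives a momentum coefficient toward 1 on a logarithmic timescale. The functional form and the constant c are assumptions.

```python
import math

def log_time_momentum(t, c=0.5):
    """Hypothetical momentum schedule: beta_t -> 1 on a logarithmic
    timescale. Illustrative only; not ADANA's actual rule."""
    return 1.0 - c / math.log(math.e + t)

for t in (0, 10, 1_000, 100_000, 10_000_000):
    print(f"step {t:>10,}: beta = {log_time_momentum(t):.4f}")
```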

arXiv · Feb 5, 2026

A Short and Unified Convergence Analysis of the SAG, SAGA, and IAG Algorithms

Feng Zhu, Robert W. Heath et al.

TLDR: This paper presents a unified convergence analysis for the SAG, SAGA, and IAG algorithms, providing a simpler and more comprehensive understanding of their performance.
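
For context, all three methods build a variance-reduced gradient estimate from a table of stored component gradients and differ mainly in how that table is sampled and refreshed. A minimal SAGA loop (one of the three algorithms analyzed) on a toy least-squares problem:

```python
import numpy as np

def saga(grad_i, x0, n, lr=0.05, steps=2000, seed=0):
    """Minimal SAGA for minimizing (1/n) * sum_i f_i(x).
    grad_i(i, x) returns the gradient of the i-th component at x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    table = np.stack([grad_i(i, x) for i in range(n)])  # stored gradients
    avg = table.mean(axis=0)
    for _ in range(steps):
        i = rng.integers(n)
        g_new = grad_i(i, x)
        x -= lr * (g_new - table[i] + avg)   # unbiased, variance-reduced step
        avg += (g_new - table[i]) / n
        table[i] = g_new
    return x

# Toy data: f_i(x) = 0.5 * (a_i . x - b_i)^2 (illustrative).
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
print(saga(lambda i, x: (A[i] @ x - b[i]) * A[i], np.zeros(2), n=3))
```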

arXiv · Feb 5, 2026

Optimism Stabilizes Thompson Sampling for Adaptive Inference

Shunxing Yan, Han Zhong

TLDR: Optimism can stabilize Thompson sampling in multi-armed bandits, enabling valid asymptotic inference with minimal additional regret.
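
A minimal sketch on a Bernoulli bandit: plain Thompson sampling draws arm indices from Beta posteriors, and one simple way to inject optimism is to never act on a draw below the posterior mean. That clipping rule is an assumption for illustration, not necessarily the paper's mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.6])        # illustrative two-armed bandit
alpha, beta = np.ones(2), np.ones(2)     # Beta(1, 1) posteriors

for _ in range(2000):
    draws = rng.beta(alpha, beta)
    means = alpha / (alpha + beta)
    # Optimistic sampling (assumed form): clip draws at the posterior mean.
    arm = int(np.argmax(np.maximum(draws, means)))
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

print(alpha / (alpha + beta))            # posterior means after 2000 pulls
```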

arXiv · Feb 5, 2026

Faithful Bi-Directional Model Steering via Distribution Matching and Distributed Interchange Interventions

Yuntai Bao, Xuhong Zhang et al.

TLDR: The paper introduces Concept DAS (CDAS), a novel intervention-based model steering method that uses distribution matching to achieve more faithful and stable control compared to traditional preference-optimization methods.

arXiv · Feb 5, 2026

Grammatical Error Correction Evaluation by Optimally Transporting Edit Representation

Takumi Goto, Yusuke Sakai et al.

TLDR: The paper introduces UOT-ERRANT, a new metric for evaluating grammatical error correction systems by optimally transporting edit vectors, showing improved performance and interpretability.
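
As rough intuition for transport-based edit matching, the sketch below pairs hypothesis edits with reference edits by solving a balanced assignment over a cosine-distance cost matrix. The actual metric uses (unbalanced) optimal transport over ERRANT edit representations, and the embeddings here are synthetic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
hyp = rng.normal(size=(4, 16))   # hypothesis-edit embeddings (synthetic)
ref = rng.normal(size=(4, 16))   # reference-edit embeddings (synthetic)

unit = lambda m: m / np.linalg.norm(m, axis=1, keepdims=True)
cost = 1.0 - unit(hyp) @ unit(ref).T       # cosine distance between edits
rows, cols = linear_sum_assignment(cost)   # balanced special case of OT
print("score:", 1.0 - cost[rows, cols].mean())
```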

arXiv · Feb 5, 2026

SHaSaM: Submodular Hard Sample Mining for Fair Facial Attribute Recognition

Anay Majee, Rishabh Iyer

TLDR: SHaSaM is a novel approach that improves fairness in facial attribute recognition by using submodular hard sample mining to address data imbalance and reduce bias from sensitive attributes.
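
Submodular objectives are typically maximized greedily, which for monotone submodular functions carries a (1 - 1/e) approximation guarantee. Below is a facility-location sketch of such a selection step; SHaSaM's actual objective and features are not reproduced here.

```python
import numpy as np

def greedy_facility_location(sim, k):
    """Greedily maximize f(S) = sum_i max_{j in S} sim[i, j],
    a standard monotone submodular coverage objective."""
    selected, best = [], np.zeros(sim.shape[0])
    for _ in range(k):
        gains = np.maximum(sim, best[:, None]).sum(axis=0) - best.sum()
        gains[selected] = -np.inf            # don't re-pick a sample
        j = int(np.argmax(gains))
        selected.append(j)
        best = np.maximum(best, sim[:, j])
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))                 # synthetic sample embeddings
print(greedy_facility_location(X @ X.T, k=5))
```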

arXiv · Feb 5, 2026

LongR: Unleashing Long-Context Reasoning via Reinforcement Learning with Dense Utility Rewards

Bowen Ping, Zijun Chen et al.

TLDR: LongR is a framework that improves long-context reasoning in reinforcement learning by using a dynamic 'Think-and-Read' mechanism and dense utility rewards, achieving significant gains on benchmarks like LongBench v2.

arXiv · Feb 5, 2026

Clifford Kolmogorov-Arnold Networks

Matthias Wolff, Francesco Alesiani et al.

TLDR: The Clifford Kolmogorov-Arnold Network (ClKAN) is a new architecture for approximating functions in Clifford algebra spaces, using randomized quasi-Monte Carlo methods and novel batch normalization strategies for improved scalability and efficiency.

arXiv · Feb 5, 2026

Multi-Field Tool Retrieval

Yichen Tang, Weihang Su et al.

TLDR: The paper introduces a Multi-Field Tool Retrieval framework to improve how Large Language Models select external tools by addressing challenges in tool documentation and user query alignment.
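
A minimal sketch of the multi-field idea: score each tool by combining similarities against its documentation fields separately instead of against one flat embedding. The field names, weights, and embeddings below are illustrative, not the paper's.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def multi_field_score(query_vec, tool_fields, weights):
    """Weighted sum of per-field similarities (hypothetical scoring rule)."""
    return sum(w * cosine(query_vec, tool_fields[f]) for f, w in weights.items())

rng = np.random.default_rng(0)
q = rng.normal(size=32)
tool = {f: rng.normal(size=32) for f in ("name", "description", "examples")}
print(multi_field_score(q, tool, {"name": 0.5, "description": 0.3, "examples": 0.2}))
```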

arXiv · Feb 5, 2026

Radon–Wasserstein Gradient Flows for Interacting-Particle Sampling in High Dimensions

Elias Hess-Childs, Dejan Slepčev et al.

TLDR: The paper introduces new Radon–Wasserstein gradient flows for efficient high-dimensional sampling using interacting particles with linear scaling costs.
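
For intuition, projection-based Wasserstein flows keep per-step cost near-linear in the number of particles because one-dimensional optimal transport reduces to sorting. The generic sliced-Wasserstein step below illustrates the mechanism; the paper's Radon–Wasserstein flows are related to, but not the same as, this scheme.

```python
import numpy as np

def sliced_flow_step(particles, target, n_proj=64, step=1.0, rng=None):
    """One step of a generic sliced-Wasserstein particle flow: average the
    1D optimal-transport displacements over random projection directions.
    Each projection costs O(n log n) via sorting."""
    rng = rng or np.random.default_rng(0)
    n, d = particles.shape
    velocity = np.zeros_like(particles)
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        p = particles @ theta
        order = np.argsort(p)
        disp = np.sort(target @ theta) - p[order]  # 1D OT matches sorted points
        velocity[order] += disp[:, None] * theta / n_proj
    return particles + step * velocity

src = np.random.default_rng(1).normal(size=(200, 10))
tgt = np.random.default_rng(2).normal(loc=3.0, size=(200, 10))
for _ in range(50):
    src = sliced_flow_step(src, tgt)
print(src.mean(axis=0).round(2))   # drifts toward the target mean of 3
```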

arXiv · Feb 5, 2026

Hinge Regression Tree: A Newton Method for Oblique Regression Tree Splitting

Hongyi Li, Han Lin et al.

TLDR: The Hinge Regression Tree (HRT) is a new method for creating oblique decision trees using a Newton method that improves split quality and convergence speed, outperforming traditional tree models.

arXiv · Feb 5, 2026

Polyglots or Multitudes? Multilingual LLM Answers to Value-laden Multiple-Choice Questions

Léo Labat, Etienne Ollion et al.

TLDR: This study examines how multilingual large language models (LLMs) respond to value-laden multiple-choice questions across different languages, revealing variability in consistency and language-specific behaviors.

arXiv · Feb 5, 2026

Finite-Particle Rates for Regularized Stein Variational Gradient Descent

Ye He, Krishnakumar Balasubramanian et al.

TLDR: The paper provides finite-particle convergence rates for the regularized Stein variational gradient descent (R-SVGD) algorithm, offering non-asymptotic bounds and guidance on parameter tuning.
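
For reference, the plain SVGD update moves each particle along a kernelized direction with an attraction term toward high target density and a repulsion term between particles; the regularization the paper analyzes is not included in this sketch.

```python
import numpy as np

def svgd_step(x, grad_logp, step=0.1, h=1.0):
    """One plain SVGD update with an RBF kernel of bandwidth h:
    phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j)
                             + grad_{x_j} k(x_j, x_i) ]."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]        # diff[i, j] = x_i - x_j
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2))
    grads = np.stack([grad_logp(xi) for xi in x])
    attract = K @ grads                          # pull toward high density
    repel = (K[..., None] * diff).sum(axis=1) / h ** 2  # push particles apart
    return x + step * (attract + repel) / n

# Target: standard Gaussian, so grad log p(x) = -x (illustrative).
x = np.random.default_rng(0).normal(loc=5.0, size=(100, 2))
for _ in range(300):
    x = svgd_step(x, lambda v: -v)
print(x.mean(axis=0).round(2), x.std(axis=0).round(2))
```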

arXiv · Feb 5, 2026

Diamond Maps: Efficient Reward Alignment via Stochastic Flow Maps

Peter Holderrieth, Douglas Chen et al.

TLDR: Diamond Maps are a new model for generative tasks that efficiently align with user preferences by enabling quick adaptation to rewards during inference.

arXiv · Feb 5, 2026

Parity, Sensitivity, and Transformers

Alexander Kozachinskiy, Tomasz Steifer et al.

TLDR: This paper presents a single-layer transformer construction with practical features that solves the PARITY problem, and establishes a lower bound proving that no single-layer, single-head transformer can solve PARITY.
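
For context on the title's keywords: PARITY is the XOR of all input bits, and it has maximal sensitivity, meaning flipping any single bit flips the output. A quick check:

```python
def parity(bits):
    return sum(bits) % 2

# Maximal sensitivity: every single-bit flip changes PARITY's output.
x = [1, 0, 1, 1, 0, 0, 1]
flips = (x[:i] + [1 - x[i]] + x[i + 1:] for i in range(len(x)))
assert all(parity(y) != parity(x) for y in flips)
print("PARITY(x) =", parity(x))
```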

arXiv · Feb 5, 2026

An Asymptotic Law of the Iterated Logarithm for $\mathrm{KL}_{\inf}$

Ashwin Ram, Aaditya Ramdas

TLDR: This paper establishes a tight law of the iterated logarithm for empirical KL-infinity statistics, applicable to very general data conditions.
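
For readers outside the bandit literature: KL_inf measures how cheaply (in KL divergence) a distribution can be perturbed to push its mean above a threshold, and a law of the iterated logarithm pins down almost-sure fluctuations. Assuming the paper follows the standard definitions, the relevant objects are:

```latex
% KL_inf as standard in the bandit literature (assumed to match the paper):
\[
  \mathrm{KL}_{\inf}(P,\mu)
    \;=\; \inf\bigl\{\, \mathrm{KL}(P,Q) \;:\; \mathbb{E}_{Q}[X] \ge \mu \,\bigr\}.
\]
% For contrast, the classical Hartman--Wintner LIL for i.i.d. $X_i$ with
% mean $\mu$, variance $\sigma^2$, and $S_n = \sum_{i \le n} X_i$:
\[
  \limsup_{n\to\infty}\; \frac{S_n - n\mu}{\sqrt{2\sigma^2\, n \log\log n}} \;=\; 1
  \quad \text{a.s.}
\]
```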

Showing 1–20 of 22,178 papers (page 1 of 1,109)