PaperPulse · AI-powered research discovery platform

© 2024 PaperPulse. All rights reserved.
Live Feed

AI Research Paper Feed

Discover cutting-edge research papers in AI and machine learning. Stay ahead with the latest breakthroughs, insights, and discoveries from top researchers worldwide.

22,578 Research Papers · Multi-Domain Coverage · Real-time Updates
ArXiv · Feb 19, 2026

How AI Coding Agents Communicate: A Study of Pull Request Description Characteristics and Human Review Responses

Kan Watanabe, Rikuto Tsuchida et al.

TLDR: This study examines how the characteristics of AI coding agents' pull request descriptions vary, and how these differences affect human reviewers' responses and merge outcomes on GitHub.

ArXiv · Feb 19, 2026

ALPS: A Diagnostic Challenge Set for Arabic Linguistic & Pragmatic Reasoning

Hussein S. Al-Olimat, Ahmad Alshareef

TLDR: ALPS is a diagnostic challenge set designed to test deep semantic and pragmatic understanding in Arabic, revealing current model limitations in morpho-syntactic dependencies despite high fluency scores.

ArXiv · Feb 19, 2026

Multi-Probe Zero Collision Hash (MPZCH): Mitigating Embedding Collisions and Enhancing Model Freshness in Large-Scale Recommenders

Ziliang Zhao, Bi Xue et al.

TLDR: The Multi-Probe Zero Collision Hash (MPZCH) effectively prevents embedding collisions in large-scale recommendation systems, improving model freshness and performance.

ArXiv · Feb 19, 2026

Fail-Closed Alignment for Large Language Models

Zachary Coalson, Beth Sohler et al.

TLDR: The paper introduces 'fail-closed alignment' for large language models to enhance safety by ensuring refusal mechanisms remain effective even if part of the system is compromised.

ArXiv · Feb 19, 2026

What Do LLMs Associate with Your Name? A Human-Centered Black-Box Audit of Personal Data

Dimitri Staufer, Kirsten Morehouse

TLDR: This study audits how large language models (LLMs) associate personal data with individuals, revealing the models' ability to accurately generate personal information and raising privacy concerns.

ArXiv · Feb 19, 2026

Improving LLM-based Recommendation with Self-Hard Negatives from Intermediate Layers

Bingqian Li, Bowen Zheng et al.

TLDR: ILRec improves LLM-based recommendation systems by using self-hard negatives from intermediate layers for better preference learning.

ArXiv · Feb 19, 2026

Universal Fine-Grained Symmetry Inference and Enforcement for Rigorous Crystal Structure Prediction

Shi Yin, Jinming Mu et al.

TLDR: This paper presents a novel approach to crystal structure prediction using large language models and constrained optimization to improve symmetry inference and enforce physical validity, achieving state-of-the-art results without relying on existing databases.

ArXiv · Feb 19, 2026

MeGU: Machine-Guided Unlearning with Target Feature Disentanglement

Haoyu Wang, Zhuo Huang et al.

TLDR: MeGU is a new framework for machine unlearning that uses multi-modal large language models to selectively erase target data influence while preserving model utility.

ArXiv · Feb 19, 2026

Arcee Trinity Large Technical Report

Varun Singh, Lucas Krauss et al.

TLDR: The Arcee Trinity Large is a 400B parameter sparse model using a novel MoE approach, with successful training on 17 trillion tokens and new load balancing strategies.

ArXiv · Feb 19, 2026

HQFS: Hybrid Quantum Classical Financial Security with VQC Forecasting, QUBO Annealing, and Audit-Ready Post-Quantum Signing

Srikumar Nayak

TLDR: HQFS is a hybrid quantum-classical system that improves financial forecasting and optimization by integrating quantum computing techniques, resulting in better prediction accuracy and decision-making efficiency.

ArXiv · Feb 19, 2026

A Unified Framework for Locality in Scalable MARL

Sourav Chakraborty, Amit Kiran Rege et al.

TLDR: The paper presents a unified framework for addressing locality in scalable multi-agent reinforcement learning (MARL) by introducing a policy-dependent approach to the exponential decay property (EDP) of value functions.

ArXiv · Feb 19, 2026

The Bots of Persuasion: Examining How Conversational Agents' Linguistic Expressions of Personality Affect User Perceptions and Decisions

Uğur Genç, Heng Gu et al.

TLDR: Conversational agents' linguistic expressions of personality affect user perceptions and emotions, but not directly their decisions, in charitable giving contexts.

ArXiv · Feb 19, 2026

Asymptotically Optimal Sequential Testing with Markovian Data

Alhad Sethi, Kavali Sofia Sagar et al.

TLDR: The paper establishes an optimal sequential hypothesis testing framework for data from ergodic Markov chains with improved lower bounds on expected stopping times.

ArXiv · Feb 19, 2026

Learning with Boolean threshold functions

Veit Elser, Manish Krishan Lal

TLDR: The paper introduces a method for training neural networks on Boolean data using Boolean threshold functions, achieving sparse and interpretable models with exact or strong generalization on tasks where traditional methods struggle.

ArXiv · Feb 19, 2026

genriesz: A Python Package for Automatic Debiased Machine Learning with Generalized Riesz Regression

Masahiro Kato

TLDR: genriesz is a Python package that automates debiased machine learning for estimating causal and structural parameters using generalized Riesz regression.

ArXiv · Feb 19, 2026

IntentCUA: Learning Intent-level Representations for Skill Abstraction and Multi-Agent Planning in Computer-Use Agents

Seoyoung Lee, Seobin Yoon et al.

TLDR: IntentCUA is a framework that improves computer-use agents' task success and efficiency by using intent-level representations and shared plan memory for skill abstraction and multi-agent planning.

ArXiv · Feb 19, 2026

Learning a Latent Pulse Shape Interface for Photoinjector Laser Systems

Alexander Klemps, Denis Ilia et al.

TLDR: The study introduces a generative model using Wasserstein Autoencoders to efficiently explore laser pulse shapes in photoinjectors, reducing reliance on costly simulations.

ArXiv · Feb 19, 2026

A Theoretical Framework for Modular Learning of Robust Generative Models

Corinna Cortes, Mehryar Mohri et al.

TLDR: The paper proposes a theoretical framework for modularly training generative models using domain-specific experts and a robust gating mechanism, showing this approach can outperform traditional monolithic models.

ArXiv · Feb 19, 2026

Quantum Scrambling Born Machine

Marcin Płodzień

TLDR: The Quantum Scrambling Born Machine uses fixed entangling unitaries and optimized single-qubit rotations to effectively model probability distributions, demonstrating competitive performance with classical models.

ArXiv · Feb 19, 2026

Continual uncertainty learning

Heisei Yonezawa, Ansei Yonezawa et al.

TLDR: This study introduces a curriculum-based continual learning framework to improve robust control of mechanical systems with multiple uncertainties, enhancing learning efficiency and sim-to-real transfer in applications like automotive powertrains.
