
AI Research Paper Feed

Discover cutting-edge research papers in AI and machine learning. Stay ahead with the latest breakthroughs, insights, and discoveries from top researchers worldwide.

22,198 research papers · Multi-domain coverage · Real-time updates
ArXiv · Feb 19, 2026

A Privacy by Design Framework for Large Language Model-Based Applications for Children

Diana Addae, Diana Rogachova et al.

TLDR: This paper proposes a Privacy-by-Design framework for developing AI applications for children that integrates privacy regulations to ensure data protection and legal compliance.

ArXiv · Feb 19, 2026

Pareto Optimal Benchmarking of AI Models on ARM Cortex Processors for Sustainable Embedded Systems

Pranay Jain, Maximilian Kasper et al.

TLDR: The study presents a benchmarking framework for optimizing AI models on ARM Cortex processors, balancing energy efficiency and performance for sustainable embedded systems.

ArXiv · Feb 19, 2026

Beyond Pipelines: A Fundamental Study on the Rise of Generative-Retrieval Architectures in Web Research

Amirereza Abbasi, Mohsen Hooshmand

TLDR: This paper explores how large language models are transforming web research by integrating generative-retrieval architectures, impacting various applications like information retrieval and web analytics.

ArXiv · Feb 19, 2026

Learning with Boolean threshold functions

Veit Elser, Manish Krishan Lal

TLDR: The paper introduces a method for training neural networks on Boolean data using Boolean threshold functions, achieving sparse and interpretable models with exact or strong generalization on tasks where traditional methods struggle.

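For readers unfamiliar with the term, a Boolean threshold function maps a binary input vector to 0 or 1 by comparing a weighted sum against a threshold. The sketch below is a minimal, generic illustration in Python, not the authors' training method; the weights and threshold are invented for the example.

```python
import numpy as np

def boolean_threshold(x: np.ndarray, w: np.ndarray, theta: float) -> int:
    """Boolean threshold function: 1 iff the weighted sum of the binary
    inputs meets or exceeds the threshold theta."""
    return int(np.dot(w, x) >= theta)

# Example: a 3-input majority gate (unit weights, threshold 2).
w = np.array([1.0, 1.0, 1.0])
print(boolean_threshold(np.array([1, 1, 0]), w, theta=2.0))  # 1
print(boolean_threshold(np.array([1, 0, 0]), w, theta=2.0))  # 0
```
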
ArXiv · Feb 19, 2026

Improving LLM-based Recommendation with Self-Hard Negatives from Intermediate Layers

Bingqian Li, Bowen Zheng et al.

TLDR: ILRec improves LLM-based recommendation systems by using self-hard negatives from intermediate layers for better preference learning.

ArXiv · Feb 19, 2026

Toward a Fully Autonomous, AI-Native Particle Accelerator

Chris Tennant

TLDR: The paper envisions AI-native particle accelerators that operate autonomously with minimal human input, focusing on AI co-design from the outset to optimize performance and reliability.

ArXiv · Feb 19, 2026

Enhancing Large Language Models (LLMs) for Telecom using Dynamic Knowledge Graphs and Explainable Retrieval-Augmented Generation

Dun Yuan, Hao Zhou et al.

TLDR: The KG-RAG framework enhances large language models for telecom tasks by integrating knowledge graphs with retrieval-augmented generation, improving accuracy and reducing hallucinations.

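As background on the general pattern (a toy sketch only, not the paper's KG-RAG framework; the triples, scoring rule, and prompt template below are invented for illustration), knowledge-graph retrieval-augmented generation retrieves triples relevant to a query and serializes them into the prompt so the LLM answers from grounded facts rather than parametric memory.

```python
# Toy retrieval-and-grounding step for a KG-based RAG pipeline.
KG_TRIPLES = [
    ("gNB", "is_part_of", "5G RAN"),
    ("AMF", "handles", "access and mobility management"),
    ("UPF", "forwards", "user plane traffic"),
    ("SMF", "manages", "PDU sessions"),
]

def retrieve_triples(query: str, triples, top_k: int = 2):
    """Rank triples by naive keyword overlap with the query."""
    q_tokens = set(query.lower().split())

    def overlap(triple):
        return len(q_tokens & set(" ".join(triple).lower().split()))

    return sorted(triples, key=overlap, reverse=True)[:top_k]

def build_prompt(query: str, triples) -> str:
    """Serialize the retrieved triples into context for the generator LLM."""
    context = "\n".join(f"({s}, {p}, {o})" for s, p, o in triples)
    return f"Answer using only these facts:\n{context}\n\nQuestion: {query}"

query = "Which network function handles mobility management?"
print(build_prompt(query, retrieve_triples(query, KG_TRIPLES)))
# The resulting prompt would then be passed to the LLM of choice.
```
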
ArXiv · Feb 19, 2026

What Do LLMs Associate with Your Name? A Human-Centered Black-Box Audit of Personal Data

Dimitri Staufer, Kirsten Morehouse

TLDR: This study audits what personal data large language models (LLMs) associate with individuals' names, finding that the models can generate accurate personal information and raising privacy concerns.

ArXiv · Feb 19, 2026

WarpRec: Unifying Academic Rigor and Industrial Scale for Responsible, Reproducible, and Efficient Recommendation

Marco Avolio, Potito Aghilar et al.

TLDR: WarpRec is a new framework that unifies academic and industrial approaches to recommender systems, offering a scalable, sustainable, and efficient solution with over 50 algorithms and real-time energy tracking.

ArXiv · Feb 19, 2026

Tracing Copied Pixels and Regularizing Patch Affinity in Copy Detection

Yichen Lu, Siwei Nie et al.

TLDR: The paper introduces PixTrace and CopyNCE to improve image copy detection by enhancing pixel-level traceability and patch-level similarity learning, achieving state-of-the-art performance on the DISC21 dataset.

ArXiv · Feb 19, 2026

Fine-Grained Uncertainty Quantification for Long-Form Language Model Outputs: A Comparative Study

Dylan Bouchard, Mohit Singh Chauhan et al.

TLDR: This study introduces a taxonomy for fine-grained uncertainty quantification in long-form language model outputs, revealing that claim-level scoring and uncertainty-aware decoding improve factuality in generated content.

ArXiv · Feb 19, 2026

Position: Evaluation of ECG Representations Must Be Fixed

Zachary Berger, Daniel Prakah-Asante et al.

TLDR: Current ECG benchmarking practices are flawed and need to be expanded to include broader clinical evaluations for reliable progress in the field.

ArXiv · Feb 19, 2026

KLong: Training LLM Agent for Extremely Long-horizon Tasks

Yue Liu, Zhiyuan Hu et al.

TLDR: KLong is an LLM agent for extremely long-horizon tasks, trained with a method that combines trajectory-splitting supervised fine-tuning (SFT) and progressive reinforcement learning (RL), and it outperforms existing models on various benchmarks.

ArXiv · Feb 19, 2026

Convergence Analysis of Two-Layer Neural Networks under Gaussian Input Masking

Afroditi Kolomvaki, Fangshuo Liao et al.

TLDR: The paper analyzes the convergence of two-layer neural networks trained with Gaussian-masked inputs, finding linear convergence up to an error determined by the mask's variance.

ArXiv · Feb 19, 2026

MASPO: Unifying Gradient Utilization, Probability Mass, and Signal Reliability for Robust and Sample-Efficient LLM Reasoning

Xiaoliang Fu, Jiaye Lin et al.

TLDR: MASPO is a new framework that overcomes limitations in existing RLVR algorithms for large language models by optimizing gradient use, probability mass, and signal reliability, achieving better performance than current methods.

ArXiv · Feb 19, 2026

Evaluating Chain-of-Thought Reasoning through Reusability and Verifiability

Shashank Aggarwal, Ram Vikas Mishra et al.

TLDR: This paper introduces reusability and verifiability as new metrics to evaluate the quality of Chain-of-Thought reasoning in multi-agent IR pipelines, revealing that these metrics are not correlated with traditional accuracy measures.

ArXiv · Feb 19, 2026

The Anxiety of Influence: Bloom Filters in Transformer Attention Heads

Peter Balogh

TLDR: Certain transformer attention heads in language models act as membership testers, identifying repeated tokens with high precision, similar to Bloom filters.

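For context on the analogy (a minimal generic Bloom filter, not code from the paper; the sizes and hashing scheme are arbitrary choices for illustration), a Bloom filter hashes each item to a few positions in a bit array and answers membership queries with possible false positives but no false negatives, which mirrors the "have I seen this token before?" behavior attributed to these attention heads.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k salted SHA-256 hashes over an m-bit array."""

    def __init__(self, m_bits: int = 1024, k_hashes: int = 3):
        self.m, self.k = m_bits, k_hashes
        self.bits = [False] * m_bits

    def _positions(self, item: str):
        # Derive k bit positions from salted digests of the item.
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item: str) -> bool:
        # True means "possibly seen"; False means "definitely not seen".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
for token in ["the", "anxiety", "of", "influence"]:
    bf.add(token)
print(bf.might_contain("anxiety"))  # True
print(bf.might_contain("bloom"))    # False (with high probability)
```
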
ArXiv · Feb 19, 2026

LORA-CRAFT: Cross-layer Rank Adaptation via Frozen Tucker Decomposition of Pre-trained Attention Weights

Kasun Dewage, Marianna Pensky et al.

TLDR: CRAFT is a parameter-efficient fine-tuning method using Tucker decomposition on pre-trained attention weights, achieving competitive performance with minimal adaptation parameters.

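As background only, the sketch below shows the standard low-rank adaptation (LoRA) update to which CRAFT is related; CRAFT itself uses a Tucker decomposition of pre-trained attention weights, which is not shown here, and all shapes and the scaling factor are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 64, 4, 8       # illustrative sizes
W = rng.standard_normal((d_out, d_in))     # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, r x d_in
B = np.zeros((d_out, r))                   # trainable, zero-initialized

def adapted_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass with a LoRA-style update: only A and B are trained."""
    return (W + (alpha / r) * B @ A) @ x

print(adapted_forward(rng.standard_normal(d_in)).shape)  # (64,)
# Trainable parameters: r * (d_in + d_out) instead of d_in * d_out.
print(A.size + B.size, "vs", W.size)  # 512 vs 4096
```
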
ArXiv · Feb 19, 2026

Jolt Atlas: Verifiable Inference via Lookup Arguments in Zero Knowledge

Wyatt Benno, Alberto Centelles et al.

TLDR: Jolt Atlas introduces a zero-knowledge machine learning framework that efficiently verifies model inference using a lookup-centric approach, supporting privacy and security in various applications.

ArXiv · Feb 19, 2026

Systematic Evaluation of Single-Cell Foundation Model Interpretability Reveals Attention Captures Co-Expression Rather Than Unique Regulatory Signal

Ihor Kendiukhov

TLDR: A systematic evaluation of single-cell foundation models reveals that attention mechanisms capture co-expression patterns rather than unique regulatory signals, with gene-level baselines outperforming attention-based predictions.
