
AI Research Paper Feed

Discover cutting-edge research papers in AI and machine learning. Stay ahead with the latest breakthroughs, insights, and discoveries from top researchers worldwide.

22,578 Research Papers · Multi-Domain Coverage · Real-time Updates
arXiv · Feb 19, 2026

Learning with Boolean threshold functions

Veit Elser, Manish Krishan Lal

TLDR: The paper introduces a method for training neural networks on Boolean data using Boolean threshold functions, achieving sparse and interpretable models with exact or strong generalization on tasks where traditional methods struggle.

arXiv · Feb 19, 2026

Deeper detection limits in astronomical imaging using self-supervised spatiotemporal denoising

Yuduo Guo, Hao Zhang et al.

TLDR: ASTERIS, a self-supervised denoising algorithm, enhances astronomical imaging detection limits by leveraging spatiotemporal data, improving detection by 1 magnitude and identifying previously undetectable features in deep space images.

arXiv · Feb 19, 2026

Enhancing Large Language Models (LLMs) for Telecom using Dynamic Knowledge Graphs and Explainable Retrieval-Augmented Generation

Dun Yuan, Hao Zhou et al.

TLDR: The KG-RAG framework enhances large language models for telecom tasks by integrating knowledge graphs with retrieval-augmented generation, improving accuracy and reducing hallucinations.
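As context, the sketch below shows the generic knowledge-graph-grounded RAG pattern that frameworks like KG-RAG build on: link entities in the question to graph triples, then ground the prompt in the retrieved facts. The graph contents, entity names, and helper functions are illustrative assumptions, not the paper's implementation.

    # All graph contents, entity names, and helpers below are hypothetical.
    knowledge_graph = {
        ("5G", "standardized_by", "3GPP"),
        ("5G", "uses_band", "mmWave"),
        ("mmWave", "frequency_range", "24-100 GHz"),
    }

    def retrieve_triples(question, graph):
        # Naive entity linking: keep triples whose subject appears verbatim.
        return [t for t in graph if t[0].lower() in question.lower()]

    def build_grounded_prompt(question, graph):
        facts = "\n".join(f"- {s} {p} {o}"
                          for s, p, o in retrieve_triples(question, graph))
        return f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"

    print(build_grounded_prompt("Which body standardizes 5G?", knowledge_graph))

Constraining the model to answer from retrieved triples is what reduces hallucination: facts come from the graph, and the model only verbalizes them.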

arXiv · Feb 19, 2026

KLong: Training LLM Agent for Extremely Long-horizon Tasks

Yue Liu, Zhiyuan Hu et al.

TLDR: KLong is a new LLM agent designed to tackle long-horizon tasks using a novel training method combining trajectory-splitting SFT and progressive RL, outperforming existing models on various benchmarks.

arXiv · Feb 19, 2026

MASPO: Unifying Gradient Utilization, Probability Mass, and Signal Reliability for Robust and Sample-Efficient LLM Reasoning

Xiaoliang Fu, Jiaye Lin et al.

TLDR: MASPO is a new framework that overcomes limitations in existing RLVR algorithms for large language models by optimizing gradient use, probability mass, and signal reliability, achieving better performance than current methods.

arXiv · Feb 19, 2026

From Labor to Collaboration: A Methodological Experiment Using AI Agents to Augment Research Perspectives in Taiwan's Humanities and Social Sciences

Yi-Chih Huang

TLDR: This study proposes a collaborative AI workflow for humanities and social sciences research, using Claude.ai data from Taiwan to validate its feasibility and effectiveness.

arXiv · Feb 19, 2026

A Privacy by Design Framework for Large Language Model-Based Applications for Children

Diana Addae, Diana Rogachova et al.

TLDR: This paper proposes a Privacy-by-Design framework for developing AI applications for children that integrates privacy regulations to ensure data protection and legal compliance.

arXiv · Feb 19, 2026

Systematic Evaluation of Single-Cell Foundation Model Interpretability Reveals Attention Captures Co-Expression Rather Than Unique Regulatory Signal

Ihor Kendiukhov

TLDR: A systematic evaluation of single-cell foundation models reveals that attention mechanisms capture co-expression patterns rather than unique regulatory signals, with gene-level baselines outperforming attention-based predictions.

arXiv · Feb 19, 2026

What Do LLMs Associate with Your Name? A Human-Centered Black-Box Audit of Personal Data

Dimitri Staufer, Kirsten Morehouse

TLDR: This study audits how large language models (LLMs) associate personal data with individuals, revealing the models' ability to accurately generate personal information and raising privacy concerns.

arXiv · Feb 19, 2026

Tracing Copied Pixels and Regularizing Patch Affinity in Copy Detection

Yichen Lu, Siwei Nie et al.

TLDR: The paper introduces PixTrace and CopyNCE to improve image copy detection by enhancing pixel-level traceability and patch-level similarity learning, achieving state-of-the-art performance on the DISC21 dataset.

arXiv · Feb 19, 2026

Beyond Pipelines: A Fundamental Study on the Rise of Generative-Retrieval Architectures in Web Research

Amirereza Abbasi, Mohsen Hooshmand

TLDR: This paper explores how large language models are transforming web research by integrating generative-retrieval architectures, impacting various applications like information retrieval and web analytics.

arXiv · Feb 19, 2026

Improving LLM-based Recommendation with Self-Hard Negatives from Intermediate Layers

Bingqian Li, Bowen Zheng et al.

TLDR: ILRec improves LLM-based recommendation systems by using self-hard negatives from intermediate layers for better preference learning.
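For background on how hard negatives enter preference learning, here is a minimal InfoNCE-style contrastive loss in NumPy. The embeddings are placeholders and ILRec's intermediate-layer negative mining is not reproduced; this is only the generic loss such methods plug into.

    import numpy as np

    def info_nce(anchor, positive, negatives, temperature=0.1):
        # The anchor should score higher with its positive than with any
        # (hard) negative; harder negatives give a more informative gradient.
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        logits = np.array([cos(anchor, positive)] +
                          [cos(anchor, n) for n in negatives]) / temperature
        logits -= logits.max()                        # numerical stability
        probs = np.exp(logits) / np.exp(logits).sum()
        return -np.log(probs[0])                      # cross-entropy on slot 0

    rng = np.random.default_rng(0)
    user = rng.normal(size=64)
    liked = user + 0.1 * rng.normal(size=64)          # embedding near the user
    hard_negs = [rng.normal(size=64) for _ in range(4)]
    print(float(info_nce(user, liked, hard_negs)))    # small loss: positive wins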

arXiv · Feb 19, 2026

WarpRec: Unifying Academic Rigor and Industrial Scale for Responsible, Reproducible, and Efficient Recommendation

Marco Avolio, Potito Aghilar et al.

TLDR: WarpRec is a new framework that unifies academic and industrial approaches to recommender systems, offering a scalable, sustainable, and efficient solution with over 50 algorithms and real-time energy tracking.

arXiv · Feb 19, 2026

Instructor-Aligned Knowledge Graphs for Personalized Learning

Abdulrahman AlRabah, Priyanka Kargupta et al.

TLDR: InstructKG is a framework that automatically constructs knowledge graphs from course materials to capture learning dependencies and aid personalized learning.

arXiv · Feb 19, 2026

Evaluating Chain-of-Thought Reasoning through Reusability and Verifiability

Shashank Aggarwal, Ram Vikas Mishra et al.

TLDR: This paper introduces reusability and verifiability as new metrics to evaluate the quality of Chain-of-Thought reasoning in multi-agent IR pipelines, revealing that these metrics are not correlated with traditional accuracy measures.

arXiv · Feb 19, 2026

The Anxiety of Influence: Bloom Filters in Transformer Attention Heads

Peter Balogh

TLDR: Certain transformer attention heads in language models act as membership testers, identifying repeated tokens with high precision, similar to Bloom filters.
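For the analogy to land, recall what a Bloom filter does: it answers "have I seen this item?" with no false negatives and a tunable false-positive rate. The sketch below is a textbook Bloom filter, not code from the paper.

    import hashlib

    class BloomFilter:
        # Probabilistic membership test: never misses an added item,
        # but may rarely report an item it has not seen (false positive).
        def __init__(self, num_bits=1024, num_hashes=3):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = [False] * num_bits

        def _positions(self, item):
            # Derive k bit positions from independently salted hashes.
            for seed in range(self.num_hashes):
                digest = hashlib.sha256(f"{seed}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.num_bits

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = True

        def might_contain(self, item):
            # False means "definitely never added"; True means "probably added".
            return all(self.bits[pos] for pos in self._positions(item))

    bf = BloomFilter()
    for token in ["the", "cat", "sat"]:
        bf.add(token)
    print(bf.might_contain("cat"))   # True: a repeated token is always flagged
    print(bf.might_contain("dog"))   # almost always False at this load factor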

arXiv · Feb 19, 2026

LORA-CRAFT: Cross-layer Rank Adaptation via Frozen Tucker Decomposition of Pre-trained Attention Weights

Kasun Dewage, Marianna Pensky et al.

TLDR: CRAFT is a parameter-efficient fine-tuning method using Tucker decomposition on pre-trained attention weights, achieving competitive performance with minimal adaptation parameters.
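As background on the general mechanism, the sketch below shows the standard low-rank adaptation idea such methods build on: freeze the pre-trained weight and train only a small factorized update. Dimensions and initialization are illustrative assumptions; CRAFT's cross-layer Tucker factorization is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    d, r = 512, 8                       # hidden size and adapter rank (assumed)

    W = rng.normal(size=(d, d))         # frozen pre-trained attention weight
    A = 0.01 * rng.normal(size=(d, r))  # trainable down-projection
    B = np.zeros((r, d))                # trainable up-projection, zero-initialized
                                        # so training starts from the frozen model

    def adapted_forward(x):
        # Effective weight is W + A @ B, but W is never updated; only the
        # 2*d*r adapter parameters would receive gradients during fine-tuning.
        return x @ W + (x @ A) @ B

    x = rng.normal(size=(4, d))
    print(adapted_forward(x).shape)                      # (4, 512)
    print(A.size + B.size, "adapter params vs", W.size)  # 8192 vs 262144

The parameter savings come from the factorization: the full weight has d*d entries, while the adapter has only 2*d*r, a 32x reduction at this rank.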

arXiv · Feb 19, 2026

Fine-Grained Uncertainty Quantification for Long-Form Language Model Outputs: A Comparative Study

Dylan Bouchard, Mohit Singh Chauhan et al.

TLDR: This study introduces a taxonomy for fine-grained uncertainty quantification in long-form language model outputs, revealing that claim-level scoring and uncertainty-aware decoding improve factuality in generated content.

arXiv · Feb 19, 2026

Pareto Optimal Benchmarking of AI Models on ARM Cortex Processors for Sustainable Embedded Systems

Pranay Jain, Maximilian Kasper et al.

TLDR: The study presents a benchmarking framework for optimizing AI models on ARM Cortex processors, balancing energy efficiency and performance for sustainable embedded systems.

arXiv · Feb 19, 2026

The Bots of Persuasion: Examining How Conversational Agents' Linguistic Expressions of Personality Affect User Perceptions and Decisions

Uğur Genç, Heng Gu et al.

TLDR: In charitable-giving contexts, conversational agents' linguistic expressions of personality affect user perceptions and emotions, but not directly their decisions.

Showing 1-20 of 22,578 papers