
AI Research Paper Feed

Discover cutting-edge research papers in AI and machine learning. Stay ahead with the latest breakthroughs, insights, and discoveries from top researchers worldwide.

21,055 Research Papers
Multi-Domain Coverage
Real-time Updates
arXiv · Jan 16, 2026

Temporal Complexity and Self-Organization in an Exponential Dense Associative Memory Model

Marco Cafiso, Paolo Paradisi

TLDR: The study explores the self-organizing dynamics and temporal complexity of a stochastic exponential dense associative memory model, showing how noise intensity and memory load influence these behaviors.

arXiv · Jan 16, 2026

How Long Is a Piece of String? A Brief Empirical Analysis of Tokenizers

Jonathan Roberts, Kai Han et al.

TLDR: This study analyzes the variability in tokenization across different models and text domains, revealing that token length heuristics are often overly simplistic.

arXiv · Jan 16, 2026

Building Production-Ready Probes For Gemini

János Kramár, Joshua Engels et al.

TLDR: New probe architectures improve misuse mitigation for language models like Gemini by handling long-context inputs and adapting to distribution shifts, enhancing safety and efficiency.

arXiv · Jan 16, 2026

Split-and-Conquer: Distributed Factor Modeling for High-Dimensional Matrix-Variate Time Series

Hangjin Jiang, Yuzhou Li et al.

TLDR: This paper introduces a distributed framework for dimensionality reduction in high-dimensional matrix-variate time series, improving computational efficiency and information utilization through a split-and-conquer approach using tensor PCA.

arXiv · Jan 16, 2026

AJAR: Adaptive Jailbreak Architecture for Red-teaming

Yipu Dou, Wang Yang

TLDR: AJAR is a new framework for testing AI safety by simulating complex attacks on autonomous language models, bridging gaps in current red-teaming approaches.

arXiv · Jan 16, 2026

AdaMARP: An Adaptive Multi-Agent Interaction Framework for General Immersive Role-Playing

Zhenhua Xu, Dongsheng Chen et al.

TLDR: AdaMARP is an adaptive multi-agent role-playing framework that improves immersion and adaptability in interactive narratives by integrating dynamic scene management and character interactions.

arXiv · Jan 16, 2026

IDDR-NGP: Incorporating Detectors for Distractor Removal with Instant Neural Radiance Field

Xianliang Huang, Jiajie Gou et al.

TLDR: IDDR-NGP is a novel method that efficiently removes a variety of 3D scene distractors using a combination of 3D representations and 2D detectors, outperforming existing solutions in versatility and robustness.

arXiv · Jan 16, 2026

Do We Always Need Query-Level Workflows? Rethinking Agentic Workflow Generation for Multi-Agent Systems

Zixu Wang, Bingbing Xu et al.

TLDR: Query-level workflow generation in Multi-Agent Systems may not be necessary, as task-level workflows can be equally effective and more efficient using the proposed SCALE framework.

arXiv · Jan 16, 2026

Relational Linearity is a Predictor of Hallucinations

Yuetian Lu, Yihong Liu et al.

TLDR: Relational linearity in language models is strongly correlated with hallucination rates, suggesting that how models store relational data affects their ability to self-assess knowledge accuracy.

arXiv · Jan 16, 2026

Self-learned representation-guided latent diffusion model for breast cancer classification in deep ultraviolet whole surface images

Pouya Afshin, David Helminiak et al.

TLDR: A self-supervised learning approach using a latent diffusion model significantly improves breast cancer classification accuracy in deep ultraviolet images by generating high-quality synthetic training data.

arXiv · Jan 16, 2026

Do explanations generalize across large reasoning models?

Koyena Pal, David Bau et al.

TLDR: The study finds that explanations from large reasoning models (LRMs) often generalize across different models, enhancing consistency and aligning with human preferences.

arXiv · Jan 16, 2026

Context-aware Graph Causality Inference for Few-Shot Molecular Property Prediction

Van Thuy Hoang, O-Joun Lee

TLDR: CaMol, a context-aware graph causality inference framework, improves few-shot molecular property prediction by leveraging causal substructures and chemical knowledge.

arXiv · Jan 16, 2026

MetaboNet: The Largest Publicly Available Consolidated Dataset for Type 1 Diabetes Management

Miriam K. Wolff, Peter Calhoun et al.

TLDR: MetaboNet is a large, unified dataset for Type 1 Diabetes research, consolidating multiple datasets to improve algorithm development and accessibility.

arXiv · Jan 16, 2026

When Personalization Misleads: Understanding and Mitigating Hallucinations in Personalized LLMs

Zhongxiang Sun, Yi Zhan et al.

TLDR: This paper identifies hallucinations that arise when personalization misleads language models into incorrect answers, and proposes a mitigation that maintains factual accuracy while preserving personalization.

arXiv · Jan 16, 2026

Steering Language Models Before They Speak: Logit-Level Interventions

Hyeseon An, Shinwoo Park et al.

TLDR: This paper introduces a new method for steering language models using logit-level interventions, which improves control over generated text without requiring model retraining or deep access to internal layers.
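For context, a logit-level intervention in the generic sense means adjusting a model's next-token logits just before sampling, rather than retraining or editing hidden states. The sketch below is a minimal, hypothetical illustration of that general idea using a simple additive bias; it is not the specific method proposed in this paper.

```python
import torch

def apply_logit_bias(logits: torch.Tensor, token_ids: list[int], bias: float) -> torch.Tensor:
    """Generic logit-level intervention: add a fixed bias to selected token logits
    before sampling. Illustrative only; not the paper's proposed method."""
    steered = logits.clone()
    steered[..., token_ids] += bias  # positive bias encourages tokens, negative discourages
    return steered

# Hypothetical usage at one decoding step (token id 42 stands in for an unwanted token).
vocab_size = 50_257
logits = torch.randn(1, vocab_size)              # raw next-token logits from any LM
steered = apply_logit_bias(logits, [42], -10.0)  # suppress the unwanted token
probs = torch.softmax(steered, dim=-1)           # sample from the adjusted distribution
```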

arXiv · Jan 16, 2026

ABC-Bench: Benchmarking Agentic Backend Coding in Real-World Development

Jie Yang, Honglin Guo et al.

TLDR: ABC-Bench is a new benchmark designed to evaluate the ability of AI models to handle real-world backend development tasks, revealing that current models struggle with these comprehensive challenges.

arXiv · Jan 16, 2026

Membership Inference on LLMs in the Wild

Jiatong Yi, Yanyang Li

TLDR: SimMIA is a new framework for membership inference attacks on large language models that excels in black-box settings using only generated text, achieving state-of-the-art results.

arXiv · Jan 16, 2026

Soft Bayesian Context Tree Models for Real-Valued Time Series

Shota Saito, Yuta Nakahara et al.

TLDR: The Soft-BCT model introduces a probabilistic approach to context tree models for real-valued time series, showing competitive performance with existing models.

arXiv · Jan 16, 2026

Backdoor Attacks on Multi-modal Contrastive Learning

Simi D Kuniyilh, Rita Machacy

TLDR: This paper reviews the vulnerabilities of contrastive learning to backdoor attacks and discusses potential defenses and future research directions.

arXiv · Jan 16, 2026

Toward Adaptive Grid Resilience: A Gradient-Free Meta-RL Framework for Critical Load Restoration

Zain ul Abdeen, Waris Gill et al.

TLDR: The paper introduces a meta-guided, gradient-free reinforcement learning framework that efficiently restores critical grid loads by adapting to new scenarios with minimal tuning, outperforming traditional methods in speed and reliability.

Showing 1–20 of 21,055 papers