
LORA-CRAFT: Cross-layer Rank Adaptation via Frozen Tucker Decomposition of Pre-trained Attention Weights


Kasun Dewage, Marianna Pensky, Suranadi De Silva, Shankadeep Mondal

cs.LG, cs.AI | Feb 19, 2026

One-line Summary

CRAFT is a parameter-efficient fine-tuning method that applies a frozen Tucker decomposition to pre-trained attention weights and trains only small adaptation matrices, achieving competitive performance with minimal trainable parameters.

Plain-language Overview

The paper introduces a new method called CRAFT for fine-tuning large language models more efficiently. CRAFT uses a mathematical technique called Tucker decomposition to break the pre-trained attention weights of a model into simpler parts. These parts are kept frozen, and only small trainable matrices are used to adjust them, so the model can be fine-tuned with far fewer parameters. The method is shown to perform well on a standard suite of language tasks while requiring fewer resources than traditional fine-tuning, as sketched in the example below.
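To make the idea concrete, here is a minimal, hypothetical sketch of "frozen decomposition plus small trainable matrices" in PyTorch. It is not the authors' implementation: the HOSVD routine, the `TuckerAdaptedAttention` class, the choice of stacking per-layer attention projections into a 3-D tensor, and the single trainable mixing matrix `a` are all assumptions made for illustration of the general pattern described in the overview.

```python
import torch
import torch.nn as nn


def mode_unfold(t: torch.Tensor, mode: int) -> torch.Tensor:
    """Unfold a 3-D tensor along the given mode into a matrix."""
    return t.movedim(mode, 0).reshape(t.shape[mode], -1)


def hosvd(t: torch.Tensor, ranks):
    """Truncated higher-order SVD: per-mode factor matrices plus a small core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        # Leading left singular vectors of the mode-n unfolding form the factor matrix.
        u, _, _ = torch.linalg.svd(mode_unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for u in factors:
        # Contracting mode 0 each pass cycles the modes back into their original order.
        core = torch.tensordot(core, u, dims=([0], [0]))
    return core, factors


class TuckerAdaptedAttention(nn.Module):
    """Hypothetical adapter: frozen Tucker factors of stacked attention weights,
    with one small trainable matrix as the only adaptation parameters."""

    def __init__(self, stacked_w: torch.Tensor, rank: int = 8):
        super().__init__()
        # stacked_w: (num_layers, d_out, d_in) pre-trained attention projections.
        num_layers = stacked_w.shape[0]
        core, (u_layer, u_out, u_in) = hosvd(stacked_w, (num_layers, rank, rank))
        # Frozen pieces derived from the pre-trained weights (buffers, not parameters).
        self.register_buffer("core", core)
        self.register_buffer("u_layer", u_layer)
        self.register_buffer("u_out", u_out)
        self.register_buffer("u_in", u_in)
        # Small trainable update applied in the shared low-rank subspace.
        self.a = nn.Parameter(torch.zeros(rank, rank))

    def layer_weight(self, layer_idx: int) -> torch.Tensor:
        """Reconstruct the adapted (d_out, d_in) weight for one layer."""
        # Select this layer's slice of the frozen core: (rank, rank).
        core_l = torch.tensordot(self.u_layer[layer_idx], self.core, dims=([0], [0]))
        # Add the trainable low-rank update before expanding back to full size.
        core_l = core_l + self.a
        return self.u_out @ core_l @ self.u_in.T
```

As a usage sketch, one would stack the query (or key/value) projection matrices from every layer of a pre-trained model into `stacked_w`, build the adapter once, and fine-tune only `a`, so the number of trainable parameters is just `rank × rank` regardless of model size. The paper's actual cross-layer parameterization may differ; this only illustrates the frozen-decomposition-plus-small-trainable-matrix pattern.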

Technical Details