
Sign Lock-In: Randomly Initialized Weight Signs Persist and Bottleneck Sub-Bit Model Compression

Source: arXiv

Akira Sakai, Yuma Ichikawa

cs.LG · cs.AI · cs.CL · cs.CV | Feb 19, 2026

One-line Summary

The paper finds that the signs of randomly initialized weights largely persist through training ("sign lock-in") and bottleneck sub-bit model compression; it proposes an initialization scheme and a regularization technique that reduce sign flips while maintaining performance.

Plain-language Overview

In machine learning, compressing models to use less storage space is important for efficiency. When models are compressed below one bit per weight, storing the sign of each weight becomes a significant obstacle. The researchers found that the signs of weights tend to stay the same as their initial random values, which limits how far compression can go. They propose a new approach to initializing weights and a regularization technique that reduces unnecessary sign changes, which helps maintain model performance while achieving better compression.
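The "sign lock-in" idea above can be illustrated with a toy experiment: compare the sign of each weight before and after training and count how many stayed the same. This is a minimal sketch, not the paper's method; the "training" step here is a stand-in (small random additive updates), and all names and the update scale are illustrative assumptions.

```python
import random

random.seed(0)
n = 10_000

# Randomly initialized weights, as in a freshly created layer.
w_init = [random.gauss(0.0, 1.0) for _ in range(n)]

# Stand-in for training: small additive updates to each weight.
# (Illustrative only; real training uses gradient-based updates.)
w_trained = [w + 0.1 * random.gauss(0.0, 1.0) for w in w_init]

# Fraction of weights whose sign survives "training".
# If this fraction is high, the signs carry little new information
# beyond the random seed used at initialization.
locked = sum((wi > 0) == (wt > 0) for wi, wt in zip(w_init, w_trained))
print(f"fraction of weights with unchanged sign: {locked / n:.3f}")
```

When most signs match initialization, they are in principle reconstructible from the initialization seed rather than stored explicitly, which is why sign storage is the relevant bottleneck for sub-bit compression.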

Technical Details