Feature Request: Top-nσ sampler #11057

Open
4 tasks done
FellowTraveler opened this issue Jan 3, 2025 · 1 comment · May be fixed by #11223
Labels
enhancement New feature or request

Comments

@FellowTraveler

Prerequisites

  • I am running the latest code. Mention the version if possible as well.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

Here are the key points:

The Problem: When LLMs generate text, they typically use either greedy decoding (always picking the most likely token) or temperature sampling. Current sampling methods often struggle to balance diversity with accuracy, especially for reasoning tasks.

The Innovation: The authors discovered that when LLMs generate tokens, the logits (pre-softmax scores) naturally separate into two regions:

  • A "noisy" region following a Gaussian distribution (background noise)

  • An "informative" region containing the actually relevant tokens

The Solution: Top-nσ works by (a minimal code sketch follows this list):

  • Identifying the maximum logit value

  • Selecting tokens that are within n standard deviations (σ) of this maximum

  • Only sampling from these selected tokens

  • Using temperature to control sampling within this filtered set
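
For concreteness, here is a minimal standalone C++ sketch of the filtering core of those steps, assuming σ is the standard deviation of the full logit vector. The top_n_sigma_filter name and the plain std::vector interface are illustrative only, not llama.cpp's sampler API or the paper's reference code.

```cpp
// Minimal standalone sketch of the filtering step described above
// (illustrative only -- not the paper's reference code or llama.cpp's sampler API).
#include <algorithm>
#include <cmath>
#include <limits>
#include <numeric>
#include <vector>

// Mask out every token whose logit falls more than n standard deviations below the maximum.
inline void top_n_sigma_filter(std::vector<float> & logits, float n) {
    const float max_logit = *std::max_element(logits.begin(), logits.end());

    // Mean and standard deviation over the full logit vector.
    const float mean = std::accumulate(logits.begin(), logits.end(), 0.0f) / logits.size();
    float var = 0.0f;
    for (float l : logits) {
        var += (l - mean) * (l - mean);
    }
    const float sigma = std::sqrt(var / logits.size());

    // Keep only tokens within n*sigma of the maximum; everything else is treated as noise.
    const float threshold = max_logit - n * sigma;
    for (float & l : logits) {
        if (l < threshold) {
            l = -std::numeric_limits<float>::infinity();
        }
    }
}
```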

Key Benefits:

  • Maintains consistent performance even at high temperatures, unlike other methods

  • Computationally efficient as it operates directly on logits

  • Outperforms both existing sampling methods and greedy decoding on reasoning tasks

  • Works particularly well for tasks requiring careful reasoning

Results: The method was tested on four reasoning-focused datasets and showed superior performance, especially at higher temperatures where other methods typically fail.

The paper essentially shows that by being more selective about which tokens to sample from based on their statistical properties, you can get better and more reliable results from language models, particularly for tasks that require careful reasoning.

Motivation

This looks to be the most promising sampler proposed so far, and supporting it early would be a clear differentiator for llama.cpp.

Possible Implementation

See the paper: "Top-nσ: Not All Logits Are You Need"
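
Until the linked PR lands, here is a hedged usage sketch showing how the filter above could be combined with temperature scaling and sampling. The sample_top_n_sigma name, the reuse of the hypothetical top_n_sigma_filter() helper from the feature description, and the toy logits and parameters in main() are assumptions for illustration, not the PR's actual interface.

```cpp
// Usage sketch: filter, apply temperature, then sample from the surviving tokens.
// Relies on the hypothetical top_n_sigma_filter() from the sketch above.
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

int sample_top_n_sigma(std::vector<float> logits, float n, float temp, std::mt19937 & rng) {
    top_n_sigma_filter(logits, n);  // drop the "noise" region (logits below max - n*sigma)

    // Temperature-scaled softmax weights over the remaining tokens
    // (masked tokens have logit -inf, so their weight is exp(-inf) = 0).
    const float max_l = *std::max_element(logits.begin(), logits.end());
    std::vector<float> weights(logits.size());
    for (size_t i = 0; i < logits.size(); ++i) {
        weights[i] = std::exp((logits[i] - max_l) / temp);
    }

    // std::discrete_distribution normalizes the weights for us.
    std::discrete_distribution<int> dist(weights.begin(), weights.end());
    return dist(rng);  // index of the sampled token
}

int main() {
    std::mt19937 rng(42);
    std::vector<float> logits = {8.1f, 7.9f, 3.0f, 2.5f, 2.4f, 2.2f};  // toy values
    int tok = sample_top_n_sigma(logits, /*n=*/1.0f, /*temp=*/1.5f, rng);
    (void) tok;
}
```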

FellowTraveler added the enhancement label on Jan 3, 2025
@VJHack
Contributor

VJHack commented Jan 9, 2025

Top-nσ shows very promising results in the paper! And it's cool to see a sampler maintain a stable sampling space even at high temperatures. I'm currently working on implementing this paper. However, since this sampling method isn't widely adopted and is still "in the early release phase", I'm not sure how likely it is to be accepted by the llama.cpp maintainers. Regardless, I'll still put out an implementation for others to check out.

VJHack linked a pull request (#11223) on Jan 14, 2025 that will close this issue