Can we store only quantized vectors to reduce disk footprint? #14007

Open
Rassyan opened this issue Nov 21, 2024 · 14 comments

@Rassyan

Rassyan commented Nov 21, 2024

Description

In light of optimizing disk usage for KNN vector searches in Lucene, I propose a new KnnVectorsFormat class that handles only quantized vectors, eliminating the need to store the original float32 vectors. This could significantly reduce disk usage: for int8 quantization, storage could drop to roughly 25% of the original footprint, though actual savings would vary with the quantization method and storage configuration.

I seek community feedback on:

  • The technical feasibility of this new storage model.
  • Potential impacts on search accuracy and performance.

Your insights will help determine the viability of this approach for enhancing Lucene's vector search capabilities.
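For illustration, here is a rough sketch (assuming Lucene 9.x class names) of where such a format would plug in. Today's Lucene99HnswScalarQuantizedVectorsFormat still writes the raw float32 vectors next to the quantized copy; the proposal is a format that omits that raw file. The "QuantizedOnlyVectorsFormat" mentioned in the comment is hypothetical.

```java
import org.apache.lucene.codecs.KnnVectorsFormat;
import org.apache.lucene.codecs.lucene99.Lucene99Codec;
import org.apache.lucene.codecs.lucene99.Lucene99HnswScalarQuantizedVectorsFormat;
import org.apache.lucene.index.IndexWriterConfig;

class QuantizedOnlyConfig {
  static IndexWriterConfig newConfig() {
    IndexWriterConfig config = new IndexWriterConfig();
    config.setCodec(new Lucene99Codec() {
      @Override
      public KnnVectorsFormat getKnnVectorsFormatForField(String field) {
        // Existing format: quantized + raw float32 on disk. A hypothetical
        // "QuantizedOnlyVectorsFormat" would be returned here instead.
        return new Lucene99HnswScalarQuantizedVectorsFormat();
      }
    });
    return config;
  }
}
```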

@benwtrent
Member

The technical feasibility of this new storage model.

Very feasible. However, it gets tricky around accuracy, and the quantization methods would require updating to handle this. But all the current (and hopefully future) quantization methodologies can easily "re-hydrate" vectors so that iterating floats is still possible (folks give us floats, we should be able to return floats if they want them).

Potential impacts on search accuracy and performance.

For higher fidelity quantization methods, I expect accuracy to be OK. I am not 100% sure, with how things are now, that we would ever want to quantize below int7 for storage. Int4 as it stands (when using the dynamic confidence interval) would suffer greatly without some more significant improvement to the algorithm.

@Rassyan
Author

Rassyan commented Nov 21, 2024

Excuse my ignorance, but I was wondering...

quantization methodologies can easily "re-hydrate" vectors so that iterating floats is still possible

Could you elaborate on the computational costs associated with this? If the need to retrieve floats from users is not present, is it feasible to skip this rehydration step and directly use the quantized vectors for distance calculations?

higher fidelity quantization methods

So, would int7 be considered a higher fidelity quantization method? Based on your experience and insights, how would you rate the fidelity of int7, int4, and binary quantization methods? Where do they stand in terms of maintaining accuracy while optimizing storage efficiency?

Has the Lucene community already planned or discussed the implementation of a dedicated KnnVectorsFormat for handling only quantized vectors? Are there quick support mechanisms for users who are willing to compromise on accuracy for significant savings in disk space and do not require the original vectors?

@vigyasharma
Contributor

If the need to retrieve floats from users is not present, is it feasible to skip this rehydration step and directly use the quantized vectors for distance calculations?

Quantized vectors are used directly for distance calculations today. We keep the full precision values around for recalculating quantiles during merging.
@benwtrent has a nice blog post that explains this – https://www.elastic.co/search-labs/blog/scalar-quantization-in-lucene

It could be useful to allow saving disk space at the cost of slower retrieval of full-precision float vectors. Some use cases might have uniform data distributions anyway.
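As a minimal sketch of "quantized vectors are used directly": comparisons run on the byte[] values with integer arithmetic, and a scalar correction maps the result back to a float score. The alpha/offset terms below are simplified stand-ins for the quantile-derived corrections Lucene actually applies.

```java
import org.apache.lucene.util.VectorUtil;

class QuantizedScoring {
  // Score two int7-quantized vectors without converting them back to float.
  static float quantizedDotScore(byte[] query, byte[] doc, float alpha, float offset) {
    int rawDot = VectorUtil.dotProduct(query, doc); // integer arithmetic, SIMD-friendly
    return alpha * rawDot + offset;                 // approximate float-space score
  }
}
```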

@dungba88
Contributor

Note that we would also need the original float32 vectors for re-ranking in case it's needed (e.g. maybe int4; see #13564). But if someone prefers less storage and is okay with lower recall, then dropping the original vectors on the searching side (if indexing and searching are separated) could be fine. I think we still need them for indexing and merging, as @vigyasharma commented.

There are also several related ideas, e.g. #13158.

@benwtrent
Member

I think we still need them for indexing and merging, as @vigyasharma commented.

I don't know if it's strictly necessary to keep the raw vectors for merging. Once a certain limit is reached, especially for vectors that play well with quantization, you can likely throw the vectors away even on the indexing side. It would just require work, and I wouldn't want it to be the default behavior.

I will answer the wall of questions from @Rassyan. But most of these would be discoverable by exploring the quantization methodology yourself.

Could you elaborate on the computational costs associated with this?

It's just the inverse of the quantization method. For every vector, you iterate its components, inverting the quantization step.
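A rough sketch of that inverse (the affine de-quantization step); the minQuantile/maxQuantile parameters and the exact rounding are illustrative and differ from Lucene's ScalarQuantizer in detail:

```java
class Rehydrate {
  // "Re-hydrate" a scalar-quantized vector: invert the affine quantization step
  // per component. One multiply-add per dimension, so it is cheap, but lossy:
  // you only get back the quantized approximation of the original float.
  static float[] deQuantize(byte[] quantized, float minQuantile, float maxQuantile, int bits) {
    float scale = (maxQuantile - minQuantile) / ((1 << bits) - 1);
    float[] floats = new float[quantized.length];
    for (int i = 0; i < quantized.length; i++) {
      floats[i] = quantized[i] * scale + minQuantile; // quantized values are 0..(2^bits - 1)
    }
    return floats;
  }
}
```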

So, would int7 be considered a higher fidelity quantization method?

I would think so, unless you had a different one in mind?

Based on your experience and insights, how would you rate the fidelity of int7, int4, and binary quantization methods?

For pure score correlations, what we have seen:

  • int7 ~ 95-99% score correlation
  • int4 ~ 85-90% score correlation
  • bit ~ 70-80%+ score correlation

There are vectors that are aggressively AGAINST quantization (e.g. GIST & GloVe) and perform way worse than all other vectors. But modern transformer architectures perform way better.

Where do they stand in terms of maintaining accuracy while optimizing storage efficiency?

Just do the math ;)

Has the Lucene community already planned or discussed the implementation of a dedicated KnnVectorsFormat for handling only quantized vectors?

There have been discussions around removing the floating-point vectors for "search segments" but keeping them for "index segments".

Are there quick support mechanisms for users who are willing to compromise on accuracy for significant savings in disk space and do not require the original vectors?

I don't even know what this means.

@jpountz
Contributor

jpountz commented Nov 22, 2024

In my opinion, we should not have lossy codecs. This creates situations where errors could compound in weird ways over time, e.g. when you switch file formats.

I'd rather it be done on top of the codec API, e.g. computing a good scalar quantization for a given model offline, and then using it on the way in to index vectors directly as byte[].

@mikemccand
Member

I'd rather it be done on top of the codec API, e.g. computing a good scalar quantization for a given model offline, and then using it on the way in to index vectors directly as byte[].

Lucene's HNSW KNN already supports this -- users can provide either byte[] or float[] input vectors.

So a user could pre-quantize their float[] vectors, with full knowledge of the model that produced them and a "good" quantization approach (instead of Lucene's per-segment quantile-based estimation), and turn them into byte[] to send to Lucene.
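A minimal sketch of that flow, where the quantization range is derived offline from the model; the fixed -1..1 range and the quantizeOffline helper are hypothetical, not Lucene APIs:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.KnnByteVectorField;
import org.apache.lucene.index.VectorSimilarityFunction;

class PreQuantizedIndexing {
  // Hypothetical application-side quantization using a fixed, model-derived range.
  static byte[] quantizeOffline(float[] v, float min, float max) {
    byte[] out = new byte[v.length];
    float scale = 127f / (max - min);
    for (int i = 0; i < v.length; i++) {
      float clamped = Math.min(max, Math.max(min, v[i]));
      out[i] = (byte) Math.round((clamped - min) * scale); // 0..127 fits in a signed byte
    }
    return out;
  }

  static Document toDoc(float[] original) {
    Document doc = new Document();
    // Lucene indexes the byte[] as-is; no float32 copy is ever written.
    doc.add(new KnnByteVectorField("vector",
        quantizeOffline(original, -1f, 1f),
        VectorSimilarityFunction.DOT_PRODUCT));
    return doc;
  }
}
```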

@mikemccand
Member

Also, note that, at least for the current scalar quantization (int7, int4), those full-precision float[] vectors remain on disk during searching. They are only used during indexing (merging) to recompute the quantiles and (possibly) re-quantize the vectors more accurately for that newly merged segment.

For apps using near-real-time segment replication (the best/preferred replication approach IMO), it may be possible eventually to strip out the float[] from the Lucene index sent to searchers, but this has only been discussed so far -- no implementation -- I'll try to track down the issue (@benwtrent also referred to this above).

@mikemccand
Member

Ahh sorry @dungba88 also referenced the issue above! #13158

@benwtrent
Member

In my opinion, we should not have lossy codecs. This creates situations where errors could compound in weird ways over time, e.g. when you switch file formats.

I do think we should consider adding support for half-floats. Regardless of whether they can utilize hardware-accelerated comparisons, it would greatly reduce the disk footprint.

The contract with the user for half-floats would be pretty straightforward, as they would explicitly specify that they want their vectors stored that way.
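A sketch of what the half-float encoding could look like, using the JDK's binary16 conversions (Float.floatToFloat16 / Float.float16ToFloat, available since Java 20); this is not an existing Lucene API, just an illustration of the 2x disk saving and the precision loss involved:

```java
class HalfFloats {
  // Encode float32 components as IEEE 754 binary16, halving the on-disk size.
  // Lossy: binary16 keeps roughly 3 significant decimal digits per component.
  static short[] toHalf(float[] v) {
    short[] out = new short[v.length];
    for (int i = 0; i < v.length; i++) {
      out[i] = Float.floatToFloat16(v[i]);
    }
    return out;
  }

  // Decode back to float32, e.g. for merging or re-scoring.
  static float[] fromHalf(short[] v) {
    float[] out = new float[v.length];
    for (int i = 0; i < v.length; i++) {
      out[i] = Float.float16ToFloat(v[i]);
    }
    return out;
  }
}
```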

The segment-based replication solution is a very interesting one, though for higher compression ratios (e.g. 1 or 2 bits) you generally need the rescoring. Conceivably, the codec could quantize the vectors twice (int7, then to bit) and allow you to rerank bit-quantized values with int7-quantized values...quantization all the way down.

@dungba88
Contributor

Conceivably, the codec could quantize the vectors twice (int7, then to bit) and allow you to rerank bit-quantized values with int7-quantized values...quantization all the way down.

I like this idea; it allows a more granular trade-off between memory usage and recall. Although it would mean we either need to store all 3 levels (float32/int7/bit) on disk, or we sacrifice accuracy during merging by relying only on the intermediate int7.

@mikemccand
Member

I do think we should consider adding support for half-floats.

+1

@mikemccand
Member

Do we have a separate issue open already for half-floats? It seems like it deserves its own spinoff issue ...

@benwtrent
Member

@mikemccand #12403
