Can we remove compress option for quantized KNN vector indexing? #13768

Open
mikemccand opened this issue Sep 11, 2024 · 7 comments

@mikemccand
Member

Description

Spinoff from this comment.

This (compress=true) is a useful option when quantizing KNN vectors to 4 bits: it packs pairs of dimensions into a single byte, so the "hot working set" of your KNN/HNSW vectors at search time is half the already reduced (float32 -> byte) size. When compress is false, storage is wasteful: only four of the eight bits in each byte are used.

But it comes with some penalty: the "packed" (compress=true) form must be decoded during KNN search, which is why we give the user this choice.

But I think there has since been at least one optimization to that path, so maybe the performance penalty isn't so bad now? In which case maybe we could just always hardwire compress=true when quantized bits=4?

(compress=true doesn't apply to 7 bit quantization)
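
For concreteness, here is a minimal sketch of the idea (not Lucene's actual implementation; the real byte layout and scoring kernels in the scalar-quantization codec may differ):

```java
// Illustrative only: two 4-bit quantized dimensions share one byte when
// compress=true, halving storage, at the cost of shift/mask work to decode.
class NibblePackingSketch {

  // values[i] is a 4-bit quantized dimension in [0, 15]
  static byte[] pack(byte[] values) {
    byte[] packed = new byte[(values.length + 1) / 2];
    for (int i = 0; i < values.length; i++) {
      int shift = (i & 1) == 0 ? 0 : 4; // even dim -> low nibble, odd dim -> high nibble
      packed[i >>> 1] |= (byte) ((values[i] & 0x0F) << shift);
    }
    return packed;
  }

  // Dot product that decodes the packed form on the fly; the extra shift/mask
  // per dimension is the decode penalty mentioned above.
  static int dotProductPacked(byte[] queryNibbles, byte[] packedDoc) {
    int sum = 0;
    for (int i = 0; i < queryNibbles.length; i++) {
      int shift = (i & 1) == 0 ? 0 : 4;
      int docValue = (packedDoc[i >>> 1] >>> shift) & 0x0F;
      sum += queryNibbles[i] * docValue;
    }
    return sum;
  }
}
```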

@mikemccand
Member Author

I'll see if I can benchmark this tradeoff using luceneutil's knnPerfTest.py, at least on my Intel Linux dev box.

@benwtrent
Member

I agree, this is worth digging into. In my benchmarking, compress=true ended up being marginally faster than compress=false, but I didn't know why :/ which is why I was hesitant to remove the option before.

It would be good to benchmark this with and without Panama vectors enabled, on both Intel and ARM.

Note, there has been work to move all quantized comparisons off-heap (#13497), the results of which may or may not affect this decision (bytes are no longer copied on heap, so copying fewer bytes no longer harms/helps performance). But I kept having weird slowdowns that I cannot figure out and nobody else can replicate (@ChrisHegarty has tried and couldn't see why I keep seeing significant slowdowns across different JDK versions).
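
For reference, the Panama path discussed here builds on the jdk.incubator.vector API, which Lucene only picks up when the JVM is started with --add-modules jdk.incubator.vector; without it, the scalar implementations are used. A minimal, illustrative dot product in that style (not Lucene's actual kernel, which also covers the quantized byte comparisons) looks roughly like this:

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

// Illustrative only: compile/run with --add-modules jdk.incubator.vector.
public class PanamaDotSketch {
  private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

  static float dot(float[] a, float[] b) {
    FloatVector acc = FloatVector.zero(SPECIES);
    int i = 0;
    int bound = SPECIES.loopBound(a.length);
    for (; i < bound; i += SPECIES.length()) {
      FloatVector va = FloatVector.fromArray(SPECIES, a, i);
      FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
      acc = va.fma(vb, acc); // fused multiply-add where the hardware supports it
    }
    float sum = acc.reduceLanes(VectorOperators.ADD);
    for (; i < a.length; i++) { // scalar tail
      sum += a[i] * b[i];
    }
    return sum;
  }
}
```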

@mikemccand
Member Author

Well, I ran knnPerfTest.py on my Linux dev box (x86-64 Raptor Lake i9-13900K). This CPU has a crazy number of flags, but NOT AVX-512:

flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities

This is with Panama enabled (INFO: Java vector incubator API enabled; uses preferredBitSize=256; FMA enabled). I'll try with Panama disabled next.

Results:

recall  latency (ms)     nDoc  topK  fanout  maxConn  beamWidth  quantized  index s  force merge s  num segments  index size (MB)
 0.319         0.167  1500000    10       6       32         50     4 bits    86.01          64.49             1          5013.19
 0.326         0.187  1500000    10       6       32         50     4 bits    89.03          74.99             1          5562.51

Unfortunately the output doesn't state it, but the first row is compress=True and the second is compress=False. Indeed, latency (search time) got faster (187 usec -> 167 usec) with compress=True, and the index is also ~10% smaller. Indexing and force merge time did get a bit slower ...

@mikemccand
Member Author

OK, I disabled Panama (via a temporary code change in VectorizationProvider.java -- we don't accept sysprops to disable this anymore, right?):

recall  latency (ms)     nDoc  topK  fanout  maxConn  beamWidth  quantized  index s  force merge s  num segments  index size (MB)
 0.318         0.312  1500000    10       6       32         50     4 bits   175.21         125.64             1          5013.21
 0.319         0.285  1500000    10       6       32         50     4 bits   174.54         124.80             1          5562.51

Indeed there is some performance penalty now (285 usec -> 312 usec, ~9.5%) ... recall also bounced around a bit, but that's probably acceptable HNSW randomness noise. And wow, look how much slower indexing / force merging got ... those SIMD instructions clearly help ;)

But I don't think we should block removing the compress option due to non-SIMD results?

@ChrisHegarty
Contributor

ChrisHegarty commented Sep 12, 2024

But I don't think we should block removing the compress option due to non-SIMD results?

I agree. At this point we're just comparing the scalar and SIMD implementations of the distance functions. For vector operations we really need SIMD, and I think we're OK with this approach. I'm +1 to remove compress, if there is no other reason to keep it.

@benwtrent
Member

If we are OK with the perf hit on the non-Panama path, I am cool with it :). It will definitely simplify the code.

@mikemccand
Member Author

But I don't think we should block removing the compress option due to non-SIMD results?

Actually, thinking about this more ... I'm changing my mind. I don't fully understand how poor our Panama/SIMD coverage is across the CPU types/versions "typically" in use by our users, e.g. ARM CPUs (various versions of NEON instructions). What percentage of our users would hit the non-SIMD (non-Panama) path?

It's spooky that the likes of OpenSearch, Elasticsearch and Solr need to pull in their own Panama FMA wrappers around native code to better optimize certain vectorized instruction cases (see the discussion on #13572). Ideally such optimizations would be in Lucene, so we could make decisions like this one (removing the compress option to simplify our API / reduce surface area) with more confidence.

I'd like to run benchmarks across many more CPUs before rushing to a decision here, and I think for now we should otherwise respect the non-SIMD results? I love our new aws-jmh dev tool (thank you @rmuir)! I looked at its playbook.yml to figure out whether I could also add "go check out luceneutil, download this massive 95 GB .vec file, run knnPerfTest.py and summarize the results", but I haven't made much progress so far ...
