DuckDB process killed when creating an R-Tree index #410
Hello! Have you tried setting a memory limit? All memory allocated by the R-Tree should be tracked by DuckDB and respect the memory limit parameter. Are you running with a disk-backed database file or entirely in memory? You may want to have a look at the spilling-to-disk section as well.
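A minimal sketch of those two settings in a DuckDB session (the memory value and the spill path are illustrative placeholders, not values from this thread):

```sql
-- Sketch only: cap DuckDB's tracked memory and point it at a spill directory.
-- '15GB' and the path below are placeholders.
SET memory_limit = '15GB';
SET temp_directory = '/path/to/duckdb_tmp';
```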
Well, with
I'll try disk spilling. The DB is a file, not in memory.
OK, I get the same error with disk spilling (i.e. `SET tmp_dir = '...'`).
I'm unable to reproduce this on DuckDB v1.1.0 on macOS in disk-backed mode (although without a memory limit). I'll see if I can mimic your setup and reproduce the error.
Hm, ok, it seems to have finished just fine with the memory limit set to 15 GB.
I'm on Linux, btw.
If I set a 10 GB limit, I get a proper out-of-memory error before I even get to the index creation:

```
D SET memory_limit = '10gb';
D load spatial;
D CREATE TABLE nodes AS SELECT * FROM ST_ReadOSM('poland-latest.osm.pbf') WHERE kind = 'node';
100% ▕████████████████████████████████████████████████████████████▏
D ALTER TABLE nodes ADD COLUMN pt GEOMETRY;
D UPDATE nodes SET pt = ST_Point2D(lat, lon);
 73% ▕███████████████████████████████████████████▊                 ▏
Out of Memory Error: failed to pin block of size 256.0 KiB (9.3 GiB/9.3 GiB used)
```

Nonetheless, a
Yes, I guess even when you set
Not necessarily, the index itself must still be able to fit in memory. With the
Hm, ok, but shouldn't it handle data larger than RAM with temp disk space?
Unfortunately, this is currently a limitation of all indexes in DuckDB. While they will be lazily loaded from disk once you open a database, they will not unload from memory, which is why you need to be able to keep the entire index in memory when you first create it. That said, their memory is still tracked by DuckDB, meaning that they should respect the `memory_limit` setting. I've pushed a fix for the
I downloaded the OSM PBF file for Poland from here: https://download.geofabrik.de/europe.html
I created a DuckDB table like this:
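The exact statement isn't preserved here; a sketch consistent with the session quoted earlier in the thread would be:

```sql
-- Reconstruction based on the commands quoted above; not the reporter's verbatim statement.
LOAD spatial;
CREATE TABLE nodes AS
    SELECT * FROM ST_ReadOSM('poland-latest.osm.pbf') WHERE kind = 'node';
ALTER TABLE nodes ADD COLUMN pt GEOMETRY;
UPDATE nodes SET pt = ST_Point2D(lat, lon);
```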
My DB size is around 6.5 GB. Now I try to do:
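Presumably an R-Tree index creation along these lines (a sketch assuming the spatial extension's `USING RTREE` syntax; the index name is hypothetical):

```sql
-- Hypothetical index name; 'pt' is the GEOMETRY column created above.
CREATE INDEX nodes_pt_idx ON nodes USING RTREE (pt);
```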
I see a progress bar, it goes up to around 28%, then I get a `killed` message, probably because Linux saw the process was using too much memory. Indeed, memory usage rises to 18 GB when I watch htop at the same time (I have 32 GB of RAM). I thought DuckDB was able to handle tasks larger than memory? I'm using DuckDB v1.1.1.
BTW, it seems I get the same thing when creating a full-text-search index (https://duckdb.org/docs/extensions/full_text_search.html), so maybe it's an issue with index creation in general?
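For comparison, a full-text-search index is built with the `fts` extension's `create_fts_index` pragma; a sketch with hypothetical column names:

```sql
-- Sketch only: 'id' and 'tags' are illustrative columns, not taken from the report.
INSTALL fts;
LOAD fts;
PRAGMA create_fts_index('nodes', 'id', 'tags');
```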