
Getting frustrated due to infinite loops. #2633

Open
v3ss0n opened this issue Feb 15, 2024 · 21 comments
Labels
🧩 dependency resolution Resolution failures

Comments

v3ss0n commented Feb 15, 2024

This has happened several times when I am using PDM.

I wasted two days trying to fix it. I am going to give up on PDM soon at this rate.

Here are the dependencies:

dependencies = [
    "litestar[cli,jinja,jwt,pydantic,sqlalchemy,standard]",
    "pydantic-settings>=2.0.3",
    "asyncpg>=0.28.0",
    "python-dotenv>=1.0.0",
    "passlib[argon2]>=1.7.4",
    "litestar-saq>=0.1.16",
    "litestar-vite>=0.1.4",
    "litestar-aiosql>=0.1.1",
    "boto3>=1.34.25",
    "python-ffmpeg>=2.0.10",
    "pyav>=12.0.2",
    "boto3-stubs[essential]>=1.34.27",
    "s3fs",
    "awscli>=1.32.28",
    "faster_whisper>=0.10.0",
    "pydub>=0.25.1",
    "whisperx",
    "numpy>=1.26.3"
]

PDM gets stuck in an infinite resolution loop at s3transfer.

v3ss0n added the 🐛 bug label Feb 15, 2024
pawamoy (Contributor) commented Feb 15, 2024

It's not PDM's fault: I tried installing these dependencies using pip and it takes a long time too, especially with big packages such as torch (750 MB), nvidia-cublas (410 MB), nvidia-cudnn (730 MB), etc.

A more constructive approach would be to try to reduce this set of dependencies to highlight the problematic ones, as an example of a dependency tree that takes time to resolve. Such examples can then be used to find optimizations in the libraries responsible for resolving dependencies. I have added the dependency-resolution label to this issue so it can be revisited later.

Once a minimal set of dependencies has been identified, another constructive approach is to reach out to the maintainers of the problematic dependencies and kindly ask whether they can make their dependency specifications less strict, or more strict depending on the situation, to help resolvers find a solution more quickly.
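
As an illustration of that reduction step (a hypothetical minimal set, not one confirmed here), the AWS-related packages alone may already reproduce the slow resolution, since boto3, awscli, and s3fs all constrain botocore and s3transfer:

dependencies = [
    "boto3>=1.34.25",
    "awscli>=1.32.28",
    "s3fs",
]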

pawamoy added the 🧩 dependency resolution label and removed the 🐛 bug label Feb 15, 2024
v3ss0n (Author) commented Feb 15, 2024

Is there any way to do dependency resolution (via PDM) without downloading packages while trying to find a match?
The problem comes from s3fs <-> boto3 dependency mismatches.

v3ss0n changed the title from "Getting faustratied due to infinite resolutin loops." to "Getting frustrated due to infinite loops." Apr 26, 2024
v3ss0n (Author) commented Apr 27, 2024

This is happening again. I think there should be a limit on how many attempts are made before failing. I expected the system to be deployed properly while I slept, but when I woke up the deployment was broken. It should stop if it is taking too long.
Since the maintainers of some packages aren't even replying, your suggestion about informing them won't work.

So I think an option for how many retries a dependency resolution attempt gets would be good.

frostming (Collaborator) commented

> So I think an option for how many retries a dependency resolution attempt gets would be good.

Why do you think there isn't? https://arc.net/l/quote/gdeikbxb
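
For reference, that is the strategy.resolve_max_rounds setting; a sketch of lowering it from the command line, with an illustrative value:

pdm config strategy.resolve_max_rounds 2000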

v3ss0n (Author) commented Apr 27, 2024

Thanks, going to try strategy.resolve_max_rounds. I think the default should be around 1000 rounds; the current default is too high.

frostming (Collaborator) commented

> Thanks, going to try strategy.resolve_max_rounds. I think the default should be around 1000 rounds; the current default is too high.

That would be too small for projects with more than 10 big dependencies; a round is smaller than you'd think.

frostming (Collaborator) commented Apr 27, 2024

BTW, boto3 and the AWS family are tough ones for dependency resolution, since they restrict each other with rather strict version ranges. Try using more precise version ranges for these packages.
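
For example (a sketch only; the exact bounds are illustrative), keeping boto3 and awscli within a narrow window leaves the resolver far fewer candidate combinations of botocore and s3transfer to try:

dependencies = [
    "boto3>=1.34.25,<1.35",
    "awscli>=1.32.28,<1.33",
]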

v3ss0n (Author) commented Apr 27, 2024

I removed them, and now it is stuck at prompthub-py:

⠼ Resolving: new pin prompthub-py 4.0.0

It has been running quite a long time now.

Here are my packages; I removed all version restrictions too:

dependencies = [
    "litestar[cli,jinja,jwt,pydantic,sqlalchemy,standard]",
    "asyncpg",
    "passlib[argon2]",
    "litestar-saq",
    "litestar-vite",
    "litestar-aiosql",
    "s3fs",
    "pyav",
    "whisperx", 
    "numpy",
    "ollama-haystack",
    "jiwer",
    "ollama",
    "gliner",
    "farm-haystack[faiss-gpu,inference]",
]

It comes from farm-haystack.

Thank you very much for the prompt replies.

v3ss0n (Author) commented Apr 27, 2024

It still couldn't resolve. Is there any way to know what exactly is screwing this up?

frostming (Collaborator) commented Apr 27, 2024

> It still couldn't resolve. Is there any way to know what exactly is screwing this up?

Add -v to enable terminal logging, and you will probably spot some packages being resolved repeatedly, and why they are rejected.
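
Concretely, a sketch of that (PDM accepts -v on any command; -vv is more verbose still):

pdm lock -v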

v3ss0n (Author) commented Apr 27, 2024

Found and fixed the first problem: it was due to linters with locked versions.

And now it leads to another, this time a weird error:

pdm.termui: Candidate rejected: farm-haystack@1.25.5 because it introduces a new requirement pydantic<2 that conflicts with other requirements:
    pydantic (from [email protected])    
  pydantic>=2.0.1 (from [email protected])

but:

https://github.com/deepset-ai/haystack/blob/8d04e530da24b5e5c8c11af29829714eeea47db2/pyproject.toml#L169

It doesn't mention pydantic<2... why is it making that up?

v3ss0n (Author) commented Apr 27, 2024

Looks like a bug; this definitely is an infinite loop.

pawamoy (Contributor) commented Apr 27, 2024

> It doesn't mention pydantic<2

It does in version 1.25.5: https://github.com/deepset-ai/haystack/blob/a8bc7551aeb2036f87cb2a33743f3c2f71b9be52/pyproject.toml#L51.

It looks like an infinite loop, but it's probably actually trying to prune branches of a huge tree 😕 It's always the same libs that are problematic 😅

v3ss0n (Author) commented Apr 27, 2024

Ah, it is fixed in the latest master then.
But cases like this are common... is there any way we can solve this programmatically, or ease it in some manageable way?
We will never know when a package owner/maintainer will do this.

pawamoy (Contributor) commented Apr 27, 2024

You can override the resolver to force specific versions of specific packages.
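
A minimal sketch of such an override in pyproject.toml, using PDM's [tool.pdm.resolution.overrides] table (the range here is illustrative, forcing pydantic 2 despite farm-haystack's pydantic<2 pin):

[tool.pdm.resolution.overrides]
pydantic = ">=2.0.1"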

pawamoy (Contributor) commented Apr 27, 2024

You could also disallow pydantic-settings v2, since its versions probably match pydantic's: pydantic-settings<2. This way you regain compatibility with farm-haystack, etc.

v3ss0n (Author) commented Apr 27, 2024

Thanks a lot, going to try overriding.

dperetti commented
Getting an infinite loop on my first try of PDM 😢.

dependencies = [
    "django==3.2.15",
    "psycopg2<3.0.0,>=2.9.9",
    "gunicorn==20.0.4",
    "pillow<10.0.0,>=9.5.0",
    "sorl-thumbnail<13.0.0,>=12.9.0",
    "django-postgres-extra<3.0.0,>=2.0.8",
    "djangorestframework==3.12.2",
    "djangorestframework-jwt==1.11.0",
    "channels==4.0.0",
    "daphne==4.0.0",
    "dj-database-url==1.0.0",
    "graphene-django==2.15.0",
    "six<2.0.0,>=1.16.0",
    "django-allauth==0.63.2",
    "hashids==1.3.1",
    "django-sitetree==1.16.0",
    "django-crispy-forms==1.11.2",
    "sgqlc==12.1",
    "django-fontawesome-5==1.0.18",
    "django-fsm==2.7.1",
    "django-fsm-admin==1.2.4",
    "django-jsoneditor==0.1.6",
    "dramatiq==1.13.0",
    "django-dramatiq==0.11.0",
    "django-dramatiq-pg==1.3.2",
    "rules==2.2",
    "django-anymail[mailgun]==8.2",
    "sentry-sdk<2.0.0,>=1.19.1",
    "ipython<9.0.0,>=8.12.0",
    "whitenoise<7.0.0,>=6.4.0",
    "django-hijack==2.3.0",
    "django-hijack-admin==2.1.10",
    "pydantic==1.8.1",
    "django-basicauth==0.5.3",
    "cryptography<41.0.0,>=40.0.2",
    "environs==9.3.2",
    "semantic-version<3.0.0,>=2.10.0",
    "boto3==1.17.48",
    "tenacity<9.0.0,>=8.2.2",
    "django-debug-toolbar==3.7.0",
    "channels-postgres>=1.0.4",
]

Now:

pdm add strawberry-graphql-django

Infinite loop.

pawamoy (Contributor) commented Jun 11, 2024

Try relaxing some of your pins.
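
For instance (a sketch; which pins are safe to relax depends on your project, and these ranges are illustrative), turning hard == pins into bounded ranges gives the resolver room to satisfy strawberry-graphql-django's own requirements:

dependencies = [
    "django>=3.2.15,<4.0",
    "djangorestframework>=3.12.2,<4.0",
    "graphene-django>=2.15.0,<3.0",
]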

pawamoy (Contributor) commented Jun 11, 2024

frostming (Collaborator) commented

Sorry, I would consider this a well-designed bad case. You can find bad cases in every dependency resolver.

However, you can easily spot it by looking at the locking log via the -v option. So @pawamoy's suggestion is worth considering.
