Enable offline use of embedding model #87

Closed
InAnYan opened this issue Jul 29, 2024 · 12 comments

Comments

@InAnYan
Owner

InAnYan commented Jul 29, 2024

When JabRef with the AI PR is run for the first time, DJL will download files behind the scenes in order to work with embedding models (I guess the PyTorch backend and the embedding model itself).

JabRef already has issues with taking too long to open; now it will open another 5 minutes later 😢

I think this is a one-time job, but it should run in the background.
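
A minimal sketch of what "run it in the background" could look like, assuming the download is triggered by whatever first touches the DJL model (the `loadEmbeddingModel` helper below is hypothetical, not an existing JabRef method):

```java
import java.util.concurrent.CompletableFuture;

public class EmbeddingModelWarmup {

    // Hypothetical helper: stands in for whatever call currently makes DJL
    // fetch the PyTorch engine and the embedding model on first use.
    private static void loadEmbeddingModel() {
        // e.g. build the DJL Criteria and call loadModel() here
    }

    // Kick off the (potentially minutes-long) first-time download without
    // blocking JabRef's startup / the JavaFX application thread.
    public static CompletableFuture<Void> warmUpInBackground() {
        return CompletableFuture
                .runAsync(EmbeddingModelWarmup::loadEmbeddingModel)
                .exceptionally(ex -> {
                    // Offline machine or blocked proxy: log and keep AI features disabled.
                    System.err.println("Embedding model download failed: " + ex.getMessage());
                    return null;
                });
    }
}
```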

@InAnYan InAnYan added this to the Week 1 milestone Jul 29, 2024
@koppor
Collaborator

koppor commented Jul 30, 2024

The user has to be notified and has to agree to that. ("Agree" means that the user can make an informed decision. They should know: what is downloaded, from where, and why.)

Assume a secretive research company using JabRef. No outside communication is allowed; every outside connection that is not on the "allow-list" is immediately reported to the manager. Think of agreeing to a privacy policy as putting something onto that allow-list. Maybe that helps to understand why JabRef should not "just communicate" with the outside world.

Put differently: companies have corporate proxies, which JabRef does not use out of the box. Thus, when starting JabRef in such an environment, there will be network exceptions.
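
For illustration only, the standard JVM-level proxy settings such a deployment would have to provide (whether DJL's downloader actually honours these, or JabRef's own proxy preferences, would need to be checked; the host and port below are made up):

```java
// Sketch: standard JVM proxy configuration (part of the JDK networking layer,
// not a JabRef-specific setting). proxy.example.org:3128 is a hypothetical proxy.
public class CorporateProxySetup {

    public static void applyProxy() {
        // Option A: reuse the operating system's proxy configuration.
        // (Must be set before the first network call to take effect.)
        System.setProperty("java.net.useSystemProxies", "true");

        // Option B: point the JVM at an explicit corporate proxy.
        System.setProperty("https.proxyHost", "proxy.example.org");
        System.setProperty("https.proxyPort", "3128");
    }
}
```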

@InAnYan
Owner Author

InAnYan commented Jul 30, 2024

Fixed with some shit code. Or maybe it's not that bad

@InAnYan InAnYan closed this as completed Jul 30, 2024
@koppor
Collaborator

koppor commented Jul 30, 2024

Fixed with some shit code. Or maybe it's not that bad

You can link commits here. Otherwise, it is hard to follow.

Just pasting the commit ID should work for linking.

@koppor
Collaborator

koppor commented Aug 6, 2024

JabRef starts downloading immediately when the user hits "I agree".

See also #105 (comment).

@koppor
Collaborator

koppor commented Aug 6, 2024

I think I reopened this because I thought that I, as a user, was not informed about what is going on. I checked the help button:

[screenshot]

It led me to craft the comment JabRef/user-documentation#491 (comment).

I don't know about the preferences for local providers. Will there be a checkbox to enable chatting in general and then one for the local providers? If yes, then a download should only be made if external providers are enabled.

I think, with a split into local and remote AI "providers" and the link to the help, it is OK as is.

Advanced: Add text "This will download embedding models to your machine".

@koppor
Collaborator

koppor commented Aug 6, 2024

Remember, I wrote somewhere about corporate proxies, and about machines not connected to the internet. How can an offline installation of JabRef work? How can users get these files?

@koppor koppor modified the milestones: Week 1, Week 6 Aug 6, 2024
@koppor koppor changed the title from "DJL downloads some files before opening JabRef" to "Enable offline use" Aug 6, 2024
@koppor
Collaborator

koppor commented Aug 6, 2024

Since the issue provides some interesting discussion, I'll keep it and just move it to the week for offline LLMs.

@InAnYan
Owner Author

InAnYan commented Aug 7, 2024

DevCall: introduce online/offline mode

@koppor
Collaborator

koppor commented Aug 7, 2024

The folder that would probably need to be copied from one machine to another: C:\Users\vagrant\.djl.ai.
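
A sketch of how a copied cache could then be used, based on my reading of the DJL documentation (the `DJL_CACHE_DIR` and `ai.djl.offline` property names, and the placeholder path, should be verified against the DJL version JabRef actually ships):

```java
// Sketch only: DJL is documented to resolve its cache directory from
// DJL_CACHE_DIR (environment variable or system property), defaulting to
// ~/.djl.ai, and ai.djl.offline is supposed to prevent any download attempts.
public class OfflineDjlConfig {

    public static void configureForAirGappedMachine() {
        // Point DJL at a cache folder copied over from a machine with
        // internet access (path is a placeholder).
        System.setProperty("DJL_CACHE_DIR", "/opt/jabref/djl-cache");

        // Fail fast instead of attempting any network download.
        System.setProperty("ai.djl.offline", "true");
    }
}
```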

@koppor
Collaborator

koppor commented Aug 7, 2024

Setting to try: use Qubes OS as the operating system.

Scroll through https://www.qubes-os.org/screenshots/ to get an idea. Each color represents a separate VM.

@ThiloteE ThiloteE changed the title from "Enable offline use" to "Enable offline use of embedding model" Oct 1, 2024
@koppor
Collaborator

koppor commented Oct 14, 2024

At https://forum.image.sc/t/trouble-getting-gpu-to-work-with-instanseg-qupath/102042/26 I saw a nice download dialog. Maybe we can get this, too?

7b995ffd686eceb8c4ecaafefa7111d18c740023
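
A rough JavaFX sketch of such a dialog, with a simulated transfer standing in for the actual DJL download (all class and method names below are illustrative, not existing JabRef code; must be called on the JavaFX application thread):

```java
import javafx.concurrent.Task;
import javafx.scene.control.ButtonType;
import javafx.scene.control.Dialog;
import javafx.scene.control.Label;
import javafx.scene.control.ProgressBar;
import javafx.scene.layout.VBox;

public class ModelDownloadDialog {

    public static void showDownloadDialog() {
        Task<Void> download = new Task<>() {
            @Override
            protected Void call() throws Exception {
                // Placeholder for the real DJL fetch; report progress as bytes arrive.
                long total = 100;
                for (long done = 0; done <= total; done++) {
                    updateProgress(done, total);
                    Thread.sleep(50); // simulate network transfer
                }
                return null;
            }
        };

        ProgressBar bar = new ProgressBar();
        bar.progressProperty().bind(download.progressProperty());

        Dialog<Void> dialog = new Dialog<>();
        dialog.setTitle("Downloading embedding model");
        dialog.getDialogPane().setContent(
                new VBox(8, new Label("Fetching the embedding model (one-time download)"), bar));
        dialog.getDialogPane().getButtonTypes().add(ButtonType.CANCEL);

        download.setOnSucceeded(e -> dialog.close());
        new Thread(download, "embedding-model-download").start();
        dialog.showAndWait();
    }
}
```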

@InAnYan
Owner Author

InAnYan commented Nov 26, 2024

Closing, as moved here: JabRef#12240

@InAnYan InAnYan closed this as completed Nov 26, 2024