How would you make an idiomatic idle loop? #89
Comments
For now I made a workaround by overriding the session variable in a loop with the instance returned by idle. I'm going to try to refactor to make the code cleaner and see if I can override the struct instance with the …
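The workaround described above can be sketched with placeholder types (`Session` and `Handle` here are stand-ins, not the real async-imap API): `idle()` consumes the session, and finishing the idle round hands it back, so the loop re-binds the same variable each iteration.

```rust
// Placeholder session type; in async-imap this would be the real Session.
struct Session {
    rounds: u32,
}

// Placeholder idle handle that owns the session while idling.
struct Handle {
    session: Session,
}

impl Session {
    // Consumes the session, moving ownership into the handle.
    fn idle(self) -> Handle {
        Handle { session: self }
    }
}

impl Handle {
    // Pretend we waited for a server event, then give the session back.
    fn done(mut self) -> Session {
        self.session.rounds += 1;
        self.session
    }
}

fn main() {
    let mut session = Session { rounds: 0 };
    // The idle loop: the session moves into the handle and back out,
    // overriding the `session` binding each time.
    for _ in 0..3 {
        let handle = session.idle();
        // ... wait for a new-mail notification here ...
        session = handle.done();
    }
    println!("{}", session.rounds);
}
```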
I'd really prefer if the IDLE handle would only take a mutable borrow, with an associated lifetime that mutably borrows the session for that long. I store my sessions in a mutex for easy reuse and also box them behind a trait for connection-type agnosticism, and this makes it nearly impossible to use IDLE.
In Delta Chat the session is stored inside an … This allows …
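The comment above is truncated, so this is only an assumption: one pattern that fits the described approach is keeping the session in an `Option` field, so ownership can be `take()`n out for the duration of an idle round and put back afterwards. `Session` and `Handle` below are hypothetical stand-ins, not the real async-imap types.

```rust
// Placeholder types standing in for the real crate's Session/Handle.
struct Session;
struct Handle(Session);

impl Session {
    // Consumes the session for the duration of the idle round.
    fn idle(self) -> Handle {
        Handle(self)
    }
}

impl Handle {
    // Returns ownership of the session once idle is done.
    fn done(self) -> Session {
        self.0
    }
}

// The session lives in an Option so it can be moved out temporarily.
struct Imap {
    session: Option<Session>,
}

impl Imap {
    fn idle_once(&mut self) {
        // Move the session out of the struct...
        if let Some(session) = self.session.take() {
            let handle = session.idle();
            // ... wait for server events here ...
            // ...and put it back when idle is done.
            self.session = Some(handle.done());
        }
    }
}
```

The `Option` is what makes this compile: `&mut self` alone cannot move the session field out, but `Option::take` swaps in a `None` and yields the owned value.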
I suppose that works :) I guess not using a mutex is for the better anyway. I just mostly didn't expect to suddenly need an owned version of it, as everything else works fine without one.
An email client will most of the time use an IDLE loop to get new-message/deleted-message events in real time.
What would be the best way to do this with async-imap as the session is borrowed by idle?
I have stored a session in a struct:
That session is initialized with a `try_new` method. Then I have a method that performs an idle wait and returns the number of fetched messages.
cf this file
The caller is responsible to loop like:
I understand that the session moves into the idle handler, which returns it when idle is done.
How could I use the session instance with idle?
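To make the ownership question concrete, here is a hypothetical sketch of the caller side: an idle round takes the session by value and hands it back together with the number of fetched messages, so the caller re-binds `session` each iteration. `Session` and `idle_round` are illustrative placeholders, not the actual API from the linked file.

```rust
// Placeholder session holding fetched messages.
struct Session {
    inbox: Vec<&'static str>,
}

// Pretend one idle round woke up and fetched one new message.
// Takes the session by value and returns it along with a count,
// mirroring how idle consumes and then releases the session.
fn idle_round(mut session: Session) -> (Session, usize) {
    session.inbox.push("new message");
    (session, 1)
}

fn main() {
    let mut session = Session { inbox: Vec::new() };
    let mut total = 0;
    // The caller is responsible for looping and re-binding `session`.
    for _ in 0..2 {
        let (returned, fetched) = idle_round(session);
        session = returned;
        total += fetched;
    }
    println!("total fetched: {total}");
}
```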
(@hpk42 I will push a doc PR with my findings, I just want to reach a working example)
Thank you for your answers.
What I tried: keeping only the `Client` and recreating a session each time, but that means re-authenticating at each idle-loop step, causing unnecessary network overhead.