
indicators for loading #454

Open
katestange opened this issue Aug 30, 2024 · 4 comments
Labels
enhancement New feature or request ui Something having to do with the user interface

Comments

@katestange
Member

On my machine right now on ui2, I'm getting a delay when I load numberscope specimens, long enough that I would like to see a "loading" icon. This happens with, for example, the chaos gallery item: the screen stays white for ~1 sec as the specimen loads (no header or anything).

@katestange katestange added enhancement New feature or request ui Something having to do with the user interface labels Aug 30, 2024
@gwhitney
Collaborator

Yeah, I am seeing a noticeable all-white period; not sure if it is a full second, but it is annoying. Not sure where all the time is going. We should see how many bytes are actually being downloaded at this point; it could conceivably be the raw download time. I'm not quite sure how to profile where the time to page visibility is going -- anyone know of tools/methods for this? All-white during the delay is a bad sign -- it likely means we don't have "control" yet of the browser environment, so we may have fewer options for how to speed things up. We may need to make a very lightweight "dummy" view that can hopefully load almost instantly, and then transfer from that to the "real" view once we know it is loaded. Sounds like a bit of a mess, but I agree we need to slog through it. Is this something you want on the alpha list or the beta list? (Personally I think it should definitely be significantly improved before beta, but I don't really care about it for alpha.)

@katestange
Member Author

beta seems reasonable

@gwhitney
Collaborator

gwhitney commented Sep 3, 2024

Well it turns out that #225, which I am in progress on for PR #420, has been marvelous for shaking out bugs/issues. Some of them are intimately related to this issue: some of the new tests of "weird" sequences vs various browsers just produced white images, essentially the analogue of this problem. They got locked up for long enough during the testing process that they never even drew the UI, let alone any visualization.

Here is a simple example I have boiled some of these things down to (run it at your peril): http://localhost:5173/?name=Formula&viz=Histogram&firstIndex=1020000&terms=2000&seq=Formula

This visualization wants the factorization of two thousand integers somewhat larger than a million (that range was chosen because our in-browser "simpleFactor" starts to sometimes give up at 1,018,001 = 1,009²). The cache fill loop for this specimen first factorizes all natural numbers less than a million before getting to the meat of the situation -- a very expensive operation in-browser (I haven't timed it except subjectively, but it is certainly >30 seconds). If that fill loop runs all in one go, the UI is frozen during that time, and if it happens before the UI is drawn, the visitor is staring at what seems to be the white screen of death (WSOD ;-).

Being somewhat new to web programming, I thought the full answer was to make the cache fill loop async. I thought that just let it run "in the background" and complete whenever the browser "got a chance to do it," thereby preserving the UI. But with such changes, the UI was still freezing up, even still sometimes with a WSOD.

It turns out my mental model was very naive. Just marking the function async tells the browser "you don't have to do this right now," but the browser still has to schedule the cache fill loop at some point soon, and once it actually initiates the cache fill loop, that loop runs to completion without anything else being able to happen. Hence just making a long-running function async merely reschedules the freeze-up (somewhat unpredictably); it does not avert the freeze-up.
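
A minimal demonstration of that point (plain Node-style JavaScript, not frontscope code): the body of an async function runs synchronously to completion unless it explicitly awaits something, so the caller is blocked exactly as with an ordinary function.

```javascript
// Record the order in which things happen around an async call.
const order = [];

async function fillCache() {
  order.push("fill start");
  let total = 0;
  // Stand-in for a long-running cache fill loop.
  for (let i = 0; i < 1_000_000; i++) total += i;
  order.push("fill end");
  return total;
}

order.push("before call");
fillCache(); // returns a Promise, but the whole loop above already ran
order.push("after call");

console.log(order.join(" | "));
// → before call | fill start | fill end | after call
```

The "after call" entry only lands once the entire loop has finished; the `async` keyword bought us nothing here because there is no `await` inside the loop to yield control.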

Therefore, to make progress on #225 and hence #420, we will need to do something more serious in terms of preventing monolithic computations from locking the browser. Here are some options that come to mind; I would like to discuss at today's meeting. These are not mutually exclusive, and it may well be that no one of them is a complete solution for this sort of thing in general or the initialization delay noted in this issue.

  • Sequence Bounds: Work on "Uniformize handling of sequence bounds?" #411 (incidentally, currently not even scheduled for beta) so that the index bounds are baked into the sequence object itself and it won't bother with the first million factorizations (they won't even be terms of the sequence).

  • Cooperation: Strategically implement "cooperative multitasking" into certain key parts of the frontscope code. That is to say, certain potentially long-running functions would be re-written to just do a chunk of the work, then schedule the next chunk for a little bit into the future and relinquish control back to the browser, and rinse and repeat.

  • Web Worker: Design a "cache-filling web worker" that will run in another thread, accept messages like "compute such and such entries of such and such sequence", and post messages like "here's some clump of the values you wanted for such and such sequence".
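
To make the Cooperation option above concrete, here is a rough sketch (names, chunk size, and the per-term computation are all placeholders, not frontscope code): each chunk does a bounded amount of work, then yields to the event loop via a zero-delay timer so rendering and input handling can proceed.

```javascript
// Stand-in for an expensive per-term computation (e.g. factoring).
function computeTerm(i) {
  return i * i;
}

// Fill a cache cooperatively: work in chunks, yielding between them.
async function fillCacheCooperatively(first, count, chunkSize = 250) {
  const cache = [];
  for (let i = 0; i < count; i += chunkSize) {
    const end = Math.min(i + chunkSize, count);
    for (let j = i; j < end; j++) cache.push(computeTerm(first + j));
    // Relinquish control to the browser; the UI can update here.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return cache;
}

fillCacheCooperatively(0, 1000).then((cache) => {
  console.log(cache.length, cache[999]); // → 1000 998001
});
```

The trade-off is that the total computation gets a bit slower (each yield costs a timer round-trip) in exchange for the page staying responsive throughout.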

There of course may be other strategies I haven't contemplated and I am very open to ideas.

None of these is a quick band-aid. In terms of implementation effort to make some progress, they probably fall in the following order: Cooperation < Sequence Bounds < Web Worker. Web Worker would make Cooperation on cache filling unnecessary, but we might still need Cooperation or another Web Worker in a different compute-bound part of frontscope. Hence, I am very unclear on these options' order in terms of likely effectiveness/generality against future compute-binding of frontscope.
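
For the Web Worker option, the message protocol might look something like the following sketch. The handler below is what the worker script would register via `self.onmessage`; all names and message fields here are illustrative, not frontscope's actual API.

```javascript
// Worker-side request handler: given a range of a sequence, compute the
// values and return them in a reply message. The squaring is a stand-in
// for real sequence/factorization work.
function handleCacheRequest({ seqId, start, count }) {
  const values = [];
  for (let i = start; i < start + count; i++) {
    values.push(i * i);
  }
  return { seqId, start, values };
}

// In the browser, the two sides might be wired up roughly like this
// (cacheWorker.js and mergeIntoCache are hypothetical):
//   const worker = new Worker(new URL("cacheWorker.js", import.meta.url));
//   worker.postMessage({ seqId: "Formula", start: 1020000, count: 2000 });
//   worker.onmessage = (e) => mergeIntoCache(e.data);
```

Because the computation happens on another thread, the main thread stays free to draw the UI no matter how long a single request takes.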

@gwhitney
Collaborator

gwhitney commented Sep 3, 2024

Some notes on the Web Worker alternative that I wanted to capture while they were fresh in my mind. To be useful, a sequence-caching web worker will need to persist across multiple instances of opening the gallery, using the Sequence Switcher, and/or using the Visualizer Switcher. Currently, in all three cases the transition to a new specimen involves a bona fide page reload (as opposed to simply changing a param, which does not -- it reinitializes the visualization and displays a new URL in the browser bar, but no actual page load occurs).

There are three types of Web Workers: Dedicated Workers, Shared Workers, and Service Workers.

Of these, Dedicated Workers are the simplest to set up, but they cannot persist across a page reload. Therefore, using a Dedicated Worker would necessitate implementing a new strategy for transitioning to an entirely new specimen. That's certainly plausible; we could even implement a Specimen Cache that would keep Specimens not currently displaying waiting in the wings, so that the second time you switched to Wait For It, say, it should be ready to display essentially instantaneously. From the "point of view" of that Specimen object, the operation would be essentially equivalent to a resize (just that the "going away" part of resizing and the "coming back" might be quite separated in time). In thinking about discontinuing the use of page reloading in frontscope, note that it's currently only used by SpecimenCard to switch to a new Specimen.

On the other hand, the visitor might do a manual reload. Do we want to try to prevent re-factoring a million integers on a manual reload, if the browser just did it a minute before? If so, or if we don't move away from programmatic reloading within frontscope, then we need a sort of worker that can persist across page reloads.

A Shared Worker is just a little more complicated to set up and has the ability to persist across reloads, but only if there is a continuity of pages/tabs/iframes using that shared worker. So in our case, we would need to add an iframe or a separate tab/window that does not reload at the same time as the "main" view. (One person on stack overflow describes opening a transitional popup during the switchover and then closing it, a strategy we could contemplate.)

Finally, Service Workers are the most complicated to set up, but are very long-lived -- essentially as long as the browser itself is open. So for those who don't generally close their browser program, say Firefox, they could close all numberscope windows and then, a couple of days later, open one again and immediately have the first million numbers factored. Another caveat is that the interface to Service Workers is primarily URL-driven, so we would have to change our architecture a bit to imagine that there is a (fictional) "sequence server" that gives out values and factorizations of sequences as responses to URL fetches -- like the backscope API for OEIS sequences, but covering all sequences. I am guessing these URL fetches are still a bit slow, since they involve interprocess communication, but noticeably faster than going out over the actual network. So we would presumably still need some level of caching in the main app -- I do not think we would want to access the Service Worker for each element of a sequence as needed, but rather to grab entire cache blocks of a sequence.
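
As a rough illustration of what that URL-driven interface might look like (the path and parameter names here are invented, not an existing frontscope or backscope scheme):

```javascript
// Build the URL a hypothetical in-browser "sequence server" would serve;
// the Service Worker would intercept fetches matching this path.
function sequenceCacheUrl(seqId, start, count) {
  return `/seq-cache/${encodeURIComponent(seqId)}?start=${start}&count=${count}`;
}

console.log(sequenceCacheUrl("Formula", 1020000, 2000));
// → /seq-cache/Formula?start=1020000&count=2000

// Inside the service worker script, interception would look roughly like
// (computeBlockResponse is hypothetical):
//   self.addEventListener("fetch", (event) => {
//     const url = new URL(event.request.url);
//     if (url.pathname.startsWith("/seq-cache/")) {
//       event.respondWith(computeBlockResponse(url));
//     }
//   });
```

Fetching whole blocks like this, rather than one term at a time, is what would keep the interprocess-communication overhead per term small.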
