ToDos #62
Some more low-priority ToDo improvements:
Also possible, but needs some sketching on a napkin first:

```json
{
  "parameter": "exposure",
  "plot_structure": "string visualization",
  "plot_style": "something"
}
```

...and likewise for any future quantity we want to plot in that way. It's not as trivial as the plot structure → plot style flow, but it should be possible to make the string-detector layout a "plot structure" sort of function, and then for each detector call something that plots with the colors etc. whatever needs to be plotted, be it "status", exposure, or some other performance metric.
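A minimal sketch of how such a config could drive the plotting. All names here (`plot_string_visualization`, `COLOR_FUNCS`, the toy coloring rules) are assumptions for illustration, not the actual code:

```python
def plot_string_visualization(detectors, color_of):
    """Lay out the string-detector grid once, then color each detector
    with a per-parameter callback (toy stand-in: returns a dict)."""
    return {det: color_of(det) for det in detectors}

# hypothetical per-parameter coloring callbacks ("status", "exposure", ...)
COLOR_FUNCS = {
    "status": lambda det: "green" if det.endswith("A") else "red",  # toy rule
    "exposure": lambda det: f"shade({det})",
}

# hypothetical registry mapping "plot_structure" strings to layout functions
PLOT_STRUCTURES = {"string visualization": plot_string_visualization}

def plot(config, detectors):
    """Dispatch: look up the structure and the parameter coloring from config."""
    structure = PLOT_STRUCTURES[config["plot_structure"]]
    color_of = COLOR_FUNCS[config["parameter"]]
    return structure(detectors, color_of)

config = {
    "parameter": "status",
    "plot_structure": "string visualization",
    "plot_style": "something",
}
print(plot(config, ["V05267B", "V05612A"]))
```

The point of the registry is that adding a future parameter only means registering a new coloring callback; the layout function is reused untouched.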
We finally arrived at the point discussed ever since it was asked to plot cal+phy: loading large data. That means setting up the data loader has to happen first, separately, with the result saved somewhere. We need to first sort out current PRs and ongoing things and update main with the latest stable stuff; then I'll make that change, push to main, and then we work with the new structure for other updates...
As a workaround we could implement a new piece of code that separately studies each run included in the BIG time range chosen by the user, and then attaches each inspected run at a final stage. Then we call all the remaining plotting functions. For example, if you want to plot all p04 runs, there would be a first stage in which you inspect only r000 and save it, then you do it again with r001 and r002. After this you concatenate the produced dfs and switch to plotting/status. The same can be done within a HUGE run: we can first inspect the size of the data we are trying to load; if it is higher than a given threshold, we cut it into multiple sub-runs, inspect them separately, concatenate them at the very end, and proceed with plotting. There's already the "append" feature that appends new data to an already existing dataframe, but it is not automated for a large time interval or a heavy run. Maybe we can exploit that.
This would work for the Dashboard: it must be run, the plot data is saved, then concatenated for the plot - and it wouldn't be that much data because it's already trimmed through event selection etc. For a HUGE run, that would mean re-shuffling parts of the code anyway: we'd have to move figure creation out somewhere so that both chunks are still plotted on the same figure (e.g. create a Subsystem for each chunk, then plot). But then, that's really equivalent to the next() function (create one subsystem, load data for each chunk, then plot), and instead of moving the figure creation out, the Subsystem and get_data() (and analysis data...) would move in. Definitely need a napkin drawing :')
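The inspect-each-chunk-then-concatenate workaround above can be sketched with pandas. The loader here (`load_chunk`) is a toy stand-in for the real per-run inspection, not the actual `get_data()`:

```python
import pandas as pd

def inspect_chunks(chunks, load_chunk):
    """Inspect each sub-run (or sub-interval) separately, then
    concatenate the resulting dataframes at the very end -
    the 'append'-style workaround, but automated over a list of chunks."""
    dfs = [load_chunk(c) for c in chunks]
    return pd.concat(dfs, ignore_index=True)

# toy loader standing in for the real per-run inspection
def load_chunk(run):
    return pd.DataFrame({"run": [run] * 2, "value": [1.0, 2.0]})

# e.g. all p04 runs: inspect r000, r001, r002 one at a time
df = inspect_chunks(["r000", "r001", "r002"], load_chunk)
print(len(df))
```

For a HUGE single run, `chunks` would instead be time sub-intervals produced by cutting the run once its estimated size exceeds a threshold; the concatenation step is identical.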
High-level priority ToDos
- implement spms plots: separate by barrel / fibers -> `plotting.plot_per_barrel_and_position` already works; need to implement `plotting.plot_per_fiber_and_barrel` and `plot_styles.plot_heatmap` (Michele WIP)
- fix when a detector is ON but NOT processable (right now the code crashes) - this happens for V05267B and V05612B starting from the end of p07 (never happened before, because every time a detector was NOT processable it was also OFF)
- add test functions to improve the Codecov coverage
- par1 vs par2: fix AUX entries
- fix saving of FWHM values (here, values are constant for a selected time window, which means that if we inspect a new time window later on we have to 1) update previous values, by evaluating the FWHM over the sum of both windows [previous one + new one], and 2) save these as new values, so it's wrong to merge the old dataframe with the new one - CHANGE IT!)
- new plot ideas #92
- quality cut flags: we could consider adding one (or more) columns to the big dataframe, saving it, and only later applying the cuts. This means that for a user plot production you get just the entries where the QC is True, while for the Dashboard we keep the full, complete object to work with. We can also append more QC columns to the same dataframe and, through the Dashboard, later select which QC to look at (i.e. is baseline? is 0nu2b? ...). QUALITY CUT COLUMNS ARE NEEDED TO EVALUATE THE RATE OF ACCEPTANCE OR REJECTION
- resampled values: handle gaps and the last resampled point in a better way
- `"cuts": "K lines"`: when plotting K lines events, you don't get anything (at the moment the code runs, but it returns an empty plot) (Sofia)
- ... `plotting.py` (and not in `plot_styles.py`) (Sofia)
- Jason: ...
Medium-level priority ToDos
- ... `N/1`, while when we load geds it is `N/178` (`analysis_data.channel_mean()`) #72
- ... vs ch ... function (Sofia)

Low-level priority ToDos
- ... `seaborn` library used for maps (indeed, backgrounds change only if status maps are created)