
feat: Add pixi project configuration #227

Open · wants to merge 3 commits into base: main
Conversation

@matthewfeickert (Member) commented Oct 22, 2024


  • Add pixi manifest (pixi.toml) and pixi lockfile (pixi.lock) to fully specify the project dependencies. This provides a multi-environment multi-platform (Linux, macOS) lockfile.
  • In addition to the default feature, add latest, cms-open-data-ttbar, and local pixi features and corresponding environments composed from the features. The cms-open-data-ttbar feature is designed to be compatible with the Coffea Base image which uses SemVer coffea (Coffea-casa build with coffea 0.7.21/dask 2022.05.0/HTCondor and cheese).
    • The cms-open-data-ttbar feature has an install-ipykernel task that installs a kernel such that the pixi environment can be used on a coffea-casa instance from a notebook.
    • The local features have the canonical start task, which launches a Jupyter Lab session inside the environment.

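The feature and environment layout described above might look roughly like the following in pixi.toml. This is an illustrative sketch, not the actual manifest in this PR: the feature and task names come from the description above, and the install-ipykernel command shown is a hypothetical example.

```toml
# Illustrative sketch of the manifest layout; the real pixi.toml pins versions.
[project]
name = "analysis-grand-challenge"
channels = ["conda-forge"]
platforms = ["linux-64", "osx-arm64"]

# Dependencies of the "default" feature, shared by every environment.
[dependencies]

# Feature tuned for the coffea-casa SemVer coffea image.
[feature.cms-open-data-ttbar.tasks]
install-ipykernel = "python -m ipykernel install --user --name cms-open-data-ttbar"

# Interactive Jupyter components live in their own feature.
[feature.local.tasks]
start = "jupyter lab"

# Environments are composed from the default feature plus the listed features.
[environments]
cms-open-data-ttbar = ["cms-open-data-ttbar"]
local = ["local"]
latest = ["latest", "local"]
```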
This will also be able to support the results of PR #225 after that PR is merged with just a few updates from pixi. 👍

Tip

Instructions for reviewers testing the PR:

  1. Navigate to https://coffea-opendata.casa/ and launch a "Coffea-casa build with coffea 0.7.21/dask 2022.05.0/HTCondor and cheese" instance
  2. Clone the repository and check out the PR branch
git clone https://github.com/matthewfeickert/analysis-grand-challenge.git analysis-grand-challenge-pr-227 && cd analysis-grand-challenge-pr-227 && git checkout origin/feat/add-pixi -b feat/add-pixi
  3. Install pixi if you haven't already
curl -fsSL https://pixi.sh/install.sh | bash
. ~/.bashrc  # make pixi active in shell
  4. Install the ipykernel for the cms-open-data-ttbar environment
pixi run --environment cms-open-data-ttbar install-ipykernel
  5. In the Coffea-casa Jupyter Lab browser, navigate to and open analyses/cms-open-data-ttbar/ttbar_analysis_pipeline.ipynb
  6. Change the kernel of the notebook to cms-open-data-ttbar
  7. Run the notebook as you desire

@matthewfeickert (Member Author)

@alexander-held My guess is that the list that @eguiraud determined in #144 (comment) has changed since then. This PR currently just implements the requirements described in #199 (comment), but I assume there will be more that we will need to test with.

@matthewfeickert matthewfeickert self-assigned this Oct 22, 2024
pixi.toml: review threads (outdated, resolved)
@matthewfeickert (Member Author) commented Oct 24, 2024

@alexander-held Can the analyses/cms-open-data-ttbar/requirements.txt be removed, or is that important to retain for some reason that won't use pixi?

@matthewfeickert (Member Author)

Okay, I'll want to rebase this into a single commit before merge, but to run analyses/cms-open-data-ttbar/ttbar_analysis_pipeline.ipynb on the CMS open data coffea-casa with the coffea v0.7 image, after cloning this branch you just need to run

pixi run install-ipykernel

and then you're good to go as that will also properly install the environment you need (making sure that you select the cms-open-data-ttbar kernel in the notebook).
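To confirm that the kernel actually registered, a quick sanity check (a hedged aside: this assumes jupyter is on your PATH in the coffea-casa terminal) is:

```shell
# List installed kernelspecs; a cms-open-data-ttbar entry should appear
# after the install-ipykernel task has run.
jupyter kernelspec list
```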

@alexander-held (Member)

@matthewfeickert yes let's remove the requirements.txt, I can't think of anything depending on it at the moment. If it causes problems down the line we can add something like that back again.

@matthewfeickert matthewfeickert force-pushed the feat/add-pixi branch 2 times, most recently from 94544e8 to 57df86b Compare October 24, 2024 12:01
pixi.toml: review threads (outdated, resolved)
@matthewfeickert (Member Author) commented Oct 25, 2024

@alexander-held @oshadura I've managed to get the environment to solve but I need help debugging some issues testing it:

  1. What ServiceX instance should I be targeting if I am running this on the UNL Open Data coffea-casa?
  2. When I run as is, even if I have USE_SERVICEX = False:
### GLOBAL CONFIGURATION
# input files per process, set to e.g. 10 (smaller number = faster)
N_FILES_MAX_PER_SAMPLE = 5

# enable Dask
USE_DASK = True

# enable ServiceX
USE_SERVICEX = False

### ML-INFERENCE SETTINGS

# enable ML inference
USE_INFERENCE = True

# enable inference using NVIDIA Triton server
USE_TRITON = False

during the "Execute the data delivery pipeline" cell of the notebook, things fail with the following

Traceback:
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[7], line 29
     27 t0 = time.monotonic()
     28 # processing
---> 29 all_histograms, metrics = run(
     30     fileset,
     31     treename,
     32     processor_instance=TtbarAnalysis(USE_INFERENCE, USE_TRITON)
     33 )
     34 exec_time = time.monotonic() - t0
     36 print(f"\nexecution took {exec_time:.2f} seconds")

File ~/analysis-grand-challenge-debug/.pixi/envs/cms-open-data-ttbar/lib/python3.9/site-packages/coffea/processor/executor.py:1700, in Runner.__call__(self, fileset, treename, processor_instance)
   1679 def __call__(
   1680     self,
   1681     fileset: Dict,
   1682     treename: str,
   1683     processor_instance: ProcessorABC,
   1684 ) -> Accumulatable:
   1685     """Run the processor_instance on a given fileset
   1686 
   1687     Parameters
   (...)
   1697             An instance of a class deriving from ProcessorABC
   1698     """
-> 1700     wrapped_out = self.run(fileset, processor_instance, treename)
   1701     if self.use_dataframes:
   1702         return wrapped_out  # not wrapped anymore

File ~/analysis-grand-challenge-debug/.pixi/envs/cms-open-data-ttbar/lib/python3.9/site-packages/coffea/processor/executor.py:1848, in Runner.run(self, fileset, processor_instance, treename)
   1843 closure = partial(
   1844     self.automatic_retries, self.retries, self.skipbadfiles, closure
   1845 )
   1847 executor = self.executor.copy(**exe_args)
-> 1848 wrapped_out, e = executor(chunks, closure, None)
   1849 if wrapped_out is None:
   1850     raise ValueError(
   1851         "No chunks returned results, verify ``processor`` instance structure.\n\
   1852         if you used skipbadfiles=True, it is possible all your files are bad."
   1853     )

File ~/analysis-grand-challenge-debug/.pixi/envs/cms-open-data-ttbar/lib/python3.9/site-packages/coffea/processor/executor.py:974, in DaskExecutor.__call__(self, items, function, accumulator)
    967         # FIXME: fancy widget doesn't appear, have to live with boring pbar
    968         progress(work, multi=True, notebook=False)
    969     return (
    970         accumulate(
    971             [
    972                 work.result()
    973                 if self.compression is None
--> 974                 else _decompress(work.result())
    975             ],
    976             accumulator,
    977         ),
    978         0,
    979     )
    980 except KilledWorker as ex:
    981     baditem = key_to_item[ex.task]

File ~/analysis-grand-challenge-debug/.pixi/envs/cms-open-data-ttbar/lib/python3.9/site-packages/distributed/client.py:322, in Future.result(self, timeout)
    320 self._verify_initialized()
    321 with shorten_traceback():
--> 322     return self.client.sync(self._result, callback_timeout=timeout)

File /opt/conda/lib/python3.9/site-packages/coffea/processor/executor.py:221, in __call__()
    220 def __call__(self, *args, **kwargs):
--> 221     out = self.function(*args, **kwargs)
    222     return _compress(out, self.level)

File /opt/conda/lib/python3.9/site-packages/coffea/processor/executor.py:1367, in automatic_retries()
   1361         break
   1362     if (
   1363         not skipbadfiles
   1364         or any("Auth failed" in str(c) for c in chain)
   1365         or retries == retry_count
   1366     ):
-> 1367         raise e
   1368     warnings.warn("Attempt %d of %d." % (retry_count + 1, retries + 1))
   1369 retry_count += 1

File /opt/conda/lib/python3.9/site-packages/coffea/processor/executor.py:1336, in automatic_retries()
   1334 while retry_count <= retries:
   1335     try:
-> 1336         return func(*args, **kwargs)
   1337     # catch xrootd errors and optionally skip
   1338     # or retry to read the file
   1339     except Exception as e:

File /opt/conda/lib/python3.9/site-packages/coffea/processor/executor.py:1572, in _work_function()
   1570     item, processor_instance = item
   1571 if not isinstance(processor_instance, ProcessorABC):
-> 1572     processor_instance = cloudpickle.loads(lz4f.decompress(processor_instance))
   1574 if format == "root":
   1575     filecontext = uproot.open(
   1576         {item.filename: None},
   1577         timeout=xrootdtimeout,
   (...)
   1580         else uproot.MultithreadedFileSource,
   1581     )

ModuleNotFoundError: No module named 'servicex'

which seems to indicate that merely having the servicex library installed in my environment causes other problems regardless of the steering shell variables (if I uninstall servicex and leave everything else in the environment the same, then I'm able to run without errors).
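One way to localize this kind of failure is to check module availability on both the client and the workers, since the traceback shows the import failing worker-side during unpickling. A minimal sketch follows; the Client.run part is commented out because it assumes a live distributed cluster, and the scheduler address is hypothetical:

```python
import importlib.util


def module_available(name: str) -> bool:
    """Return True if `name` resolves to an importable module in this interpreter."""
    return importlib.util.find_spec(name) is not None


# On the notebook client:
print(module_available("servicex"))

# On the Dask workers, where the ModuleNotFoundError above was raised,
# the same check can be shipped out with distributed's Client.run:
#   from distributed import Client
#   client = Client("<scheduler-address>")  # hypothetical address
#   print(client.run(module_available, "servicex"))
```

If the client reports True while the workers report False (or vice versa), the client and worker environments have drifted apart.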

@matthewfeickert (Member Author) commented Oct 28, 2024

A follow-up question: is there an analysis facility where the CMS ttbar open data workflow has been run with USE_SERVICEX=True and things worked? If so, I can try to diff the environment there against what I have, given that

$ git grep --name-only "USE_SERVICEX"
analyses/cms-open-data-ttbar/ttbar_analysis_pipeline.ipynb
analyses/cms-open-data-ttbar/ttbar_analysis_pipeline.py
analyses/cms-open-data-ttbar/utils/metrics.py
docs/facilityinstructions.rst

isn't particularly deep.

@alexander-held (Member)

Now that #225 is merged, we can target the v3 API of the ServiceX frontend.

What ServiceX instance should I be targeting if I am running this on the UNL Open Data coffea-casa?

Should be https://opendataaf-servicex.servicex.coffea-opendata.casa/.

As for the other question about importing, that's with your own environment? Not sure what causes this but perhaps we can update to v3 and then debug that one.

@oshadura (Member)

@matthewfeickert The ServiceX instance was upgraded during the last couple of days and now works again.
A config file is generated for you at the facility, so you should be able to run the current version of the notebook without any issues.

@oshadura (Member)

Are the 22k lines of changes coming from pixi.lock?

@oshadura (Member)

I am not sure why we need to remove requirements.txt. Andrea Sciaba, for example, was using it for his test setup, and maybe we should keep it for backward compatibility in such cases?

@sciaba commented Oct 29, 2024

I used requirements.txt to create from scratch a conda environment to run my I/O tests. I'm not familiar with pixi, but if it can be used for the exact same use case, it should be fine. Otherwise, keeping a requirements.txt might be handy.

@oshadura (Member) commented Oct 29, 2024

@sciaba I agree with you :) and I was just telling Alex about your use case.

@oshadura (Member)

@matthewfeickert can we keep both environments in sync? prefix-dev/pixi#1410

@matthewfeickert (Member Author) commented Oct 30, 2024

Now that #225 is merged, we can target the v3 API of the ServiceX frontend.

Okay, let me refactor this to use v3. That will be easier.

As for the other question about importing, that's with your own environment? Not sure what causes this but perhaps we can update to v3 and then debug that one.

@alexander-held Yes. I don't think that having a different version of the library will matter, but we'll see.

You have a config file generated for you at the facility, so you should just run the current version of a notebook without any issues.

Merci @oshadura! 🙏

22k lines of changes are coming from pixi.lock?

@oshadura Yes, lock files are long to begin with and this is a multi-platform and multi-environment lock file.

I would suggest not trying to keep the old requirements.txt around, as it is not something that humans will be able to keep updated manually (it encodes no information about the respective dependency requirements and constraints). Installing pixi is a pretty small ask IMO, and you can even do so on LXPLUS. Of course, if it is really needed we can keep it, but I would view it as a legacy file that we don't try to maintain.

I used requirements.txt to create from scratch a conda environment to run my I/O tests. I'm not familiar with pixi, but if it can be used for the exact same use case, it should be fine.

@sciaba Yes, pixi will just skip steps here and get you a conda-like environment immediately. Check out https://pixi.sh/ to get started and feel free to ping me if you have questions.
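For that conda-style use case the flow would be roughly the following (a sketch, assuming pixi is installed and you are in the repository root; the environment name is the one this PR defines):

```shell
# Create the locked environment, solved from pixi.lock for reproducibility.
pixi install --environment cms-open-data-ttbar
# Drop into an activated shell, analogous to `conda activate`.
pixi shell --environment cms-open-data-ttbar
```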

can we keep in sync both environments? prefix-dev/pixi#1410

The suggested idea in that issue goes the wrong direction (requirements.txt -> pixi.toml) for what we want. The pixi manifest and lock files are multi-platform and multi-environment and so cannot be generated from a single high-level environment file (like a requirements.txt or environment.yml).
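To illustrate why the mapping only goes one way: a pixi manifest carries platform and environment structure that a flat requirements.txt cannot express (an illustrative fragment, not the file in this PR):

```toml
[project]
# One manifest is solved for multiple platforms at once...
platforms = ["linux-64", "osx-arm64"]

# ...and for multiple environments composed from features, none of which
# survives flattening into a single requirements.txt.
[environments]
cms-open-data-ttbar = ["cms-open-data-ttbar"]
local = ["local"]
```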

@matthewfeickert (Member Author)

When I rebase my PR I will leave requirements.txt in place and let people remove it in a follow-up PR.

@oshadura (Member)

I am suggesting we remove the jupyterlab environment or make it optional. It is very confusing for users, especially power users who want to test a notebook or Python script at the facility, or in a particular environment where jupyterlab is not needed.

@matthewfeickert matthewfeickert force-pushed the feat/add-pixi branch 2 times, most recently from 5f00329 to 021c741 Compare October 31, 2024 14:36
@matthewfeickert (Member Author)

I am suggesting we remove the jupyterlab environment or make it optional. It is very confusing for users, especially power users who want to test a notebook or Python script at the facility, or in a particular environment where jupyterlab is not needed.

Okay, I can refactor this into another feature + environment. Why is this confusing for users though? I would think they should be unaware of its existence.

@oshadura (Member) commented Oct 31, 2024

I tried to test it, and pixi run automatically starts a jupyterlab session in the same terminal where I ran the command. If you have your own custom jupyterlab setup (e.g. at another facility) or you just want to run a .py script, this is not the result you expect.

@matthewfeickert (Member Author)

I tried to test it, and pixi run automatically starts a jupyterlab session in the same terminal where I ran the command. If you have your own custom jupyterlab setup (e.g. at another facility) or you just want to run a .py script, this is not the result you expect.

Oh yeah. You wouldn't use pixi run start unless you were running locally.

* Add pixi manifest (pixi.toml) and pixi lockfile (pixi.lock) to fully
  specify the project dependencies. This provides a multi-environment
  multi-platform (Linux, macOS) lockfile.
* In addition to the default feature, add 'latest', 'cms-open-data-ttbar', and
  'local' features and corresponding environments composed from the features.
  The 'cms-open-data-ttbar' feature is designed to be compatible with the
  Coffea Base image which uses SemVer coffea
  (Coffea-casa build with coffea 0.7.21/dask 2022.05.0/HTCondor and cheese).
   - The cms-open-data-ttbar feature has an 'install-ipykernel' task that
     installs a kernel such that the pixi environment can be used on a
     coffea-casa instance from a notebook.
   - The local features have the canonical 'start' task that will launch a
     jupyter lab session inside the environment.
@matthewfeickert (Member Author) commented Oct 31, 2024

@alexander-held @oshadura I've moved this out of draft and this is now ready for review. I've added notes for reviewers in the PR body, but all information should be clear from the additions to the README. If not, then I need to revise it.

(sorry, last force-with-lease pushes were fixing typos)

@@ -20,11 +20,41 @@ This directory is focused on running the CMS Open Data $t\bar{t}$ analysis throu
| utils/config.py | This is a general config file to handle different options for running the analysis. |
| utils/hepdata.py | Function to create tables for submission to the [HEP_DATA website](https://www.hepdata.net) (use `HEP_DATA = True`) |

#### Setting up the environment
@matthewfeickert (Member Author)

Later on in this README there are instructions to install hepdata_lib and hepdata-cli

https://github.com/iris-hep/analysis-grand-challenge/blob/main/analyses/cms-open-data-ttbar/README.md?plain=1#L59-L61

At the moment these are not in the pixi manifest and lock file, but they can be added if it would be useful.

@matthewfeickert (Member Author) left a comment

Some high level guiding notes if you're new to how pixi manifest files work. Feel free to ignore.


[tasks]

[dependencies]
@matthewfeickert (Member Author)

These are the dependencies of the "default" pixi "feature" (basically a composable environment chunk) that all other defined features extend.

pip = ">=24.3.1"
uv = ">=0.4.27"

[feature.latest.dependencies]
@matthewfeickert (Member Author)

The "latest" feature assumes no restrictions from the run facility and has no upper bounds. I think @alexander-held had suggested that having something like this would be useful.

[feature.latest.target.osx-arm64.pypi-dependencies]
servicex = ">=3.0.0"

[feature.cms-open-data-ttbar.tasks]
@matthewfeickert (Member Author) Oct 31, 2024

The "cms-open-data-ttbar" feature is designed to work with the SemVer coffea Coffea-casa deployment, and so has dependencies tuned to be compatible with the current deployment.

func-adl-servicex = ">=2.2, <3"
tcut-to-qastle = ">=0.7, <0.8"

[feature.local.dependencies]
@matthewfeickert (Member Author)

The "local" feature has the interactive Jupyter components factored out into it so that these are not installed as part of the environments that will use the features designed for coffea-casa.

[feature.local.tasks]
start = "jupyter lab"

[environments]
@matthewfeickert (Member Author)

Entries under [environments] are the actual environments that get installed. Each is composed from the "default" feature (unless explicitly excluded) plus all the features listed for that environment.
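As a concrete (illustrative) example of that composition rule, an entry like

```toml
[environments]
# composed as: default feature + "latest" feature + "local" feature
latest = ["latest", "local"]
```

installs the default dependencies plus everything the latest and local features add.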
