tickects/PREOPS-4646: Support addition of opsim data to an archive for use by schedview-prenight #60

Merged
merged 16 commits on Jan 31, 2024
31 changes: 31 additions & 0 deletions docs/installation.rst
@@ -63,3 +63,34 @@ Building the documentation requires the installation of ``documenteer[guide]``:
$ package-docs build

The root of the local documentation will then be ``docs/_build/html/index.html``.

Using the schedview S3 bucket
-----------------------------

If a user has appropriate credentials, ``schedview`` can read data from an
``S3`` bucket. To have the ``prenight`` dashboard read data from an ``S3``
bucket, a few steps are needed to prepare the environment in which the
dashboard will be run.

First, credentials with access to the endpoint and bucket in which the
archive resides need to be added to the ``.lsst/aws-credentials.ini``
file in the account that will be running the dashboard.

For the pre-night ``S3`` bucket at the USDF, the endpoint is
``https://s3dfrgw.slac.stanford.edu/`` and the bucket name is
``rubin-scheduler-prenight``. Access to this bucket must be
coordinated with the USDF administrators and the Rubin Observatory
survey scheduling team.
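
For example, a credentials section named ``prenight`` for this endpoint and
bucket might look like the following (a sketch assuming the standard AWS
shared-credentials file format; the key values are placeholders to be
replaced with the actual credentials):

::

    [prenight]
    aws_access_key_id = <your access key id>
    aws_secret_access_key = <your secret access key>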

For example, if the USDF ``S3`` bucket is to be used and the section with
the ``aws_access_key_id`` and ``aws_secret_access_key`` granting access to
this endpoint and bucket is named ``prenight``, then the following environment
variables need to be set in the process running the dashboard:

::

$ export S3_ENDPOINT_URL='https://s3dfrgw.slac.stanford.edu/'
$ export AWS_PROFILE=prenight

The first of these (``S3_ENDPOINT_URL``) might have been set up automatically
for you if you are running on the USDF.
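
To check that the credentials and endpoint work before starting the dashboard,
the bucket contents can be listed directly. This is a sketch assuming the AWS
command line client is installed (``schedview`` itself does not require it);
``AWS_SHARED_CREDENTIALS_FILE`` is set because the client reads
``~/.aws/credentials`` by default rather than ``~/.lsst/aws-credentials.ini``:

::

    $ export AWS_SHARED_CREDENTIALS_FILE=~/.lsst/aws-credentials.ini
    $ aws s3 ls 's3://rubin-scheduler-prenight/opsim/' \
        --endpoint-url 'https://s3dfrgw.slac.stanford.edu/' \
        --profile prenight
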
27 changes: 25 additions & 2 deletions docs/usage.rst
@@ -25,7 +25,7 @@ Activate the conda environment and start the app:

The app will then print the URL at which it can be accessed.

By default, the app will allow the user to select ``opsim`` databases, pickles of
scheduler instances, and rewards data from ``/sdf/group/rubin/web_data/sim-data/schedview``
(if it is being run at the USDF) or the samples directory (elsewhere).
The data directory from which a user can select files can be set on startup:
@@ -34,7 +34,30 @@ The data directory from which a user can select files can be set on startup:

$ prenight --data_dir /path/to/data/files

Alternately, ``prenight`` can be set to look at an archive of simulation
output in an S3 bucket:

::

$ export S3_ENDPOINT_URL='https://s3dfrgw.slac.stanford.edu/'
$ export AWS_PROFILE=prenight_aws_profile
$ prenight --resource_uri='s3://rubin-scheduler-prenight/opsim/' --data_from_archive

where ``prenight_aws_profile`` should be replaced by whatever section of
the ``~/.lsst/aws-credentials.ini`` file has the credentials needed for
access to the ``rubin-scheduler-prenight`` bucket.

The ``resource-uri`` can also be set to a local directory tree with the same
layout as the above S3 bucket, in which case filesystem access to that
directory tree is needed, but the environment variables above are not. For example:

::

$ prenight --resource-uri='file:///where/my/data/is/' --data_from_archive

Note that the trailing ``/`` in the ``resource-uri`` value is required.

Finally, the user can be allowed to enter arbitrary URLs for these files.
(Note that this is not secure, because it will allow the user to upload
malicious pickles. So, it should only be done when public access to the
dashboard is not possible.) Such a dashboard can be started thus: