Current behaviour

Currently, REANA administrators can use the reana-admin command-line tool to execute some DB operations, such as granting tokens:
$ kubectl exec -i -t deployment/reana-server -c rest-api -- flask reana-admin --help
Usage: flask reana-admin [OPTIONS] COMMAND [ARGS]...

  REANA administration commands.

Options:
  --help  Show this message and exit.

Commands:
  create-admin-user  Create default user.
  status-report      Get a status report of the REANA system.
  token-grant        Grant a token to the selected user.
  token-revoke       Revoke selected user's token.
  user-create        Create a new user.
  user-export        Export all users in current REANA cluster.
  user-import        Import users from file.
  user-list          List users according to the search criteria.
It would be good to enrich this reana-admin tool with other sometimes-needed admin operations.
For example, it may happen that, due to exhausted disk space or problematic nodes, there is a job that has no pod:
$ kubectl get jobs | grep a60921c3
reana-run-job-a60921c3-68fe-4d7d-927c-b9a34f33f591   0/1   2d22h   2d22h
$ kubectl get pods | grep a60921c3
In cases like that, it is necessary to find more information about the given job ID. Which workflow is it part of? Which user launched it? When?
The cluster administrators can use psql or ipython to connect to the database and investigate. However, including such a tool directly in reana-admin would help to debug these production problems much faster.
Proposed behaviour
Let us propose a new command for reana-admin such as:

$ kubectl exec -i -t deployment/reana-server -c rest-api -- flask reana-admin get-job-information reana-run-job-a60921c3-68fe-4d7d-927c-b9a34f33f591

i.e. the input would be the job name (or only the job ID).
Based on this input, the command would print all the necessary context information about the given job, such as:
Example output:
Job-ID: foo
Workflow-ID: bar
Workflow-Status: running
User-ID: baz
User-Name: john doe
...
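The lookup behind such a command would essentially be a join from the job's backend name to its workflow and owner. Below is a minimal, hedged sketch of that logic using an in-memory SQLite database with a simplified, hypothetical schema: the table names, column names, and the get_job_information helper are illustrative assumptions, not the actual reana-db schema or API.

```python
import sqlite3

# Hypothetical, simplified schema for illustration only; the real
# reana-db schema (PostgreSQL) differs in names and columns.
SCHEMA = """
CREATE TABLE user_    (id_ TEXT PRIMARY KEY, email TEXT);
CREATE TABLE workflow (id_ TEXT PRIMARY KEY, name TEXT, status TEXT,
                       owner_id TEXT REFERENCES user_(id_));
CREATE TABLE job      (id_ TEXT PRIMARY KEY, backend_job_id TEXT,
                       workflow_uuid TEXT REFERENCES workflow(id_),
                       created TEXT);
"""

def get_job_information(db, backend_job_id):
    """Resolve a Kubernetes job name back to its workflow and owner."""
    row = db.execute(
        """SELECT j.id_, w.id_, w.status, u.id_, u.email, j.created
             FROM job j
             JOIN workflow w ON j.workflow_uuid = w.id_
             JOIN user_ u    ON w.owner_id = u.id_
            WHERE j.backend_job_id = ?""",
        (backend_job_id,),
    ).fetchone()
    if row is None:
        return None
    keys = ("Job-ID", "Workflow-ID", "Workflow-Status",
            "User-ID", "User-Name", "Created")
    return dict(zip(keys, row))

if __name__ == "__main__":
    # Populate a toy database mirroring the example output above.
    db = sqlite3.connect(":memory:")
    db.executescript(SCHEMA)
    db.execute("INSERT INTO user_ VALUES ('baz', 'john doe')")
    db.execute("INSERT INTO workflow VALUES ('bar', 'demo', 'running', 'baz')")
    db.execute(
        "INSERT INTO job VALUES ('foo', "
        "'reana-run-job-a60921c3-68fe-4d7d-927c-b9a34f33f591', "
        "'bar', '2023-01-01T12:00:00')")
    info = get_job_information(
        db, "reana-run-job-a60921c3-68fe-4d7d-927c-b9a34f33f591")
    for key, value in info.items():
        print(f"{key}: {value}")
```

In the real command this query would of course go through the existing reana-db SQLAlchemy models rather than raw SQL; the sketch only shows the shape of the join and of the printed output.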
Notes
Pros: Having this in reana-admin makes it nicely coupled with the REANA 0.7, 0.8, 0.9 etc. master series. So when the REANA version generation changes, the command can also change, and it will always be up to date and correspond to the given cluster that the admin is administering.
Cons: One would have to make a new release of reana-server in order to add such a command or extend its functionality. If the command lived elsewhere, e.g. in scripts outside the cluster, one could just write it there and apply it to a running cluster without having to deploy anything.