
Interface to set inference options #33

Open
gmertes opened this issue Nov 21, 2024 · 1 comment
Assignees: gmertes
Labels: enhancement, models

gmertes commented Nov 21, 2024

Is your feature request related to a problem? Please describe.

We are starting to have some options in the model that are set at inference time, such as the mapper chunking. Right now we pass them via environment variables.
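For illustration, the current pattern looks roughly like this (the environment variable name here is hypothetical):

import os

# Hypothetical variable name, for illustration only.
num_chunks = int(os.environ.get("ANEMOI_INFERENCE_NUM_CHUNKS", "1"))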

Describe the solution you'd like

An interface to pass a dictionary of inference options that the model can read from.

So at inference time we could do something like:

import torch

inference_options = {
    "num_chunks": 2,
    "do_something": True,
}

# Load the checkpoint, hand the options to the model, then predict.
model = torch.load('checkpoint.ckpt')
model.set_inference_options(inference_options)

model.predict_step(...)
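A minimal sketch of what the model side could look like, assuming the options are simply stored on the model instance (the class and attribute names are made up here, not the actual anemoi-models implementation):

class AnemoiModel:  # placeholder name for the actual model class
    def __init__(self):
        self._inference_options = {}

    def set_inference_options(self, options):
        # Store a copy so later mutation of the caller's dict has no effect.
        self._inference_options = dict(options)

    def predict_step(self, batch):
        # Read options with defaults so the model still runs when nothing was set.
        num_chunks = self._inference_options.get("num_chunks", 1)
        ...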

The downside is that this only allows runtime options, not options that are needed at initialisation (we don't have any of those right now, but we might in the future?). To support the latter we could do:

import torch

from anemoi.models.inference_options import set_inference_options

# Options are set globally before the checkpoint is loaded.
set_inference_options({...})
model = torch.load('checkpoint.ckpt')
...

And in anemoi-models we could then do something like this, so the options are accessible from everywhere in the module:

from anemoi.models.inference_options import inference_options

# Look up a single option by key.
num_chunks = inference_options("num_chunks")
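A minimal sketch of what anemoi/models/inference_options.py could contain, assuming a module-level dictionary behind the two helpers used above:

_OPTIONS = {}


def set_inference_options(options):
    """Replace the global inference options."""
    _OPTIONS.clear()
    _OPTIONS.update(options)


def inference_options(key, default=None):
    """Look up a single inference option by key."""
    return _OPTIONS.get(key, default)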

Describe alternatives you've considered

No response

Additional context

No response

Organisation

No response

@gmertes added the enhancement label Nov 21, 2024
@gmertes gmertes self-assigned this Nov 21, 2024
@JesperDramsch JesperDramsch transferred this issue from ecmwf/anemoi-models Dec 19, 2024
floriankrb commented
I like the idea of being able to set options for inference.

I would avoid a global set_inference_options though, and go toward this instead:

Either set the options after loading the model:

from anemoi.models.load_model import load_model

model = load_model('checkpoint.ckpt')
model.set_inference_options(inference_options)

Or before/during loading:

from anemoi.models.load_model import load_model

model = load_model('checkpoint.ckpt', inference_options=inference_options)

And we may want to more clearly separate the functions load_inference_model and load_training_model, and then directly use load_inference_model('checkpoint.ckpt', num_chunks=2, do_something=True)
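For illustration, a sketch of what such a load_inference_model could look like, assuming it collects keyword arguments into the options dict (the function does not exist yet):

import torch


def load_inference_model(path, **inference_options):
    # Load the checkpoint, then hand the collected options to the model.
    model = torch.load(path)
    model.set_inference_options(inference_options)
    return model


model = load_inference_model('checkpoint.ckpt', num_chunks=2, do_something=True)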
