
Predictions

Markus Fleischhacker edited this page Nov 14, 2020 · 3 revisions

Torch Serve setup

Bounding Box Editor allows you to connect your own Torch Serve inference endpoint and perform bounding box predictions for images loaded in the application. This enables, for example, the following general workflow, in which PyTorch models assist you in your annotation work:

  1. Load the images you want to annotate in Bounding Box Editor.
  2. Manually create annotations for a number of images.
  3. Once you have created a sufficient number of annotations, export them in the format appropriate for your PyTorch model.
  4. Train the model using the manually created ground-truth annotations.
  5. Serve the model using Torch Serve.
  6. Connect Bounding Box Editor to your inference endpoint.
  7. Perform predictions on the remaining (not manually annotated) images and use them as hints to speed up the annotation procedure.

To set up a local Torch Serve server, please refer to the instructions in the Torch Serve GitHub repository.
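As a rough command sketch (the model, file, and handler names below are placeholders, not part of Bounding Box Editor or any specific model), packaging and serving a trained detection model locally might look like this:

```shell
# Package the trained model into a .mar archive (names are illustrative).
torch-model-archiver --model-name my_detector \
    --version 1.0 \
    --serialized-file my_detector.pt \
    --handler object_detector \
    --export-path model_store

# Start Torch Serve with the archived model; by default the inference API
# listens on port 8080 and the management API on port 8081.
torchserve --start --model-store model_store --models my_detector=my_detector.mar
```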

Configuration

Configuring the Torch Serve server connection and prediction settings in Bounding Box Editor is done using the Inference category in the Settings window:


(Screenshot: Inference Settings)

  • First, provide the addresses and ports of your Torch Serve inference and management servers. With a local Torch Serve setup, the inference API listens on localhost port 8080 and the management API on port 8081 by default; as a convenience, the respective fields in the Inference category pane are pre-filled with these defaults. After specifying the server addresses, click the Select button in the Model section. This fetches and (if possible) lists all models registered with the management server, from which you can select the model to use for predictions.
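Under the hood, the model list corresponds to the Torch Serve management API's list-models call (GET /models on the management port). A minimal sketch of parsing such a response, assuming the default local address (the sample response body below is illustrative, not captured from a real server):

```python
import json
from urllib.request import urlopen

MANAGEMENT_URL = "http://localhost:8081"  # default Torch Serve management address

def list_models(raw_json: str) -> list:
    """Extract model names from a Torch Serve 'GET /models' response body."""
    return [entry["modelName"] for entry in json.loads(raw_json)["models"]]

# Live call (requires a running Torch Serve instance):
#   with urlopen(MANAGEMENT_URL + "/models") as resp:
#       print(list_models(resp.read().decode()))

# Offline demonstration with an illustrative response body:
sample = '{"models": [{"modelName": "my_detector", "modelUrl": "my_detector.mar"}]}'
print(list_models(sample))  # ['my_detector']
```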

  • Furthermore, there are prediction-related settings, such as a minimum required score for predictions to be included in Bounding Box Editor and whether predicted categories should be merged case-sensitively or case-insensitively.
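The category-merging option can be thought of as normalizing predicted labels before comparing them. A sketch of the idea (the function name is hypothetical, not taken from the application's code):

```python
def merge_categories(labels, case_sensitive=False):
    """Collapse duplicate category labels. With case_sensitive=False,
    'Car' and 'car' count as the same category (first spelling wins)."""
    merged = {}
    for label in labels:
        key = label if case_sensitive else label.lower()
        merged.setdefault(key, label)
    return list(merged.values())

print(merge_categories(["Car", "car", "Person"]))                       # ['Car', 'Person']
print(merge_categories(["Car", "car", "Person"], case_sensitive=True))  # ['Car', 'car', 'Person']
```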

  • Finally, you can set up an image resizing preprocessing step applied to images before the prediction is performed.
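A typical resizing step of this kind scales an image to fit within a maximum size while preserving its aspect ratio. The exact behavior in Bounding Box Editor may differ; this is only a sketch of the arithmetic:

```python
def fit_within(width, height, max_w, max_h):
    """Scale (width, height) down to fit inside (max_w, max_h),
    preserving the aspect ratio; never upscales."""
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)

print(fit_within(1920, 1080, 600, 600))  # (600, 338)
print(fit_within(100, 100, 600, 600))    # (100, 100) -- already fits, unchanged
```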

Once you have successfully applied your chosen settings, a Prediction button appears on the right side of the image toolbar (after an image folder has been loaded). Clicking this button preprocesses the currently loaded image and streams it to the inference server, where the prediction is performed using the chosen model. If the prediction succeeds, the predicted bounding boxes are displayed in Bounding Box Editor.
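The prediction step corresponds to a POST of the image bytes to the Torch Serve inference API's /predictions/&lt;model&gt; endpoint. A sketch of parsing such a response and applying the minimum-score setting, assuming the response format of Torch Serve's default object-detection handler (the sample body below is illustrative, not captured from a real server):

```python
import json

MIN_SCORE = 0.5  # corresponds to the minimum-score setting in the Inference pane

def filter_predictions(raw_json, min_score):
    """Keep only predictions whose score meets the threshold.
    Each entry maps one category name to [x1, y1, x2, y2] and has a 'score' key."""
    boxes = []
    for entry in json.loads(raw_json):
        if entry["score"] < min_score:
            continue
        label, coords = next((k, v) for k, v in entry.items() if k != "score")
        boxes.append((label, coords, entry["score"]))
    return boxes

# Illustrative response body:
sample = ('[{"person": [10.0, 20.0, 110.0, 220.0], "score": 0.97},'
          ' {"dog": [5.0, 5.0, 50.0, 60.0], "score": 0.31}]')
print(filter_predictions(sample, MIN_SCORE))
# [('person', [10.0, 20.0, 110.0, 220.0], 0.97)]
```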