
Alpha Release Reference

This is a temporary API reference for the next-generation ml5 library, version 0.20.0-alpha.4. The project is under active development and not yet stable, and the final API will likely differ from the current one. Please feel free to reach out to us if you have any questions.

You can access the library by including the following script tag in your HTML file:

<script src="https://unpkg.com/[email protected]/dist/ml5.min.js"></script>

ml5.bodySegmentation

Description

BodySegmentation divides an input image into people and background.

Methods

ml5.bodySegmentation()

This method is used to initialize the bodySegmentation object.

const bodySegmentation = ml5.bodySegmentation(?modelName, ?options, ?callback);

Parameters:

  • modelName: OPTIONAL. A string specifying which underlying model to use.

  • options: OPTIONAL. An object to change the default configuration of the model.

  • callback(bodySegmentation, error): OPTIONAL. A function to run once the model has been loaded. Alternatively, call ml5.bodySegmentation() within the p5 preload function.

Returns:
The bodySegmentation object.

bodySegmentation.detectStart()

This method repeatedly outputs segmentation masks on the given media through a callback function.

bodySegmentation.detectStart(media, callback);

Parameters:

  • media: An HTML or p5.js image, video, or canvas element to run the segmentation on.

  • callback(output, error): A callback function to handle the output of bodySegmentation.detectStart(), typically used to do something with the segmented image. See below for the output passed into the callback function:

    {
      mask: {}, // A p5 Image object; can be passed directly into the p5 image() function
      maskImageData: {} // An ImageData object
    }

bodySegmentation.detectStop()

This method can be called after a call to bodySegmentation.detectStart to stop the repeating segmentation.

bodySegmentation.detectStop();

bodySegmentation.detect()

This method asynchronously outputs a single segmentation mask on the given media when called.

bodySegmentation.detect(media, ?callback);

Parameters:

  • media: An HTML or p5.js image, video, or canvas element to run the segmentation on.
  • callback(output, error): OPTIONAL. A callback function to handle the output of the estimation, see output example above.

Returns:
A promise that resolves to the segmentation output.
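For instance, a minimal sketch of the promise form with async/await (assuming a bodySegmentation object and an img were created as described above):

async function segmentOnce() {
  // Run a single segmentation and wait for the result
  const result = await bodySegmentation.detect(img);
  // result.mask is a p5 Image, per the output format above
  image(result.mask, 0, 0);
}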

Examples

TODO (link p5 web editor examples once uploaded)
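In the meantime, here is a minimal p5.js sketch that segments the webcam feed using the default model and options (the canvas size and variable names are illustrative):

let bodySegmentation;
let video;
let segmentation;

function preload() {
  bodySegmentation = ml5.bodySegmentation();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Start segmenting the webcam feed; results arrive in gotResults
  bodySegmentation.detectStart(video, gotResults);
}

function draw() {
  image(video, 0, 0);
  if (segmentation) {
    // The mask is a p5 Image, so it can be drawn directly
    image(segmentation.mask, 0, 0);
  }
}

function gotResults(result) {
  segmentation = result;
}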


ml5.bodyPose

Description

BodyPose can be used for real-time human pose estimation.

Methods

ml5.bodyPose()

This method is used to initialize the bodyPose object.

const bodyPose = ml5.bodyPose(?modelName, ?options, ?callback);

Parameters:

  • modelName: OPTIONAL. A string specifying which model to use, "BlazePose" or "MoveNet". MoveNet is an ultra-fast and accurate model that detects 17 keypoints of the body. BlazePose detects 33 keypoints and also provides 3D tracking.

  • options: OPTIONAL. An object to change the default configuration of the model. The default and available options for the MoveNet model are:

    {
      modelType: "MULTIPOSE_LIGHTNING", // "MULTIPOSE_LIGHTNING", "SINGLEPOSE_LIGHTNING", or "SINGLEPOSE_THUNDER"
      enableSmoothing: true,
      minPoseScore: 0.25,
      multiPoseMaxDimension: 256,
      enableTracking: true,
      trackerType: "boundingBox", // "keypoint" or "boundingBox"
      trackerConfig: {},
      modelUrl: undefined,
    }

    More info on options for the MoveNet model.

    The default and available options for the BlazePose model are:

    {
      runtime: "mediapipe", // "mediapipe" or "tfjs"
      enableSmoothing: true,
      modelType: "full", // "lite", "full", or "heavy"
      detectorModelUrl: undefined, // defaults to the tf.hub model
      landmarkModelUrl: undefined, // defaults to the tf.hub model
      solutionPath: "https://cdn.jsdelivr.net/npm/@mediapipe/pose",
    }

More info on options for the BlazePose model with the MediaPipe runtime.

More info on options for the BlazePose model with the tfjs runtime.

  • callback(bodyPose, error): OPTIONAL. A function to run once the model has been loaded. Alternatively, call ml5.bodyPose() within the p5 preload function.

Returns:
The bodyPose object.
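For example, a sketch might initialize BlazePose with custom options in preload (the option values here are illustrative):

let bodyPose;

function preload() {
  // Load BlazePose with the tfjs runtime and the lite model
  bodyPose = ml5.bodyPose("BlazePose", { runtime: "tfjs", modelType: "lite" }, modelLoaded);
}

function modelLoaded() {
  console.log("bodyPose model is ready");
}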

bodyPose.detectStart()

This method repeatedly outputs pose estimations on the given media through a callback function.

bodyPose.detectStart(media, callback);

Parameters:

  • media: An HTML or p5.js image, video, or canvas element to run the estimation on.

  • callback(output, error): A callback function to handle the output of the estimation. See below for an example output passed into the callback function:

    [
      {
        box: { width, height, xMax, xMin, yMax, yMin },
        id: 1,
        keypoints: [{ x, y, score, name }, ...],
        left_ankle: { x, y, confidence },
        left_ear: { x, y, confidence },
        left_elbow: { x, y, confidence },
        ...
        score: 0.28,
      },
      ...
    ];

    See the keypoint diagram for the MoveNet model or the keypoint diagram for the BlazePose model for the position of each keypoint.

bodyPose.detectStop()

This method can be called after a call to bodyPose.detectStart to stop the repeating pose estimation.

bodyPose.detectStop();
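For instance, a sketch could toggle detection on mouse press (detecting is an illustrative flag; video and gotPoses stand for the arguments originally passed to detectStart):

let detecting = true;

function mousePressed() {
  if (detecting) {
    bodyPose.detectStop();
  } else {
    bodyPose.detectStart(video, gotPoses);
  }
  detecting = !detecting;
}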

bodyPose.detect()

This method asynchronously outputs a single pose estimation on the given media when called.

bodyPose.detect(media, ?callback);

Parameters:

  • media: An HTML or p5.js image, video, or canvas element to run the estimation on.
  • callback(output, error): OPTIONAL. A callback function to handle the output of the estimation, see output example above.

Returns:
A promise that resolves to the estimation output.

Examples

TODO (link p5 web editor examples once uploaded)
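In the meantime, a minimal p5.js sketch that draws keypoints on the webcam feed (assuming the MoveNet defaults described above; variable names and the score threshold are illustrative):

let bodyPose;
let video;
let poses = [];

function preload() {
  bodyPose = ml5.bodyPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  bodyPose.detectStart(video, gotPoses);
}

function draw() {
  image(video, 0, 0);
  for (let pose of poses) {
    for (let keypoint of pose.keypoints) {
      // Only draw keypoints the model is reasonably confident about
      if (keypoint.score > 0.1) {
        circle(keypoint.x, keypoint.y, 10);
      }
    }
  }
}

function gotPoses(results) {
  poses = results;
}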


ml5.faceMesh

Description

FaceMesh can be used for real-time face landmark estimation.

Methods

ml5.faceMesh()

This method is used to initialize the faceMesh object.

const faceMesh = ml5.faceMesh(?options, ?callback);

Parameters:

  • options: OPTIONAL. An object to change the default configuration of the model. The default and available options are:

    {
        maxFaces: 1,
        refineLandmarks: false,
        flipHorizontal: false
    }

    More info on options here.

  • callback(faceMesh, error): OPTIONAL. A function to run once the model has been loaded. Alternatively, call ml5.faceMesh() within the p5 preload function.

Returns:
The faceMesh object.

faceMesh.detectStart()

This method repeatedly outputs face estimations on the given media through a callback function.

faceMesh.detectStart(media, callback);

Parameters:

  • media: An HTML or p5.js image, video, or canvas element to run the estimation on.

  • callback(output, error): A callback function to handle the output of the estimation. See below for an example output passed into the callback function:

    [
      {
        box: { width, height, xMax, xMin, yMax, yMin },
        keypoints: [{x, y, z, name}, ... ],
        faceOval: [{x, y, z}, ...],
        leftEye: [{x, y, z}, ...],
        ...
      },
      ...
    ]

    Here is a diagram for the position of each keypoint (download and zoom in to see the index numbers).

faceMesh.detectStop()

This method can be called after a call to faceMesh.detectStart to stop the repeating face estimation.

faceMesh.detectStop();

faceMesh.detect()

This method asynchronously outputs a single face estimation on the given media when called.

faceMesh.detect(media, ?callback);

Parameters:

  • media: An HTML or p5.js image, video, or canvas element to run the estimation on.
  • callback(output, error): OPTIONAL. A callback function to handle the output of the estimation, see output example above.

Returns:
A promise that resolves to the estimation output.

Examples

TODO (link p5 web editor examples once uploaded)
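In the meantime, a minimal p5.js sketch that draws the face keypoints on the webcam feed (the canvas size and variable names are illustrative):

let faceMesh;
let video;
let faces = [];

function preload() {
  faceMesh = ml5.faceMesh({ maxFaces: 1 });
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  faceMesh.detectStart(video, gotFaces);
}

function draw() {
  image(video, 0, 0);
  for (let face of faces) {
    for (let keypoint of face.keypoints) {
      circle(keypoint.x, keypoint.y, 3);
    }
  }
}

function gotFaces(results) {
  faces = results;
}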


ml5.handPose

Description

HandPose can be used for real-time hand pose estimation.

Methods

ml5.handPose()

This method is used to initialize the handPose object.

const handPose = ml5.handPose(?options, ?callback);

Parameters:

  • options: OPTIONAL. An object to change the default configuration of the model. The default and available options are:

    {
      maxHands: 2,
      runtime: "mediapipe",
      modelType: "full",
      solutionPath: "https://cdn.jsdelivr.net/npm/@mediapipe/hands",
      detectorModelUrl: undefined, // defaults to the tf.hub model
      landmarkModelUrl: undefined, // defaults to the tf.hub model
    }

    More info on options for mediapipe runtime.

    More info on options for tfjs runtime.

  • callback(handPose, error): OPTIONAL. A function to run once the model has been loaded. Alternatively, call ml5.handPose() within the p5 preload function.

Returns:
The handPose object.

handPose.detectStart()

This method repeatedly outputs hand estimations on the given media through a callback function.

handPose.detectStart(media, callback);

Parameters:

  • media: An HTML or p5.js image, video, or canvas element to run the estimation on.

  • callback(output, error): A callback function to handle the output of the estimation. See below for an example output passed into the callback function:

    [
      {
        score,
        handedness,
        keypoints: [{ x, y, score, name }, ...],
        keypoints3D: [{ x, y, z, score, name }, ...],
        index_finger_dip: { x, y, x3D, y3D, z3D },
        index_finger_mcp: { x, y, x3D, y3D, z3D },
        ...
      }
      ...
    ]

    See the keypoint diagram for the position of each keypoint.

handPose.detectStop()

This method can be called after a call to handPose.detectStart to stop the repeating hand estimation.

handPose.detectStop();

handPose.detect()

This method asynchronously outputs a single hand estimation on the given media when called.

handPose.detect(media, ?callback);

Parameters:

  • media: An HTML or p5.js image, video, or canvas element to run the estimation on.
  • callback(output, error): OPTIONAL. A callback function to handle the output of the estimation, see output example above.

Returns:
A promise that resolves to the estimation output.

Examples

TODO (link p5 web editor examples once uploaded)
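In the meantime, a minimal p5.js sketch that draws hand keypoints on the webcam feed using the default options (variable names are illustrative):

let handPose;
let video;
let hands = [];

function preload() {
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  handPose.detectStart(video, gotHands);
}

function draw() {
  image(video, 0, 0);
  for (let hand of hands) {
    for (let keypoint of hand.keypoints) {
      circle(keypoint.x, keypoint.y, 8);
    }
  }
}

function gotHands(results) {
  hands = results;
}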


ml5.imageClassifier

Description

ImageClassifier can be used to label images.

Methods

ml5.imageClassifier()

This method is used to initialize the imageClassifier object.

const imageClassifier = ml5.imageClassifier(?modelName, ?options, ?callback);

Parameters:

  • modelName: OPTIONAL. Name of the underlying model to use. Currently available models are MobileNet, Darknet, Darknet-tiny, and Doodlenet. It is also possible to use a custom Teachable Machine model by providing its model.json URL. Defaults to MobileNet.

  • options: OPTIONAL. An object to change the default configuration of the model.

  • callback(imageClassifier, error): OPTIONAL. A function to run once the model has been loaded. Alternatively, call ml5.imageClassifier() within the p5 preload function.

Returns:
The imageClassifier object.
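For example, a custom Teachable Machine model could be loaded by its model.json URL (the URL below is a placeholder):

let classifier;

// Placeholder URL; replace with the URL of your own Teachable Machine model
const modelURL = "https://teachablemachine.withgoogle.com/models/YOUR_MODEL/model.json";

function preload() {
  classifier = ml5.imageClassifier(modelURL);
}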

imageClassifier.classifyStart()

This method repeatedly outputs classification labels on the given media through a callback function.

imageClassifier.classifyStart(media, ?kNumber, callback);

Parameters:

  • media: An HTML or p5.js image, video, or canvas element to run the classification on.

  • kNumber: OPTIONAL. The number of labels returned by the image classification.

  • callback(output, error): A callback function to handle the output of the classification. See below for an example output passed into the callback function:

    [
      {
        label: '...',
        confidence: ...
      }
      ...
    ]

imageClassifier.classifyStop()

This method can be called after a call to imageClassifier.classifyStart to stop the repeating classifications.

imageClassifier.classifyStop();

imageClassifier.classify()

This method asynchronously outputs a single image classification on the given media when called.

imageClassifier.classify(media, ?kNumber, ?callback);

Parameters:

  • media: An HTML or p5.js image, video, or canvas element to run the classification on.
  • kNumber: OPTIONAL. The number of labels returned by the image classification.
  • callback(output, error): OPTIONAL. A callback function to handle the output of the classification.

Returns:
A promise that resolves to the classification output.

Examples

TODO (link p5 web editor examples once uploaded)
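In the meantime, a minimal p5.js sketch that classifies a single image with the default MobileNet model (the image file name is illustrative):

let classifier;
let img;
let label = "waiting...";

function preload() {
  classifier = ml5.imageClassifier(); // MobileNet by default
  img = loadImage("bird.jpg"); // illustrative image file
}

function setup() {
  createCanvas(400, 400);
  classifier.classify(img, gotResult);
}

function draw() {
  image(img, 0, 0, width, height);
  fill(255);
  // Show the top label and its confidence
  text(label, 10, height - 10);
}

function gotResult(results) {
  label = results[0].label + " (" + nf(results[0].confidence, 0, 2) + ")";
}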


ml5.neuralNetwork

See the old reference, as well as Nature of Code Chapter 10: Neural Networks and Chapter 11: Neuroevolution.


ml5.sentiment

See the old reference.


More models and features coming soon!