This documentation aims to provide a clear structure for users to easily understand and implement the Avatar SDK functionalities.
To create an instance of the `AvatarClient`, use the following code:
```js
const client = new AvatarClient({
  apiKey: 'YOUR_API_KEY', // Required: Your API key for authentication.
});
```
- `apiKey` (required): Your API key for authentication.
- `baseUrl` (optional): URL for the staging environment. Defaults to the production URL.
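If you need to point the client at the staging environment, pass `baseUrl` together with `apiKey`. A minimal sketch, using a placeholder URL (substitute the staging URL provided for your account):

```js
const stagingClient = new AvatarClient({
  apiKey: 'YOUR_API_KEY',
  baseUrl: 'https://staging.example.com', // Placeholder: replace with your staging URL.
});
```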
- `init`: Initializes the avatar client; this is required before using the other methods. It takes a `videoElement` and, optionally, an `audioElement` as a second argument.

  ```js
  client.init({ videoElement: videoElement }, audioElement);
  ```

  - `background`: A URL of the background (can be an image or a video) to be applied to the avatar (only works with avatars that have a green screen).
  - `avatarConfig`: Configures the position and the dimensions of the avatar inside the video.
  - `layers`: Applies layers to the video by passing an array of layer objects.

  ```js
  client.init({
    videoElement: videoElement,
    background: 'https://example.com/image.jpg',
    avatarConfig: {
      videoX: 60,
      videoY: 80,
      videoWidth: 254,
      videoHeight: 254,
    },
    layers: [
      { element: imageElement, x: 20, y: 20, height: 64, width: 64 },
    ],
  });
  ```
- `connect`: Connects to the room.

  ```js
  client.connect(); // Optional: pass the avatar id to connect to a specific avatar.
  ```
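  A sketch of the optional form, assuming the avatar id is passed as the first argument (the id value here is illustrative):

  ```js
  client.connect(2); // Connects to the avatar with id 2.
  ```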
- `say`: Makes the avatar say what you want, with various options:
  - `voiceName`: Specify the voice name.
  - `voiceStyle`: Specify the voice style.

    ```js
    client.say('Hello, World!', {
      voiceName: 'en-US-DavisNeural',
      voiceStyle: 'angry',
    });
    ```

  - `multilingualLang`: To use a language other than English, ensure the `voiceName` is a multilingual voice and specify the language.

    ```js
    client.say('Hello, World!', {
      voiceName: 'en-US-AndrewMultilingualNeural',
      multilingualLang: 'es-ES',
    });
    ```
  - `prosody`: Configure pitch, contour, range, rate, and volume for the text-to-speech output. Refer to the Azure documentation for possible values.

    ```js
    client.say('Hello, World!', {
      voiceName: 'en-US-AndrewMultilingualNeural',
      prosody: {
        contour: '(0%, 20Hz) (10%,-2st) (40%, 10Hz)',
        pitch: 'high',
        range: '50%',
        rate: 'x-fast',
        volume: 'loud',
      },
    });
    ```
  - `ssmlVoiceConfig`: Allows for comprehensive SSML `voice` element configuration, including math, pauses, and silence.

    ```js
    client.say('', {
      multilingualLang: 'en-US',
      ssmlVoiceConfig:
        "<voice name='en-US-AndrewMultilingualNeural'><mstts:express-as style='angry'><mstts:viseme type='FacialExpression'>Hello, World!</mstts:viseme></mstts:express-as></voice>",
    });
    ```
- `stop`: Interrupts the avatar while it is speaking.

  ```js
  client.stop();
  ```
- `switchAvatar`: Switches to a different avatar available to your API key.

  ```js
  client.switchAvatar(2);
  ```
- `disconnect`: Disconnects the avatar.
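  A minimal sketch, assuming `disconnect` takes no arguments:

  ```js
  client.disconnect(); // Tear down the connection when you are done with the avatar.
  ```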
You can create a more immersive experience by manipulating the video through the `VideoPlayer` instance; the section below explains its methods.
To interact with the instance, you need to initialize the `AvatarClient` first.
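A minimal sketch of that setup, reusing the `init` and `connect` calls shown above (if `connect` is asynchronous in your version of the SDK, wait for it to complete first):

```js
// Initialize the client and connect before touching the video player.
client.init({ videoElement: videoElement }, audioElement);
client.connect();

// The VideoPlayer instance is then available on the client.
client.videoPlayer.setAvatarPosition(60, 60);
```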
- `setAvatarDimensions`: Changes the width and height of the avatar inside the video. You can set either dimension to `auto` to make it fill the available space.

  ```js
  client.videoPlayer.setAvatarDimensions(
    254, // width
    254, // height
  );
  ```
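  A sketch of the `auto` behaviour, assuming the value is passed as the string `'auto'` in place of a number:

  ```js
  client.videoPlayer.setAvatarDimensions(
    'auto', // width fills the available space (assumed string form)
    254,    // height
  );
  ```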
- `setAvatarPosition`: Changes the position of the avatar inside the video.

  ```js
  client.videoPlayer.setAvatarPosition(
    60, // X
    60, // Y
  );
  ```
- `setBackground`: Updates the background of the video. It works only with avatars that have a green screen and supports both images and videos.

  ```js
  client.videoPlayer.setBackground('https://example/video.mp4');
  ```
- `removeBackground`: Removes the current background.

  ```js
  client.videoPlayer.removeBackground();
  ```
- `addLayer`: Adds a layer above the video; it can be an `<img>`, `<video>`, or `<canvas>` element.

  ```js
  client.videoPlayer.addLayer({
    element: imageElement,
    x: 60,
    y: 60,
    height: 64,
    width: 64,
  });
  ```
- `updateLayer`: Updates a single layer by passing its `index`. This is useful if you want to move layers around without using CSS.

  ```js
  client.videoPlayer.updateLayer(0, {
    element: imageElement,
    x: 60,
    y: 60,
    height: 64,
    width: 64,
  });
  ```
- `removeLayer`: Removes a single layer by passing its `index`.

  ```js
  client.videoPlayer.removeLayer(0);
  ```
- `layers`: Gets the array of active layers. You can use it to find the index needed to update or remove a layer, as in the sketch below.
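  A minimal sketch, assuming each entry in `layers` is the object that was passed to `addLayer` (so its `element` can be compared):

  ```js
  // Find the index of the layer that wraps imageElement (assumed shape).
  const index = client.videoPlayer.layers.findIndex(
    (layer) => layer.element === imageElement
  );

  if (index !== -1) {
    // Move the matched layer, then remove it.
    client.videoPlayer.updateLayer(index, {
      element: imageElement,
      x: 100,
      y: 100,
      height: 64,
      width: 64,
    });
    client.videoPlayer.removeLayer(index);
  }
  ```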