diff --git a/.github/styles/Vocab/HMSVocab/accept.txt b/.github/styles/Vocab/HMSVocab/accept.txt index c8dc3d1ce9..d82bba3642 100644 --- a/.github/styles/Vocab/HMSVocab/accept.txt +++ b/.github/styles/Vocab/HMSVocab/accept.txt @@ -139,3 +139,4 @@ presign_duration single_file_per_layer asset_types audio_only +hls diff --git a/common/faq.md b/common/faq.md index c536f93a7e..ae8247200a 100644 --- a/common/faq.md +++ b/common/faq.md @@ -176,10 +176,6 @@ You can continue using the existing routes (room_id/role) or set up your own rou ## Recording -#### What is the difference between the Beam recording vs. SFU recording? - -Beam recording is the browser recording, built to give users a participant-first recording experience. SFU recording is a composite recording which gets created after recording each of the individual peers and merging it. Please check this [guide](/javascript/v2/foundation/recordings) for more information. - #### After a live stream ends, how long does it take (for both Beam recording and SFU) to show up in our s3 bucket? Beam recording should be available within 15-20 minutes after the call ends. SFU recording will take ~1.5 times the call duration, after the call ends. For example, if the call duration is 30 minutes, then SFU recording will be available in 45 minutes. diff --git a/common/recordings.md b/common/recordings.md deleted file mode 100644 index b960401b84..0000000000 --- a/common/recordings.md +++ /dev/null @@ -1,198 +0,0 @@ -Recordings are an important part of the live video stack as they convert live, ephemeral content into a long-term asset. But the use of this asset varies from business to business depending on their respective use case. - -For example, one of the common use cases for recording is for archival purposes versus, for some, its content to be publicized. - -Based on your end goal, you can choose one of the recording types and its implementation. You can understand some key differences using the comparison table below. 
- -## Recording types - -- [Recording types](#recording-types) - - [Quick Comparison](#quick-comparison) - - [Browser Recording \[Recommended\]](#browser-recording-recommended) - - [SFU Recording \[Advanced\]](#sfu-recording-advanced) - - [Recordings for Live Streaming Use-cases](#recordings-for-live-streaming-use-cases) - - [Video-on-demand Recording](#video-on-demand-recording) - - [Multiresolution Recording](#multiresolution-recording) -- [Configure storage](#configure-storage) - - [Configure recording storage with 100ms Dashboard](#configure-recording-storage-with-100ms-dashboard) - - [Configure recording storage with 100ms API](#configure-recording-storage-with-100ms-api) -- [Storage path for recordings](#storage-path-for-recordings) -- [Chat Recording](#chat-recording) - -### Quick Comparison - -| Recording Features | Browser Recording [Recommended] | SFU Recording [Advanced] | -| ------------------------------------ | ------------------------------- | -------------------------------- | -| Resolution | Upto 1080p | Only 720p | -| Participant-level Audio/Video Tracks | Not Available | Available | -| Portrait/Landscape Mode | Available | Not Available | -| Start/Stop Recording | On-demand | Auto start/stop with the session | -| Custom Layout | Available | Not Available | -| Role-Specific Recording | Available | Not Available | -| Recording Output | MP4 | MP4, WebM | - -### Browser Recording [Recommended] - -Browser recording is built to give users a participant-first recording experience. When enabled, our browser-based bot Beam joins a room to record the viewport like any other participant. The output is an MP4 file that captures the room's published audio/video tracks together into one single file. This option removes the complexity of syncing various audio/video tracks and offers an intuitive, participant-first recording experience. An example use case is to record a sales meeting for later usage. 
- -**Resources** - -- [How to implement Browser Recording](/server-side/v2/how-to-guides/recordings/overview) - -### SFU Recording [Advanced] - -SFU recording is built for advanced use cases, which require individual audio and video tracks for each participant. This recording option allows you to isolate recording at a participant level. Track recording allows you to record audio and video streams separately, making it easier to edit, layer, or reuse each of them. An example use case is to record a live podcast and later edit it for publishing. - -You can get track recordings in two forms: - -- Individual: Media for each peer is provided as a separate mp4 file. This file will have both audio and video of the peer. These files can be used for offline review or in implementing custom composition. - -- Composite [currently in beta]: Audio and video of all peers are composed as per their joining/leaving the meeting and provided as a single mp4. This file can be used for offline viewing of the meeting. - -**Resources** - -- [How to implement SFU Recording](/server-side/v2/Destinations/recording) - -### Recordings for Live Streaming Use-cases - -These are the types of live streaming recordings: - -#### Video-on-demand Recording - -Video-on-demand recording is available for our Interactive Live Streaming capability. This recording will be a file with an M3U8 file (same playback format as HLS), which can be used for replaying your HLS stream. This option is more suitable for Video-on-Demand use cases. For the implementation of this type of recording, please [contact us](https://www.100ms.live/contact). - -#### Multiresolution Recording - -A multi-resolution recording is available for Interactive Live Streaming capability. This type of recording will have a multi-file structure for all available resolutions of the stream. The output will be multiple MP4 files with these resolutions: 240p, 480p, 720p, and 1080p. 
For an implementation of this type of recording, please [contact us](https://www.100ms.live/contact). - -## Configure storage - -You can store your recordings on a cloud storage provider through the destination settings of your template. If you haven't configured a cloud storage service, then a recording will be stored temporarily (15 days) in a 100ms bucket. Our platform supports popular cloud storage platforms like: - -- Amazon Simple Storage Service (AWS S3) -- Google Cloud Storage (GCP) -- Alibaba Object Storage (OSS) - -Recording links can be accessed through the [Sessions](https://dashboard.100ms.live/sessions) page on 100ms dashboard. - -### Configure recording storage with 100ms Dashboard - -You can setup cloud recording storage in your template's destination settings on the 100ms Dashboard. At present, the dashboard supports AWS S3 for storage configuration, with Google Cloud Storage and Alibaba OSS coming soon (already accessible through API). - -1. Generate your credentials; for this example, you can check out a [guide from AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html). You can skip this step if you already have credentials. Please note that if you are running a Browser recording, you need to give upload permission to your key, but if you are running an SFU recording, you need to give both upload and download permission. - -2. Go to 100ms Dashboard and go to template **configuration by selecting the configure icon**. - -![Create your first app](/docs/v2/recording-storage-settings-step2.png) - -3. Head over to the **Destinations** tab. - -![Destinations](/docs/v2/recording-storage-settings-step3.png) - -1. 
Key in your credentials (using an example of an S3 bucket here): - - - Access Key: Access Key generated from AWS IAM Console - - Secret Key: Secret Key generated from AWS IAM Console - - Bucket: Name of the bucket in S3 - - Region: Name of the region, for example, ap-south-1 - - Prefix for Upload Path: Define the directory name (optional) - -![Destinations](/docs/v2/recording-storage-settings-step4.png) - -5. Use the **Validate Config** button to test your storage setup. - -![Destinations](/docs/v2/recording-storage-settings-step5.png) - -6. You will see a message that the AWS **configuration was successfully validated**. - -![Destinations](/docs/v2/recording-storage-settings-step6.png) - -The above message ensures that your configuration is successful now, and all your recordings will start collecting in your configured destination. - - -### Configure recording storage with 100ms API - -Recording storage for cloud providers like Amazon S3, Google Cloud and Alibaba OSS storage is currently supported through [Policy](https://www.100ms.live/docs/server-side/v2/api-reference/policy/create-template-via-api) API. You can configure the **`type`** field of recording object to `s3` for AWS, `oss` for Alibaba Object Storage Service and `gs` for Google Cloud Storage with the following details: - -- Access Key: Access Key for your OSS/GCP Bucket -- Secret Key: Secret Key for your OSS/GCP Bucket -- Bucket: Name of the bucket -- Region: Name of the region where your bucket is located in -- Prefix for Upload Path: Define the directory name (optional) - -## Storage path for recordings - -If a storage destination is not configured for recordings and if you choose to record that room then such recordings are stored for **72 hours** in an internal 100ms bucket. You can access these recordings through [Sessions](https://dashboard.100ms.live/sessions). 
- -![Recording Links](/docs/v2/recording-links-session.png) - -**Storage recording path is available in following webhook responses:** - -- Browser Recording: [beam.recording.success](/server-side/v2/introduction/webhook#beamrecordingsuccess) (attribute: `recording_path`) -- SFU Recording: [recording.success](/server-side/v2/introduction/webhook#sfu-recording-events) (attribute: `recording_path`) -- Multiresolution Recording: [hls.recording.success](/server-side/v2/introduction/webhook#hlsrecordingsuccess) (attribute: `recording_single_files` ; `recording_path`) -- VOD Recording: [hls.recording.success](/server-side/v2/introduction/webhook#hlsrecordingsuccess) (attribute: `hls_vod_recording_path`) - -**The recording path for these respective recordings will look like follows:** - -1. Browser Recording: `s3:////beam///Rec--.mp4` - -2. SFU Recording: - - 1. Composite: `s3:///////Rec--.mp4` - - 2. Individual: `s3://///////.webm` - -3. Multiresolution Recording: `s3:////hls////file-recording/Rec---.mp4` - -4. VOD Recording: `s3:////hls////vod/Rec--.zip` - -**The breakdown of the aforementioned tags is as follows:** - -| Tag Name | Description | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Location | Name of the bucket where recordings are stored | -| Prefix | Prefix for upload path which is configured in storage settings of your template. 
If not configured, the default value for this will be your Customer ID | -| Room ID | The identifier for the room which was recorded | -| Start Date | Start date of the session | -| Epoch | Start time of the recorder in the session | -| Peer ID | Unique identifier of a peer in a room | -| Stream ID | Unique identifier for a particular stream of a room (audio-video/screenshare) | -| Track ID | Unique identifier for a particular track (audio or video) of a stream | -| Layer Index | Layer index values show descending HLS resolutions - 0(1080p), 1(720p), 2(480p), 3(360p) and 4(240p). If highest resolution of template is 720p, then 0(720p), 1(480p), 2(360p) and 3(240p) | - -## Chat Recording - -Chat recording is a feature through which you will receive all chats messages sent by peers during the SFU/browser recording. Chat recording is available for both SFU recording and browser recording. Only chats sent to some or all roles will be recorded. The `.csv` file will be uploaded to the recording bucket configured for your video recordings. 
The file header will be: `SenderPeerID,SenderName,SenderUserID,Roles,SentAt,Type,Message` - -**Header information** - -| Header | Description | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| SenderPeerID |Sender's peer id | -| SenderName | Sender's name | -| SenderUserID | Sender's user id | -| Roles | Roles to which the message is sent; `[]` in case of all roles | -| SentAt| SentAt in RFC.3339 format | -| Type| Message type - `chat`| -| Message| Message that was sent | - -**Chat recording path is available in following webhook responses:** - -- Browser Recording: [beam.recording.success](/server-side/v2/introduction/webhook#beamrecordingsuccess) (attribute: `chat_recording_path` ; `chat_recording_presigned_url`) -- SFU Recording: [recording.success](/server-side/v2/introduction/webhook#sfu-recording-events) (attribute: `chat_recording_path` ; `chat_recording_presigned_url`) -- Multiresolution Recording: [hls.recording.success](/server-side/v2/introduction/webhook#hlsrecordingsuccess) (attribute: `chat_recording_path` ; `chat_recording_presigned_url`) - -**The recording path for these respective recordings will look like follows:** - -`s3:////chat///Rec--.csv` - -**The breakdown of the aforementioned tags is as follows:** - -| Tag Name | Description | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Location | Name of the bucket where recordings are stored | -| Prefix | Prefix for upload path which is configured in storage settings of your template. 
If not configured, the default value for this will be your Customer ID | -| Room ID | The identifier for the room which was recorded | -| Start Date | Start date of the session | -| Epoch | Start time of the recorder in the session | diff --git a/docs/get-started/v2/get-started/features/recordings.mdx b/docs/get-started/v2/get-started/features/recordings.mdx deleted file mode 100755 index d89ad15c35..0000000000 --- a/docs/get-started/v2/get-started/features/recordings.mdx +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Recording -nav: 3.1 ---- - -import Recordings from '@/common/recordings.md'; - - diff --git a/docs/get-started/v2/get-started/features/recordings/chat-recording.mdx b/docs/get-started/v2/get-started/features/recordings/chat-recording.mdx new file mode 100644 index 0000000000..61662db7ab --- /dev/null +++ b/docs/get-started/v2/get-started/features/recordings/chat-recording.mdx @@ -0,0 +1,61 @@ +--- +title: Chat recording +nav: 3.12 +--- + +100ms can record chat messages sent in a room when a [video recording](./overview) is used. Chat messages that are broadcasted to the room or sent to a role are recorded (direct messages are not recorded). + + +Chat recording generates a `.csv` file, which will be uploaded to the [storage bucket](./storage) configured for your video recordings. + +## File structure + +The file header will be: `SenderPeerID,SenderName,SenderUserID,Roles,SentAt,Type,Message` + +**Header information** + +| Header | Description | +| ----------- | ------ | +| SenderPeerID |Sender's peer id | +| SenderName | Sender's name | +| SenderUserID | Sender's user id | +| Roles | Roles to which the message is sent; `[]` in case of all roles | +| SentAt| SentAt in RFC.3339 format | +| Type| Message type - `chat`| +| Message| Message that was sent | + +## Fetch chat recording + +### On the 100ms Dashboard + +You can access your chat recordings on the [sessions page](https://dashboard.100ms.live/sessions) in the 100ms Dashboard. 
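The chat `.csv` described above can be parsed with any standard CSV reader. Here is a minimal Python sketch; only the header row comes from this page, while the sample messages are hypothetical:

```python
import csv
import io

# Hypothetical sample rows under the documented header:
# SenderPeerID,SenderName,SenderUserID,Roles,SentAt,Type,Message
sample_csv = """SenderPeerID,SenderName,SenderUserID,Roles,SentAt,Type,Message
peer-1,Alice,user-1,[],2024-01-01T10:00:00Z,chat,Hello everyone
peer-2,Bob,user-2,[host],2024-01-01T10:00:05Z,chat,Hi Alice
"""

def read_chat_messages(file_like) -> list[dict]:
    """Parse a chat recording CSV into one dict per message, keyed by header."""
    return list(csv.DictReader(file_like))

messages = read_chat_messages(io.StringIO(sample_csv))
print(messages[0]["Message"])  # Hello everyone
```

In practice, you would open the `Rec-*.csv` file downloaded from your storage bucket instead of the in-memory sample.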
+![Recording Links](/docs/v2/recording-links-session.png) + +### With the REST API + +Recordings generate [recording assets](/server-side/v2/api-reference/recording-assets/overview) that can be fetched with the REST API. + +### With webhooks + +100ms can send webhooks when the recording has stopped and is available for download. The recording path is available in the following webhook responses: + +- Browser Recording: [beam.recording.success](/server-side/v2/introduction/webhook#beamrecordingsuccess) (attribute: `chat_recording_path` ; `chat_recording_presigned_url`) +- SFU Recording: [recording.success](/server-side/v2/introduction/webhook#sfu-recording-events) (attribute: `chat_recording_path` ; `chat_recording_presigned_url`) +- Multiresolution Recording: [hls.recording.success](/server-side/v2/introduction/webhook#hlsrecordingsuccess) (attribute: `chat_recording_path` ; `chat_recording_presigned_url`) + +### Path structure + +The recording path will look like: + +`s3:////chat///Rec--.csv` + +**The breakdown of the aforementioned tags is as follows:** + +| Tag Name | Description | +| ----------- | ---------------------------------------------- | +| Location | Name of the bucket where recordings are stored | +| Prefix | Prefix for upload path which is configured in storage settings of your template. If not configured, the default value for this will be your Customer ID | +| Room ID | The identifier for the room which was recorded | +| Start Date | Start date of the session | +| Epoch | Start time of the recorder in the session | diff --git a/docs/get-started/v2/get-started/features/recordings/migrating.mdx b/docs/get-started/v2/get-started/features/recordings/migrating.mdx new file mode 100644 index 0000000000..9e38fa2785 --- /dev/null +++ b/docs/get-started/v2/get-started/features/recordings/migrating.mdx @@ -0,0 +1,27 @@ +--- +title: Migrating from SFU recording +nav: 3.14 +--- + +There are two methods to get composite recordings in 100ms.
If you have been using "SFU recording", we recommend moving over to "Room composite recordings". + +### Quick comparison + +| Features | Composite recordings | SFU recording (legacy) | +| ----------------------- | -------------- | --------------------- | +| Asset type generated | room-composite | room-composite-legacy | +| Recording method | Browser | SFU | +| Composition quality | Higher | Lower | +| Portrait/landscape mode | Available | Not available | +| Start method | Auto-start and on-demand | Auto-start only | +| UI customization | Available | Not available | +| Role-specific recording | Available | Not available | +| Resolution | Customizable up to 1080p | 720p | + +### How to migrate + +Migrating to room composite recordings takes just a few toggles on the 100ms Dashboard. + +![Quick migration](/docs/v2/sfu-migration.png) + +Go through our [recordings overview](./overview) to learn more. diff --git a/docs/get-started/v2/get-started/features/recordings/overview.mdx b/docs/get-started/v2/get-started/features/recordings/overview.mdx new file mode 100755 index 0000000000..0ba3080f3c --- /dev/null +++ b/docs/get-started/v2/get-started/features/recordings/overview.mdx @@ -0,0 +1,119 @@ +--- +title: Video recording +nav: 3.1 +--- + +Recordings enable you to convert live video from 100ms rooms into long-lived video assets. 100ms can generate different types of recordings: + +- [Composite recordings](#composite-recordings): One video file, composed with tracks of all peers in the room +- [Track recordings](#track-recordings): Separate media files for audio, video and screen-share tracks + +## Composite recordings + +> Previously called browser or beam recording. + +100ms can record a room to capture the perspective of a participant, who can hear and see other participants in the room. These recordings are a mix of multiple audio/video tracks, composed together in one video file—with a layout that is similar to the UI of the room.
The output is a single MP4 file. + +Composite recordings can be customized: you can define which roles to record, which video resolution to use, and other modifications to the UI. + +Internally, these recordings are created using an automated web browser that joins the room as a hidden peer (called "beam")—which records the screen to generate a composite recording. + +### Start and stop recording + +Enable "room composite recordings" on any template created on the [100ms Dashboard](https://dashboard.100ms.live). This enables the feature for your template, and exposes options to customize the recording. You can control when to **start and stop** recording, based on your use-case. + +![Composite recording on 100ms Dashboard](/docs/v2/composite-recording-dashboard.png) + +There are three methods to start a recording. + + + + +Start recording automatically when the first peer joins a room.
+ +[Enable on dashboard →](/docs/v2/auto-start-dashboard.png) + +
+Start recording by calling the 100ms server-side REST API. This is suited to starting and stopping based on business logic.
+ +[REST API docs →](/server-side/v2/how-to-guides/recordings/overview) + +
+Start recording with the client-side SDK. This is suited to starting and stopping on a user action (e.g., clicking a button).
+ +[Web docs →](/javascript/v2/how-to-guides/record-and-live-stream/rtmp-recording) + +[iOS docs →](/ios/v2/how-to-guides/record-and-live-stream/rtmp-recording) + +[Android docs →](/android/v2/how-to-guides/record-and-live-stream/rtmp-recording) + +[Flutter docs →](/flutter/v2/how-to-guides/record-and-live-stream/recording) + +[React Native docs →](/react-native/v2/how-to-guides/record-and-live-stream/recording) + +
+ +
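For the server-side option above, the request can be sketched as follows. This is illustrative only: the route and body fields are assumptions drawn from the linked server-side recordings guide, so verify them against the REST API reference before use.

```python
import json
import urllib.request

API_BASE = "https://api.100ms.live/v2"  # 100ms REST API base URL

def build_start_recording_request(room_id: str, management_token: str) -> urllib.request.Request:
    """Build (but do not send) a start-recording request for a room.

    The route and body fields here are assumptions taken from the
    server-side recordings guide; check the REST API reference.
    """
    body = json.dumps({"resolution": {"width": 1280, "height": 720}}).encode()
    return urllib.request.Request(
        url=f"{API_BASE}/recordings/room/{room_id}/start",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {management_token}",
            "Content-Type": "application/json",
        },
    )

# With a real room ID and management token, sending is one call:
# response = urllib.request.urlopen(build_start_recording_request(room_id, token))
```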
### Fetch the recording + +After the recording is stopped and processed, a [recording asset](/server-side/v2/api-reference/recording-assets/overview) is generated. Composite recording generates an asset of type `room-composite`. You can fetch this asset through multiple methods: + +* [Fetch on 100ms Dashboard](./storage#using-the-100ms-dashboard) +* Auto-send to your [cloud storage location](./storage) +* REST API to [get recording asset](/server-side/v2/api-reference/recording-assets/get-asset) +* Server-side webhooks: [use the `beam.recording.success` webhook](/server-side/v2/how-to-guides/configure-webhooks/webhook#beamrecordingsuccess) + +### Customizing the recording + +- No-code customization on the 100ms Dashboard: modify the video resolution and which roles are recorded +- Get audio-only recording: possible via the [REST API to start recording](/server-side/v2/how-to-guides/recordings/customize) +- [Customize the composition UI](./../ui-composition) + +### Legacy composite through SFU recording + +100ms has an alternative method to generate a composite video. This method is not recommended, since the method described above generates a higher quality video. + +SFU recording generates the asset type called `room-composite-legacy`. + +See the [migration doc](./migrating) to compare these two methods and pick a preferred approach. + +### Recording with live streaming + +If you are using live streaming or external streaming, you can enable recording while starting the stream. For a live stream with HLS, 100ms can generate two types of recordings. + +#### Video-on-demand (VOD) Recording + +This recording is an M3U8 file (same playback format as HLS), which can be used for replaying your HLS stream. This option is more suitable for Video-on-Demand use-cases as it has adaptive bitrate (ABR).
- Enable recording while starting a live stream: through the [server-side REST API](/server-side/v2/api-reference/live-streams/start-live-stream-for-room) or via a client SDK +- This generates a recording asset with `room-vod` type. Use the server-side [REST API to fetch the asset](/server-side/v2/api-reference/recording-assets/get-asset) + +#### Single file per layer + +This type of recording will generate multiple video files: one for each resolution layer of the live stream. + +- Enable recording while starting a live stream: through the [server-side REST API](/server-side/v2/api-reference/live-streams/start-live-stream-for-room) or via a client SDK +- This generates multiple recording assets with `room-composite` type. Use the server-side [REST API to fetch the assets](/server-side/v2/api-reference/recording-assets/get-asset) + +## Track recordings + +Some use-cases require one file per track (audio, video or screen-share). This can be generated through the SFU recording. + +Track recordings can be enabled for your template through the 100ms Dashboard. Once enabled, the recording **starts automatically** when the first peer joins a room, and stops automatically when the last peer leaves the room. + +![SFU recording on 100ms Dashboard](/docs/v2/sfu-recording-dashboard.png) + +### Fetch the recording + +You can fetch track recordings through two methods: + +* Auto-send to your [cloud storage location](./storage) +* Server-side webhooks: use the `track.recording.success` webhook diff --git a/docs/get-started/v2/get-started/features/recordings/storage.mdx b/docs/get-started/v2/get-started/features/recordings/storage.mdx new file mode 100644 index 0000000000..1a8e063743 --- /dev/null +++ b/docs/get-started/v2/get-started/features/recordings/storage.mdx @@ -0,0 +1,108 @@ +--- +title: Storage configuration +nav: 3.13 +--- + +100ms can upload recordings to your preferred cloud storage location.
We support these providers: + +- Amazon Simple Storage Service (AWS S3) +- Google Cloud Storage (GCP) +- Alibaba Object Storage (OSS) + +If you don't configure a cloud storage service, then recordings will be stored temporarily (for 15 days) in a storage location managed by 100ms. After a successful recording, the recording asset can be accessed on the [100ms dashboard](https://dashboard.100ms.live/sessions) or [through the REST API](/server-side/v2/api-reference/recording-assets/get-asset). + +## Configure storage + +### On the 100ms Dashboard + +You can configure storage in your template's recording settings on the 100ms Dashboard. As an example, this is how you would configure an S3 location: + +1. Generate your credentials; for this example, you can check out a [guide from AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html). You can skip this step if you already have credentials. Please note that if you are running a Browser recording, you need to give upload permission to your key, but if you are running an SFU recording, you need to give both upload and download permission. + +2. Go to the 100ms Dashboard and open the template **configuration by selecting the configure icon**. + +![Create your first app](/docs/v2/recording-storage-settings-step2.png) + +3. Head over to the **Destinations** tab. + +![Destinations](/docs/v2/recording-storage-settings-step3.png) + +4. Key in your credentials (using an example of an S3 bucket here): + + - Access Key: Access Key generated from AWS IAM Console + - Secret Key: Secret Key generated from AWS IAM Console + - Bucket: Name of the bucket in S3 + - Region: Name of the region, for example, ap-south-1 + - Prefix for Upload Path: Define the directory name (optional) + +![Destinations](/docs/v2/recording-storage-settings-step4.png) + +5. Use the **Validate Config** button to test your storage setup. + +![Destinations](/docs/v2/recording-storage-settings-step5.png) + +6.
You will see a message that the AWS **configuration was successfully validated**. + +![Destinations](/docs/v2/recording-storage-settings-step6.png) + +This message confirms that your configuration is correct, and all new recordings will be uploaded to your configured destination. + +### With the REST API + +Use the [Policy API](https://www.100ms.live/docs/server-side/v2/api-reference/policy/create-template-via-api) to programmatically configure your storage location. + +You can configure the **`type`** field of the recording object to `s3` for AWS, `oss` for Alibaba Object Storage Service and `gs` for Google Cloud Storage with the following details: + +- Access Key: Access Key for your OSS/GCP Bucket +- Secret Key: Secret Key for your OSS/GCP Bucket +- Bucket: Name of the bucket +- Region: Name of the region where your bucket is located +- Prefix for Upload Path: Define the directory name (optional) + +## Fetching the asset + +### Using the 100ms Dashboard + +You can access your recordings on the [sessions page](https://dashboard.100ms.live/sessions) in the 100ms Dashboard. + +![Recording Links](/docs/v2/recording-links-session.png) + +### Using the REST API + +Recordings generate [recording assets](/server-side/v2/api-reference/recording-assets/overview) that can be fetched with the REST API. + +### Get webhooks + +100ms can send webhooks when the recording has stopped and is available for download.
The recording path is available in the following webhook responses: + +- Room composite (also called browser) recording: [beam.recording.success](/server-side/v2/introduction/webhook#beamrecordingsuccess) (attribute: `recording_path`) +- Room composite - legacy (also called SFU) recording: [recording.success](/server-side/v2/introduction/webhook#sfu-recording-events) (attribute: `recording_path`) +- Recordings enabled with live streaming (HLS): [hls.recording.success](/server-side/v2/introduction/webhook#hlsrecordingsuccess) + +### Path formats + +The recording path for these respective recordings will look like: + +1. Room composite (also called browser) recording: `s3:////beam///Rec--.mp4` + +2. Room composite - legacy (also called SFU) recording: + + 1. Composite: `s3:///////Rec--.mp4` + + 2. Individual: `s3://///////.webm` + +3. Multi-resolution recording (available in HLS): `s3:////hls////file-recording/Rec---.mp4` + +4. VOD recording (available in HLS): `s3:////hls////vod/Rec--.zip` + +| Name | Description | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Location | Name of the bucket where recordings are stored | +| Prefix | Prefix for upload path which is configured in storage settings of your template. If not configured, the default value for this will be your Customer ID | +| Room ID | The identifier for the room which was recorded | +| Start Date | Start date of the session | +| Epoch | Start time of the recorder in the session | +| Peer ID | Unique identifier of a peer in a room | +| Stream ID | Unique identifier for a particular stream of a room (audio-video/screenshare) | +| Track ID | Unique identifier for a particular track (audio or video) of a stream | +| Layer Index | Layer index values show descending HLS resolutions - 0(1080p), 1(720p), 2(480p), 3(360p) and 4(240p).
If the highest resolution of the template is 720p, the layers are 0(720p), 1(480p), 2(360p) and 3(240p) | diff --git a/docs/get-started/v2/get-started/features/ui-composition.mdx b/docs/get-started/v2/get-started/features/ui-composition.mdx index 6132500fe3..c2394bb3c6 100644 --- a/docs/get-started/v2/get-started/features/ui-composition.mdx +++ b/docs/get-started/v2/get-started/features/ui-composition.mdx @@ -3,7 +3,7 @@ title: Custom UI composition nav: 3.21 --- -When a 100ms room is being [live streamed](./live-streaming) or [recorded](./recordings), the video output is a **composition** of tracks from all peers in the room. In 100ms, this is made possible through "Beam", which is an internal component that combines a web browser and a video encoder that composes video. +When a 100ms room is being [live streamed](./live-streaming) or [recorded](./recordings/overview), the video output is a **composition** of tracks from all peers in the room. In 100ms, this is made possible through "Beam", which is an internal component that combines a web browser and a video encoder that composes video.
 ## How it works
diff --git a/docs/get-started/v2/get-started/overview.mdx b/docs/get-started/v2/get-started/overview.mdx
index 47cd9e8c1a..c2591f2c6b 100644
--- a/docs/get-started/v2/get-started/overview.mdx
+++ b/docs/get-started/v2/get-started/overview.mdx
@@ -67,5 +67,5 @@ Build a custom UI with our headless client SDKs for web, Android, iOS, React Nat
 
 ## Feature guides
 
-- [Recordings](/get-started/v2/get-started/features/recordings)
+- [Recordings](/get-started/v2/get-started/features/recordings/overview)
 - [Live streaming](/get-started/v2/get-started/features/live-streaming)
diff --git a/docs/server-side/v2/api-reference/external-streams/start-external-stream-for-room.mdx b/docs/server-side/v2/api-reference/external-streams/start-external-stream-for-room.mdx
index 4b81965d79..13bc1f2336 100644
--- a/docs/server-side/v2/api-reference/external-streams/start-external-stream-for-room.mdx
+++ b/docs/server-side/v2/api-reference/external-streams/start-external-stream-for-room.mdx
@@ -55,13 +55,24 @@ curl --location --request POST 'https://api.100ms.live/v2/external-streams/room/
 
 ## Parameters
 
-| Name         | Type     | Description | Required |
-|--------------|----------|-------------|----------|
-| meeting_url  | `string` | Single click meeting URL for the stream | No |
-| rtmp_urls    | `array`  | List of RTMP output URLs to stream to (up to 3 `rtmp://` / `rtmps://` URLs supported) | Yes |
-| recording    | `bool`   | Flag to enable recording | No |
-| resolution   | `object` | Video resolution for stream | No |
-| destination  | `string` | Name of destination from template to pick up configuration. If more than one RTMP destination present in the template then destination is mandatory | No |
+| Name         | Type     | Description | Required |
+|--------------|----------|-------------|----------|
+| meeting_url  | `string` | Single click meeting URL for the stream | No[1] |
+| rtmp_urls    | `array`  | List of RTMP output URLs to stream to (up to 3 `rtmp://` / `rtmps://` URLs supported) | Yes |
+| recording    | `bool`   | Flag to enable recording | No |
+| resolution   | `object` | Video resolution for stream | No |
+| destination  | `string` | Name of destination from template to pick up configuration | No[2] |
+
+
+> [1] `meeting_url` is **required** when
+> - External streaming is not enabled on the dashboard
+> - The template of this room does not have a subdomain (templates created through the REST API don't have subdomains)
+
+
+> [2] (Advanced usage only) `destination` is **required** when
+> - There are multiple destinations of this type on the template
+
 ##### meeting_url
diff --git a/docs/server-side/v2/api-reference/live-streams/start-live-stream-for-room.mdx b/docs/server-side/v2/api-reference/live-streams/start-live-stream-for-room.mdx
index 2dacb73bcf..4495f0fcc0 100644
--- a/docs/server-side/v2/api-reference/live-streams/start-live-stream-for-room.mdx
+++ b/docs/server-side/v2/api-reference/live-streams/start-live-stream-for-room.mdx
@@ -58,13 +58,20 @@ curl --location --request POST 'https://api.100ms.live/v2/live-streams/room/
+> [1] `meeting_url` is **required** when
+> - Live streaming is not enabled on the dashboard
+> - The template of this room does not have a subdomain (templates created through the REST API don't have subdomains)
+
+
+> [2] (Advanced usage only) `destination` is **required** when
+> - There are multiple destinations of this type on the template
 
 ##### meeting_url
diff --git a/docs/server-side/v2/api-reference/policy/create-template-via-api.mdx b/docs/server-side/v2/api-reference/policy/create-template-via-api.mdx
index bfac2af73a..74b7aa7f77 100644
--- a/docs/server-side/v2/api-reference/policy/create-template-via-api.mdx
+++ b/docs/server-side/v2/api-reference/policy/create-template-via-api.mdx
@@ -804,5 +804,5 @@ Minimum between `width` and `height` should be in range [144, 1080] and Maximum
 | Name   | Type     | Description | Required |
 | ------ | -------- | ----------- | -------- |
-| title  | `string` | Title of the section eg. `Agenda`, `Short Summary`, `Follow Up Action Items`, `Short Summary`. (limit: 100 characters) | Yes |
+| title  | `string` | Title of the section, e.g. `Agenda`, `Short Summary`, `Follow Up Action Items` (limit: 100 characters) | Yes |
 | format | `string` | Format of the section. Valid values: [`bullets`, `paragraph`] | Yes |
diff --git a/docs/server-side/v2/api-reference/recording-assets/overview.mdx b/docs/server-side/v2/api-reference/recording-assets/overview.mdx
index c324972991..89528636d9 100644
--- a/docs/server-side/v2/api-reference/recording-assets/overview.mdx
+++ b/docs/server-side/v2/api-reference/recording-assets/overview.mdx
@@ -11,7 +11,8 @@ If a cloud storage location (for example, AWS S3 or GCS) is configured on the te
 
 ## Asset types
 
-* `room-composite`: This type composes a single video (mp4) out of all peers in the room.
+* `room-composite`: This asset type is a single video file (mp4) that composes audio/video for all peers in the room. The composition is higher quality (similar to the perspective of a peer in the room) than `room-composite-legacy` ([learn more](/get-started/v2/get-started/features/recordings/overview)).
+* `room-composite-legacy`: This asset type is a single video file (mp4) that composes audio/video for all peers in the room.
 * `room-vod`: This is also a composition of all peers in the room and is available in HLS (or m3u8) format for on-demand playback.
 * `chat`: This type captures chat messages that were exchanged while recording was running.
 * `transcript`: This type is composed of generated transcripts of the recording, if transcription was enabled. The `metadata` will contain the `output_mode` (`txt`, `srt`, `json`).
diff --git a/docs/server-side/v2/api-reference/recordings/start-recording-for-room.mdx b/docs/server-side/v2/api-reference/recordings/start-recording-for-room.mdx
index 6ad40d4ca5..52d577f248 100644
--- a/docs/server-side/v2/api-reference/recordings/start-recording-for-room.mdx
+++ b/docs/server-side/v2/api-reference/recordings/start-recording-for-room.mdx
@@ -85,16 +85,28 @@ curl --location --request POST 'https://api.100ms.live/v2/recordings/room/
-
 ## Parameters
 
-| Name          | Type      | Description | Required |
-| ------------- | --------- | ----------- | -------- |
-| meeting_url   | `string`  | Single click meeting URL for the stream. | No |
-| resolution    | `object`  | Video resolution for stream. | No |
-| audio_only    | `boolean` | Pass `true` to get an audio-only recording asset. | No |
-| destination   | `string`  | Name of destination from template to pick up configuration. If more than one recording destination present in the template then destination is mandatory | No |
-| transcription | `object`  | Post call transcription configuration. | No |
+| Name          | Type      | Description | Required |
+| ------------- | --------- | ----------- | -------- |
+| meeting_url   | `string`  | Single click meeting URL for the stream | No[1] |
+| resolution    | `object`  | Video resolution for stream | No |
+| destination   | `string`  | Name of destination from template to pick up configuration | No[2] |
+| audio_only    | `boolean` | Pass `true` to get an audio-only recording | No |
+| transcription | `object`  | Post call transcription configuration | No |
+
+
+> [1] `meeting_url` is **required** when
+> - Composite recordings are not enabled on the dashboard ([see how](/get-started/v2/get-started/features/recordings/overview))
+> - The template of this room does not have a subdomain (templates created through the REST API don't have subdomains)
+
+
+> [2] (Advanced usage only) `destination` is **required** when
+> - There are multiple destinations of this type on the template
+
+### meeting_url
+
+
 ### transcription
@@ -118,9 +130,5 @@ curl --location --request POST 'https://api.100ms.live/v2/recordings/room/
diff --git a/docs/server-side/v2/how-to-guides/recordings/overview.mdx b/docs/server-side/v2/how-to-guides/recordings/overview.mdx
index 23f8fd2d71..5348c82ced 100644
--- a/docs/server-side/v2/how-to-guides/recordings/overview.mdx
+++ b/docs/server-side/v2/how-to-guides/recordings/overview.mdx
@@ -3,9 +3,7 @@ title: Start and stop recording
 nav: 5.1
 ---
 
-import MeetingUrlConfig from '@/common/meeting-url.md';
-
-This guide focuses on using room composite (browser-based) recordings.
+This guide focuses on using room composite (browser-based) recordings. Learn more in [recordings overview](/get-started/v2/get-started/features/recordings/overview).
 
 A composite recording is a single MP4 file that records all peers and their tracks (audio, video and screen share). It captures the experience that a peer has in the room and is equivalent to recordings generated from Google Meet and Zoom.
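The footnote rules in the start-recording parameter table above can be sketched as a small request-body builder. This is an illustrative sketch only, not part of the diff and not a 100ms SDK API: `build_start_recording_body`, `has_subdomain`, and `composite_enabled` are hypothetical names standing in for state you would already know about your template.

```python
# Hypothetical helper (not a 100ms API): assemble the body for
# POST /v2/recordings/room/<room_id>/start, enforcing footnotes [1] and [2]
# from the parameter table.

def build_start_recording_body(meeting_url=None, resolution=None, destination=None,
                               audio_only=False, transcription=None,
                               has_subdomain=True, composite_enabled=True):
    # Footnote [1]: meeting_url becomes mandatory when composite recording is
    # not enabled on the dashboard, or the room's template has no subdomain
    # (templates created through the REST API don't have subdomains).
    if meeting_url is None and not (has_subdomain and composite_enabled):
        raise ValueError("meeting_url is required: template has no subdomain "
                         "or composite recording is not enabled on the dashboard")
    body = {}
    if meeting_url is not None:
        body["meeting_url"] = meeting_url
    if resolution is not None:
        body["resolution"] = resolution    # e.g. {"width": 1280, "height": 720}
    if destination is not None:
        # Footnote [2]: only needed when the template has multiple destinations.
        body["destination"] = destination
    if audio_only:
        body["audio_only"] = True
    if transcription is not None:
        body["transcription"] = transcription
    return body

# A template with a subdomain and recording enabled needs no meeting_url:
print(build_start_recording_body(resolution={"width": 1280, "height": 720}))
```

The point of the sketch is only that every parameter except the footnoted cases is optional; the request itself is the plain curl call shown in the diff.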
@@ -14,6 +12,7 @@ A composite recording is different from track-level recordings, which generate s
+### Enable recordings on the dashboard
+
+In your template configuration, enable room composite recordings.
+
+![Composite recording on 100ms Dashboard](/docs/v2/composite-recording-dashboard.png)
+
 ### Start recording with API
 
-Use the server-side API to start recording for a given room (passed as `room_id` in the request body).
+Use the server-side API to start recording for a given room (passed as `room_id` in the request body). Any configuration specified here will override the configuration on the template.
 
 Also see [API reference](../../api-reference/recordings/start-recording-for-room).
 
@@ -39,26 +44,18 @@ curl --request POST 'https://api.100ms.live/v2/recordings/room//start'
 --header 'Content-Type: application/json' \
 --header 'Authorization: Bearer ' \
 --data-raw '{
-    "meeting_url" : "",
     "resolution" : {"width": 1280, "height": 720}
 }'
 ```
-
-Internally, this API launches a browser window that opens the `meeting_url` of the room. This browser instance joins the room and records it, similar to what a peer would do. You can [define a role](./customize) for this peer to ensure the peer's tile is not visible.
-
-##### meeting_url
-
-
-
 ### Automate start recording (Optional)
 
-Instead of relying on peers in the room to start recording, you use room lifecycle events to automate start recording.
+Instead of relying on peers in the room to start recording, you can use room lifecycle events to automate start recording. If you want to start recording when the first peer joins, you can also enable "auto-start" on the template configuration (see step 1).
 To do so, set up a [webhook listener](../configure-webhooks/overview) and act on the relevant webhook:
 
-- Start recording for every session: Use the [`session.open.success` event](../configure-webhooks/webhook#sessionopensuccess) to start recording (with the above API)
 - Start recording when a particular peer joins: Use the [`peer.join.success` event](../configure-webhooks/webhook#peerjoinsuccess) to start recording (with the above API)
 
 ### Listen to recording status updates
diff --git a/lib/algolia/getRecords.js b/lib/algolia/getRecords.js
index 3d4b19c1a0..d519a1fc7d 100644
--- a/lib/algolia/getRecords.js
+++ b/lib/algolia/getRecords.js
@@ -184,11 +184,6 @@ async function updateIndex() {
             "questions": "I’d like to use the endpoint of my backend service instead of the 100ms token endpoint for auth token generation in the React sample app. How do I do that?",
             "answers": "You can set up a token generation service on your end to create auth tokens and block users that are trying to join without a token that's generated from your service. Please check authentication and tokens guide for more information.\n\nYou can update the code to point to your own token service (relevant code in the sample - see getToken(...)), your token endpoint can follow a similar interface: for a given room_id and role name, return the auth token JWT.\n\nYou can continue using the existing routes (room_id/role) or set up your own routes in the cloned/forked code."
         },
-        {
-            "platform": "Common",
-            "questions": "What is the difference between the Beam recording vs. SFU recording?",
-            "answers": "Beam recording is the browser recording, built to give users a participant-first recording experience. SFU recording is a composite recording which gets created after recording each of the individual peers and merging it. Please check this guide for more information."
-        },
         {
             "platform": "Common",
             "questions": "After a live stream ends, how long does it take (for both Beam recording and SFU) to show up in our s3 bucket?",
@@ -293,11 +288,6 @@ async function updateIndex() {
             "platform": "Common",
             "questions": "Hey 100ms team - is there a way for the beam recorder to record what is happening in the chat without the chat being open and covering any tiles?"
         },
-        {
-            "platform": "Common",
-            "questions": "Whats the difference between the beam recording vs. SFU recording?",
-            "answers": "Beam recording is the browser recording which you are actually using.\nSFU recording is a composite recording which gets created after recording each of the individual peers and merging it.\nmore on both recordings here - https://www.100ms.live/docs/server-side/v2/introduction/recordings"
-        },
         {
             "platform": "Common",
             "questions": "Will the 100ms bot go to a video call webpage and render dynamic pages and stream/record the screen, or will it collect incoming video/audio streams only?",
diff --git a/public/docs/v2/auto-start-dashboard.png b/public/docs/v2/auto-start-dashboard.png
new file mode 100644
index 0000000000..a1c6effd08
Binary files /dev/null and b/public/docs/v2/auto-start-dashboard.png differ
diff --git a/public/docs/v2/composite-recording-dashboard.png b/public/docs/v2/composite-recording-dashboard.png
new file mode 100644
index 0000000000..494d9aae78
Binary files /dev/null and b/public/docs/v2/composite-recording-dashboard.png differ
diff --git a/public/docs/v2/sfu-migration.png b/public/docs/v2/sfu-migration.png
new file mode 100644
index 0000000000..620fe76c76
Binary files /dev/null and b/public/docs/v2/sfu-migration.png differ
diff --git a/public/docs/v2/sfu-recording-dashboard.png b/public/docs/v2/sfu-recording-dashboard.png
new file mode 100644
index 0000000000..3c5b15372e
Binary files /dev/null and b/public/docs/v2/sfu-recording-dashboard.png differ
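The `rtmp_urls` constraints documented in the external-streams parameter table earlier in this diff (a required array of at most 3 `rtmp://` or `rtmps://` URLs) can be checked client-side before calling the API. This is an illustrative sketch only, not part of the diff: `validate_rtmp_urls` is a hypothetical helper, and the limits it enforces come straight from the table.

```python
# Hypothetical helper (not a 100ms API): validate the rtmp_urls array for
# POST /v2/external-streams/room/<room_id>/start before sending the request.

def validate_rtmp_urls(rtmp_urls):
    """Check the required rtmp_urls parameter against the documented limits."""
    if not rtmp_urls:
        raise ValueError("rtmp_urls is required and must not be empty")
    if len(rtmp_urls) > 3:
        raise ValueError("up to 3 RTMP output URLs are supported")
    for url in rtmp_urls:
        # Only rtmp:// and rtmps:// schemes are listed as supported.
        if not url.startswith(("rtmp://", "rtmps://")):
            raise ValueError(f"unsupported URL scheme: {url!r}")
    return rtmp_urls
```

Failing fast like this surfaces a bad destination list before the server round-trip; the actual start call remains the curl request shown in the diff.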