Rendering and recording mirrored video stream from camera. #1628

Open

wants to merge 11 commits into base: gh-pages
65 changes: 65 additions & 0 deletions .travis.yml
@@ -0,0 +1,65 @@
sudo: false
language: node_js
dist: trusty
node_js:
- "7"

env:
- CXX=g++-4.8
matrix:
include:
- os: linux
sudo: false
env: BROWSER=chrome BVER=stable
- os: linux
sudo: false
env: BROWSER=chrome BVER=beta
- os: linux
sudo: false
env: BROWSER=chrome BVER=unstable
- os: linux
sudo: false
env: BROWSER=firefox BVER=stable
- os: linux
sudo: false
env: BROWSER=firefox BVER=beta
- os: linux
sudo: false
env: BROWSER=firefox BVER=unstable
- os: osx
sudo: required
osx_image: xcode9.4
env: BROWSER=safari BVER=stable
- os: osx
sudo: required
osx_image: xcode11.2
env: BROWSER=safari BVER=unstable

fast_finish: true

allow_failures:
- os: linux
sudo: false
env: BROWSER=chrome BVER=unstable
- os: linux
sudo: false
env: BROWSER=firefox BVER=unstable

before_script:
- ./node_modules/travis-multirunner/setup.sh
- export DISPLAY=:99.0
- if [ -f /etc/init.d/xvfb ]; then sh -e /etc/init.d/xvfb start; fi

after_failure:
- for file in *.log; do echo $file; echo "======================"; cat $file; done || true

notifications:
email:
-

addons:
apt:
sources:
- ubuntu-toolchain-r-test
packages:
- g++-4.8
27 changes: 27 additions & 0 deletions src/content/insertable-streams/video-recording/css/main.css
@@ -0,0 +1,27 @@
/*
* Copyright (c) 2015 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree.
*/

button {
margin: 20px 10px 0 0;
min-width: 100px;
}

div#buttons {
margin: 0 0 20px 0;
}

div#status {
height: 2em;
margin: 1em 0 0 0;
}

video {
--width: 30%;
width: var(--width);
height: calc(var(--width) * 0.75);
}
89 changes: 89 additions & 0 deletions src/content/insertable-streams/video-recording/index.html
@@ -0,0 +1,89 @@
<!DOCTYPE html>
<!--
* Copyright (c) 2021 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree.
-->
<html>
<head>

<meta charset="utf-8">
<meta name="description" content="WebRTC code samples">
<meta name="viewport" content="width=device-width, user-scalable=yes, initial-scale=1, maximum-scale=1">
<meta itemprop="description" content="Client-side WebRTC code samples">
<meta itemprop="image" content="../../../images/webrtc-icon-192x192.png">
<meta itemprop="name" content="WebRTC code samples">
<meta name="mobile-web-app-capable" content="yes">
<meta id="theme-color" name="theme-color" content="#ffffff">

<base target="_blank">

<title>Insertable Streams - Mirror in a worker vs Mirror in main thread</title>

<link rel="icon" sizes="192x192" href="../../../images/webrtc-icon-192x192.png">
<link href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700" rel="stylesheet" type="text/css">
<link rel="stylesheet" href="../../../css/main.css"/>
<link rel="stylesheet" href="css/main.css"/>

</head>

<body>

<div id="container">
<h1><a href="//webrtc.github.io/samples/" title="WebRTC samples homepage">WebRTC samples</a>
<span>Breakout Box mirror</span></h1>

<p>This sample shows how to mirror a video stream using the experimental
    <a href="https://github.com/w3c/mediacapture-transform">mediacapture-transform</a> API
    in a Worker.

    It also compares mirroring in the main thread with a canvas against mirroring in a worker.
</p>

<div>
<span>Original video</span>
<div>
<video id="originalVideo" playsinline autoplay muted></video>
<video id="recordedOriginalVideo" controls playsinline autoplay muted></video>
</div>
</div>

<div>
<span>Mirrored With Canvas</span>
<div>
<video id="mirroredWithCanvasVideo" playsinline autoplay muted></video>
<video id="recordedMirroredWithCanvasVideo" controls playsinline autoplay muted></video>
</div>
</div>

<div>
<span>Mirrored in a worker</span>
<div>
<video id="mirroredInWebWorkerVideo" playsinline autoplay muted></video>
<video id="recordedMirroredInWorkerVideo" controls playsinline autoplay muted></video>
</div>
</div>

<div class="box">
<button id="startButton">Start</button>
<button id="stopButton">Stop</button>
<button id="slowDownButton">Slow Down Main Thread</button>
</div>

<p>
    <b>Note</b>: This sample uses an experimental API that has not yet been standardized. As
    of 2022-11-21, this API is available in the latest version of Chromium-based browsers.
</p>
<a href="https://github.com/webrtc/samples/tree/gh-pages/src/content/insertable-streams/video-recording"
   title="View source for this page on GitHub" id="viewSource">View source on GitHub</a>

</div>

<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
<script src="js/main.js" async></script>

<script src="../../../js/lib/ga.js"></script>
</body>
</html>
154 changes: 154 additions & 0 deletions src/content/insertable-streams/video-recording/js/main.js
@@ -0,0 +1,154 @@
/*
* Copyright (c) 2021 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree.
*/

'use strict';

/* global MediaStreamTrackProcessor, MediaStreamTrackGenerator */
if (typeof MediaStreamTrackProcessor === 'undefined' ||
typeof MediaStreamTrackGenerator === 'undefined') {
alert(
'Your browser does not support the experimental MediaStreamTrack API ' +
'for Insertable Streams of Media. See the note at the bottom of the ' +
'page.');
}

const startButton = document.getElementById('startButton');
const slowDownButton = document.getElementById('slowDownButton');
const stopButton = document.getElementById('stopButton');
const originalVideo = document.getElementById('originalVideo');
const recordedOriginalVideo = document.getElementById('recordedOriginalVideo');
const mirroredWithCanvasVideo = document.getElementById('mirroredWithCanvasVideo');
const recordedMirroredWithCanvasVideo = document.getElementById('recordedMirroredWithCanvasVideo');
const mirroredInWebWorkerVideo = document.getElementById('mirroredInWebWorkerVideo');
const recordedMirroredInWorkerVideo = document.getElementById('recordedMirroredInWorkerVideo');

const worker = new Worker('./js/worker.js', { name: 'Mirror worker' });


class VideoRecorder {
constructor(stream, outputVideoElement) {
this.videoElement = outputVideoElement;
this.mediaRecorder = new MediaRecorder(stream);

this.mediaRecorder.ondataavailable = (event) => {
if (event.data.size > 0) {
this.recordedBlob.push(event.data);
}
};

this.mediaRecorder.onerror = (e) => {
throw e;
};

this.recordedBlob = [];
}

start() {
this.mediaRecorder.start(1000);
}

  stop() {
    // Build the blob in the onstop handler so the final dataavailable
    // chunk is included before the recording is assembled.
    this.mediaRecorder.onstop = () => {
      console.log('stopped');
      const blob = new Blob(this.recordedBlob, { type: 'video/webm' });
      this.videoElement.src = URL.createObjectURL(blob);
    };
    this.mediaRecorder.stop();
  }
}

let recorders = [];

startButton.addEventListener('click', async () => {
const stream = await navigator.mediaDevices.getUserMedia({ video: { width: 1280, height: 720 } });
@hthetiot commented on Jul 9, 2024:

Hard-coded camera size will fail on many devices.

Edit: you already do part of this below:

Use `video: true` constraints and read `videoWidth`/`videoHeight` on the video tag once `loadedmetadata` fires to get the size, instead of the VideoTrack width or height settings.
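The reviewer's suggestion could be sketched like this (browser-only code; the helper name `getStreamAndSize` is hypothetical and not part of this PR):

```javascript
// Ask for flexible constraints and read the real frame size from a <video>
// element once 'loadedmetadata' fires, instead of hard-coding 1280x720.
async function getStreamAndSize() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const probe = document.createElement('video');
  probe.muted = true;
  probe.srcObject = stream;
  await new Promise(resolve =>
      probe.addEventListener('loadedmetadata', resolve, { once: true }));
  // videoWidth/videoHeight reflect what the camera actually delivered.
  return { stream, width: probe.videoWidth, height: probe.videoHeight };
}
```

Callers would then size the canvas from the returned `width`/`height` rather than assuming a resolution the device may not support.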

originalVideo.srcObject = stream;

const [track] = stream.getTracks();
const processor = new MediaStreamTrackProcessor({ track });
const { readable } = processor;

const generator = new MediaStreamTrackGenerator({ kind: 'video' });
const { writable } = generator;

const mediaStream = new MediaStream([generator]);
mirroredInWebWorkerVideo.srcObject = mediaStream;

const mirroredWithCanvasVideoStream = createMirroredCanvasStream(stream);
mirroredWithCanvasVideo.srcObject = mirroredWithCanvasVideoStream;

recorders.push(new VideoRecorder(stream, recordedOriginalVideo));
recorders.push(new VideoRecorder(mediaStream, recordedMirroredInWorkerVideo));
recorders.push(new VideoRecorder(mirroredWithCanvasVideoStream, recordedMirroredWithCanvasVideo));


recorders.forEach(recorder => recorder.start());

worker.postMessage({
operation: 'mirror',
readable,
writable,
}, [readable, writable]);
});

stopButton.addEventListener('click', () => {
recorders.forEach(recorder => recorder.stop());
recorders = [];
});

slowDownButton.addEventListener('click', () => {
console.time('slowDownButton');
let str = '';
for (let i = 0; i < 100000; i++) {
str += i.toString();
if (str[str.length - 1] === '0') {
str += '1';
}
}
console.timeEnd('slowDownButton');
});


function createMirroredCanvasStream(stream) {
const videoElement = document.createElement('video');
videoElement.playsInline = true;
videoElement.autoplay = true; // required in order for <canvas/> to successfully capture <video/>
videoElement.muted = true;
videoElement.srcObject = stream;

const videoTrack = stream.getVideoTracks()[0];
const { width, height } = videoTrack.getSettings();

const canvasElm = document.createElement('canvas');
canvasElm.width = width;
canvasElm.height = height;

const ctx = canvasElm.getContext('2d');

ctx.translate(canvasElm.width, 0);
ctx.scale(-1, 1);

function drawCanvas() {
ctx.drawImage(videoElement, 0, 0, canvasElm.width, canvasElm.height);
requestAnimationFrame(drawCanvas);
}
// our stepping criteria to recursively draw on canvas from <video/> frame
requestAnimationFrame(drawCanvas);

  // Testing this, we realized that Chrome makes the video super laggy if the video element
  // is not in the DOM and visible. We tried turning the opacity to 0, positioning the
  // video offscreen, etc., but the only thing that keeps performance good is making
  // it actually visible. So we make a 1px x 1px video in the top corner of the screen.
videoElement.style.width = '1px';
videoElement.style.height = '1px';
videoElement.style.position = 'absolute';
videoElement.style.zIndex = '9999999999999';
document.body.appendChild(videoElement);
videoElement.play();
const canvasStream = canvasElm.captureStream(30);

return canvasStream;
}
44 changes: 44 additions & 0 deletions src/content/insertable-streams/video-recording/js/worker.js
@@ -0,0 +1,44 @@
/*
* Copyright (c) 2021 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree.
*/

'use strict';

const offscreenCanvas = new OffscreenCanvas(256, 256);
const ctx = offscreenCanvas.getContext('2d');


function transform(frame, controller) {
  if (offscreenCanvas.width !== frame.displayWidth) {
    // Resizing the canvas resets its transform, so re-apply the horizontal
    // flip using the actual frame width (not a hard-coded 1280).
    offscreenCanvas.width = frame.displayWidth;
    offscreenCanvas.height = frame.displayHeight;
    ctx.translate(offscreenCanvas.width, 0);
    ctx.scale(-1, 1);
  }

// Draw frame to offscreen canvas with flipped x-axis.
ctx.drawImage(frame, 0, 0, offscreenCanvas.width, offscreenCanvas.height);

const newFrame = new VideoFrame(offscreenCanvas, {
timestamp: frame.timestamp,
duration: frame.duration,
});
controller.enqueue(newFrame);
frame.close();
}

onmessage = async (event) => {
const {operation} = event.data;
if (operation === 'mirror') {
const {readable, writable} = event.data;
readable
.pipeThrough(new TransformStream({transform}))
.pipeTo(writable);
} else {
console.error('Unknown operation', operation);
}
};