Many people are sensitive to having their faces shown publicly on video without their consent. Cinematographers often need subjects appearing in a video to sign a release form of some sort, which can be a hassle and is sometimes unnecessary for the video's purpose. A good way to avoid this hassle is simply to blur the subject's face in post-production. However, this is a tedious task that usually requires manual work. Automating it makes video editors' lives much simpler and saves them a significant amount of time.
The goal of Don't Film Me is to provide a tool to assist with this task of blurring by automating the process.
- Get a `reference` face
- Break the video down into images and audio
- Compare one image against the `reference` face every `f` images
  - `f`: frequency of frame analysis
  - Analysis is done via the Microsoft Cognitive Services Vision API
- Blur the face region of the image for `f` frames if there is a match
- A neat optimization is to also blur the 5-10 frames before and after each match. This usually ensures full coverage throughout the video.
- The blurring is done via the StackBlur algorithm. It produces results similar to a Gaussian blur but is roughly 7x faster, making it viable for near-real-time use.
- Stitch the images and audio back together. Et voilà!
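The frame-selection logic above (analyse every `f`-th frame, then blur the frames a match covers plus a few on either side) can be sketched as a small pure function. The name `framesToBlur` and its signature are illustrative, not taken from the project's source:

```javascript
// Compute the set of frame indices that should be blurred.
// matchedFrames: indices of sampled frames where the reference face matched
// f:             sampling frequency (every f-th frame is analysed)
// pad:           extra frames blurred before/after each match (the 5-10 frame optimization)
// totalFrames:   total frame count, used to clamp the range
function framesToBlur(matchedFrames, f, pad, totalFrames) {
  const blur = new Set();
  for (const m of matchedFrames) {
    // Blur the f frames this sample stands in for, plus pad frames either side.
    const start = Math.max(0, m - pad);
    const end = Math.min(totalFrames - 1, m + f - 1 + pad);
    for (let i = start; i <= end; i++) blur.add(i);
  }
  return blur;
}
```

For example, with `f = 10` and a padding of 5, a match on the sample at frame 30 marks frames 25 through 44 for blurring.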
```shell
babel-node blur.js <video_file> <reference_image>
```
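To illustrate the region-blurring step at the pixel level, here is a deliberately naive single-channel box blur over a face rectangle. This is not the StackBlur implementation (StackBlur gets its speed from incrementally updated sliding sums rather than re-summing the kernel window at every pixel); it is only a sketch of what "blur the face region" means:

```javascript
// Naive box blur over a rectangular region of a single-channel image.
// pixels:           flat array of width*height grayscale values
// (rx, ry, rw, rh): the face rectangle to blur
// radius:           blur kernel radius
function blurRegion(pixels, width, height, rx, ry, rw, rh, radius) {
  const out = pixels.slice(); // pixels outside the region are left untouched
  for (let y = ry; y < ry + rh; y++) {
    for (let x = rx; x < rx + rw; x++) {
      let sum = 0;
      let count = 0;
      // Average every in-bounds neighbour within the kernel window.
      for (let dy = -radius; dy <= radius; dy++) {
        for (let dx = -radius; dx <= radius; dx++) {
          const nx = x + dx;
          const ny = y + dy;
          if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
            sum += pixels[ny * width + nx];
            count++;
          }
        }
      }
      out[y * width + x] = sum / count;
    }
  }
  return out;
}
```

This version costs O(radius²) work per pixel, which is exactly the cost StackBlur avoids: its running time is essentially independent of the radius, which is why it is fast enough for this tool.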