
Don't Film Me!

Many people are sensitive to their faces being shown publicly on video without their consent. Cinematographers often need subjects in a video to sign a release form of some sort, which can be a hassle and is sometimes unnecessary for the purposes of the video. A good way to avoid this hassle is to simply blur the subject's face in post-production. However, this is a tedious task that usually requires manual work. Automating it makes life much simpler for video editors and saves them a significant amount of time.

The goal of Don't Film Me is to automate this blurring process.

How it works

  1. Get a reference face.
  2. Break the video down into images and audio.
  3. Compare one image against the reference face every *f* images.
     - *f*: frequency of frame analysis
     - Analysis is done via the Microsoft Cognitive Services Vision API.
  4. If there is a match, blur the face region of the image for *f* frames.
     - A neat optimization is to also blur the 5–10 frames before and after the match, which usually ensures full coverage throughout the video.
     - The blurring is done via the StackBlur algorithm. Its results are similar to a Gaussian blur, but it is about 7x faster, making it viable as a real-time tool.
  5. Stitch the images and audio back together. Et voilà!
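The frame-selection logic in steps 3–4 can be sketched as follows. This is a hypothetical helper, not the actual code in `blur.js`: given the frame indices where a sampled image matched the reference face, it marks the *f* frames starting at each match for blurring, padded by a few frames on either side as described above.

```javascript
// Sketch of the frame-selection logic (hypothetical, not from blur.js).
// Every f-th frame is analyzed; each matching sample marks the next f
// frames for blurring, padded by `pad` frames on both sides.
function framesToBlur(matchedSamples, f, pad, totalFrames) {
  const blur = new Set();
  for (const start of matchedSamples) {
    // Blur [start - pad, start + f - 1 + pad], clamped to the video's bounds.
    const lo = Math.max(0, start - pad);
    const hi = Math.min(totalFrames - 1, start + f - 1 + pad);
    for (let i = lo; i <= hi; i++) blur.add(i);
  }
  return [...blur].sort((a, b) => a - b);
}
```

For example, with `f = 10` and `pad = 5`, a match at frame 30 marks frames 25 through 44 for blurring.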

Usage

```shell
babel-node blur.js <video_file> <reference_image>
```

Demos

*(original footage)*

Reference Images

*(reference images)*

Blurring One Person

*(demo with one face blurred)*

Blurring Two People

*(demo with both faces blurred)*

*(original footage, second demo)*

Reference Image

*(reference image)*

Blurred Result

*(demo with the face blurred)*