Ricard Lado Roigé, Marco A. Pérez
IQS School of Engineering, Universitat Ramon Llull
This repository contains the official PyTorch implementation of the paper STB-VMM: Swin Transformer based Video Motion Magnification.
The goal of Video Motion Magnification techniques is to magnify small motions in a video to reveal previously invisible or unseen movement. Their uses extend from bio-medical applications and deepfake detection to structural modal analysis and predictive maintenance. However, discerning small motion from noise is a complex task, especially when attempting to magnify very subtle, often sub-pixel movement. As a result, motion magnification techniques generally suffer from noisy and blurry outputs. This work presents a new state-of-the-art model based on the Swin Transformer, which offers better tolerance to noisy inputs as well as higher-quality outputs that exhibit less noise, blurriness, and artifacts than prior art. Improvements in output image quality will enable more precise measurements for any application reliant on magnified video sequences, and may enable further development of video motion magnification techniques in new technical fields.
pip install -r requirements.txt
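If you prefer an isolated setup, the same install can be done inside a virtual environment (standard Python tooling, not specific to this repository):
python3 -m venv .venv
source .venv/bin/activate    # Windows PowerShell: .venv\Scripts\Activate.ps1
pip install -r requirements.txt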
❗ FFmpeg is required to run the magnify_video script
To test STB-VMM, run the magnify_video.sh script with the appropriate arguments.
For example:
bash magnify_video.sh -mag 20 -i ../demo_video/baby.mp4 -m ckpt/ckpt_e49.pth.tar -o STB-VMM_demo_x20_static -s ../demo_video/ -f 30
Note: To magnify any video, a pre-trained checkpoint is required.
Note 2: If you are running Windows, an alternative PowerShell script is provided.
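To magnify several clips with the same settings, the example command can be wrapped in a shell loop. This is only a sketch that reuses the arguments from the example above; adjust the paths, checkpoint, and frame rate to your data:
# Magnify every .mp4 in ../demo_video at x20 with the same checkpoint and frame rate
for v in ../demo_video/*.mp4; do
  bash magnify_video.sh -mag 20 -i "$v" -m ckpt/ckpt_e49.pth.tar \
    -o "STB-VMM_$(basename "$v" .mp4)_x20" -s ../demo_video/ -f 30
done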
To train the STB-VMM model, use train.py with the appropriate arguments. The training dataset can be downloaded from here.
For example:
python3 train.py -d ../data/train -n 100000 -j 32 -b 5 -lr 0.00001 --epochs 50 #--resume ckpt/ckpt_e01.pth.tar
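To resume an interrupted run, pass the last saved checkpoint through the --resume flag (shown commented out above):
python3 train.py -d ../data/train -n 100000 -j 32 -b 5 -lr 0.00001 --epochs 50 --resume ckpt/ckpt_e01.pth.tar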
Demo video: STB_VMM_Video.mp4
More examples are available at http://doi.org/10.17632/76s26nrcpv.2
If you find this work useful, please consider citing:
@article{LADOROIGE2023110493,
title = {STB-VMM: Swin Transformer based Video Motion Magnification},
journal = {Knowledge-Based Systems},
pages = {110493},
year = {2023},
issn = {0950-7051},
doi = {10.1016/j.knosys.2023.110493},
url = {https://www.sciencedirect.com/science/article/pii/S0950705123002435},
author = {Ricard Lado-Roigé and Marco A. Pérez},
keywords = {Computer vision, Deep learning, Swin Transformer, Motion magnification, Image quality assessment},
}
This implementation borrows from the awesome works of: