Awesome Vision Language Model

Overview

Contrastive Learning

Narrows the distance between image and text embeddings in a shared latent space, pulling matched pairs together and pushing mismatched pairs apart.
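
The canonical example is CLIP's symmetric InfoNCE objective. A minimal PyTorch sketch, assuming `image_emb` and `text_emb` are batch-aligned outputs of separate image and text encoders (all names here are illustrative):

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE: matched image-text pairs sit on the diagonal."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature   # (B, B) cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)       # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)   # text -> image direction
    return (loss_i2t + loss_t2i) / 2
```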

PrefixLM

A unified multi-modal architecture consisting of an encoder and a decoder, where the image embedding acts as a prefix to the text sequence. Main tasks are image-conditioned text generation/captioning and VQA.
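
A minimal sketch of the prefix idea, assuming PyTorch and pre-extracted ViT patch features; the module, dimensions, and masking scheme below are illustrative rather than any specific paper's implementation:

```python
import torch
import torch.nn as nn

class TinyPrefixLM(nn.Module):
    """Image patch embeddings are prepended as a prefix; the prefix is
    attended bidirectionally, while the text span is decoded causally."""
    def __init__(self, vocab_size=32000, dim=512, patch_dim=768):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, dim)
        self.token_emb = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, patch_feats, text_ids):
        prefix = self.patch_proj(patch_feats)            # (B, P, D)
        tokens = self.token_emb(text_ids)                # (B, T, D)
        x = torch.cat([prefix, tokens], dim=1)
        P, T = prefix.size(1), tokens.size(1)
        # Every position may attend to the full image prefix; text
        # positions additionally attend causally within the text span.
        mask = torch.full((P + T, P + T), float("-inf"), device=x.device)
        mask[:, :P] = 0.0
        mask[P:, P:] = torch.triu(
            torch.full((T, T), float("-inf"), device=x.device), diagonal=1)
        h = self.blocks(x, mask=mask)
        return self.lm_head(h[:, P:])   # next-token logits over the text span
```

Training would then apply a standard language-modeling loss between the text logits and the caption tokens shifted by one position.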

Multi-modal Fusing with Cross Attention

Fuses visual information into a language-model decoder using a cross-attention mechanism. Main tasks are image captioning and VQA.
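
A minimal sketch of one gated cross-attention fusion block in the spirit of Flamingo, assuming PyTorch; the names and the zero-initialized gate are illustrative choices:

```python
import torch
import torch.nn as nn

class GatedCrossAttention(nn.Module):
    """Text hidden states query visual features; the attended result is
    added residually through a tanh gate initialized at zero, so the
    pretrained language model is undisturbed at the start of training."""
    def __init__(self, dim=512, num_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, text_hidden, visual_feats):
        # Queries from the text stream, keys/values from the image stream.
        attended, _ = self.cross_attn(self.norm(text_hidden),
                                      visual_feats, visual_feats)
        return text_hidden + torch.tanh(self.gate) * attended
```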

Masked-Language Modeling / Image-Text Matching

A combination of MLM and ITM. MLM predicts the masked words conditioned on the image, which is often annotated with extra information such as bounding boxes, while ITM matches an image to its correct caption among many negative captions.
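
A rough sketch of the two pretraining heads, assuming PyTorch and a joint multimodal encoder whose first position is a [CLS] token; all names and shapes are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MlmItmHeads(nn.Module):
    """Two heads over the fused hidden states of a joint image-text
    encoder: MLM recovers masked token ids, ITM classifies whether the
    (image, caption) pair is matched or a sampled negative."""
    def __init__(self, dim=768, vocab_size=30522):
        super().__init__()
        self.mlm_head = nn.Linear(dim, vocab_size)
        self.itm_head = nn.Linear(dim, 2)

    def forward(self, fused_hidden, mask_positions, masked_token_ids, itm_labels):
        # fused_hidden: (B, L, D); position 0 is the [CLS] token.
        batch_idx = torch.arange(fused_hidden.size(0),
                                 device=fused_hidden.device).unsqueeze(-1)
        masked_states = fused_hidden[batch_idx, mask_positions]    # (B, M, D)
        mlm_loss = F.cross_entropy(
            self.mlm_head(masked_states).flatten(0, 1), masked_token_ids.flatten())
        itm_loss = F.cross_entropy(self.itm_head(fused_hidden[:, 0]), itm_labels)
        return mlm_loss + itm_loss
```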

No Training

Without any additional training, uses off-the-shelf pretrained models to bring image and text features into one shared space.
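
For example, a frozen pretrained CLIP model can match images and captions zero-shot through the Hugging Face `transformers` API (the image path here is hypothetical):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Frozen pretrained encoders; nothing here is trained or fine-tuned.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg")                 # hypothetical input image
captions = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Because both encoders already map into one shared space, a softmax over
# the similarity scores gives zero-shot image-caption matching.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))
```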
