
AVA


Introduction

Building AVA from Ex Machina: a lightweight multi-modal system built from scratch, just for learning & experimentation. A language model is paired with an audio engine to generate speech, along with a vision model capable of understanding human emotions. MoE is used to generate and understand speech, while vision models identify and see physical things.

Trained models can be downloaded from: huggingface/ava-v1
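A minimal sketch of pulling the weights with the huggingface_hub client; the repo id below is only an assumption for illustration, so substitute the actual hub path from the link above:

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# repo_id is assumed for illustration; replace with the actual hub path if it differs
local_dir = snapshot_download(repo_id="shivendrra/ava-v1")
print(f"model files downloaded to: {local_dir}")
```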

Audio-Language Model

A transformer-based MoE language model fused with an audio engine, which can be trained directly on spoken audio rather than on written data, without needing text data in any way. Still experimenting, will see what happens.
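As a rough illustration of the idea (not the actual AVA architecture), here is a minimal sketch of a top-k MoE layer routing embedded audio frames to feed-forward experts. It assumes PyTorch, and the hyperparameters (d_model, number of experts, top_k) are made up for the example:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Expert(nn.Module):
    """A single feed-forward expert."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, x):
        return self.net(x)

class MoELayer(nn.Module):
    """Mixture-of-Experts layer with top-k gating over audio-frame embeddings."""
    def __init__(self, d_model: int = 512, d_hidden: int = 2048,
                 n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([Expert(d_model, d_hidden) for _ in range(n_experts)])
        self.gate = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):
        # x: (batch, seq_len, d_model) -- e.g. embedded audio frames
        logits = self.gate(x)                              # (B, T, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)     # route each frame to its top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                    # frames routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

# toy usage: a batch of 4 "utterances", 100 audio frames each, 512-dim features
frames = torch.randn(4, 100, 512)
print(MoELayer()(frames).shape)  # torch.Size([4, 100, 512])
```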

Vision Model

Not yet decided, but I'll pick one soon.

Contribution

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

License

MIT
