Building Ava from Ex Machina: a language model paired with an audio engine to generate speech, together with a vision model capable of understanding human emotions. A Mixture-of-Experts (MoE) model is used to generate and understand speech, while vision models identify and recognize physical objects.
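A minimal sketch of the vision side of that idea, assuming a torchvision ResNet-18 backbone repurposed as an emotion classifier; the `EmotionRecognizer` class and the emotion label set below are illustrative assumptions, not part of the released models.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Illustrative label set (assumption): a common 7-way emotion taxonomy.
EMOTIONS = ["neutral", "happy", "sad", "angry", "fearful", "surprised", "disgusted"]


class EmotionRecognizer(nn.Module):
    """Maps a face crop to emotion logits using a standard CNN backbone."""

    def __init__(self, n_classes: int = len(EMOTIONS)):
        super().__init__()
        self.backbone = resnet18(weights=None)  # swap in pretrained weights if available
        # Replace the ImageNet head with an emotion classification head.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_classes)

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        # face: (batch, 3, 224, 224) normalized RGB crops
        return self.backbone(face)


if __name__ == "__main__":
    model = EmotionRecognizer().eval()
    with torch.no_grad():
        logits = model(torch.randn(1, 3, 224, 224))
    print(EMOTIONS[logits.argmax(dim=-1).item()])
```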
Trained models can be downloaded from: huggingface/ava-v1
A transformer-based MoE language model fused with an audio engine that could be trained directly on spoken audio rather than written data, with no need for text data at all. Still experimenting; we'll see what happens.
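A minimal sketch of that architecture, assuming a PyTorch setup: raw log-mel audio frames are projected into the model dimension by a hypothetical `AudioFrontend` and fed straight into a Mixture-of-Experts transformer block, with no text tokens anywhere. The layer sizes and top-1 routing are illustrative assumptions, not the actual training configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioFrontend(nn.Module):
    """Projects log-mel frames into the model dimension (assumption: 80 mel bins)."""

    def __init__(self, n_mels: int = 80, d_model: int = 256):
        super().__init__()
        self.proj = nn.Linear(n_mels, d_model)

    def forward(self, mel_frames: torch.Tensor) -> torch.Tensor:
        # mel_frames: (batch, time, n_mels) -> (batch, time, d_model)
        return self.proj(mel_frames)


class MoEFeedForward(nn.Module):
    """Top-1 gated mixture of expert MLPs (a common MoE layout; sketch only)."""

    def __init__(self, d_model: int = 256, d_hidden: int = 1024, n_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model)
                )
                for _ in range(n_experts)
            ]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = F.softmax(self.gate(x), dim=-1)  # (batch, time, n_experts)
        top_w, top_idx = scores.max(dim=-1)       # route each frame to one expert
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i                   # frames routed to expert i
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out


class AudioMoEBlock(nn.Module):
    """One transformer block: self-attention over audio frames + MoE feed-forward."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.moe = MoEFeedForward(d_model)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        return x + self.moe(self.norm2(x))


if __name__ == "__main__":
    mel = torch.randn(2, 100, 80)  # 2 clips, 100 frames, 80 mel bins
    frontend, block = AudioFrontend(), AudioMoEBlock()
    print(block(frontend(mel)).shape)  # torch.Size([2, 100, 256])
```

In a full model, several of these blocks would be stacked and trained on a next-frame or masked-frame objective over audio alone, which is what lets the approach skip written data entirely.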
Not yet decided, but I will decide soon.
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Please make sure to update tests as appropriate.
MIT