
Voice Chat with LLM

This project starts a voice chat with an LLM. Voice activity detection (VAD) is used to detect the start and end of speech. The language and voice models used are:

  1. OpenAI Whisper for audio-to-text transcription;
  2. Llama3-8b served via the Groq API to improve response speed;
  3. OpenAI TTS for text-to-speech.

Demo
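The repository's app.py ties these pieces together; as a rough illustration only, the sketch below shows one conversation turn, assuming the official openai and groq Python SDKs with the model ids whisper-1, llama3-8b-8192, and tts-1. The voice name, file paths, and the chat_turn helper are hypothetical, and the VAD-driven recording step is omitted.

```python
# Hypothetical sketch of one conversation turn: transcribe -> chat -> speak.
# Assumes the `openai` and `groq` packages and an already-recorded utterance;
# the VAD/recording logic used by app.py is not shown here.
import os

from groq import Groq
from openai import OpenAI

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
groq_client = Groq(api_key=os.environ["GROQ_API_KEY"])


def chat_turn(audio_path: str, reply_audio_path: str = "reply.mp3") -> str:
    # 1. Speech-to-text with OpenAI Whisper.
    with open(audio_path, "rb") as audio_file:
        transcript = openai_client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )

    # 2. Generate a reply with Llama3-8b via the Groq API.
    completion = groq_client.chat.completions.create(
        model="llama3-8b-8192",  # assumed Groq model id for Llama3-8b
        messages=[{"role": "user", "content": transcript.text}],
    )
    reply = completion.choices[0].message.content

    # 3. Text-to-speech with OpenAI TTS, saved to an mp3 file.
    speech = openai_client.audio.speech.create(
        model="tts-1",
        voice="alloy",
        input=reply,
    )
    speech.write_to_file(reply_audio_path)
    return reply


if __name__ == "__main__":
    print(chat_turn("user_utterance.wav"))
```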

Deployment

  1. Create a .env file with your OpenAI and Groq API keys:

     OPENAI_API_KEY=
     GROQ_API_KEY=

  2. Install the dependencies and start the app:

     pip install -r requirement.txt
     python app.py
