Comments:
Please make a tutorial video for installing the AI assistant; I tried following the guide but couldn't get it working.
Great work! If you have a strong enough computer, you can run a smaller 13B model with fast TTS at much lower latency.
Wow, this project is insane! Is it possible to swap OpenAI for a local LLM instead, to have a 100% offline voice assistant?
Very impressive work!
Incredible! I was working on the same project and had the TTS latency issue: any cloud TTS service has latency that is too high for real-time purposes. Definitely going to implement your approach. Thanks!
Incredible work!
Found your projects today and I cannot describe in words how impressive this all is. +1!
Hey brother! When I run your program it shows a rate-limit error. BTW, I am using the free tier of OpenAI.
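A common way to work around rate-limit errors like this is exponential backoff with jitter: retry the call after an increasing delay instead of failing immediately. A minimal sketch, assuming Python; the helper name and the fake API call are mine, not from the project (a real client would catch the library's rate-limit exception instead of `RuntimeError`):

```python
import random
import time


def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn on a rate-limit error, doubling the delay each attempt."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for the client's rate-limit exception
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # exponential backoff plus a little random jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))


# Demo: a fake API call that is rate-limited twice, then succeeds.
calls = {"n": 0}

def flaky_api():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limit exceeded")
    return "ok"

print(with_backoff(flaky_api, base_delay=0.01))  # → ok
```

On the free tier the limits are low enough that backoff only softens the problem; fewer or batched requests (or a paid tier) is the real fix.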
Hi buddy!!
I'm trying this approach but getting an error. I have built a voice assistant using LangChain and GPT-3.5 Turbo with the ElevenLabs and OpenAI APIs, but the latency is not reducing.
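One technique that usually cuts perceived latency in an LLM-to-TTS pipeline is streaming: instead of waiting for the full GPT reply before calling TTS, send each sentence to TTS as soon as it completes. A minimal sketch of that idea, assuming Python and a simulated token stream (the function name and demo tokens are mine; real code would iterate a streaming API response):

```python
import re


def sentence_chunks(token_stream):
    """Yield complete sentences as soon as they appear in a token stream,
    so TTS can start speaking before the LLM finishes its whole reply."""
    buf = ""
    for tok in token_stream:
        buf += tok
        # Emit every finished sentence currently sitting in the buffer.
        while True:
            m = re.search(r"[.!?]\s", buf)
            if not m:
                break
            yield buf[:m.end()].strip()
            buf = buf[m.end():]
    if buf.strip():  # flush whatever trails after the last boundary
        yield buf.strip()


# Demo with a hand-made token stream standing in for streamed LLM output.
tokens = ["Hel", "lo there. ", "How can ", "I help? ", "Bye"]
print(list(sentence_chunks(tokens)))  # → ['Hello there.', 'How can I help?', 'Bye']
```

With chunking like this, time-to-first-audio is roughly the time to generate one sentence, not the whole answer, which is often where the latency actually hides.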
Actually, you want about 100 ms of delay at the very least. We're human and take time to process information; it would seem unnatural to have a conversation where you felt like someone was finishing your sentences for you all the time.
It's impressive! Which GPU are you using?
Impressive work, thanks
Very nice. Great job ❤
Out of curiosity, how would you handle back-to-back conversation with interruption handling, without using the space key?
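Hands-free interruption ("barge-in") is usually done with voice activity detection: keep listening while the assistant speaks, and cut TTS playback when sustained speech energy appears on the mic. A minimal sketch under that assumption, using a simple RMS-energy threshold over audio frames (the function name, threshold, and demo frames are mine, not from the project; production systems use a real VAD model):

```python
def detect_barge_in(frames, threshold=0.1, min_frames=3):
    """Return the index of the first frame of sustained speech (RMS above
    threshold for min_frames consecutive frames), or None if no barge-in."""
    run = 0
    for i, frame in enumerate(frames):
        # Root-mean-square energy of one frame of samples.
        rms = (sum(s * s for s in frame) / len(frame)) ** 0.5
        run = run + 1 if rms > threshold else 0
        if run >= min_frames:
            return i - min_frames + 1  # stop TTS playback from here
    return None


# Demo: four quiet frames, then sustained speech starting at frame 4.
quiet = [0.0] * 160   # one 10 ms frame of silence at 16 kHz
loud = [0.5] * 160    # one frame of speech-level samples
print(detect_barge_in([quiet] * 4 + [loud] * 5))  # → 4
```

Requiring several consecutive loud frames keeps a cough or the assistant's own echo from triggering a false interrupt; real setups also add echo cancellation so the mic doesn't "hear" the TTS output.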
Why does she sound so sneaky? 😁😁🤣🤣