Comments:
Thank you so much~ I never knew that using models from Hugging Face in Ollama could be this easy.
Hi, do the Hugging Face models have to be in GGUF format to run in Ollama / Open WebUI?
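[Editor's note] Ollama can pull GGUF files straight from the Hugging Face Hub using an `hf.co/<user>/<repo>` reference; non-GGUF checkpoints would first need converting to GGUF (for example with llama.cpp's conversion scripts). A minimal sketch; the repository name and quantization tag below are illustrative examples, not a recommendation:

```shell
# Reference a GGUF repo on the Hub; the :Q4_K_M suffix selects a
# specific quantization. The repo name here is just an example.
MODEL_REF="hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M"

# Only invoke ollama if it is actually installed on this machine.
if command -v ollama >/dev/null 2>&1; then
  ollama pull "$MODEL_REF"
fi

echo "$MODEL_REF"
```

Once pulled, the model shows up in Open WebUI's model list like any other Ollama model.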
Can you please make a video comparing LM Studio vs. Open WebUI + Ollama vs. LocalAI?
Thanks! Not Docker, yay, more RAM for the models xD. Which do you think is more powerful: AnythingLLM, Open WebUI, LobeChat, or something else?
Hi sir, most models I've tried from HF on Ollama are not optimized: sometimes they don't output markdown, and sometimes the quality of their output is lackluster, with off grammar, bad punctuation, etc.
I've been waiting for this for a long time. Thank you!
Can we use models like text-to-speech and text-to-image with Ollama? If so, could you make an explanatory video? Thank you very much.
Is there a way to do this for vision models, for analyzing images?
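[Editor's note] Yes: multimodal models such as llava run under Ollama, and the CLI accepts a local image path inside the prompt. A sketch, assuming Ollama and the llava model are installed locally; the image path is a placeholder:

```shell
IMG="./photo.jpg"                      # placeholder path to a local image
PROMPT="Describe this image: $IMG"

# Only run the model if ollama is available (and llava is pulled).
if command -v ollama >/dev/null 2>&1; then
  ollama run llava "$PROMPT"
fi

echo "$PROMPT"
```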
Hi Fahd, I'm absolutely loving the content on your channel, and I think I might have an idea for you.
Could you please do a video on how to find and set the best settings in Open WebUI for Ollama models, to optimise their responses? For example, how to find the best settings (temperature, top_k, etc.) for a coding model like codellama.
I feel like simply importing the models and running them doesn't give the best outputs, despite a lot of claims that they outperform models like ChatGPT. Mine isn't outperforming anything at the moment 😅
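[Editor's note] One way to pin sampling settings is a custom Modelfile that fixes `temperature`, `top_k`, and `top_p`, then building a new model from it that Open WebUI can select. The values below are illustrative starting points for a coding model, not proven optima:

```shell
# Write a Modelfile that lowers sampling randomness; low temperature
# tends to suit code generation. Values are illustrative only.
cat > Modelfile <<'EOF'
FROM codellama
PARAMETER temperature 0.2
PARAMETER top_k 40
PARAMETER top_p 0.9
EOF

grep -c '^PARAMETER' Modelfile   # prints 3
```

Then (assuming Ollama is installed) `ollama create codellama-tuned -f Modelfile` registers the variant, and `codellama-tuned` appears in Open WebUI's model dropdown alongside the base model.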
It doesn't work. I tried pulling the Llama 3.2 11B model; it just fails with "download cancelled". :-(