Have you tried Matrix?
OpenWebUI is connected to TabbyAPI's OpenAI-compatible endpoint. I'll try reducing the temperature and see if that makes it more accurate.
Context was set to anywhere between 8k and 16k. It would respond in English properly, and then about halfway to three-quarters of the way through a response it would start outputting tokens in either a foreign language (Russian/Chinese, in the case of Qwen 2.5) or things that don't make sense (random code snippets, improperly formatted text). Sometimes the text repeated as well, but I thought that might have been a template problem, because it seemed to be answering the question twice.
Otherwise, all settings are the defaults.
I tried it with both Qwen 2.5 14B and Llama 3.1. Both were EXL2 quants produced by bartowski.
Perplexica works. It supports Ollama and custom OpenAI-compatible providers.
Super useful guide. However, after playing around with TabbyAPI, the responses from models quickly turn to gibberish, usually halfway through or towards the end. I'm using EXL2 models off of Hugging Face, with Q4, Q6, and FP16 cache. Any tips? Also, how do I control context length on a per-model basis? Is it max_seq_len in config.json?
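For what it's worth, I don't think it's the model's config.json; TabbyAPI has its own config.yml where context length and cache quantization are set. A minimal sketch from memory of config_sample.yml (double-check the key names against your copy, and the values here are placeholders):

```yaml
# config.yml (TabbyAPI) -- relevant model section only
model:
  model_dir: models            # where the EXL2 model folders live
  model_name: Qwen2.5-14B-exl2 # placeholder folder name
  max_seq_len: 16384           # context length the model is loaded with
  cache_mode: Q6               # KV cache quantization: FP16, Q8, Q6, or Q4
```

Per-model control would then mean changing max_seq_len before loading a given model, or passing it through TabbyAPI's model-load endpoint if your client exposes that.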
Seems to be the only necessary thing in my case! Thanks.
Yeah I definitely have the default GTK chooser. Guess I have some config playing to do later.
Can you explain a bit more about this and how to configure it? When I use FF on GNOME, the save dialog just looks like the other dialogs?
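From what I've read, the relevant knob seems to be an about:config pref that tells Firefox to use the XDG desktop portal (and therefore the native GNOME dialog) instead of its built-in GTK chooser; the exact value semantics may differ between versions, so treat this as a starting point:

```
widget.use-xdg-desktop-portal.file-picker = 1
```

As I understand it, 0 means never use the portal, 1 means always use it, and 2 lets Firefox decide automatically.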
The only problem I really have is context size. It's hard to get beyond an 8k context while maintaining decent generation speed with 16 GB of VRAM and 16 GB of RAM. I'm going to get more RAM at some point, though, and I hope ollama/llama.cpp get better at memory management. Hopefully the distributed inference from llama.cpp ends up in ollama.
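In case it's useful: with Ollama the context size can at least be pinned per model through a Modelfile (num_ctx is the relevant parameter, if I remember the syntax right; the model tag below is just an example):

```
# Modelfile -- create the variant with: ollama create llama3.1-8k -f Modelfile
FROM llama3.1
PARAMETER num_ctx 8192
```

That at least stops clients from silently falling back to the small default context, even if it doesn't help with the VRAM/RAM ceiling.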
I do have a local setup. It's not powerful enough to run Mixtral 8x22B, but it can run 8x7B (albeit quite slowly). I use it a lot.
Not trying to get around anything. No funny instructions like my grandma singing a lullaby about illegal activities. Just using instructions to tell a story. Even something like having a superhero in a fight is enough to trigger this. It also doesn't explain why regenerating makes it continue.
Doesn’t GNOME already have this?
I use a Misskey fork for microblogging, and I can’t even get Lemmy posts to load. The profiles of communities load, but that’s it.
Ah right. What I really meant to ask was whether it can do protocols other than HTTP.
Which I don’t think it can…
Are you able to tunnel ports other than 80 and 443 through Cloudflare?
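For reference, my understanding is that Cloudflare Tunnel isn't limited to 80/443 on the origin side: cloudflared's ingress rules can point a hostname at an arbitrary TCP service, roughly like this (the hostname, paths, and tunnel ID are placeholders):

```yaml
# ~/.cloudflared/config.yml
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json

ingress:
  - hostname: ssh.example.com
    service: tcp://localhost:22   # any local TCP port, not just 80/443
  - service: http_status:404      # required catch-all rule
```

The catch is on the client side: raw TCP hostnames are reached through `cloudflared access tcp` (or a WARP client) rather than a plain browser connection.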
I find it somewhat unclear how this works. Is it JavaScript on the static site itself that loads the comments onto the posts?
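That's my reading of it, anyway: the static page ships a small script that fetches the comment tree from the Lemmy API at load time and renders it client-side. A hypothetical sketch of that pattern (the instance URL and post ID are made up; the endpoint is Lemmy's comment/list):

```typescript
// Hypothetical comment loader for a static page; run after DOMContentLoaded.
const INSTANCE = "https://lemmy.example.com"; // placeholder instance
const POST_ID = 123456;                       // placeholder post ID

async function loadComments(): Promise<void> {
  const res = await fetch(
    `${INSTANCE}/api/v3/comment/list?post_id=${POST_ID}&max_depth=8`
  );
  if (!res.ok) throw new Error(`Lemmy API returned ${res.status}`);
  const { comments } = await res.json();

  // Append one element per comment; a real implementation would nest replies.
  const container = document.getElementById("comments")!;
  for (const c of comments) {
    const div = document.createElement("div");
    div.textContent = `${c.creator.name}: ${c.comment.content}`;
    container.appendChild(div);
  }
}

loadComments().catch(console.error);
```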
It’s not about that. It’s about targeting minorities, history they don’t like, etc.
Where can I get a sub-$400 AMD card with 26 GB of VRAM?