Gemopus-4-26B-A4B-it GGUF: Opus-Tuned Gemma 4 for Local Inference
On Hugging Face, an enthusiast has posted a quantized Gemma 4 fine-tuned on Opus data.
I think anyone who has used Opus knows that its advantage is the quality of its answers: compared with most LLMs, they are probably the most natural and readable.
The "it" suffix stands for instruction-tuned: the model was additionally fine-tuned to follow instructions and handle dialogue precisely.
It will run in LM Studio, Ollama, GPT4All, or via bare llama.cpp.
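For the llama.cpp route, here is a minimal Python sketch using llama-cpp-python (with huggingface_hub handling the download). The Q4_K_M filename glob and the generation parameters are assumptions on my part, so check the repo's file list for the exact quant names before running it:

```python
# Minimal sketch, assuming the repo ships a Q4_K_M quant and that
# llama-cpp-python and huggingface_hub are installed:
#   pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# from_pretrained downloads the matching .gguf from Hugging Face;
# the filename glob is a guess -- verify it against the repo's Files tab.
llm = Llama.from_pretrained(
    repo_id="Jackrong/Gemopus-4-26B-A4B-it-GGUF",
    filename="*Q4_K_M*.gguf",
    n_ctx=8192,        # context window; reduce it if memory is tight
    n_gpu_layers=-1,   # offload all layers to the GPU; set 0 for CPU-only
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization does."}],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```

LM Studio, Ollama, and GPT4All do the same thing behind a GUI: point them at the downloaded .gguf file and they handle loading and chat templating for you.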
For comfortable use with Q4_K_M or Q5 quantization, you’ll need about 16–24 GB of RAM/VRAM in total.
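As a rough sanity check on that figure: Q4_K_M stores weights at roughly 4.5–5 bits each, so 26B parameters work out to about 26 × 0.6 ≈ 15–16 GB for the weights alone, and the KV cache plus runtime overhead push a comfortable total into the quoted 16–24 GB range.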
https://huggingface.co/Jackrong/Gemopus-4-26B-A4B-it-GGUF