cpaua · 1 min read

Qwen3.6 35B A3B vs Gemma4 26B A4B: MoE Vibe-Coding Face-Off

Someone gave two MoE models the exact same vibe-coding challenge.

Qwen3.6 35B A3B (31.8 GB) vs Gemma4 26B A4B (23.3 GB)

Stack:

> Unsloth Q6_K_XL quants
> llama.cpp
> sampling settings recommended in each model's card
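For reference, a run with that stack might be launched roughly like this via llama.cpp's `llama-cli`. The GGUF filename and the sampler values are illustrative placeholders only, not the exact settings used in the face-off; each model card's recommended sampling parameters should be substituted in.

```shell
# Interactive chat with one of the quantized models via llama.cpp.
# Model filename and sampler values below are examples only; use the
# sampling parameters recommended in the respective model card.
./llama-cli \
  -m Qwen3.6-35B-A3B-Q6_K_XL.gguf \
  -ngl 99 \
  --temp 0.7 --top-p 0.8 --top-k 20
```

`-ngl 99` offloads all layers to the GPU; with a ~32 GB quant that assumes enough VRAM, otherwise lower it to split layers between GPU and CPU.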

Four prompts, run side by side. Who do you think will win?

Author
cpaua

VibeCode blog admin. Writing about vibe coding, AI, and open source.

