Comparing #meta #llama 4 (Maverick / Scout) vs #qwen 32B for decompilation purposes #r2ai #reverseengineering
PS: Groq is the best place to try all these models if you don't have the hardware to run them locally
PS: Qwen QwQ's reasoning takes more time but improves the output; it's much better than OpenAI/Claude/Meta models for decompilation use cases
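For anyone who wants to reproduce this kind of comparison, here is a minimal sketch: dump a function's disassembly with radare2 and send it to a Groq-hosted model through their OpenAI-compatible endpoint. The model ids, the `GROQ_API_KEY` environment variable name, and the prompt wording are assumptions on my side; check the Groq console for the current model list.

```python
# Sketch: compare models on a decompilation prompt via Groq's
# OpenAI-compatible API. Model ids below are assumptions; verify
# them against Groq's current catalog before running.
import os
import subprocess

from openai import OpenAI

# Point the standard OpenAI client at Groq's endpoint.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],  # assumed env var name
)

def decompile(binary: str, func: str, model: str) -> str:
    """Disassemble `func` in `binary` with r2, ask `model` for C."""
    # r2 -q: quiet mode, -c: run a command and exit;
    # 'pdf' prints the disassembly of the given function.
    disasm = subprocess.check_output(
        ["r2", "-q", "-c", f"pdf @ {func}", binary], text=True
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are a decompiler. Output only compilable C."},
            {"role": "user",
             "content": f"Decompile this disassembly:\n\n{disasm}"},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical model ids for Llama 4 Scout and Qwen QwQ on Groq.
    for model in ("meta-llama/llama-4-scout-17b-16e-instruct",
                  "qwen-qwq-32b"):
        print(f"=== {model} ===")
        print(decompile("./a.out", "main", model))
```

r2ai can talk to these backends directly from inside radare2; this standalone script just makes the raw round-trip visible, which is handy when timing how much longer a reasoning model like QwQ takes per function.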