DevHunt
Mercury 2

AI & LLM Tools
143 upvotes

Mercury 2 ditches sequential decoding for parallel refinement. As the first reasoning diffusion LLM, it generates tokens simultaneously to hit 1,000+ tokens/sec. This delivers reasoning-grade quality inside tight latency budgets for your agentic loops.
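Mercury 2's actual model isn't public, but the parallel-refinement idea can be sketched as a toy: start from a fully masked sequence and, at each step, fill in a whole batch of positions at once instead of emitting one token per step as an autoregressive decoder would. Everything here (the `MASK` token, `parallel_refine`, using the target as a stand-in for model predictions) is an illustrative assumption, not Mercury's implementation.

```python
import math
import random

MASK = "<mask>"

def parallel_refine(target, steps=4, seed=0):
    """Toy diffusion-style decoder: begin fully masked and reveal a
    batch of positions in parallel at each refinement step."""
    rng = random.Random(seed)
    seq = [MASK] * len(target)
    masked = list(range(len(target)))
    per_step = math.ceil(len(target) / steps)
    for _ in range(steps):
        # The "model" picks up to per_step positions and fills them
        # all simultaneously -- this is the parallelism a sequential
        # decoder lacks.
        rng.shuffle(masked)
        reveal, masked = masked[:per_step], masked[per_step:]
        for i in reveal:
            seq[i] = target[i]  # stand-in for the model's prediction
        if not masked:
            break
    return seq

tokens = "the quick brown fox jumps over the lazy dog".split()
print(parallel_refine(tokens, steps=3) == tokens)  # True
```

A real diffusion LLM predicts all masked positions each step and keeps only the high-confidence ones, but the latency win is the same: nine tokens land in three steps here, not nine.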

Added February 25, 2026
