MiniMax Just Open Sourced MiniMax M2.7: A Self-Evolving Agent Model that Scores 56.22% on SWE-Pro and 57.0% on Terminal Bench 2

by CryptoExpert


MiniMax has officially open-sourced MiniMax M2.7, making the model weights publicly available on Hugging Face. Originally announced on March 18, 2026, MiniMax M2.7 is MiniMax’s most capable open-source model to date — and its first model to actively participate in its own development cycle, a meaningful shift in how large language models are built and iterated on.

What is MiniMax M2.7?

MiniMax M2.7 is part of MiniMax’s M2 series of Mixture-of-Experts (MoE) models. MoE is an architectural design in which only a subset of the total parameters is activated during any inference pass, which makes the model significantly faster and cheaper to serve than a dense model of similar output quality.
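The MoE trade-off can be illustrated with a minimal top-k routing sketch in plain Python. This is illustrative only — MiniMax has not published its gating implementation, and the linear gate, expert functions, and parameter names below are all assumptions:

```python
import math
import random

def moe_forward(x, experts, gate_weights, k=2):
    """Route input x to the top-k experts by gate score and mix their outputs.

    Only k of len(experts) expert functions run per forward pass, which is
    why MoE inference is cheaper than a dense model of the same total size.
    """
    # Gate scores: a simple linear gate (dot product per expert).
    scores = [sum(w * xi for w, xi in zip(wv, x)) for wv in gate_weights]
    # Keep only the k highest-scoring experts.
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    # Softmax over the selected scores gives the mixing weights.
    exp_s = [math.exp(scores[i]) for i in top]
    total = sum(exp_s)
    mix = [e / total for e in exp_s]
    # Weighted sum of the activated experts' outputs.
    out = [0.0] * len(x)
    for w, i in zip(mix, top):
        y = experts[i](x)
        out = [o + w * yi for o, yi in zip(out, y)]
    return out, top

# Toy demo: 4 "experts", each a fixed scaling of the input; only 2 run.
experts = [lambda x, s=s: [s * xi for xi in x] for s in (0.5, 1.0, 2.0, 3.0)]
random.seed(0)
gate_weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in experts]
out, active = moe_forward([1.0, 2.0, 3.0], experts, gate_weights, k=2)
print(active)  # indices of the 2 activated experts
```

Total parameter count grows with the number of experts, but per-token compute grows only with k — the core reason MoE serving is cheaper.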

MiniMax M2.7 is built around three core capability areas: professional software engineering, professional office work, and what MiniMax calls Agent Teams — native multi-agent collaboration. MiniMax M2.7 is capable of building complex agent harnesses and completing highly elaborate productivity tasks, leveraging capabilities such as Agent Teams, complex Skills, and dynamic tool search.
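The "dynamic tool search" capability can be pictured as retrieval over a tool registry instead of loading every tool definition into context. Below is a minimal keyword-overlap sketch; the registry, tool names, and scoring are entirely hypothetical (MiniMax has not published its tool-search interface):

```python
def search_tools(query, registry):
    """Rank registered tools by keyword overlap with the task description.

    A toy stand-in for dynamic tool search: the agent retrieves only the
    tools relevant to the current task rather than seeing all of them.
    """
    q_terms = set(query.lower().split())
    scored = []
    for name, description in registry.items():
        overlap = len(q_terms & set(description.lower().split()))
        if overlap:
            scored.append((overlap, name))
    # Highest keyword overlap first.
    return [name for _, name in sorted(scored, reverse=True)]

# Hypothetical tool registry; names are illustrative, not MiniMax's API.
registry = {
    "sql_query": "run a sql query against a production database",
    "create_ticket": "open an incident ticket for the on-call engineer",
    "grep_repo": "search the code repository for a pattern",
}
print(search_tools("search the repository for the bug pattern", registry))
# ['grep_repo', 'create_ticket']
```

A production version would use embedding similarity rather than keyword overlap, but the shape of the mechanism — query in, ranked tool subset out — is the same.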

SOTA Benchmark Performance: SWE-Pro and Terminal Bench 2

On SWE-Pro, which covers multiple programming languages, MiniMax M2.7 achieved a 56.22% accuracy rate, matching GPT-5.3-Codex. SWE-Pro tasks span log analysis, bug troubleshooting, code security review, and machine learning workflow debugging — much closer to the messy reality of production systems than standard algorithmic coding tests.

On Terminal Bench 2 (57.0%) and NL2Repo (39.8%), both of which demand a high degree of system-level comprehension, MiniMax M2.7 performs solidly. The model not only excels at code generation but also shows a deep understanding of the operational logic and collaborative dynamics of software systems.

On the repo-level code generation benchmark VIBE-Pro, MiniMax M2.7 scored 55.6%, nearly on par with Opus 4.6 — meaning requirements involving Web, Android, iOS, or simulation tasks can be handed directly to MiniMax M2.7 to complete. It also shows a clear advantage on benchmarks closer to real-world engineering scenarios: SWE Multilingual (76.5%) and Multi SWE Bench (52.7%).

Production Debugging: Under Three Minutes

When faced with a production alert, MiniMax M2.7 can correlate monitoring metrics with the deployment timeline to perform causal reasoning, run statistical analysis on trace samples to propose precise hypotheses, proactively connect to databases to verify the root cause, pinpoint a missing index-migration file in the code repository, and use non-blocking index creation to stop the bleeding before submitting a merge request. The MiniMax team reports that on multiple occasions this reduced recovery time for live production incidents to under three minutes. From observability analysis and database expertise to SRE-level decision-making, this positions MiniMax M2.7 as something beyond a code-generation model.
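The first step of that workflow — correlating monitoring metrics with the deployment timeline — can be sketched as a small heuristic: find the last deployment before the metric first breached its threshold. The timeline, change IDs, and threshold below are invented for illustration:

```python
def suspect_deployment(deploys, metric_samples, threshold):
    """Pick the deployment most likely behind an alert: the last deploy
    before the metric first crossed the threshold.

    deploys: list of (timestamp, change_id).
    metric_samples: list of (timestamp, value), time-ordered.
    A toy version of correlating metrics with a deployment timeline.
    """
    # First timestamp where the metric breached the threshold.
    breach = next((t for t, v in metric_samples if v > threshold), None)
    if breach is None:
        return None  # no anomaly, nothing to blame
    # Latest deployment at or before the breach.
    prior = [(t, c) for t, c in deploys if t <= breach]
    return max(prior)[1] if prior else None

# Hypothetical timeline: p99 latency (ms) spikes after change "mr-1042".
deploys = [(100, "mr-1038"), (220, "mr-1042"), (400, "mr-1047")]
latency = [(150, 80), (210, 85), (260, 950), (320, 900)]
print(suspect_deployment(deploys, latency, threshold=500))  # mr-1042
```

A real agent would treat this only as a hypothesis to verify against the database and the repository, as the article describes — temporal proximity alone is not causation.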

The Self-Evolution Architecture

To test the boundaries of autonomous improvement, MiniMax M2.7 was tasked with optimizing a model’s programming performance on an internal scaffold. It ran entirely autonomously for over 100 rounds, executing an iterative loop of ‘analyze failure trajectories → plan changes → modify scaffold code → run evaluations → compare results → decide to keep or revert changes’. Along the way, MiniMax M2.7 discovered effective optimizations on its own: systematically searching for the best combination of sampling parameters such as temperature, frequency penalty, and presence penalty; designing more specific workflow guidelines (such as automatically searching for the same bug pattern in other files after a fix); and adding loop detection to the scaffold’s agent loop. The process yielded a 30% performance improvement on internal evaluation sets.
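Stripped to its skeleton, the keep-or-revert loop is a hill climb over scaffold configuration. The sketch below captures only that control flow, with a toy objective standing in for the real benchmark runs — the propose/evaluate functions and the temperature target are assumptions, not MiniMax's actual setup:

```python
import random

def evolve(scaffold, propose, evaluate, rounds=100, seed=0):
    """Hill-climb a scaffold config: propose a change, evaluate it, keep
    it only if the score improves, otherwise revert.

    A minimal sketch of the 'modify -> evaluate -> keep or revert' loop;
    the real system edits scaffold code and runs full evaluation suites.
    """
    rng = random.Random(seed)
    best = evaluate(scaffold)
    for _ in range(rounds):
        candidate = propose(scaffold, rng)
        score = evaluate(candidate)
        if score > best:                # keep the change
            scaffold, best = candidate, score
        # else: revert (the candidate is simply discarded)
    return scaffold, best

# Toy objective: find a sampling temperature near an (assumed) optimum 0.7.
def propose(cfg, rng):
    return {"temperature": cfg["temperature"] + rng.uniform(-0.1, 0.1)}

def evaluate(cfg):
    return -abs(cfg["temperature"] - 0.7)

cfg, score = evolve({"temperature": 1.0}, propose, evaluate, rounds=100)
print(round(cfg["temperature"], 2))
```

The guarantee in this loop is monotonic: a round can never make the kept configuration worse, which is what makes 100+ unattended iterations safe to run.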

Within MiniMax’s own reinforcement learning team workflows, M2.7 is now capable of handling 30%–50% of the workflow end-to-end, with human researchers only interacting for critical decisions and discussions.

MLE Bench Lite: Testing Autonomous ML Experimentation

The MiniMax team also tested MiniMax M2.7 on MLE Bench Lite, OpenAI’s open-sourced suite of 22 machine learning competitions runnable on a single A30 GPU, covering virtually all stages of the ML workflow.

For this evaluation, the MiniMax team designed a simple three-component harness: short-term memory, self-feedback, and self-optimization. After each iteration round, the agent writes a short-term-memory markdown file, critiques its current results, and sets optimization directions for the next round. Three trials were run, each with a 24-hour window for iterative evolution.
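The three-component harness described above can be sketched as a short loop: train, record a memory note, self-critique, carry the advice forward. The `train_and_score` and `critique` callables and the toy scores are stand-ins for the real ML pipeline and the model's feedback step:

```python
def run_harness(train_and_score, critique, rounds=3):
    """One trial of a memory/feedback/optimization loop: after each round,
    write a short-term memory note, self-criticize, and carry the advice
    into the next round.
    """
    memory = []          # short-term memory, one markdown note per round
    advice = "baseline"
    best = float("-inf")
    for i in range(rounds):
        score = train_and_score(advice)
        best = max(best, score)
        advice = critique(score)       # self-feedback -> next direction
        memory.append(f"## Round {i+1}\n- score: {score:.3f}\n- next: {advice}")
    return best, "\n".join(memory)

# Toy pipeline: each round's advice nudges the competition score upward.
scores = iter([0.61, 0.68, 0.72])
best, notes = run_harness(
    train_and_score=lambda advice: next(scores),
    critique=lambda s: "try stronger regularization" if s < 0.7 else "stop",
)
print(best)  # 0.72
```

In the real evaluation each round is a full competition attempt inside a 24-hour budget; the markdown memory is what lets later rounds build on earlier ones instead of starting cold.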

The best run achieved 9 gold medals, 5 silver medals, and 1 bronze medal. The average medal rate across the three runs was 66.6%, a result second only to Opus-4.6 (75.7%) and GPT-5.4 (71.2%), tying with Gemini-3.1 (66.6%).

Professional Office Work and Finance

Beyond software engineering, MiniMax M2.7 targets professional office tasks. In the GDPval-AA evaluation, which measures domain expertise and task-delivery capability across 45 models, MiniMax M2.7 achieved an ELO score of 1495 — the highest among open-source models, behind only Opus 4.6, Sonnet 4.6, and GPT-5.4, and surpassing GPT-5.3.

On Toolathon, MiniMax M2.7 achieved an accuracy of 46.3%, reaching the global top tier. In MM Claw testing — an evaluation MiniMax built based on real-world usage patterns from the OpenClaw personal agent platform — MiniMax M2.7 maintained a 97% skill compliance rate across 40 complex skills (each exceeding 2,000 tokens) and achieved an overall accuracy of 62.7%, approaching Sonnet 4.6.

In finance, MiniMax M2.7 can autonomously read a company’s annual reports and earnings call transcripts, cross-reference multiple research reports, independently design assumptions and build a revenue forecast model, and produce a PPT and Word research report based on templates — understanding, making judgments, and producing output like a junior analyst.
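The "design assumptions, build a forecast model" step reduces, in its simplest form, to projecting reported revenue under an assumed growth rate. A toy sketch with invented figures (a real analysis would segment revenue lines and justify each assumption from filings):

```python
def forecast_revenue(history, growth_assumption, years=3):
    """Project revenue forward from the last reported year under an
    assumed constant annual growth rate.
    """
    last = history[-1]
    out = []
    for _ in range(years):
        last *= 1 + growth_assumption   # compound one year forward
        out.append(round(last, 1))
    return out

# Hypothetical reported revenue (in $M) and a 10% growth assumption.
print(forecast_revenue([820.0, 910.0, 1000.0], growth_assumption=0.10))
# [1100.0, 1210.0, 1331.0]
```

The judgment the article attributes to the model lives in choosing `growth_assumption` and defending it — the arithmetic itself is trivial.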

Key Takeaways

  • MiniMax M2.7 is now officially open source, with weights available on Hugging Face, making a frontier-grade agentic model freely accessible for developers to deploy and build on.
  • MiniMax M2.7 achieves SOTA performance on real-world software engineering benchmarks, scoring 56.22% on SWE-Pro (matching GPT-5.3-Codex) and 57.0% on Terminal Bench 2 — tests that measure production-level reasoning, not just code generation.
  • MiniMax M2.7 is the first model to actively participate in its own development, running over 100 autonomous rounds of scaffold optimization and achieving a 30% performance improvement — an early, concrete example of AI-assisted AI development in practice.
  • The model is built for real agentic deployments, maintaining 97% skill adherence across 40 complex skills (each exceeding 2,000 tokens), supporting native Agent Teams with stable role boundaries, and handling 30–50% of MiniMax’s internal RL team workflows autonomously.
  • MiniMax M2.7 is the highest-ranked open-source model on GDPval-AA with an ELO score of 1495 across 45 models, demonstrating strong professional work capabilities spanning office document editing, financial analysis, and multi-round high-fidelity task delivery.

Check out the Technical details and Model Weights.
