Moxin 7B: A Fully Open-Source 7B Language Model with Unprecedented Transparency

We’re thrilled to unveil Moxin 7B, a new milestone in open large language model (LLM) development, built to push the boundaries of both performance and openness.

In an era when many "open" LLMs lack true transparency, with training code or data withheld or restrictive licenses attached, Moxin 7B sets a new gold standard by committing to full disclosure and reproducibility. Developed under the Model Openness Framework (MOF), Moxin 7B achieves the framework’s top classification level, Open Science, thanks to the breadth of artifacts released below.

What we’ve open-sourced:

- Pre-training code and configurations
- Training and fine-tuning datasets
- Intermediate and final model checkpoints

Performance Highlights:

Post-training Frameworks:
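
The exact recipes ship with the release; purely as a flavor of what post-training on top of the base model can look like, here is a minimal supervised fine-tuning sketch using Hugging Face TRL. The model ID and dataset below are placeholders, not the actual Moxin post-training setup.

```python
# Illustrative sketch only: supervised fine-tuning with Hugging Face TRL.
# The model ID and dataset are placeholders, not the Moxin recipe.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# A small public chat dataset, used here purely as an example.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="moxin-org/moxin-llm-7b",  # hypothetical repo ID; see the official release
    train_dataset=dataset,
    args=SFTConfig(output_dir="moxin-7b-sft"),
)
trainer.train()
```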

Get the models and code:
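
Once you have a checkpoint, loading it follows the standard Hugging Face transformers flow. A minimal sketch, assuming a hypothetical repository ID (check the official release for the exact name):

```python
# Minimal sketch: load a Moxin 7B checkpoint with Hugging Face transformers.
# The repo ID is an assumption; substitute the ID from the official release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moxin-org/moxin-llm-7b"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 7B weights in bf16 fit on a single 24 GB+ GPU
    device_map="auto",
)

prompt = "Explain why fully open-source LLMs matter for reproducibility."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```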

We believe this is a step toward a more transparent, reproducible, and innovation-friendly AI ecosystem — especially for researchers, developers, and startups looking to build upon a robust, open foundation. Let’s build open AI the right way.