IRIS

Purpose-built intelligence for code.

IRIS is a 238-billion-parameter Sparse Mixture-of-Experts model built exclusively for software engineering. It writes, refactors, and transforms code.

Sparse by Design

Mixture-of-Experts routes each input to a subset of specialized sub-networks. Only relevant experts activate per request.

This is not ensembling. Parameters are partitioned across experts, not averaged across models.

Each expert maintains isolated state, trained on a distinct slice of the production-codebase distribution.

Inference cost is fixed by the active experts, not the total parameter count. Capacity scales without a latency penalty.

TOTAL PARAMETERS
238,000,000,000
ACTIVE PER REQUEST
~14,000,000,000
ROUTING MECHANISM
Learned gating network
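
To make the routing concrete, here is a minimal sketch of top-k routing through a learned gate, written in NumPy. It is illustrative only, not IRIS's implementation; the expert count, dimensions, and value of k below are placeholders.

import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def route(token, gate_weights, experts, k=2):
    # A learned gate scores every expert, but only the top-k run.
    logits = token @ gate_weights                # one score per expert
    top_k = np.argsort(logits)[-k:]              # indices of the k highest-scoring experts
    weights = softmax(logits[top_k])             # renormalize over the selected experts only
    # Compute per token is set by k, not by the total expert count.
    return sum(w * experts[i](token) for w, i in zip(weights, top_k))

# Toy usage with placeholder sizes: 8 experts, 2 active per token.
rng = np.random.default_rng(0)
d, n_experts = 16, 8
gate = rng.normal(size=(d, n_experts))
experts = [(lambda W: (lambda x: np.tanh(x @ W)))(rng.normal(size=(d, d)))
           for _ in range(n_experts)]
output = route(rng.normal(size=d), gate, experts, k=2)

Growing the expert count adds capacity, while the per-token compute, fixed by k, stays flat.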

Narrow Focus. Maximal Depth.

IRIS does not perform general reasoning. It was not trained for dialogue, summarization, or trivia.

This constraint is intentional.

Training budget, parameter allocation, and optimization objectives target code comprehension, generation, and transformation.

Capabilities

Large-scale refactors across dependency graphs
Cross-repository semantic reasoning
Compiler-aware transformations and optimizations
Deterministic output under specified constraints
Type system inference and migration

Constraints Are a Feature

Output is reproducible given identical context. Behavior is predictable under load.

IRIS does not improvise. It does not approximate when precision is required.

It does not generate code that "might work" when determinism is specified.
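
As a generic illustration of what reproducible output means (not IRIS's API), the sketch below uses greedy decoding: the highest-scoring token is always selected, so an identical context produces an identical result on every run. The scoring function is a stand-in for a model forward pass.

import numpy as np

def greedy_decode(logits_fn, context, max_tokens=32, eos_id=0):
    # Greedy decoding: always pick the argmax token. No sampling, no randomness,
    # so the same context and the same scorer give the same output every time.
    tokens = list(context)
    for _ in range(max_tokens):
        next_id = int(np.argmax(logits_fn(tokens)))
        if next_id == eos_id:
            break
        tokens.append(next_id)
    return tokens

# Toy scorer (placeholder), deterministic by construction.
def toy_logits(tokens):
    return np.cos(np.arange(100) * (sum(tokens) + 1))

assert greedy_decode(toy_logits, [5, 7]) == greedy_decode(toy_logits, [5, 7])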

IRIS is suitable for integration into production build systems, continuous integration pipelines, and version control workflows.