Teutonic Subnet Begins Training 80B AI Model on Bittensor, Marking Largest Decentralized Training Run Yet

Teutonic Subnet 3 has begun training an 80B AI model on Bittensor, marking the largest decentralized AI training effort yet and a major milestone for TAO infrastructure.

The Teutonic subnet on Bittensor has officially begun training an 80-billion-parameter AI model, marking the largest decentralized AI training effort ever attempted.

The announcement became public on May 11 after Jacob Steeves, better known as Const and founder of Bittensor (and former CEO of the Opentensor Foundation), posted: “Training 80,” alongside a link to Teutonic’s live dashboard. Behind the understated message sits an ambitious experiment in decentralized AI coordination, one that replaces centralized data centers and tightly controlled training pipelines with an open market where independent compute providers compete to improve a shared model.

Known as Teutonic-LXXX, the 80B parameter model is being trained through a continuous “king-of-the-hill” system in which miners compete to produce model updates that reduce loss. Winning submissions become the new leading model and capture subnet rewards, creating what Const describes as a “pure incentive-driven loss reduction game across all available compute.”

If successful, the effort would be another major milestone for decentralized AI infrastructure, building on Bittensor’s earlier viral 72B model training run that sparked widespread interest in the network.

What Teutonic Is Training and Why 80B Matters

Teutonic (SN3) is a training-focused subnet within Bittensor that coordinates distributed compute to train large AI models. The subnet recently transitioned from the earlier Templar system to a new architecture designed to support larger-scale, more competitive training dynamics.

At the center of the latest effort is Teutonic-LXXX, an 80-billion-parameter language model. The “LXXX” designation, Roman numerals for 80, reflects the model’s scale and follows earlier Teutonic training phases that progressed through smaller parameter sizes.

The size of the model matters because training frontier-scale AI systems has historically been limited to a small group of well-capitalized technology companies with access to massive GPU clusters and centralized infrastructure. By comparison, Teutonic is attempting to coordinate that process across independent miners competing on an open network.

The launch also represents a step up from Bittensor’s previous decentralized training milestone. Earlier this year, the network completed training for a 72-billion-parameter model through the Templar subnet, a development that drew broader industry attention after being publicly referenced by leaders including Jensen Huang and Jack Clark.

Teutonic’s 80B effort notably introduces a different coordination model, one that continuously incentivizes independent participants to compete for model improvements instead of relying on a fixed training schedule or a centralized operator.

How Teutonic’s Decentralized Training System Works

Const described the Teutonic training mechanism in his announcement: “We construct the loss landscape as a market. Miners compete to win the sequence of updates which take the loss lower.”

In practice, the system functions as a continuous “king-of-the-hill” competition. Users can connect compute through providers such as LIUM or Targon, while the system coordinates the rest of the training process automatically.

The network maintains a current leading model, referred to as the KING, which serves as the benchmark for all challengers. Miners clone the active model, train their own improvements using independent infrastructure and techniques, then submit updated checkpoints for evaluation. Validators then compare those submissions against the current KING using held-out evaluation datasets and loss-based scoring. If a challenger performs better, it immediately replaces the current leading model and becomes the new KING. That miner then earns subnet rewards until another participant surpasses their results.
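The replace-the-leader logic described above can be illustrated with a minimal sketch. This is not Teutonic's actual code; the function names, checkpoint structure, and the idea of reading a precomputed loss from each checkpoint are all simplifying assumptions made for illustration:

```python
def evaluate_loss(checkpoint, eval_data):
    """Stand-in for a validator's held-out evaluation.

    In a real subnet, this would run the submitted model over a
    held-out dataset and compute its loss; here we assume the
    checkpoint already records one (a toy simplification).
    """
    return checkpoint["loss"]

def run_round(king, submissions, eval_data):
    """One validation round of the king-of-the-hill game.

    Any challenger whose held-out loss beats the current KING's
    replaces it; the best submission of the round reigns next.
    """
    best = king
    for sub in submissions:
        if evaluate_loss(sub, eval_data) < evaluate_loss(best, eval_data):
            best = sub  # challenger dethrones the current leader
    return best

# Toy usage: two miners challenge the reigning model.
king = {"miner": "A", "loss": 2.31}
submissions = [
    {"miner": "B", "loss": 2.35},  # worse loss: rejected
    {"miner": "C", "loss": 2.27},  # lower loss: becomes the new KING
]
new_king = run_round(king, submissions, eval_data=None)
print(new_king["miner"])  # → C
```

In this sketch, the winning miner would then capture subnet rewards until a later round produces a lower-loss challenger.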

The design differs from traditional distributed training because participants are not required to use identical hardware or training methods. Instead, miners are incentivized to find the most efficient way to improve model performance, whether through better optimization techniques, training configurations, data strategies, or compute resources.

The result is a system designed to run continuously, with independent participants competing in real time to improve model performance rather than contributing to a single, fixed training run.

Why This Matters for Bittensor

For Bittensor, Teutonic’s 80B training effort represents more than a technical milestone. It also marks an important moment for the network following the high-profile departure of Covenant AI earlier this year.


The exit stalled the momentum and hype Bittensor had accumulated earlier this year, and uncertainty spread, fueled largely by misinformed KOLs and media outlets.

Teutonic is our bounce-back moment, a shot at redemption, if you will. Assuming the 80B mark is hit, we'll have an important (and likely, very viral) proving point for Bittensor’s resilience and ability to continue innovating despite setbacks. That's worth its weight in gold; it could expand interest in TAO, encourage a flock of new developers to contribute to subnets, and revitalize the Bittensor narrative.

That would be something worth celebrating across the entire ecosystem.


Disclaimer: This article is for informational purposes only and does not constitute financial, investment, or trading advice. The information provided should not be interpreted as an endorsement of any digital asset, security, or investment strategy. Readers should conduct their own research and consult with a licensed financial professional before making any investment decisions. The publisher and its contributors are not responsible for any losses that may arise from reliance on the information presented.
