Venice has launched Venice Uncensored 1.2, a new AI model trained on Targon, with expanded context, vision support, and reduced refusal behavior.
Developed in collaboration with Dolphin, the model is positioned as an uncensored variant of Mistral 24B, offering four times the context length and enhanced tool-use capabilities.
Built on Bittensor’s Targon Subnet
Training for Venice Uncensored 1.2 was conducted using compute provided by Targon (SN4), a Bittensor subnet focused on distributed AI compute.
Targon recently drew widespread attention for its collaboration with Intel, which was soon followed by the unveiling of TargonOS, an operating system designed to extend decentralized compute to consumer GPUs.

Model Design and Positioning
Documentation for Venice Uncensored 1.2 describes a training approach that combines supervised fine-tuning with a reinforcement learning phase based on KTO (Kahneman-Tversky Optimization). According to the team, this was used to adjust model behavior without the direct weight-editing techniques often associated with “abliteration.”
This method was intended to preserve model performance while reducing refusal behaviors. Benchmark results cited in the documentation include full compliance across Dolphin Refusals and Nous Research’s RefusalBench datasets.
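Venice has not published its training code, but the KTO objective named in the documentation is described in the literature (Ethayarajh et al., 2024): each training example is simply labeled desirable or undesirable, and the loss pushes the policy's log-probability ratio against a frozen reference model above or below a reference point. The sketch below is a minimal per-example version of that objective; the function and parameter names are illustrative, not Venice's actual code.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def kto_loss(logp_policy: float, logp_ref: float, desirable: bool,
             z_ref: float = 0.0, beta: float = 0.1,
             lambda_d: float = 1.0, lambda_u: float = 1.0) -> float:
    """Per-example KTO loss, as described in Ethayarajh et al. (2024).

    logp_policy / logp_ref: log-probability of the completion under the
        policy being trained and under the frozen reference model.
    desirable: True if the example shows behavior the model should adopt
        (e.g. answering), False for behavior to suppress (e.g. refusing).
    z_ref: reference point; in the paper, a per-batch KL estimate.
    """
    r = logp_policy - logp_ref  # implicit reward: log-probability ratio
    if desirable:
        # loss shrinks as the policy's reward rises above the reference point
        return lambda_d * (1.0 - sigmoid(beta * (r - z_ref)))
    # for undesirable examples, loss shrinks as the reward falls below it
    return lambda_u * (1.0 - sigmoid(beta * (z_ref - r)))
```

The asymmetry is the point: unlike preference methods such as DPO, KTO needs no paired comparisons, only binary labels, which fits a refusal-reduction setting where "refused" vs. "complied" is easy to label at scale.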
The documentation also frames the model as an alternative to closed AI systems, outlining limitations such as restricted control over system prompts, unannounced model updates, and standardized alignment policies.
"Dolphin belongs to YOU, it is your tool, an extension of your will. Just as you are personally responsible for what you do with a knife, gun, fire, car, or the internet, you are the creator and originator of any content you generate with Dolphin."
The model is now live on venice.ai as the default model for users, under the name “Venice Uncensored.”