
Chutes' Mega December

Chutes shipped major December updates, including confidential AI with TEEs, n8n community nodes, and a Vercel AI SDK integration for decentralized, privacy-first AI workflows.


December is typically a slower month for product development, with teams winding down around the holidays and major launches often pushed into the new year.

Don't tell that to the Chutes team, though.

One quick scroll of their X timeline reveals a series of notable updates across security, developer tooling, and workflow automation. The releases expand access to decentralized AI infrastructure while emphasizing lower costs, stronger privacy guarantees, and simpler integration paths.

Let's explore some of the most interesting releases.

Chutes TEE

Chutes announced the launch of Trusted Execution Environments (TEEs), introducing a new confidential computing option for users running sensitive AI workloads.

According to the team, Chutes TEE enables end-to-end protection for inference by allowing models to run inside hardware-isolated environments. The goal is to ensure that prompts, responses, and model execution remain fully private, even from infrastructure providers themselves.

This is important because much of today's AI infra is built on blind trust in vendors that maintain data privacy as a policy, not a hardcoded characteristic. With TEE, Chutes can't look at your data even if they wanted to. The team described the release as delivering “pure inference” with “no vendor or provider eyes” on user data, positioning TEEs as a safeguard for proprietary models.

The TEE implementation builds on Chutes’ existing defense-in-depth security architecture. While all chutes on the network benefit from baseline protections such as end-to-end encryption, code and filesystem integrity checks, environment attestation, and strict containment, TEE-enabled chutes add a hardware-enforced layer of isolation.

This layer leverages Intel Trust Domain Extensions (TDX) alongside NVIDIA GPUs configured for confidential computing, creating a verifiable execution environment in which data remains encrypted while in use.

More Info: Read the Chutes security architecture docs

Chutes emphasized that TEEs are designed for users with the highest confidentiality requirements, including those concerned about data exfiltration or intellectual property leakage. When running in TEE mode, workloads are isolated from the host operating system and hypervisor, with memory encryption and remote attestation used to prove that both hardware and software configurations match an expected, untampered state before execution begins.

TEE protection is not applied universally by default across the Chutes platform. TEE-enabled models are labeled:

[Video: Finding TEE-enabled models]
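To make the attestation idea above concrete, here is a minimal sketch of the core check: the client compares a measurement reported by the environment against a known-good ("golden") value, and only trusts the environment if they match. This is illustrative only; all names are hypothetical, and a real TDX flow verifies hardware-signed evidence through an attestation service rather than a bare hash comparison.

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape; real TDX evidence is a signed quote, not a plain hash.
interface AttestationReport {
  measurement: string; // hash of the launched firmware/OS/container image
}

// The published "golden" measurement of the untampered configuration.
const expectedMeasurement = createHash("sha256")
  .update("trusted-image-v1")
  .digest("hex");

// Only release the prompt to the enclave if its reported state matches.
function verifyAttestation(report: AttestationReport): boolean {
  return report.measurement === expectedMeasurement;
}

// An untampered environment reproduces the expected measurement...
const good: AttestationReport = { measurement: expectedMeasurement };
// ...while any modified image hashes to a different value and is rejected.
const bad: AttestationReport = {
  measurement: createHash("sha256").update("tampered-image").digest("hex"),
};

console.log(verifyAttestation(good)); // true
console.log(verifyAttestation(bad)); // false
```

The point of the pattern is that trust is established *before* any sensitive data is sent: if verification fails, the client never transmits the prompt.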

n8n Community Nodes

Chutes Community Nodes are now live on n8n, allowing n8n's 200k+ users to access more than 60 AI models directly within a single Chutes node. The integration enables agencies, operations teams, and businesses to deploy AI-powered workflows without requiring dedicated AI engineering resources.

In outlining the motivation for the release, Chutes said the integration addresses three common challenges faced by teams building AI-driven workflows:

"1) Cost: OpenAI bills stack up fast. Chutes cuts inference costs by 70-85%. Same quality, fraction of the price.

2) Privacy: TEE models run in hardware-enforced secure enclaves. Your data never leaves protected zones.

3) Simplicity: One node for ALL AI operations. Text, images, video, speech, music, embeddings, moderation. 14 actions in one integration. Stop managing 10+ different AI nodes and API keys."

The Chutes node supports model switching directly within n8n’s user interface, allowing users to change models without modifying workflow logic or migrating APIs. Among the supported options highlighted by the team are DeepSeek R1 for reasoning tasks, GLM-4.6 for long-context workloads, Qwen3 Coder for development use cases, and Kimi K2 with extended context lengths.


Vercel AI SDK

December has also seen Chutes release an npm package for Vercel AI SDK, enabling developers to access decentralized, open-source AI models directly within Vercel.

Developers can connect to more than 60 open-source models, including DeepSeek R1, GLM 4.6, Qwen 3, and Flux. The package is designed specifically for Next.js applications and is described as TypeScript-first and production-ready.

The integration supports core Vercel AI SDK features such as streaming responses, tool calling, and embeddings, alongside image and video generation, text-to-speech, and speech-to-text capabilities. Chutes stated that the package includes extensive test coverage, with more than 327 tests, and full TypeScript support.

Installation is handled through npm, allowing developers to add the provider with a single command. Once installed, the package enables Chutes models to be used as a native backend within the Vercel AI SDK, without requiring changes to existing application architecture.
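Usage would look something like the following sketch. The package and factory names (`chutes-ai-provider`, `createChutes`) and the model id are assumptions for illustration, since the announcement does not spell them out; `generateText` is the standard Vercel AI SDK call. Check the Chutes docs for the published npm package name before running this.

```typescript
import { generateText } from "ai";
import { createChutes } from "chutes-ai-provider"; // hypothetical package name

// Factory name and options are illustrative, not the confirmed API.
const chutes = createChutes({
  apiKey: process.env.CHUTES_API_KEY,
});

// A Chutes-hosted open-source model used as a drop-in backend,
// with no changes to the rest of the application.
const { text } = await generateText({
  model: chutes("deepseek-ai/DeepSeek-R1"), // model id is illustrative
  prompt: "Summarize confidential inference in one sentence.",
});

console.log(text);
```

Because the provider plugs into the SDK's standard model interface, swapping between Chutes models (or back to a centralized provider) is a one-line change to the `model` argument.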

In explaining the significance of the release, Chutes pointed to the Vercel AI SDK’s role as a widely used framework for building AI-powered applications in Next.js. The team noted that, historically, usage within the SDK has centered primarily on centralized providers. By releasing an official provider, Chutes said it is extending seamless access to the broader open-source AI ecosystem for Next.js developers.

The provider is also designed to work with Sign in with Chutes, another recently launched feature. Under this model, users bring their own compute, and developers do not pay inference costs.

To accompany the release, the company published a live demo app showcasing the integration between the Vercel AI SDK and Sign in with Chutes.

View the live demo app: https://npm-demo.chutes.ai/

Additional Shipping (Yes, There's More)


Disclaimer: This article is for informational purposes only and does not constitute financial, investment, or trading advice. The information provided should not be interpreted as an endorsement of any digital asset, security, or investment strategy. Readers should conduct their own research and consult with a licensed financial professional before making any investment decisions. The publisher and its contributors are not responsible for any losses that may arise from reliance on the information presented.
