Framework Integrations

Use Till scoped API keys with your favorite AI agent frameworks. Drop-in replacement with hard usage limits.

🦜

LangChain

Python and JavaScript agent framework

LangChain makes it easy to build agents that call LLMs. With Till, you can give each agent run a scoped key with hard limits on spend or activations.

Python

from langchain_openai import ChatOpenAI

# Create a Till scoped key first (via dashboard or API)
# till_sk_abc123... with $5 spend limit

llm = ChatOpenAI(
    model="gpt-4",
    api_key="till_sk_abc123...",  # Your Till scoped key
    base_url="https://api.till.ac/proxy/openai/v1"
)

# Use normally - Till enforces limits automatically
response = llm.invoke("Hello, world!")

JavaScript/TypeScript

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4",
  apiKey: "till_sk_abc123...",
  configuration: {
    baseURL: "https://api.till.ac/proxy/openai/v1"
  }
});

const response = await llm.invoke("Hello, world!");

With Anthropic

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-opus-20240229",
    api_key="till_sk_xyz789...",
    base_url="https://api.till.ac/proxy/anthropic"
)
🤖

AutoGPT / Agent-GPT

Autonomous AI agent frameworks

AutoGPT's autonomous loop can burn through API credits quickly. Till lets you set hard spend limits so a runaway run never surprises you with a bill.

Configuration

Set these environment variables in your .env file:

# Replace your OpenAI key with a Till scoped key
OPENAI_API_KEY=till_sk_abc123...

# Point to Till's proxy
OPENAI_API_BASE=https://api.till.ac/proxy/openai/v1

# The scoped key has built-in limits:
# - 100 activations max
# - $10 spend cap
# - Whichever hits first stops the agent

Why this matters

AutoGPT's agent loop can make hundreds of API calls. Without limits, a single stuck agent could cost hundreds of dollars. With Till:

  1. Set a $10 spend limit per agent run
  2. Agent hits the limit and stops cleanly
  3. You review results, create a new key if needed
  4. No surprise bills, ever
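The limit enforcement itself happens server-side in Till's proxy, but the "whichever hits first" rule from the config comments above is easy to illustrate locally. A minimal sketch (the per-step cost of $0.15 is an made-up number for illustration):

```python
def key_exhausted(activations: int, spend: float,
                  max_activations: int = 100, spend_cap: float = 10.00) -> bool:
    """Mirror of the scoped-key rule above: whichever limit hits first wins."""
    return activations >= max_activations or spend >= spend_cap

# Simulated agent loop: each step is one activation costing ~$0.15.
activations, spend = 0, 0.0
while not key_exhausted(activations, spend):
    activations += 1
    spend += 0.15

print(activations, round(spend, 2))  # the $10 cap trips first, at step 67
```

With those numbers the spend cap stops the run well before the activation cap would, which is exactly the behavior you want from a stuck agent: it halts at step 67, you review, and you mint a fresh key if the work should continue.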
👥

CrewAI

Multi-agent orchestration framework

CrewAI orchestrates multiple agents working together. Give each crew a separate Till key to track and limit spend per team.

from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI

# Each crew gets its own scoped key with budget
research_llm = ChatOpenAI(
    api_key="till_sk_research_team...",  # $20 budget
    base_url="https://api.till.ac/proxy/openai/v1"
)

writing_llm = ChatOpenAI(
    api_key="till_sk_writing_team...",  # $15 budget
    base_url="https://api.till.ac/proxy/openai/v1"
)

researcher = Agent(
    role="Research Analyst",
    llm=research_llm,
    # ...
)

writer = Agent(
    role="Content Writer",
    llm=writing_llm,
    # ...
)
⬡

OpenAI SDK (Direct)

Official Python and Node.js SDKs

Use Till with the official OpenAI SDK by setting the base URL.

Python

from openai import OpenAI

client = OpenAI(
    api_key="till_sk_abc123...",
    base_url="https://api.till.ac/proxy/openai/v1"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)

Node.js

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "till_sk_abc123...",
  baseURL: "https://api.till.ac/proxy/openai/v1"
});

const response = await client.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }]
});

Works with any framework

If it can set a base URL and API key, it works with Till. Create your first scoped key now.
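Under the hood it's just HTTP: any OpenAI-compatible request works if you swap the host for Till's proxy and pass the scoped key as the bearer token. A sketch with nothing but the Python standard library (we only build the request here; calling `urllib.request.urlopen(req)` would send it):

```python
import json
import urllib.request

# Standard OpenAI-shaped request: only the host and the key change.
req = urllib.request.Request(
    "https://api.till.ac/proxy/openai/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello!"}],
    }).encode(),
    headers={
        "Authorization": "Bearer till_sk_abc123...",
        "Content-Type": "application/json",
    },
)

print(req.full_url, req.get_header("Authorization"))
```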

Get started free