Statium Documentation
Drop-in analytics SDK for AI-powered applications. Track conversations, understand user intent, and serve contextual ads with zero latency impact.
Zero Latency
Analytics run in the background; streaming stays instant
Multi-Provider
OpenAI, Anthropic, Google, DeepSeek, and more
3 Lines of Code
Wrap your existing SDK and you're done
Quick Start
Get started with Statium in under 5 minutes. This guide uses OpenAI as an example, but the same pattern works for all providers.
1. Install the SDK
npm install @statium/sdk
2. Get your API key
Sign up at statium.ai and create an app to get your API key and App ID.
3. Wrap your OpenAI client
import OpenAI from 'openai';
import { createOpenAIClient } from '@statium/sdk';
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
});
const statium = createOpenAIClient({
apiKey: process.env.STATIUM_API_KEY,
appId: process.env.STATIUM_APP_ID,
openaiClient: openai,
});
// Use exactly like the OpenAI SDK
const response = await statium.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Hello!' }],
});
// Analytics are automatically tracked!
console.log(response.choices[0].message.content);
Installation
npm
npm install @statium/sdk
yarn
yarn add @statium/sdk
pnpm
pnpm add @statium/sdk
Requirements: Node.js 18+ or modern browser with ES2020 support.
Configuration Options
All Statium clients accept a configuration object with the following options:
| Option | Type | Required | Description |
|---|---|---|---|
| apiKey | string | Yes | Your Statium API key (sk_live_...) |
| appId | string | Yes | Your Statium App ID |
| endpoint | string | No | Custom API endpoint (default: https://api.statium.ai) |
| enabled | boolean | No | Enable/disable analytics (default: true) |
| debug | boolean | No | Enable debug logging (default: false) |
| ads.enabled | boolean | No | Enable ad serving (default: true) |
| ads.onAd | (ad) => void | No | Callback when an ad is received |
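For example, here is a sketch of a fully configured client, combining the options above with the createOpenAIClient factory from the Quick Start (the option values shown are illustrative):
import OpenAI from 'openai';
import { createOpenAIClient } from '@statium/sdk';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const statium = createOpenAIClient({
  apiKey: process.env.STATIUM_API_KEY,           // required
  appId: process.env.STATIUM_APP_ID,             // required
  openaiClient: openai,
  endpoint: 'https://api.statium.ai',            // default, shown for completeness
  enabled: process.env.NODE_ENV === 'production',
  debug: false,
  ads: {
    enabled: true,
    onAd: (ad) => console.log('Received ad:', ad.content.headline),
  },
});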
Environment Variables
We recommend storing your credentials in environment variables:
# .env
STATIUM_API_KEY=sk_live_your_api_key_here
STATIUM_APP_ID=app_your_app_id_here
# Your LLM provider keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
OpenAI Provider
The OpenAI provider wraps the official OpenAI SDK with full analytics support.
Basic Usage
import OpenAI from 'openai';
import { createOpenAIClient } from '@statium/sdk';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const statium = createOpenAIClient({
apiKey: process.env.STATIUM_API_KEY,
appId: process.env.STATIUM_APP_ID,
openaiClient: openai,
});
// Non-streaming
const response = await statium.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);
Streaming
const stream = await statium.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Tell me a story' }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
// Analytics are sent automatically after stream completes
Anthropic (Claude) Provider
The Anthropic provider wraps the official Anthropic SDK for Claude models.
import Anthropic from '@anthropic-ai/sdk';
import { createAnthropicClient } from '@statium/sdk';
const anthropic = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY
});
const statium = createAnthropicClient({
apiKey: process.env.STATIUM_API_KEY,
appId: process.env.STATIUM_APP_ID,
anthropicClient: anthropic,
});
const response = await statium.messages.create({
model: 'claude-3-opus-20240229',
max_tokens: 1024,
messages: [{ role: 'user', content: 'Hello Claude!' }],
});
console.log(response.content[0].text);
Google Gemini Provider
The Google provider wraps the Google Generative AI SDK for Gemini models.
import { GoogleGenerativeAI } from '@google/generative-ai';
import { createGoogleClient } from '@statium/sdk';
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY);
const model = genAI.getGenerativeModel({ model: 'gemini-pro' });
const statium = createGoogleClient({
apiKey: process.env.STATIUM_API_KEY,
appId: process.env.STATIUM_APP_ID,
googleModel: model,
modelName: 'gemini-pro',
});
const result = await statium.generateContent({
contents: [{ role: 'user', parts: [{ text: 'Hello Gemini!' }] }],
});
console.log(result.response.text());
Generic Provider
Use the generic provider for any LLM not directly supported, including DeepSeek, Grok, Perplexity, Mistral, and Cohere.
Using the wrap() method
import OpenAI from 'openai';
import { createGenericTracker } from '@statium/sdk';
// DeepSeek uses OpenAI-compatible API
const deepseek = new OpenAI({
baseURL: 'https://api.deepseek.com/v1',
apiKey: process.env.DEEPSEEK_API_KEY,
});
const statium = createGenericTracker({
apiKey: process.env.STATIUM_API_KEY,
appId: process.env.STATIUM_APP_ID,
});
const { result } = await statium.wrap(
async () => {
return deepseek.chat.completions.create({
model: 'deepseek-chat',
messages: [{ role: 'user', content: 'Hello!' }],
});
},
{
userMessage: 'Hello!',
model: 'deepseek-chat',
provider: 'deepseek',
extractResponse: (r) => r.choices[0].message.content,
extractUsage: (r) => ({
promptTokens: r.usage?.prompt_tokens,
completionTokens: r.usage?.completion_tokens,
}),
}
);
console.log(result.choices[0].message.content);
Manual tracking
const startTime = Date.now();
const response = await yourLLM.chat({ message: 'Hello!' });
await statium.track({
userMessage: 'Hello!',
assistantResponse: response.text,
model: 'your-model',
provider: 'custom',
latencyMs: Date.now() - startTime,
});
Ingest Endpoint
The ingest endpoint receives analytics events from the SDK. You typically won't call it directly; the SDK handles this automatically.
POST https://api.statium.ai/v1/ingest
Headers
Authorization: Bearer sk_live_your_api_key
X-Statium-App-Id: app_your_app_id
Content-Type: application/json
Response
{
"success": true,
"request_id": "evt_abc123",
"timestamp": "2024-01-15T10:30:00.050Z",
"ad": {
"campaign_id": "camp_travel_q1",
"format": "native",
"content": {
"headline": "Planning a Trip?",
"body": "Get 20% off hotels",
"cta_text": "View Deals",
"cta_url": "https://example.com/deals"
}
}
}
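Although the SDK sends these events for you, a direct call looks roughly like the sketch below. The request body follows the IngestPayload shape documented in the type reference further down; the field values here are illustrative.
// Sketch of a manual ingest call; the SDK normally does this automatically.
const payload = {
  event_id: 'evt_example',
  app_id: process.env.STATIUM_APP_ID,
  session_id: 'sess_example',
  conversation_id: 'conv_example',
  metrics: {
    topic: 'travel',
    intent: 'transactional',
    sentiment_score: 4,
    ad_opportunity_flag: true,
    confidence: 0.92,
    keywords: ['hotel', 'paris'],
  },
  messages: {
    user: 'Find me a hotel in Paris',
    assistant: 'Here are a few options...',
  },
  model_metadata: {
    model: 'gpt-4',
    provider: 'openai',
    latency_ms: 850,
    stream: false,
  },
  client: { sdk_version: '1.0.0', platform: 'node' },
  timestamp: new Date().toISOString(),
};
const res = await fetch('https://api.statium.ai/v1/ingest', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.STATIUM_API_KEY}`,
    'X-Statium-App-Id': process.env.STATIUM_APP_ID!,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(payload),
});
console.log(await res.json()); // { success: true, request_id: ..., ad?: ... }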
Analytics Endpoints
Query your analytics data programmatically.
GET /v1/analytics/summary
Get overall analytics summary for a date range.
GET /v1/analytics/timeseries
Get daily breakdown of analytics data.
GET /v1/analytics/topics
Get topic distribution analysis.
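A minimal sketch of calling one of these endpoints with fetch. The auth headers mirror the ingest endpoint; the start and end query parameters are assumptions and may differ from the actual API.
// Sketch: query a summary for a date range (parameter names are assumptions).
const res = await fetch(
  'https://api.statium.ai/v1/analytics/summary?start=2024-01-01&end=2024-01-31',
  {
    headers: {
      Authorization: `Bearer ${process.env.STATIUM_API_KEY}`,
      'X-Statium-App-Id': process.env.STATIUM_APP_ID!,
    },
  },
);
const summary = await res.json();
console.log(summary);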
API Key Management
Programmatically manage your API keys.
POST /v1/keys
Create a new API key.
GET /v1/keys
List all API keys.
POST /v1/keys/:keyId/revoke
Revoke an API key.
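A sketch of creating and then revoking a key over the REST API, assuming the same Bearer-token auth as the other endpoints; the name field in the create body and the id field on the response are assumptions.
// Create a new key (request/response field names are assumptions).
const created = await fetch('https://api.statium.ai/v1/keys', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.STATIUM_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ name: 'ci-key' }),
}).then((r) => r.json());
// Revoke it by ID.
await fetch(`https://api.statium.ai/v1/keys/${created.id}/revoke`, {
  method: 'POST',
  headers: { Authorization: `Bearer ${process.env.STATIUM_API_KEY}` },
});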
StatiumConfig
interface StatiumConfig {
// Required
apiKey: string; // Your Statium API key
appId: string; // Your Statium App ID
// Optional
endpoint?: string; // Custom API endpoint
enabled?: boolean; // Enable/disable analytics
debug?: boolean; // Enable debug logging
ads?: {
enabled: boolean;
onAd?: (ad: StatiumAd) => void;
};
privacy?: {
disableDemographics?: boolean;
};
}
StatiumMetrics
interface StatiumMetrics {
topic: StatiumTopic; // Primary topic
intent: StatiumIntent; // User's intent
sentiment_score: number; // 1-5 scale
ad_opportunity_flag: boolean; // Purchase intent detected
confidence: number; // 0-1 scale
keywords?: string[]; // Top 5 keywords
}
type StatiumTopic =
| 'general' | 'sports' | 'finance' | 'travel'
| 'shopping' | 'health' | 'technology' | 'entertainment'
| 'food' | 'education' | 'automotive' | 'real_estate'
| 'career' | 'relationships' | 'news' | 'gaming'
| 'fitness' | 'fashion' | 'home_garden' | 'pets';
type StatiumIntent =
| 'informational' | 'transactional' | 'navigational'
| 'comparison' | 'support' | 'creative'
| 'social' | 'decision_making';
IngestPayload
interface IngestPayload {
event_id: string;
app_id: string;
session_id: string;
conversation_id: string;
metrics: StatiumMetrics;
messages: {
user: string;
assistant: string;
};
model_metadata: {
model: string;
provider: string;
tokens_prompt?: number;
tokens_completion?: number;
latency_ms: number;
stream: boolean;
};
client: {
sdk_version: string;
platform: string;
};
timestamp: string;
}
Session Management
Manage analytics sessions programmatically.
// Start a new session (e.g., when user logs in)
statium.newSession();
// Get current session ID
const sessionId = statium.getSessionId();
// Temporarily disable analytics
statium.setEnabled(false);
// Re-enable analytics
statium.setEnabled(true);
// Clean message history (remove shadow tool artifacts)
const cleanedMessages = statium.cleanHistory(messages);
Ad Integration
Receive and display contextual ads based on conversation content.
const statium = createOpenAIClient({
apiKey: process.env.STATIUM_API_KEY,
appId: process.env.STATIUM_APP_ID,
openaiClient: openai,
ads: {
enabled: true,
onAd: (ad) => {
// Display ad in your UI
showAd({
headline: ad.content.headline,
body: ad.content.body,
ctaText: ad.content.cta_text,
ctaUrl: ad.content.cta_url,
});
// Track impression
fetch(ad.tracking.impression_url, { method: 'POST' });
// Track click when user clicks
onCtaClick(() => {
fetch(ad.tracking.click_url, { method: 'POST' });
});
},
},
});
Streaming Responses
Statium fully supports streaming with zero latency impact. Analytics are sent asynchronously after the stream completes.
// OpenAI streaming
const stream = await statium.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Tell me a story' }],
stream: true,
});
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content ?? '';
process.stdout.write(content);
}
// Analytics sent automatically after stream ends
Troubleshooting
Analytics not appearing in dashboard
- Verify your API key and App ID are correct
- Check that enabled is not set to false
- Enable debug: true to see console logs
- Analytics may take up to 30 seconds to appear
Ads not being served
- Ads only appear when ad_opportunity_flag is true
- Ensure you have active campaigns matching your traffic
- Check that ads.enabled is true
Rate limiting
The API allows 1000 requests per minute per API key. If you exceed this, you'll receive a 429 response. Consider batching events or upgrading your plan.
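If you call the REST API directly, a simple backoff helper like the sketch below can absorb occasional 429s (the retry count and delays are arbitrary):
// Retry a request on HTTP 429 with exponential backoff (illustrative values).
async function fetchWithRetry(url: string, init?: RequestInit, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429 || attempt >= maxRetries) return res;
    // Wait 1s, 2s, 4s, ... before trying again.
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
  }
}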