The official TypeScript/JavaScript SDK for AI/ML API - access 400+ AI models (GPT-4, Claude, Gemini, DeepSeek, etc.) with an OpenAI-compatible interface.
```bash
npm install @ai-ml.api/aimlapi-sdk-node
```

Create a `.env` file with your API key:

```env
AIML_API_KEY=your-api-key-here
```

Load the key and create a client:

```ts
import { AIMLAPI } from "@ai-ml.api/aimlapi-sdk-node";
import dotenv from "dotenv";

dotenv.config();

const client = new AIMLAPI();
```

Use `npx tsx` to run TypeScript/JavaScript files directly:

```bash
npx tsx your-script.ts
```

Or set the API key inline:

```bash
AIML_API_KEY=your-key npx tsx your-script.ts
```

The SDK covers:

- Chat & Messages: OpenAI-compatible chat, Anthropic messages
- Images & Vision: Image generation, OCR, vision analysis
- Video & Audio: Video generation, TTS, speech-to-text
- Music & Search: Music generation, Bagoodex search
- Utilities: Batches, embeddings, account management
Basic chat completion:

```ts
import { AIMLAPI } from "@ai-ml.api/aimlapi-sdk-node";
import dotenv from "dotenv";

dotenv.config();

const client = new AIMLAPI();

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain quantum computing." },
  ],
});

console.log(completion.choices[0]?.message?.content);
```

Streaming responses:

```ts
const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a story." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```

Long-running video generation with automatic status polling:
```ts
// Generate video and wait for completion
const result = await client.video.createWithPolling(
  {
    model: "openai/sora-2-t2v",
    prompt: "A menacing evil dragon appears above a mountain",
    resolution: "720p",
    aspect_ratio: "16:9",
    duration: 4,
  },
  20,   // maxAttempts
  5000  // pollIntervalMs
);

console.log("Video status:", result.status);
console.log("Video URL:", result.video?.url);
```

Alternative: generate first, then poll separately:
```ts
// Start generation
const generation = await client.video.generate({
  model: "openai/sora-2-t2v",
  prompt: "A robot dancing",
  resolution: "720p",
  aspect_ratio: "16:9",
});

// Poll for status
const status = await client.video.getStatus(generation.id);
console.log("Status:", status.status);
```

Music generation supports multiple models (ElevenLabs, Minimax, Google Lyria):
```ts
// Generate music and wait for completion
const music = await client.music.createWithPolling(
  {
    model: "elevenlabs/eleven_music",
    prompt: "Lo-fi hip-hop ambient music",
    music_length_ms: 20000,
  },
  30,  // maxAttempts
  120  // timeoutSeconds
);

console.log("Music status:", music.status);
console.log("Audio URL:", music.audio_file?.url);
```

Using Minimax Music-2.0 with lyrics:
```ts
const music = await client.music.generateMusic({
  model: "minimax/music-2.0",
  prompt: "Electronic dance music",
  lyrics: "[Verse]\nElectronic beats\n[Chorus]\nDance all night",
  audio_setting: {
    format: "mp3",
    sample_rate: 44100,
  },
});

console.log("Music ID:", music.id);
```

Transcribe audio with automatic polling for completion:
```ts
// Transcribe audio with polling
const result = await client.speechToText.createWithPolling(
  {
    model: "aai/slam-1",
    audio: { url: "https://example.com/audio.mp3" },
  },
  10,  // pollInterval seconds
  300  // pollTimeout seconds
);

console.log("Transcription:", result.text);
```

Or submit first, then poll separately:
```ts
// Start transcription
const stt = await client.speechToText.create({
  model: "aai/slam-1",
  audio: { url: "https://example.com/audio.mp3" },
});

// Poll for status
const status = await client.speechToText.getStatus(stt.generation_id);
console.log("Status:", status.status);
```

Text-to-speech:

```ts
const audio = await client.textToSpeech.create({
model: "alibaba/qwen3-tts-flash",
text: "Hello, this is a test.",
voice: "Cherry",
});
console.log("Audio URL:", audio.audio.url);const response = await client.images.generate({
model: "dall-e-3",
prompt: "A beautiful sunset over mountains",
size: "1024x1024",
});
console.log("Image URL:", response.data[0].url);Get balance and available models:
```ts
// Get account balance
const balance = await client.account.getBalance();
console.log("Balance:", balance.balance);
console.log("Low balance:", balance.lowBalance);

// Get all available models
const models = await client.account.getModels();
console.log("Total models:", models.models?.length);

// Filter models by type
const chatModels = await client.account.getModelsByType("chat-completion");
console.log("Chat models:", chatModels.models?.length);

// Filter by provider
const openaiModels = await client.account.getModelsByDeveloper("openai");
console.log("OpenAI models:", openaiModels.models?.length);

// Find model by ID
const model = await client.account.getModelById("gpt-4o");
console.log("Model:", model);
```

Anthropic-style messages:

```ts
const message = await client.messages.create({
model: "claude-sonnet-4.5",
messages: [{ role: "user", content: "Hello, Claude!" }],
max_tokens: 100,
});
console.log(message.content[0]?.text);// Google Document AI OCR
const ocr = await client.ocr.google({
  document: "https://example.com/document.png",
  mimeType: "image/png",
});
console.log("Pages:", ocr.pages?.length);

// Mistral OCR
const mistralOcr = await client.ocr.mistral({
  document: {
    type: "image_url",
    image_url: "https://example.com/image.jpg",
  },
});
console.log("Text:", mistralOcr.pages?.[0]?.text);
```
Process multiple requests asynchronously:

```ts
const batch = await client.batch.create({
  requests: [
    {
      custom_id: "request-1",
      params: {
        model: "claude-3-5-haiku-20241022",
        messages: [{ role: "user", content: "Hello" }],
        max_tokens: 100,
      },
    },
  ],
});

console.log("Batch ID:", batch.id);
console.log("Status:", batch.processing_status);

// Wait for completion
const result = await client.batch.waitForCompletion(batch.id, {
  maxAttempts: 30,
  pollInterval: 10,
});

// Parse results
const parsed = client.batch.parseResults(result as string);
```
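Embeddings are listed above under Utilities but not demonstrated. A minimal sketch, assuming the SDK exposes an OpenAI-compatible `client.embeddings.create` method and response shape (both the method and the model ID below are assumptions, not confirmed by this README):

```ts
// Embeddings sketch (OpenAI-compatible endpoint and response shape assumed)
const embedding = await client.embeddings.create({
  model: "text-embedding-3-small",
  input: "The quick brown fox jumps over the lazy dog.",
});

console.log("Vector length:", embedding.data[0]?.embedding.length);
```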
Bagoodex search:

```ts
// Search knowledge base
const knowledge = await client.bagoodex.searchKnowledge("Who is Nikola Tesla");
console.log("Title:", knowledge.title);

// Get weather
const weather = await client.bagoodex.getWeather("San Francisco, CA");
console.log("Temperature:", weather.temperature);

// Search images
const images = await client.bagoodex.searchImages("cute cats");
console.log("Results:", images.length);

// Search videos
const videos = await client.bagoodex.searchVideos("tutorial");
console.log("Results:", videos.length);
```

A sample of the 400+ available models:

| Provider | Models |
|---|---|
| OpenAI | gpt-4o, gpt-4o-mini, o1, o3-mini, gpt-5 |
| Anthropic | claude-sonnet-4.5, claude-opus-4.5 |
| Google | gemini-2.5-pro, gemini-2.0-flash |
| DeepSeek | deepseek-chat, deepseek-reasoner |
| Meta | llama-4-maverick, llama-4-scout |
| Video | openai/sora-2-t2v, alibaba/wan2.1-t2v-plus, luma/ray-2 |
| Music | elevenlabs/eleven_music, minimax/music-2.0, google/lyria2 |
| And more... | 400+ models total |
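Because the interface is OpenAI-compatible, switching providers usually only requires changing the `model` string. A quick sketch using the `deepseek-chat` ID from the table above (exact ID strings may vary by provider):

```ts
// Same chat interface, different provider: just swap the model ID
const reply = await client.chat.completions.create({
  model: "deepseek-chat",
  messages: [{ role: "user", content: "Summarize the theory of relativity in one sentence." }],
});

console.log(reply.choices[0]?.message?.content);
```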
Configuration is controlled through environment variables:

| Variable | Description |
|---|---|
| AIML_API_KEY | Your API key |
| AIML_API_BASE | Custom base URL (optional) |
| AIMLAPI_LOG | Log level: debug, info, warn, error |
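As a sketch, a custom base URL and verbose logging can be set before the client is created; the URL below is illustrative, and it is assumed the client reads these variables at construction time:

```ts
import { AIMLAPI } from "@ai-ml.api/aimlapi-sdk-node";

// Illustrative values only; in practice these usually come from your shell or .env file.
process.env.AIML_API_KEY = "your-api-key-here";
process.env.AIML_API_BASE = "https://api.aimlapi.com/v1"; // optional custom base URL (assumed value)
process.env.AIMLAPI_LOG = "debug"; // debug | info | warn | error

const client = new AIMLAPI();
```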
Licensed under the Apache License 2.0.