
VillageSQL is a drop-in replacement for MySQL with extensions.

All examples in this guide work on VillageSQL.
VillageSQL’s vsql_ai extension lets you call AI language models directly from SQL queries. Before you can run your first ai_prompt(), you need the extension installed and an API key ready. This guide covers setup, provider options, and the security considerations that matter before you go to production.

Install the Extension

Enable the extension with a single statement:
INSTALL EXTENSION vsql_ai;
Once an API key is set (see below), make a test call:
SET @anthropic_key = 'sk-ant-your-key-here';
SELECT ai_prompt('anthropic', 'claude-haiku-4-5-20251001', @anthropic_key, 'Say hello') AS test;

Setting Your API Key

The ai_prompt() and ai_embedding() functions take the API key as a parameter. The recommended pattern is a session variable set at connection time — never embed a key literal in a query that could end up in a log file. For the 'local' provider, pass an empty string — no API key is required.
-- Set once per connection, use everywhere
SET @anthropic_key = 'sk-ant-your-key-here';
SET @openai_key    = 'sk-your-openai-key';
SET @google_key    = 'your-google-ai-key';

-- Then use the variable
SELECT ai_prompt('anthropic', 'claude-haiku-4-5-20251001', @anthropic_key, 'Hello') AS response;
What to avoid:
-- Don't do this — key is visible in slow query logs and PROCESSLIST
SELECT ai_prompt('anthropic', 'claude-haiku-4-5-20251001', 'sk-ant-12345...', 'Hello');
Set the session variable in your application’s connection setup, not in SQL you write to a file. If your MySQL client logs queries, the variable assignment itself is also logged — use your application layer to set the variable programmatically rather than via a SQL string.

Choosing a Provider and Model

Anthropic

Best for: complex reasoning, long documents, code generation, and instruction-following tasks.
-- Fast and cost-effective
SELECT ai_prompt('anthropic', 'claude-haiku-4-5-20251001', @anthropic_key, @prompt);

-- Balanced — recommended for most use cases
SELECT ai_prompt('anthropic', 'claude-sonnet-4-5-20250929', @anthropic_key, @prompt);

-- Maximum capability
SELECT ai_prompt('anthropic', 'claude-opus-4-5-20251101', @anthropic_key, @prompt);
Anthropic does not support ai_embedding().

Google Gemini

Best for: embedding generation and multimodal tasks.
-- Prompting
SELECT ai_prompt('google', 'gemini-2.5-flash', @google_key, @prompt);

-- Embeddings (3072 dimensions)
SELECT ai_embedding('google', 'gemini-embedding-001', @google_key, @text);

OpenAI

Best for: embedding generation with the widely adopted text-embedding-3 family.
-- Prompting
SELECT ai_prompt('openai', 'gpt-4o-mini', @openai_key, @prompt);

-- Embeddings (1536 dimensions)
SELECT ai_embedding('openai', 'text-embedding-3-small', @openai_key, @text);

-- Embeddings (3072 dimensions — higher accuracy)
SELECT ai_embedding('openai', 'text-embedding-3-large', @openai_key, @text);

Local (Ollama)

Best for: local development and private data that can’t leave your network. Requires Ollama running on 127.0.0.1:11434. No API key needed — pass an empty string.
-- Prompting
SELECT ai_prompt('local', 'llama3.2', '', @prompt);

-- Embeddings
SELECT ai_embedding('local', 'nomic-embed-text', '', @text);
Use any model you’ve pulled in Ollama. Common chat models: llama3.2, mistral, gemma2. Common embedding models: nomic-embed-text, mxbai-embed-large.
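Because the 'local' provider keeps all data on your own machine, it is a natural fit for querying sensitive rows directly. The sketch below assumes a hypothetical `tickets` table with `id` and `subject` columns; only the `ai_prompt()` call pattern comes from this guide.

```sql
-- Sketch: classify support tickets without any data leaving the network.
-- `tickets`, `id`, and `subject` are hypothetical names for illustration.
SELECT id,
       ai_prompt('local', 'llama3.2', '',
                 CONCAT('Classify as bug, feature, or question: ', subject)) AS category
FROM tickets
ORDER BY id
LIMIT 10;
```

Note the empty string in the key position: the 'local' provider never needs a credential, so there is no key to leak into logs.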

Provider comparison

| Provider    | Prompting | Embeddings | Best for                               |
|-------------|-----------|------------|----------------------------------------|
| 'anthropic' | Yes       | No         | Reasoning, instructions, code          |
| 'google'    | Yes       | Yes        | Balanced; Gemini embeddings            |
| 'openai'    | Yes       | Yes        | OpenAI ecosystem; text-embedding-3     |
| 'local'     | Yes       | Yes        | Local Ollama; no API key; private data |

Timeout and Rate Limit Considerations

AI API calls typically take 5–30 seconds, and MySQL’s default query timeout may cut them off. For sessions that run AI queries, raise the execution time limit:
SET SESSION max_execution_time = 60000;  -- 60 seconds
For batch operations over many rows, this matters — a 1000-row UPDATE calling ai_prompt() on each row can take 30+ minutes. All providers enforce rate limits on how many requests you can make per minute. Hitting a rate limit causes ai_prompt() to return NULL. For bulk operations, process in batches of 50–100 rows and add a pause between batches in your application loop. See Running AI Models from MySQL Queries for batch patterns.
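One way to keep batches small is to drive the batching from SQL itself: update only rows that still need processing, cap each statement with LIMIT, and repeat from an application loop. This is a sketch under assumed names; `articles`, `body`, and `summary` are hypothetical.

```sql
-- Sketch: fill in missing summaries 100 rows at a time.
-- Re-run this statement from an application loop, pausing between
-- iterations, until it reports zero rows affected.
UPDATE articles
SET summary = ai_prompt('anthropic', 'claude-haiku-4-5-20251001', @anthropic_key,
                        CONCAT('Summarize in one sentence: ', body))
WHERE summary IS NULL
LIMIT 100;
```

The `WHERE summary IS NULL` guard also makes the loop resumable: rows whose calls failed (and returned NULL) are retried on the next pass.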

Frequently Asked Questions

Can I store API keys in a MySQL table instead of session variables?

You can, but you’re trading one security risk for another. A table approach requires a SELECT to read the key — that query shows up in logs too. Session variables set programmatically by your application framework are the cleanest option.

What happens if the API call fails?

The function returns NULL. Check for NULL in your results when testing. Common causes: invalid API key, network timeout, rate limit hit, or an unsupported model name.
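When testing interactively, a COALESCE wrapper makes failures visible instead of silently producing NULL. A minimal sketch, with the sentinel string chosen arbitrarily:

```sql
-- Sketch: substitute a sentinel value when the call fails,
-- so a bad key or rate limit shows up plainly in the result set.
SELECT COALESCE(
         ai_prompt('anthropic', 'claude-haiku-4-5-20251001', @anthropic_key, @prompt),
         '[ai_prompt failed]'
       ) AS response;
```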

Can I use different providers in the same query?

Yes — ai_prompt() is a regular function. You can call it with different provider/model combinations in the same SELECT.
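For example, you might compare a hosted model against a local one on the same prompt in a single SELECT. A sketch, assuming `@anthropic_key` and `@prompt` are already set and llama3.2 is pulled in Ollama:

```sql
-- Sketch: two providers, one query, results side by side.
SELECT
  ai_prompt('anthropic', 'claude-haiku-4-5-20251001', @anthropic_key, @prompt) AS claude,
  ai_prompt('local', 'llama3.2', '', @prompt) AS llama;
```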

Are model names stable?

Providers change model names over time. The names listed here match the extension’s current tested models. Check your provider’s documentation for the latest available model identifiers.

Troubleshooting

| Problem                              | Solution                                                                                |
|--------------------------------------|-----------------------------------------------------------------------------------------|
| FUNCTION ai_prompt does not exist    | Run INSTALL EXTENSION vsql_ai                                                            |
| ai_prompt() returns NULL             | Check API key is valid; check max_execution_time is set high enough; verify model name   |
| Query times out                      | Run SET SESSION max_execution_time = 60000                                               |
| Key shows up in slow query log       | Use a session variable set by application code, not a SQL string literal                 |