Documentation

Everything you need to integrate natural language querying into your backend.

Installation

Install the core package and Sequelize using your preferred package manager:

npm install @jsuyog2/sequelize-ai sequelize

You also need to install the SDK for your preferred AI provider:

npm install openai                   # For OpenAI & DeepSeek
npm install @google/generative-ai    # For Gemini
npm install @anthropic-ai/sdk        # For Claude
npm install groq-sdk                 # For Groq

Quick Start

import { Sequelize } from "sequelize";
import SequelizeAI from "@jsuyog2/sequelize-ai";

// 1. Initialize your Sequelize instance
const sequelize = new Sequelize("postgres://user:pass@localhost:5432/db");

// 2. Initialize the AI generator
const ai = new SequelizeAI(sequelize, {
  provider: "openai", 
  apiKey: process.env.OPENAI_API_KEY,
  timeout: 3000 // optional sandbox execution timeout in milliseconds
});

// 3. Ask a question
(async () => {
  const result = await ai.ask("show me the cheapest 5 products with stock over 10");
  console.log(result);
})();

Constructor Options

The SequelizeAI class accepts two arguments: your Sequelize instance, and an options object.

Option       Type    Default     Description
provider     string  "openai"    The LLM provider. Available: openai, gemini, claude, groq, deepseek, together, openrouter.
apiKey       string  (required)  API key for the selected provider.
model        string  -           Overrides the provider's default model choice (e.g. "gpt-4o").
timeout      number  2000        Sandbox V8 isolate timeout limit in milliseconds. Prevents infinite loops.
memoryLimit  number  128         Sandbox V8 memory ceiling in megabytes.
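Putting the options together, a fully specified configuration might look like the following sketch. The model name and environment variable are illustrative assumptions; substitute the model your provider actually offers.

```javascript
// A fully specified options object; every field except apiKey is optional.
// It would be passed as the second constructor argument:
//   new SequelizeAI(sequelize, options)
const options = {
  provider: "claude",                     // one of the providers listed above
  apiKey: process.env.ANTHROPIC_API_KEY,  // required
  model: "claude-haiku-4-5",              // example override of the provider default
  timeout: 5000,                          // sandbox timeout in ms (default 2000)
  memoryLimit: 256                        // sandbox memory cap in MB (default 128)
};
```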

ai.ask()

Returns a Promise that resolves to the structured result of the database query. For a single question, the result is an object containing the target model, the executed method, and the returned data. (Compound questions return an array of such objects; see Multi-Query Resolving below.)

const { model, method, data } = await ai.ask("Count all pending standard users");
console.log(model);  // "User"
console.log(method); // "count"
console.log(data);   // 154
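Because every single-question result shares the { model, method, data } shape, results can be logged or routed generically. The helper below is purely illustrative and not part of the library:

```javascript
// Format a SequelizeAI result for logging.
// Operates on the documented { model, method, data } shape.
function describeResult({ model, method, data }) {
  const summary = Array.isArray(data) ? `${data.length} rows` : JSON.stringify(data);
  return `${model}.${method} -> ${summary}`;
}

console.log(describeResult({ model: "User", method: "count", data: 154 }));
// "User.count -> 154"
```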

Supported Providers

The library ships first-party wrappers for several provider SDKs. Choose one based on your latency and cost preferences:

Groq

Defaults to llama-3.1-8b-instant. Very low latency. Free tier available.

Gemini

Defaults to gemini-2.0-flash. Generous free tier quotas, very fast.

OpenAI

Defaults to gpt-4o-mini. High reliability and quality formatting.

Claude

Defaults to claude-haiku-4-5…. Extremely fast and highly accurate generation.

DeepSeek

Defaults to deepseek-chat. Excellent coding model, extremely cost efficient.

Together / OpenRouter

OpenAI-compatible endpoints. Fully configurable via factory pattern.

AI Column Hints

You can add aiDescription directly into your Sequelize column definitions to give the LLM business-logic context. This reduces the risk of hallucinated queries.

import { DataTypes } from "sequelize";

const Order = sequelize.define("Order", {
  status: {
    type: DataTypes.INTEGER,
    // Helps the LLM map "pending" to 0 automatically
    aiDescription: "0=Pending, 1=Shipped, 2=Delivered"
  }
});
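To make the idea concrete: the hint string encodes a label-to-code lookup, which is exactly the mapping the LLM applies when it translates "pending" into 0. The parser below is a standalone illustration of that mapping, not a library API:

```javascript
// Parse an aiDescription hint like "0=Pending, 1=Shipped, 2=Delivered"
// into a lowercase label -> integer code lookup table.
function parseHint(hint) {
  const map = {};
  for (const pair of hint.split(",")) {
    const [code, label] = pair.split("=").map((s) => s.trim());
    map[label.toLowerCase()] = Number(code);
  }
  return map;
}

const statusCodes = parseHint("0=Pending, 1=Shipped, 2=Delivered");
console.log(statusCodes.pending); // 0
```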

Computed Columns

The AI can generate mathematical combinations of your columns automatically, without resorting to raw SQL or exposing you to injection risks.

// Ask: "What is my total potential revenue?"
// Generated internal query uses Sequelize Literal securely:
const revenue = await ai.ask("Show product name, stock, and total potential revenue");
// data format: [{ name: 'Keyboard', stock: 10, totalPotentialRevenue: 1500 }]
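The computed column above amounts to multiplying price by stock per row. With sample data (the price value is an assumption for illustration), the plain-JS equivalent of what the generated query computes is:

```javascript
// Per-row computed column: totalPotentialRevenue = price * stock.
// The library does this in SQL; this plain-JS version is only for clarity.
const products = [{ name: "Keyboard", price: 150, stock: 10 }];

const withRevenue = products.map((p) => ({
  name: p.name,
  stock: p.stock,
  totalPotentialRevenue: p.price * p.stock
}));

console.log(withRevenue[0].totalPotentialRevenue); // 1500
```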

Multi-Query Resolving

You can ask compound questions; the system executes them in parallel and returns the combined results as an array.

const stats = await ai.ask("how many users are there, and what is the maximum order amount?");
console.log("Users:", stats[0].data);
console.log("Max Order:", stats[1].data);
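Since each element of a compound answer keeps the single-query { model, method, data } shape, the array can be folded into a keyed summary. The sketch below uses a mocked response array (the values are assumptions) in place of an actual ai.ask() call:

```javascript
// Mocked compound response standing in for the result of ai.ask(...)
const stats = [
  { model: "User", method: "count", data: 42 },
  { model: "Order", method: "max", data: 999.5 }
];

// Fold the ordered array into an object keyed by "Model.method".
const summary = Object.fromEntries(
  stats.map((r) => [`${r.model}.${r.method}`, r.data])
);

console.log(summary); // { "User.count": 42, "Order.max": 999.5 }
```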