
TypeScript SDK Documentation

JamAI Base is a backend-as-a-service for AI applications. You define tables with columns that automatically process data through AI pipelines.

Column Types

  • Input: your data (dtype: "str", "file", "image", "audio")
  • LLM: AI generates content (gen_config: { model: "...", prompt: "..." })

Data Types: str (text), file (generic), image (.jpeg/.jpg/.png/.gif/.webp), audio (.mp3/.wav)

How It Works

  1. Define a table with input + AI columns.
  2. Add a row with input data.
  3. AI columns auto-generate based on your prompts.
  4. Read the completed row with all outputs.

Available Models

// List all chat models
const chatModels = await jamai.llm.modelNames({
  capabilities: ["chat"],
});
console.log(chatModels.slice(0, 5)); // ['openai/gpt-4o', 'openai/gpt-4o-mini', ...]

// List embedding models
const embedModels = await jamai.llm.modelNames({
  capabilities: ["embed"],
});
console.log(embedModels); // ['ellm/BAAI/bge-m3', ...]

// Get full model info
const models = await jamai.llm.modelInfo();
for (const m of models.data.slice(0, 3)) {
  console.log(`${m.id}: context=${m.context_length || "N/A"}`);
}

Row Structure

Every row returned from listRows contains:
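A hedged sketch of the assumed shape: each row carries an ID, an "Updated at" timestamp, and one { value } object per column. The exact field names should be verified against the SDK's response types.

```typescript
// Assumed row shape; verify field names against the SDK's types.
const row = {
  ID: "0193-4567-example",              // hypothetical row ID
  "Updated at": "2024-01-01T00:00:00Z", // last-modified timestamp
  text: { value: "JamAI Base is a backend-as-a-service." }, // Input column
  summary: { value: "A BaaS for AI apps." },                // LLM column
};

// Cell values are read via .value:
const summary = row.summary.value;
```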


QUICK REFERENCE


1. SETUP

Install

Node.js Version (>= 16.x)

Get Credentials

  1. Sign up: https://cloud.jamaibase.com/
  2. Create a project.
  3. Get a PAT: click your user name in the top-right corner > ⚙ Account Settings > Create a Personal Access Token.
  4. Get the Project ID from the project URL.

Initialize Client

To start using the JamAI SDK, you need to create an instance of the client by passing a configuration object. Below are the configurable parameters:

  • token (string, required*): Your Personal Access Token (PAT), used to authenticate API requests.
  • projectId (string, required*): The ID of the JamAI Base project you want to interact with.
  • baseURL (string, optional): Custom API endpoint (useful for self-hosted/OSS instances).
  • maxRetries (number, optional): Maximum number of times to retry a failed request (default: 0, i.e. no retries).
  • timeout (number, optional): Request timeout in milliseconds.
  • httpClient (AxiosInstance, optional): Custom Axios instance for advanced request handling.
  • dangerouslyAllowBrowser (boolean, optional): If true, allows use in browser environments (advanced/OSS use only). Not recommended due to security risks.
  • userId (string, optional): User ID for multi-user or impersonation scenarios.

*Note: Both token and projectId are required unless baseURL is specified for OSS/self-hosted use, in which case you may override authentication per your server's configuration.

Example Usage
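A minimal initialization sketch. The npm package name ("jamaibase") and the default-export client class are assumptions; check them against the SDK's own README.

```typescript
import JamAI from "jamaibase";

const jamai = new JamAI({
  token: process.env.JAMAI_API_KEY ?? "",      // your PAT
  projectId: process.env.JAMAI_PROJECT_ID ?? "",
  maxRetries: 2,     // retry transient failures (default is 0)
  timeout: 30_000,   // 30-second request timeout
});
```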

Example .env File

You may wish to keep your credentials out of your source code by using environment variables:

⚠️ Environment variables are not read automatically: you must still pass them into the SDK config in your code.
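A sketch of loading credentials from a .env file, assuming the third-party dotenv package and hypothetical variable names (JAMAI_API_KEY, JAMAI_PROJECT_ID):

```typescript
// .env (hypothetical variable names):
//   JAMAI_API_KEY=jamai_pat_xxx
//   JAMAI_PROJECT_ID=proj_xxx
import "dotenv/config";        // loads .env into process.env
import JamAI from "jamaibase"; // assumed package name

const jamai = new JamAI({
  token: process.env.JAMAI_API_KEY!,       // passed explicitly:
  projectId: process.env.JAMAI_PROJECT_ID!, // the SDK does not read env vars itself
});
```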


2. TABLE TYPES

  • action: AI chains, document processing. Create with createActionTable().
  • knowledge: RAG, embeddings, vector search. Create with createKnowledgeTable().
  • chat: Conversational AI with context. Create with createChatTable().


3. ACTION TABLES (Most Common)

Create
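A hedged sketch of creating an action table: one input column and one LLM column that references it. The method location (jamai.table.createActionTable) and the exact gen_config fields are assumptions to verify against the SDK reference.

```typescript
// One input column ("text") and one LLM column ("summary").
const schema = {
  id: "article-summarizer",          // hypothetical table name
  cols: [
    { id: "text", dtype: "str" },    // Input column
    {
      id: "summary",
      dtype: "str",
      gen_config: {
        model: "openai/gpt-4o-mini",
        system_prompt: "You are a concise technical summarizer.",
        prompt: "Summarize in two sentences:\n${text}", // references the input column
        temperature: 0.2,
        max_tokens: 256,
      },
    },
  ],
};

// `jamai` is an initialized client (typed loosely here as a sketch).
async function createTable(jamai: any) {
  return jamai.table.createActionTable(schema);
}
```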

Column Reference Syntax

Use ${column_name} in prompts to reference other columns. At runtime, each reference is replaced with the corresponding cell value from the current row.
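The substitution semantics can be illustrated with a small helper (this is an illustration, not the SDK's internal code):

```typescript
// Replace every ${name} in a prompt with that column's cell value.
function fillPrompt(prompt: string, row: Record<string, string>): string {
  return prompt.replace(/\$\{(\w+)\}/g, (_m, name: string) => row[name] ?? "");
}

const prompt = "Summarize the following text:\n${text}";
// Prints the prompt with ${text} replaced by the row's cell value.
console.log(fillPrompt(prompt, { text: "JamAI Base is a BaaS for AI apps." }));
```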

How LLM Columns Work

  1. Gather prompts: the System Prompt and the Prompt (which can reference upstream columns).
  2. Optional RAG: augment the prompt with references from a Knowledge Table.
  3. Send to LLM: with your chosen generation settings (model, temperature, max_tokens).
  4. Write response: the model's response becomes the cell value.

LLM Generation Settings

  • model: LLM to use (e.g., openai/gpt-4o-mini)
  • system_prompt: passed as-is as the system message; defines role, style, and global instructions
  • prompt: main user message, with ${column} references
  • temperature: controls randomness (0.0-2.0)
  • max_tokens: maximum output length

RAG (Retrieval Augmented Generation)

Link an LLM column to a Knowledge Table for grounded responses. See Section 9 for the full Knowledge Table setup.

  1. Formulate query: the LLM generates a retrieval query from your Prompt.
  2. Retrieve: relevant rows are fetched from the Knowledge Table.
  3. Rerank: an optional reranking model is applied (RRF Ranker by default).
  4. Inject: the top-k references are added to the prompt.
  5. Cite: optional inline citations: [@ref0; @ref1; @ref2]

Prompting Tips

Separate column references using XML tags or Markdown headings:
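For example, wrapping each reference in XML-style tags helps the model tell the pieces apart (the column names document_text and user_question below are hypothetical):

```typescript
// Build a prompt that separates two column references with XML tags.
const prompt = [
  "Answer the question using only the context.",
  "<context>",
  "${document_text}",
  "</context>",
  "<question>",
  "${user_question}",
  "</question>",
].join("\n");
```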


4. ADD ROWS

Non-Streaming (Wait for Complete Response)
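A hedged sketch of a non-streaming add. The payload shape (table_type / table_id / data / stream) and the method location (jamai.table.addRow) are assumptions to verify against the SDK reference.

```typescript
const payload = {
  table_type: "action",
  data: {
    table_id: "article-summarizer",   // hypothetical table
    data: [{ text: "JamAI Base is a backend-as-a-service for AI apps." }],
    stream: false,                    // wait until all LLM columns finish
  },
};

// `jamai` is an initialized client (typed loosely here as a sketch).
async function addRow(jamai: any) {
  const res = await jamai.table.addRow(payload);
  return res; // completed row, including generated columns
}
```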

Streaming (Real-time Output)
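For streaming, the sketch below assumes that a stream-enabled add returns a ReadableStream of chunks; the method name addRowStream and the chunk shape are assumptions to verify against the SDK reference.

```typescript
// `jamai` is an initialized client (typed loosely here as a sketch).
async function addRowStreaming(jamai: any) {
  const stream = await jamai.table.addRowStream({
    table_type: "action",
    data: {
      table_id: "article-summarizer",
      data: [{ text: "Some input text." }],
      stream: true,
    },
  });

  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk carries a piece of an LLM column's output (assumed shape).
    process.stdout.write(value?.choices?.[0]?.text ?? "");
  }
}
```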

With File Upload

Get Row ID After Adding (Non-Streaming)

Get Row ID After Adding (Streaming)


5. READ ROWS

Basic List

With WHERE Filter

Select Specific Columns

Pagination (REQUIRED for >100 rows)
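The pagination loop itself can be written SDK-agnostically. The sketch below assumes an offset/limit page API capped at 100 rows per page and a { items, total } page shape; wire in listRows per the SDK reference.

```typescript
type Page<T> = { items: T[]; total: number };

// Fetch every row by walking offset/limit pages until the total is reached.
async function fetchAll<T>(
  list: (offset: number, limit: number) => Promise<Page<T>>,
  limit = 100, // assumed per-page cap
): Promise<T[]> {
  const out: T[] = [];
  for (let offset = 0; ; offset += limit) {
    const page = await list(offset, limit);
    out.push(...page.items);
    if (page.items.length === 0 || offset + page.items.length >= page.total) break;
  }
  return out;
}

// Usage sketch with the SDK (shapes assumed):
// const rows = await fetchAll(async (offset, limit) => {
//   const page = await jamai.table.listRows({ table_type: "action", table_id: "t", offset, limit });
//   return { items: page.items, total: page.total };
// });
```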

Get Single Row


6. UPDATE ROWS

Basic Update

Regenerate Rows (Non-Streaming)

Regenerate Rows (Streaming)


7. DELETE ROWS


8. TABLE OPERATIONS

List Tables

Get Schema

Delete Table

Duplicate Table

Rename Table

Column Management

Import/Export Data

Progress Tracking for Long Operations

Track progress of long-running operations like table imports:

Available Progress States:

  • PROGRESS_STATES.PENDING - Task is queued

  • PROGRESS_STATES.RUNNING - Task is in progress

  • PROGRESS_STATES.COMPLETED - Task completed successfully

  • PROGRESS_STATES.FAILED - Task failed with error
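The states above suggest a simple polling loop. The sketch keeps the status-fetching function injected so it stays SDK-agnostic; the string values of the states are assumed to match their names.

```typescript
// State names from the doc; string values are an assumption.
const PROGRESS_STATES = {
  PENDING: "PENDING",
  RUNNING: "RUNNING",
  COMPLETED: "COMPLETED",
  FAILED: "FAILED",
} as const;

// Poll until the task reaches a terminal state.
async function waitForCompletion(
  getStatus: () => Promise<string>,
  intervalMs = 1000,
): Promise<string> {
  for (;;) {
    const state = await getStatus();
    if (state === PROGRESS_STATES.COMPLETED || state === PROGRESS_STATES.FAILED) {
      return state;
    }
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```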


9. KNOWLEDGE TABLES (RAG)

Create

Add Data

Embed File

Create RAG Action Table
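A hedged sketch of an LLM column grounded on a Knowledge Table. The field names (rag_params, table_id, k) and the method location are assumptions to verify against the SDK reference.

```typescript
const ragSchema = {
  id: "doc-qa",                            // hypothetical table name
  cols: [
    { id: "question", dtype: "str" },      // Input column
    {
      id: "answer",
      dtype: "str",
      gen_config: {
        model: "openai/gpt-4o-mini",
        prompt: "Answer using the retrieved context:\n${question}",
        rag_params: {
          table_id: "my-knowledge-table",  // Knowledge Table to search
          k: 5,                            // top-k references to inject
        },
      },
    },
  ],
};

// `jamai` is an initialized client (typed loosely here as a sketch).
async function createRagTable(jamai: any) {
  return jamai.table.createActionTable(ragSchema);
}
```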


10. CHAT TABLES

Create

Chat (Streaming)

How Chat History Works

Chat tables automatically maintain conversation history. Each row added becomes part of the context for subsequent rows.

Note: Each chat table is a separate conversation. Create multiple tables for multiple users/sessions.
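The history mechanism can be pictured as replaying prior rows as messages (an illustration, not SDK internals; the default chat-table column names User and AI are assumed):

```typescript
type ChatRow = { User: string; AI: string };
type Message = { role: "user" | "assistant"; content: string };

// Each prior row's User/AI pair becomes context for the next turn.
function buildHistory(rows: ChatRow[]): Message[] {
  return rows.flatMap((r) => [
    { role: "user" as const, content: r.User },
    { role: "assistant" as const, content: r.AI },
  ]);
}

const history = buildHistory([
  { User: "Hi!", AI: "Hello, how can I help?" },
  { User: "What is RAG?", AI: "Retrieval Augmented Generation." },
]);
// history now holds 4 messages, oldest first.
```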


11. FILE OPERATIONS

Upload

Get Presigned URLs


12. DIRECT API (No Tables)

Chat Completions (Non-Streaming)
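A hedged sketch of a table-free chat completion. The method name generateChatCompletions under the llm namespace (used by the model-listing snippets earlier) and the response shape are assumptions to verify against the SDK reference.

```typescript
const request = {
  model: "openai/gpt-4o-mini",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Say hello." },
  ],
  max_tokens: 64,
  stream: false,
};

// `jamai` is an initialized client (typed loosely here as a sketch).
async function chat(jamai: any) {
  const res = await jamai.llm.generateChatCompletions(request);
  return res.choices?.[0]?.message?.content; // assumed OpenAI-style shape
}
```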

Chat Completions (Streaming)

Embeddings

Model Info


13. ORGANIZATIONS & PROJECTS

Organizations

Usage & Billing Metrics

Monitor usage and costs for your organization:

Window Size Options:

  • "1h" - Hourly metrics

  • "1d" - Daily metrics

  • "7d" - Weekly metrics

Group By Options:

  • ["org_id"] - Group by organization

  • ["project_id"] - Group by project

  • ["model"] - Group by model (for LLM metrics)

Projects


14. USERS & AUTHENTICATION


15. SECRETS MANAGEMENT

Manage secrets and API keys at the organization level.


Secret names must:

  • Start with a letter or underscore

  • Contain only alphanumeric characters and underscores

  • Be uppercase (automatically converted)
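The naming rules above can be expressed as a small checker/normalizer (illustrative only, not the server's actual validation code):

```typescript
// Uppercase the name (mirroring the automatic conversion), then check:
// must start with a letter or underscore, and contain only
// alphanumerics and underscores.
function normalizeSecretName(name: string): string {
  const upper = name.toUpperCase();
  if (!/^[A-Z_][A-Z0-9_]*$/.test(upper)) {
    throw new Error(`Invalid secret name: ${name}`);
  }
  return upper;
}

console.log(normalizeSecretName("openai_api_key")); // → "OPENAI_API_KEY"
```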


16. TEMPLATES

Templates provide pre-built table configurations that you can use as starting points:

Use Cases:

  • Explore pre-built table configurations

  • Learn best practices for table design

  • Quick-start with proven table structures

  • Browse example data and prompts

17. CONVERSATIONS

Manage conversations with AI agents (Chat Tables).


18. COMPLETE EXAMPLE

End-to-end example: Create table, add row, read result, cleanup.
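A hedged end-to-end sketch under the same assumptions as the earlier snippets (method locations and payload shapes should all be verified against the SDK reference):

```typescript
// `jamai` is an initialized client (typed loosely here as a sketch).
async function demo(jamai: any) {
  // 1. Create a table with one input and one LLM column.
  await jamai.table.createActionTable({
    id: "demo-table",
    cols: [
      { id: "text", dtype: "str" },
      {
        id: "summary",
        dtype: "str",
        gen_config: { model: "openai/gpt-4o-mini", prompt: "Summarize:\n${text}" },
      },
    ],
  });

  // 2. Add a row and wait for generation to finish.
  await jamai.table.addRow({
    table_type: "action",
    data: { table_id: "demo-table", data: [{ text: "JamAI Base is a BaaS." }], stream: false },
  });

  // 3. Read the completed row back.
  const rows = await jamai.table.listRows({ table_type: "action", table_id: "demo-table" });
  console.log(rows.items?.[0]?.summary?.value);

  // 4. Cleanup.
  await jamai.table.deleteTable({ table_type: "action", table_id: "demo-table" });
}
```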


QUICK COPY-PASTE
