TypeScript SDK Documentation
JamAI Base is a backend-as-a-service for AI applications. You define tables with columns that automatically process data through AI pipelines.
Column Types
| Type  | Purpose              | Key configuration                           |
| ----- | -------------------- | ------------------------------------------- |
| Input | Your data            | dtype: "str", "file", "image", "audio"      |
| LLM   | AI generates content | gen_config: { model: "...", prompt: "..." } |

Data Types: str (text), file (generic), image (.jpeg/.jpg/.png/.gif/.webp), audio (.mp3/.wav)
How It Works
1. Define a table with input + AI columns
2. Add a row with input data
3. AI columns auto-generate based on your prompts
4. Read the completed row with all outputs
Available Models
```typescript
// List all chat models
const chatModels = await jamai.llm.modelNames({
  capabilities: ["chat"],
});
console.log(chatModels.slice(0, 5)); // ['openai/gpt-4o', 'openai/gpt-4o-mini', ...]

// List embedding models
const embedModels = await jamai.llm.modelNames({
  capabilities: ["embed"],
});
console.log(embedModels); // ['ellm/BAAI/bge-m3', ...]

// Get full model info
const models = await jamai.llm.modelInfo();
for (const m of models.data.slice(0, 3)) {
  console.log(`${m.id}: context=${m.context_length || "N/A"}`);
}
```

Row Structure
Every row returned from listRows contains one entry per column, with each value wrapped in an object.

Value Wrapping Context:
- SDK reads (listRows, getRow): values ARE wrapped → use row['col'].value
- Access values using row['column_name'].value or the helper function below.
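A minimal sketch of the kind of helper referred to above; it is not part of the SDK, and the row typing is an assumption based on the wrapped shape described here:

```typescript
// Hypothetical helper (not part of the SDK): read a wrapped cell value from a row.
// Assumes the wrapped shape { value: ... } returned by listRows / getRow.
function getCellValue<T = unknown>(
  row: Record<string, any>,
  columnName: string
): T | undefined {
  const cell = row[columnName];
  if (cell && typeof cell === "object" && "value" in cell) {
    return cell.value as T; // wrapped cell from an SDK read
  }
  return cell as T | undefined; // defensive fallback for already-plain values
}

// Usage:
// const answer = getCellValue<string>(row, "answer");
```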
QUICK REFERENCE
1. SETUP
Install
Node.js Version (>= 16.x)
Get Credentials
1. Sign up: https://cloud.jamaibase.com/
2. Create a project
3. Get a PAT: click your user name in the top-right corner > ⚙ Account Settings > Create a Personal Access Token
4. Get the Project ID from the project URL
Initialize Client
To start using the JamAI SDK, you need to create an instance of the client by passing a configuration object. Below are the configurable parameters:
| Parameter               | Type          | Required | Description                                                                                                      |
| ----------------------- | ------------- | -------- | ---------------------------------------------------------------------------------------------------------------- |
| token                   | string        | Yes*     | Your Personal Access Token (PAT) to authenticate API requests.                                                    |
| projectId               | string        | Yes*     | The ID of the Jamaibase project you want to interact with.                                                        |
| baseURL                 | string        | No       | Set a custom API endpoint (useful for self-hosted/OSS instances).                                                 |
| maxRetries              | number        | No       | Maximum number of times to retry a failed request (default: 0, i.e., no retries).                                 |
| timeout                 | number        | No       | Request timeout in milliseconds.                                                                                  |
| httpClient              | AxiosInstance | No       | Provide a custom Axios instance if you need advanced request handling.                                            |
| dangerouslyAllowBrowser | boolean       | No       | If true, allows use in browser environments (only for advanced/OSS use). Not recommended due to security risks.   |
| userId                  | string        | No       | Optionally set a user ID for multi-user or impersonation scenarios.                                               |
Note: Both token and projectId are required unless baseURL is specified for OSS/self-hosted use, in which case you may override authentication per your server's configuration.
Example Usage
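A minimal initialization sketch using the parameters above; the default import from the jamaibase package is an assumption, so verify it against your installed SDK version:

```typescript
import JamAI from "jamaibase";

// Minimal client setup. token and projectId are required for cloud use;
// the commented-out options below are optional.
const jamai = new JamAI({
  token: "your-pat-token",
  projectId: "your-project-id",
  // baseURL: "https://your-self-hosted-instance.example.com",
  // maxRetries: 2,
  // timeout: 30_000,
});
```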
Example .env File
You may wish to keep your credentials out of your source code by using environment variables:
⚠️ When using environment variables, you must still pass the config to the SDK in your code:
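For example, a .env file and the corresponding initialization could look like this; the variable names JAMAI_TOKEN / JAMAI_PROJECT_ID and the use of the dotenv package are assumptions, not SDK requirements:

```typescript
// Example .env file (variable names are illustrative):
//   JAMAI_TOKEN=your-pat-token
//   JAMAI_PROJECT_ID=your-project-id

import "dotenv/config"; // loads .env into process.env (requires the dotenv package)
import JamAI from "jamaibase";

const jamai = new JamAI({
  token: process.env.JAMAI_TOKEN!,
  projectId: process.env.JAMAI_PROJECT_ID!,
});
```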
2. TABLE TYPES
| Type      | Use cases                      | Create method          |
| --------- | ------------------------------ | ---------------------- |
| action    | AI chains, document processing | createActionTable()    |
| knowledge | RAG, embeddings, vector search | createKnowledgeTable() |
| chat      | Conversational AI with context | createChatTable()      |
3. ACTION TABLES (Most Common)
Create
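A sketch of creating an action table with one input column and one LLM column. The gen_config shape follows the Column Types section above; the jamai.table namespace and the id/cols/dtype parameter names are assumptions, so check your SDK's type definitions:

```typescript
// Sketch only: the namespace (jamai.table) and parameter names (id, cols, dtype)
// are assumptions -- verify against your SDK's type definitions.
// Note: "${ticket}" is a JamAI column reference resolved server-side, so the
// prompt is a plain string, not a JavaScript template literal.
const table = await jamai.table.createActionTable({
  id: "support-triage",
  cols: [
    { id: "ticket", dtype: "str" }, // input column
    {
      id: "summary",
      dtype: "str",
      gen_config: {
        model: "openai/gpt-4o-mini",
        system_prompt: "You are a concise support analyst.",
        prompt: "Summarize the following ticket:\n\n${ticket}",
        temperature: 0.2,
        max_tokens: 200,
      },
    },
  ],
});
```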
Column Reference Syntax
Use ${column_name} in prompts to reference other columns. At runtime, each reference is replaced with the corresponding cell value from the current row.
How LLM Columns Work
Gather prompts — System Prompt and Prompt (which can reference upstream columns).
Optional RAG — Augment prompt with references from a Knowledge Table.
Send to LLM — With your chosen generation settings (model, temperature, max_tokens).
Write response — Model's response becomes the cell value.
LLM Generation Settings
| Setting       | Description                                                                  |
| ------------- | ---------------------------------------------------------------------------- |
| model         | LLM model to use (e.g., openai/gpt-4o-mini)                                   |
| system_prompt | Passed as-is as system message. Define role, style, global instructions       |
| prompt        | Main user message with ${column} references                                   |
| temperature   | Controls randomness (0.0-2.0)                                                 |
| max_tokens    | Maximum output length                                                         |
RAG (Retrieval Augmented Generation)
Link an LLM column to a Knowledge Table for grounded responses. See Section 9 for full Knowledge Table setup.
Formulate query — LLM generates retrieval query from your Prompt
Retrieve — Fetch relevant rows from Knowledge Table
Rerank — Optional reranking model (RRF Ranker by default)
Inject — Top-k references added to prompt
Cite — Optional inline citations: [@ref0; @ref1; @ref2]
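A hedged sketch of what linking an LLM column to a Knowledge Table can look like; the rag_params field and its table_id / k keys are assumptions, so verify them against your SDK's gen_config types:

```typescript
// Hypothetical gen_config with RAG enabled -- rag_params and its keys are
// assumptions; check your SDK version.
const groundedColumn = {
  id: "grounded_answer",
  dtype: "str",
  gen_config: {
    model: "openai/gpt-4o-mini",
    prompt: "Using the retrieved references, answer: ${question}",
    rag_params: {
      table_id: "product-docs", // Knowledge Table to retrieve from (see Section 9)
      k: 3,                     // number of top references injected into the prompt
    },
  },
};
```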
Prompting Tips
Separate column references using XML tags or Markdown headings:
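For example, a prompt that keeps two column references clearly separated with XML tags; the column names (ticket, history) are illustrative:

```typescript
// ${...} placeholders are resolved by JamAI, not by JavaScript, so keep the
// prompt a plain string rather than a template literal.
const prompt =
  "<ticket>\n${ticket}\n</ticket>\n\n" +
  "<customer_history>\n${history}\n</customer_history>\n\n" +
  "Summarize the ticket, taking the customer's history into account.";
```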
4. ADD ROWS
Non-Streaming (Wait for Complete Response)
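A non-streaming sketch; the parameter names (table_type, table_id, data, stream) are assumptions based on the patterns used throughout this guide:

```typescript
// Sketch only: parameter names are assumptions -- check your SDK version.
const response = await jamai.table.addRow({
  table_type: "action",
  table_id: "support-triage",
  data: [{ ticket: "My invoice total looks wrong for March." }],
  stream: false, // wait until every LLM column has finished generating
});
```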
Streaming (Real-time Output)
With File Upload
Get Row ID After Adding (Non-Streaming)
Get Row ID After Adding (Streaming)
5. READ ROWS
Basic List
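A basic listing sketch; the items field of the paginated response is an assumption, while the wrapped .value access follows the Row Structure section above:

```typescript
// Sketch only: the response shape (items) is an assumption.
const page = await jamai.table.listRows({
  table_type: "action",
  table_id: "support-triage",
});

for (const row of page.items) {
  // SDK reads return wrapped values -> access them via .value
  console.log(row["summary"]?.value);
}
```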
With WHERE Filter
Select Specific Columns
Full-Text Search
Pagination (REQUIRED for >100 rows)
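A pagination loop sketch that fetches every row in pages of 100; the offset/limit parameters and the items/total response fields are assumptions, so adjust to your SDK version:

```typescript
// Fetch all rows page by page instead of relying on a single listRows call.
const allRows: Record<string, any>[] = [];
const limit = 100;
let offset = 0;

while (true) {
  const page = await jamai.table.listRows({
    table_type: "action",
    table_id: "support-triage",
    offset,
    limit,
  });
  allRows.push(...page.items);
  offset += limit;
  if (offset >= page.total) break; // stop once every row has been fetched
}
```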
Get Single Row
6. UPDATE ROWS
Basic Update
Regenerate Rows (Non-Streaming)
Regenerate Rows (Streaming)
7. DELETE ROWS
8. TABLE OPERATIONS
List Tables
Get Schema
Delete Table
Duplicate Table
Rename Table
Column Management
Hybrid Search
Import/Export Data
Progress Tracking for Long Operations
Track progress of long-running operations like table imports:
Available Progress States:
- PROGRESS_STATES.PENDING - Task is queued
- PROGRESS_STATES.RUNNING - Task is in progress
- PROGRESS_STATES.COMPLETED - Task completed successfully
- PROGRESS_STATES.FAILED - Task failed with error
9. KNOWLEDGE TABLES (RAG)
Create
Add Data
Embed File
Create RAG Action Table
10. CHAT TABLES
Create
Chat (Streaming)
How Chat History Works
Chat tables automatically maintain conversation history. Each row added becomes part of the context for subsequent rows.
Note: Each chat table is a separate conversation. Create multiple tables for multiple users/sessions.
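A sketch of how successive rows build up one conversation; the "User" column name and the addRow parameters are assumptions:

```typescript
// Every row added to the same chat table becomes context for later rows.
await jamai.table.addRow({
  table_type: "chat",
  table_id: "support-session-123",
  data: [{ User: "Hi, I need help with my invoice." }],
  stream: false,
});

await jamai.table.addRow({
  table_type: "chat",
  table_id: "support-session-123",
  data: [{ User: "It was issued in March." }], // answered with the first turn as context
  stream: false,
});
```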
11. FILE OPERATIONS
Upload
Get Presigned URLs
12. DIRECT API (No Tables)
Chat Completions (Non-Streaming)
Chat Completions (Streaming)
Embeddings
Model Info
13. ORGANIZATIONS & PROJECTS
Organizations
Usage & Billing Metrics
Monitor usage and costs for your organization:
Window Size Options:
"1h"- Hourly metrics"1d"- Daily metrics"7d"- Weekly metrics
Group By Options:
["org_id"]- Group by organization["project_id"]- Group by project["model"]- Group by model (for LLM metrics)
Projects
14. USERS & AUTHENTICATION
15. SECRETS MANAGEMENT
Manage secrets and API keys at the organization level.
16. TEMPLATES
Templates provide pre-built table configurations that you can use as starting points:
Use Cases:
Explore pre-built table configurations
Learn best practices for table design
Quick-start with proven table structures
Browse example data and prompts
17. CONVERSATIONS
Manage conversations with AI agents (Chat Tables).
18. COMPLETE EXAMPLE
End-to-end example: Create table, add row, read result, cleanup.
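A minimal end-to-end sketch under the same assumptions used throughout this guide (jamai.table namespace, addRow/listRows parameter names, and the deleteTable method name are not spelled out in this reference and may differ in your SDK version):

```typescript
import "dotenv/config";
import JamAI from "jamaibase";

const jamai = new JamAI({
  token: process.env.JAMAI_TOKEN!,
  projectId: process.env.JAMAI_PROJECT_ID!,
});

async function main() {
  // 1. Create an action table with an input column and an LLM column
  await jamai.table.createActionTable({
    id: "quickstart",
    cols: [
      { id: "question", dtype: "str" },
      {
        id: "answer",
        dtype: "str",
        gen_config: {
          model: "openai/gpt-4o-mini",
          prompt: "Answer briefly: ${question}",
        },
      },
    ],
  });

  // 2. Add a row; the LLM column is generated automatically
  await jamai.table.addRow({
    table_type: "action",
    table_id: "quickstart",
    data: [{ question: "What is JamAI Base?" }],
    stream: false,
  });

  // 3. Read the completed row (values are wrapped -> .value)
  const page = await jamai.table.listRows({
    table_type: "action",
    table_id: "quickstart",
  });
  console.log(page.items[0]?.["answer"]?.value);

  // 4. Cleanup (method name is an assumption)
  await jamai.table.deleteTable({ table_type: "action", table_id: "quickstart" });
}

main().catch(console.error);
```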
QUICK COPY-PASTE