# Python SDK Documentation

JamAI Base is a **backend-as-a-service for AI applications**. You define tables with columns that automatically process data through AI pipelines.

### Column Types

| Type       | Purpose                 | Example                                       |
| ---------- | ----------------------- | --------------------------------------------- |
| **Input**  | Your data               | `dtype="str"`, `"file"`, `"image"`, `"audio"` |
| **LLM**    | AI generates content    | `gen_config=t.LLMGenConfig(...)`              |
| **Python** | Custom logic/validation | `gen_config=t.PythonGenConfig(...)`           |

**Data Types**: `str` (text), `file` (generic), `image` (.jpeg/.jpg/.png/.gif/.webp), `audio` (.mp3/.wav)
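If you want to fail fast before uploading, the extension lists above can be checked client-side. This helper is a local convenience, not part of the SDK:

```python
import os

# Extensions per dtype, taken from the table above
ALLOWED_EXTENSIONS = {
    "image": {".jpeg", ".jpg", ".png", ".gif", ".webp"},
    "audio": {".mp3", ".wav"},
}

def check_upload(dtype, filename):
    """Return True if the filename's extension is valid for the column dtype."""
    exts = ALLOWED_EXTENSIONS.get(dtype)
    if exts is None:  # "str" and generic "file" columns accept anything
        return True
    return os.path.splitext(filename.lower())[1] in exts

print(check_upload("image", "photo.PNG"))   # True
print(check_upload("audio", "clip.flac"))   # False
```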

### How It Works

{% stepper %}
{% step %}
Define a table with input + AI columns
{% endstep %}

{% step %}
Add a row with input data
{% endstep %}

{% step %}
AI columns auto-generate based on your prompts
{% endstep %}

{% step %}
Read the completed row with all outputs
{% endstep %}
{% endstepper %}

### Available Models

```python
# List all chat models
chat_models = jamai.model_ids(capabilities=["chat"])
print(chat_models[:5])  # ['openai/gpt-5.2', 'openai/gpt-5.2-mini', 'anthropic/claude-3-5-sonnet', ...]

# List embedding models
embed_models = jamai.model_ids(capabilities=["embed"])
print(embed_models)  # ['ellm/BAAI/bge-m3', ...]

# Get full model info
models = jamai.model_info()
for m in models.data[:3]:
    print(f"{m.id}: context={getattr(m, 'context_length', 'N/A')}")
```

### Row Structure

Every row returned from `list_table_rows` contains:

```python
{
    'ID': 'uuid-string',           # Use for updates/deletes
    'Updated at': '2026-01-07...',  # Timestamp
    'column_name': {'value': 'actual_data'},  # WRAPPED!
    ...
}
```

{% hint style="warning" %}
**Value Wrapping Context:**

* **SDK reads** (`list_table_rows`, `get_table_row`): Values ARE wrapped → use `row['col']['value']`
* **Python Columns** (`row` dict inside `PythonGenConfig`): Values are NOT wrapped → use `row['col']` directly
{% endhint %}


***

## QUICK REFERENCE

```python
# INSTALL (run in a shell, not Python):
#   pip install jamaibase python-dotenv

# INITIALIZE
from jamaibase import JamAI
import jamaibase.types as t
jamai = JamAI(token="YOUR_PAT", project_id="YOUR_PROJECT_ID")

# CRITICAL: Values are wrapped - ALWAYS extract
def get_value(field):
    if isinstance(field, dict) and 'value' in field:
        return field['value']
    return field

# MAX LIMIT IS 100 - use pagination for more
```

***

## 1. SETUP

### Install

Requires Python >= 3.12.

```bash
pip install jamaibase python-dotenv
```

### Get Credentials

{% stepper %}
{% step %}
Sign up: <https://cloud.jamaibase.com/>
{% endstep %}

{% step %}
Create project
{% endstep %}

{% step %}
Get a PAT: click your user name in the top-right corner > [⚙ Account Settings](https://cloud.jamaibase.com/settings/account) > Create a Personal Access Token
{% endstep %}

{% step %}
Get Project ID: from project URL
{% endstep %}
{% endstepper %}

### Initialize Client

```python
from jamaibase import JamAI
import jamaibase.types as t
from dotenv import load_dotenv
import os

load_dotenv()

jamai = JamAI(
    token=os.getenv('JAMAI_TOKEN'),
    project_id=os.getenv('JAMAI_PROJECT_ID')
)

# Or auto-load from env vars
jamai = JamAI()

# Async version available
from jamaibase import JamAIAsync
```

### .env File

```env
# Default env vars (auto-loaded by SDK)
JAMAI_TOKEN=your_PAT
JAMAI_PROJECT_ID=your_project_id

# Optional
JAMAI_API_BASE=https://api.jamaibase.com/api
JAMAI_TIMEOUT_SEC=300
JAMAI_FILE_UPLOAD_TIMEOUT_SEC=900
```

**Note**: When using `JamAI()` without arguments, it auto-loads from environment variables.

***

## 2. TABLE TYPES

| Type        | Use Case                       | Create Method              |
| ----------- | ------------------------------ | -------------------------- |
| `action`    | AI chains, document processing | `create_action_table()`    |
| `knowledge` | RAG, embeddings, vector search | `create_knowledge_table()` |
| `chat`      | Conversational AI with context | `create_chat_table()`      |

***

## 3. ACTION TABLES (Most Common)

### Create

```python
table = jamai.table.create_action_table(
    t.ActionTableSchemaCreate(
        id="my_table",
        cols=[
            # Input column
            t.ColumnSchemaCreate(id="input", dtype="str"),

            # File input
            t.ColumnSchemaCreate(id="image", dtype="file"),

            # LLM output column
            t.ColumnSchemaCreate(
                id="output",
                dtype="str",
                gen_config=t.LLMGenConfig(
                    model="openai/gpt-5.2",
                    system_prompt="You are helpful.",
                    prompt="Process: ${input}\nImage: ${image}",
                    temperature=0.7,
                    max_tokens=500
                )
            ),

            # Python computed column
            t.ColumnSchemaCreate(
                id="word_count",
                dtype="str",
                gen_config=t.PythonGenConfig(
                    python_code='row["word_count"] = str(len(row["output"].split()))'
                )
            )
        ]
    )
)
```

### Column Reference Syntax

Use `${column_name}` in prompts to reference other columns. At runtime, each reference is replaced with the corresponding cell value from the current row.

```python
# Example prompt
prompt="Translate \"${input}\" into Italian:"

# If input column contains "Good morning", actual prompt sent to LLM:
# "Translate \"Good morning\" into Italian:"
```
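As a rough illustration only (this is not the SDK's implementation, which runs server-side), the substitution behaves like a simple template render:

```python
import re

def render_prompt(template, row):
    """Sketch of ${column} substitution: replace each reference with the cell value."""
    # Column names may contain spaces, so match anything up to the closing brace
    return re.sub(r"\$\{([^}]+)\}", lambda m: str(row.get(m.group(1), "")), template)

print(render_prompt('Translate "${input}" into Italian:', {"input": "Good morning"}))
# Translate "Good morning" into Italian:
```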

### How LLM Columns Work

When a row is added or regenerated, an LLM column will:

1. **Gather prompts** - System Prompt and Prompt (which can reference upstream columns)
2. **Optional RAG** - Augment prompt with references from a Knowledge Table
3. **Send to LLM** - With your chosen generation settings (model, temperature, max\_tokens)
4. **Write response** - Model's response becomes the cell value

### LLM Generation Settings

| Parameter       | Description                                                             |
| --------------- | ----------------------------------------------------------------------- |
| `model`         | LLM model to use (e.g., `openai/gpt-5.2`)                               |
| `system_prompt` | Passed as-is as system message. Define role, style, global instructions |
| `prompt`        | Main user message with `${column}` references                           |
| `temperature`   | Controls randomness (0.0-2.0)                                           |
| `max_tokens`    | Maximum output length                                                   |

### RAG (Retrieval Augmented Generation)

Link an LLM column to a Knowledge Table for grounded responses. See [Section 9](#9-knowledge-tables-rag) for full Knowledge Table setup.

**RAG Flow:**

1. **Formulate query** → LLM generates retrieval query from your Prompt
2. **Retrieve** → Fetch relevant rows from Knowledge Table
3. **Rerank** → Optional reranking model (RRF Ranker by default)
4. **Inject** → Top-k references added to prompt
5. **Cite** → Optional inline citations: `[@ref0; @ref1; @ref2]`

```python
t.LLMGenConfig(
    model="openai/gpt-5.2",
    prompt="${question}",
    rag_params=t.RAGParams(
        table_id="my_knowledge_table",  # Must exist (see Section 9)
        k=3,  # Number of references to inject
        # reranking_model="...",  # Optional
    ),
    max_tokens=500
)
```
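If you enable inline citations and want to post-process the output, the `[@ref0; @ref1]` markers shown in the RAG flow above can be split out with a regex. This is a client-side sketch, not an SDK feature:

```python
import re

def extract_citations(text):
    """Split a generated answer into clean text and its [@refN] citation markers."""
    refs = re.findall(r"@(ref\d+)", text)
    clean = re.sub(r"\s*\[@ref\d+(?:;\s*@ref\d+)*\]", "", text).strip()
    return clean, refs

print(extract_citations("Paris is the capital of France. [@ref0; @ref2]"))
# ('Paris is the capital of France.', ['ref0', 'ref2'])
```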

### Multi-turn Chat in Action Tables

Enable multi-turn chat to use previous rows as conversation history:

```python
t.ColumnSchemaCreate(
    id="response",
    dtype="str",
    gen_config=t.LLMGenConfig(
        model="openai/gpt-5.2",
        system_prompt="You are helpful.",
        prompt="${query}",
        multi_turn=True,  # Enable conversation history
        max_tokens=500
    )
)
```

With multi-turn enabled, each generation sees all previous rows as context.
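Conceptually (this is a sketch, not the SDK's internals), prior rows are presented to the model as alternating user/assistant turns before the current prompt:

```python
# Conceptual sketch only: with multi_turn=True, each previous row contributes
# a user message (the query column) and an assistant message (the response column).
def build_history(previous_rows, query_col, response_col, current_query):
    messages = []
    for r in previous_rows:
        messages.append({"role": "user", "content": r[query_col]})
        messages.append({"role": "assistant", "content": r[response_col]})
    messages.append({"role": "user", "content": current_query})
    return messages

history = build_history(
    [{"query": "Hi, I'm Alice", "response": "Hello Alice!"}],
    "query", "response", "What's my name?",
)
print(len(history))  # 3
```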

### Prompting Tips

Separate column references using XML tags or Markdown headings:

```python
# XML tags (recommended)
prompt="""
<user-query>
${input}
</user-query>

Translate user query into Italian.
"""

# Markdown headings
prompt="""
# User Query
${input}

# Instruction
Translate user query into Italian.
"""
```

***

## 4. ADD ROWS

### Non-Streaming (Wait for Complete Response)

```python
response = jamai.table.add_table_rows(
    'action',
    t.MultiRowAddRequest(
        table_id="my_table",
        data=[
            {"input": "Hello world"},
            {"input": "Goodbye world"}
        ],
        stream=False
    )
)

# Get LLM output
print(response.rows[0].columns["output"].text)
```

### Streaming (Real-time Output)

```python
completion = jamai.table.add_table_rows(
    'action',
    t.MultiRowAddRequest(
        table_id="my_table",
        data=[{"input": "Hello"}],
        stream=True
    )
)

for chunk in completion:
    if chunk.output_column_name == "output":
        print(chunk.text, end="", flush=True)
```

### With File Upload

```python
# Upload file first
file_response = jamai.file.upload_file("/path/to/image.png")

# Use URI in row
jamai.table.add_table_rows(
    'action',
    t.MultiRowAddRequest(
        table_id="my_table",
        data=[{"image": file_response.uri, "input": "Describe this"}],
        stream=False
    )
)
```

### Get Row ID After Adding (Non-Streaming)

```python
response = jamai.table.add_table_rows(
    'action',
    t.MultiRowAddRequest(
        table_id="my_table",
        data=[{"input": "Hello"}],
        stream=False
    )
)

# Get the row ID for later updates/deletes
row_id = response.rows[0].row_id
print(f"Created row: {row_id}")

# Get LLM output
output = response.rows[0].columns["output"].text
```

### Get Row ID After Adding (Streaming)

```python
completion = jamai.table.add_table_rows(
    'action',
    t.MultiRowAddRequest(
        table_id="my_table",
        data=[{"input": "Hello"}],
        stream=True
    )
)

row_id = None
for chunk in completion:
    if hasattr(chunk, 'row_id') and chunk.row_id:
        row_id = chunk.row_id
    if chunk.output_column_name == "output" and hasattr(chunk, 'text'):
        print(chunk.text, end="", flush=True)

print(f"\nRow ID: {row_id}")
```

***

## 5. READ ROWS

### Basic List

```python
rows = jamai.table.list_table_rows(
    'action',
    'my_table',
    offset=0,
    limit=100  # MAX IS 100!
)

for row in rows.items:
    # IMPORTANT: Extract value from wrapper
    value = row['input']['value']  # or use get_value()
```

### With WHERE Filter

```python
# Syntax: "column" (double quotes) = 'value' (single quotes)
rows = jamai.table.list_table_rows(
    'action', 'my_table',
    where='"status" = \'active\''
)

# LIKE pattern
where='"name" LIKE \'%Smith%\''

# AND conditions
where='"status" = \'active\' AND "type" = \'premium\''
```
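Building these strings by hand is error-prone, so a small helper can enforce the quoting convention. The `''` escaping for embedded single quotes is an assumption based on standard SQL behavior; verify it against your server before relying on it:

```python
def build_where(conditions):
    """Build a WHERE clause: double-quote column names, single-quote values.

    Assumes SQL-style escaping of embedded single quotes ('' for ').
    """
    def esc(value):
        return str(value).replace("'", "''")
    return " AND ".join(f'"{col}" = \'{esc(val)}\'' for col, val in conditions.items())

print(build_where({"status": "active", "type": "premium"}))
# "status" = 'active' AND "type" = 'premium'
```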

### Select Specific Columns

```python
rows = jamai.table.list_table_rows(
    'action', 'my_table',
    columns=['input', 'output']  # ID, Updated at always included
)
```

### Full-Text Search

```python
rows = jamai.table.list_table_rows(
    'action', 'my_table',
    search_query="machine learning"
)
```

### Pagination (REQUIRED for >100 rows)

```python
def get_all_rows(table_id, table_type='action'):
    all_rows = []
    offset = 0
    while True:
        response = jamai.table.list_table_rows(table_type, table_id, offset=offset, limit=100)
        all_rows.extend(response.items)
        if len(response.items) < 100:
            break
        offset += 100
    return all_rows
```

### Get Single Row

```python
row = jamai.table.get_table_row('action', 'my_table', 'row-uuid')
```

***

## 6. UPDATE ROWS

```python
jamai.table.update_table_rows(
    'action',
    t.MultiRowUpdateRequest(
        table_id="my_table",
        data={
            "row-uuid-1": {"column1": "new_value"},
            "row-uuid-2": {"column1": "value", "column2": "value2"}
        }
    )
)
```

***

## 7. DELETE ROWS

```python
jamai.table.delete_table_rows(
    'action',
    t.MultiRowDeleteRequest(
        table_id="my_table",
        row_ids=["uuid1", "uuid2"]
    )
)
```

***

## 8. TABLE OPERATIONS

### List Tables

```python
tables = jamai.table.list_tables('action', count_rows=True)
for tbl in tables.items:  # avoid shadowing the `t` types alias
    print(f"{tbl.id}: {tbl.num_rows} rows")
```

### Get Schema

```python
table = jamai.table.get_table('action', 'my_table')
for col in table.cols:
    print(f"{col.id}: {col.dtype}")
```

### Delete Table

```python
jamai.table.delete_table('action', 'my_table')
```

### Duplicate Table

```python
# With data
new_table = jamai.table.duplicate_table('action', 'source', 'copy')

# Schema only
new_table = jamai.table.duplicate_table('action', 'source', 'copy', include_data=False)
```

### Check If Table Exists

```python
def table_exists(jamai, table_type, table_id):
    """Check if a table exists before creating/using it."""
    try:
        jamai.table.get_table(table_type, table_id)
        return True
    except Exception:
        return False

# Usage
if not table_exists(jamai, 'action', 'my_table'):
    jamai.table.create_action_table(...)
```

### Safe Table Creation (Delete if Exists)

```python
def ensure_table(jamai, table_type, table_id, create_func):
    """Delete existing table and create fresh."""
    try:
        jamai.table.delete_table(table_type, table_id)
    except Exception:
        pass  # Table didn't exist
    return create_func()

# Usage
table = ensure_table(jamai, 'action', 'my_table', lambda:
    jamai.table.create_action_table(t.ActionTableSchemaCreate(...))
)
```

***

## 9. KNOWLEDGE TABLES (RAG)

### Create

```python
table = jamai.table.create_knowledge_table(
    t.KnowledgeTableSchemaCreate(
        id="my_kb",
        cols=[],  # Title, Text auto-created
        embedding_model="ellm/BAAI/bge-m3"
    )
)
```

### Add Data

```python
jamai.table.add_table_rows(
    'knowledge',
    t.MultiRowAddRequest(
        table_id="my_kb",
        data=[{"Title": "Doc1", "Text": "Content here..."}],
        stream=False
    )
)
```

### Embed File

```python
response = jamai.table.embed_file("/path/to/doc.txt", "my_kb")
```

### Create RAG Action Table

```python
table = jamai.table.create_action_table(
    t.ActionTableSchemaCreate(
        id="rag_qa",
        cols=[
            t.ColumnSchemaCreate(id="question", dtype="str"),
            t.ColumnSchemaCreate(
                id="answer",
                dtype="str",
                gen_config=t.LLMGenConfig(
                    model="openai/gpt-5.2",
                    prompt="${question}",
                    rag_params=t.RAGParams(
                        table_id="my_kb",  # Link to Knowledge Table
                        k=3                 # Top k chunks
                    ),
                    max_tokens=200
                )
            )
        ]
    )
)
```

***

## 10. CHAT TABLES

### Create

```python
table = jamai.table.create_chat_table(
    t.ChatTableSchemaCreate(
        id="my_chatbot",
        cols=[
            t.ColumnSchemaCreate(id="User", dtype="str"),
            t.ColumnSchemaCreate(
                id="AI",
                dtype="str",
                gen_config=t.LLMGenConfig(
                    model="openai/gpt-5.2",
                    system_prompt="You are helpful.",
                    max_tokens=500
                )
            )
        ]
    )
)
```

### Chat (Streaming)

```python
completion = jamai.table.add_table_rows(
    'chat',
    t.MultiRowAddRequest(
        table_id="my_chatbot",
        data=[{"User": "Hello!"}],
        stream=True
    )
)

for chunk in completion:
    if chunk.output_column_name == "AI":
        print(chunk.text, end="", flush=True)
```

### How Chat History Works

Chat tables **automatically maintain conversation history**. Each row added becomes part of the context for subsequent rows.

```python
# Turn 1
jamai.table.add_table_rows('chat', t.MultiRowAddRequest(
    table_id="my_chatbot",
    data=[{"User": "My name is Alice"}],
    stream=False
))

# Turn 2 - AI remembers the name from Turn 1
jamai.table.add_table_rows('chat', t.MultiRowAddRequest(
    table_id="my_chatbot",
    data=[{"User": "What's my name?"}],
    stream=False
))

# AI will respond: "Your name is Alice"

# View conversation history (get_value defined in Quick Reference)
rows = jamai.table.list_table_rows('chat', 'my_chatbot', limit=100)
for row in rows.items:
    print(f"User: {get_value(row.get('User'))}")
    print(f"AI: {get_value(row.get('AI'))}\n")
```

**Note**: Each chat table is a separate conversation. Create multiple tables for multiple users/sessions.
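One per-session pattern is to duplicate a prebuilt template table for each user. The `chat_template` table and the allowed table-id characters here are assumptions; only the id-sanitizing helper is pure logic:

```python
import re

def session_table_id(session_id):
    """Derive a per-session chat table name (assumes table ids allow [A-Za-z0-9_-])."""
    safe = re.sub(r"[^A-Za-z0-9_-]", "_", str(session_id))
    return f"chat_{safe}"

def ensure_session_table(jamai, session_id, template_id="chat_template"):
    """Create the session's chat table from a (hypothetical) template if missing."""
    table_id = session_table_id(session_id)
    try:
        jamai.table.get_table('chat', table_id)
    except Exception:
        # Copy the template's schema only; each session starts with no history
        jamai.table.duplicate_table('chat', template_id, table_id, include_data=False)
    return table_id
```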

***

## 11. FILE OPERATIONS

### Upload

```python
file_response = jamai.file.upload_file("/path/to/file.png")
s3_uri = file_response.uri  # s3://devcloud-file/...
```

### Get Presigned URL (for display)

```python
import requests

def get_presigned_url(s3_url, jamai):
    if not s3_url or not s3_url.startswith('s3://'):
        return None
    response = requests.post(
        "https://api.jamaibase.com/api/v2/files/url/raw",
        headers={'Authorization': f'Bearer {jamai.token}', 'Content-Type': 'application/json'},
        json={'uris': [s3_url]}
    )
    if response.status_code == 200:
        return response.json().get('urls', [None])[0]
    return None
```

***

## 12. PYTHON COLUMNS

### Basic Concept

The Python Column lets you generate or transform cell values using custom Python code. All upstream columns (columns to the left) are passed as a dictionary named `row`.

* Keys in `row` are column names (strings, case-sensitive)
* Values are the corresponding cell values for that row
* Assign result to `row["Python Column Name"]` to set the output

### Syntax

```python
t.PythonGenConfig(
    python_code="""
try:
    # Read from upstream columns
    value_a = row["Input Column A"]
    value_b = row["Input Column B"]

    # Do some processing
    result = f"{value_a} - processed with {value_b}"

    # Write to this column
    row["Python Column Name"] = result
except Exception as e:
    row["Python Column Name"] = f"ERROR: {str(e)}"
"""
)
```

### Preinstalled Libraries

The following libraries are available:

| Library          | Use Case               |
| ---------------- | ---------------------- |
| `aiohttp`        | Async HTTP client      |
| `audioop-lts`    | Audio operations       |
| `beautifulsoup4` | HTML/XML parsing       |
| `httpx`          | HTTP requests          |
| `matplotlib`     | Plotting/visualization |
| `numpy`          | Numerical computing    |
| `opencv-python`  | Computer vision        |
| `orjson`         | Fast JSON parsing      |
| `pandas`         | Data manipulation      |
| `Pillow`         | Image processing       |
| `pyyaml`         | YAML parsing           |
| `regex`          | Advanced regex         |
| `requests`       | HTTP requests          |
| `ruamel.yaml`    | YAML parsing           |
| `scikit-image`   | Image processing       |
| `simplejson`     | JSON parsing           |
| `soundfile`      | Audio file I/O         |
| `sympy`          | Symbolic math          |
| `tiktoken`       | Token counting         |

### Column Data Types

| dtype   | Description                                 |
| ------- | ------------------------------------------- |
| `str`   | Text output                                 |
| `file`  | Generic file                                |
| `image` | Image file (.jpeg, .jpg, .png, .gif, .webp) |
| `audio` | Audio file (.mp3, .wav)                     |

### Working with Images

When an upstream column contains an image, its value in `row` is raw binary data (bytes).

```python
t.ColumnSchemaCreate(
    id="processed_image",
    dtype="image",  # Output type is image
    gen_config=t.PythonGenConfig(
        python_code="""
from PIL import Image
import io

try:
    # 1. Access the input image bytes
    image_bytes = row["Input Image Column"]

    # 2. Open as PIL Image
    with Image.open(io.BytesIO(image_bytes)) as img:
        # 3. Process (example: convert to grayscale)
        img = img.convert("L")

        # 4. Save to bytes buffer
        output_buffer = io.BytesIO()
        img.save(output_buffer, format="PNG")

        # 5. Assign bytes to column
        row["processed_image"] = output_buffer.getvalue()
except Exception as e:
    row["processed_image"] = None
"""
    )
)
```

### Working with Audio

When an upstream column contains audio, its value in `row` is also raw binary data (bytes).

```python
t.ColumnSchemaCreate(
    id="processed_audio",
    dtype="audio",  # Output type is audio
    gen_config=t.PythonGenConfig(
        python_code="""
import soundfile as sf
import io

try:
    # 1. Read input audio bytes
    with io.BytesIO(row["Input Audio Column"]) as input_buffer:
        data, samplerate = sf.read(input_buffer)

    # 2. Process (example: reduce volume by half)
    data = data * 0.5

    # 3. Write to buffer
    output_buffer = io.BytesIO()
    sf.write(output_buffer, data, samplerate, format="WAV", subtype="PCM_16")

    # 4. Assign bytes to column
    row["processed_audio"] = output_buffer.getvalue()
except Exception as e:
    row["processed_audio"] = None
"""
    )
)
```

### Making Web Requests

Use `httpx` to fetch data from the web:

```python
t.PythonGenConfig(
    python_code="""
import httpx
from bs4 import BeautifulSoup

try:
    # 1. Access the URL
    url = row["url_column"]

    # 2. Fetch HTML content
    response = httpx.get(url)

    # 3. Parse with BeautifulSoup
    soup = BeautifulSoup(response.text, "html.parser")

    # 4. Extract data (example: first h1 tag)
    extracted_text = soup.find("h1").text

    # 5. Assign to column
    row["extracted_title"] = extracted_text
except Exception as e:
    row["extracted_title"] = f"ERROR: {str(e)}"
"""
)
```

### Example: Name Matching

```python
t.PythonGenConfig(
    python_code="""
from difflib import SequenceMatcher
try:
    a = row["declared"].upper()
    b = row["extracted"].upper()
    ratio = SequenceMatcher(None, a, b).ratio()
    row["match"] = f"{'MATCH' if ratio > 0.85 else 'MISMATCH'} ({ratio*100:.0f}%)"
except Exception as e:
    row["match"] = f"ERROR: {str(e)}"
"""
)
```

***

## 13. DIRECT API (No Tables)

### Chat Completions

```python
request = t.ChatRequest(
    model="openai/gpt-5.2",
    messages=[
        t.ChatEntry.system("You are helpful."),
        t.ChatEntry.user("Hello"),
    ],
    max_tokens=100,
    stream=False
)
completion = jamai.generate_chat_completions(request)
print(completion.text)
```

### Embeddings

```python
embeddings = jamai.generate_embeddings(
    t.EmbeddingRequest(
        model="ellm/BAAI/bge-m3",
        input=["Hello world"]
    )
)
print(len(embeddings.data[0].embedding))  # 1024
```

### Model Info

```python
# All models
models = jamai.model_info()

# By capability
chat_models = jamai.model_ids(capabilities=["chat"])
embed_models = jamai.model_ids(capabilities=["embed"])
```

***

## 14. VALUE EXTRACTION (CRITICAL)

Row values are WRAPPED. Always extract:

```python
# WRONG
name = row['name']  # Returns {'value': 'Alice'}

# CORRECT
name = row['name']['value']  # Returns 'Alice'

# HELPER FUNCTION (recommended)
def get_value(field):
    if isinstance(field, dict) and 'value' in field:
        return field['value']
    return field

name = get_value(row.get('name'))
```

### JSON with Confidence Scores

```python
icr_string = get_value(row.get('ocr_result'))
icr_data = json.loads(icr_string)

# Structure: {"field": {"value": "...", "confidence": 95}}
name = icr_data.get('name', {}).get('value')
confidence = icr_data.get('name', {}).get('confidence', 0)
```
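Given that structure, you can flag fields the model was unsure about. This helper assumes the `{"field": {"value": ..., "confidence": ...}}` shape shown above:

```python
import json

def low_confidence_fields(icr_string, threshold=80):
    """List field names whose confidence falls below the threshold."""
    data = json.loads(icr_string)
    return [name for name, field in data.items()
            if isinstance(field, dict) and field.get("confidence", 0) < threshold]

sample = '{"name": {"value": "Alice", "confidence": 95}, "dob": {"value": "?", "confidence": 40}}'
print(low_confidence_fields(sample))  # ['dob']
```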

***

## 15. ERROR HANDLING

### Common Errors Table

| Error              | Solution                                   |
| ------------------ | ------------------------------------------ |
| `limit > 100`      | Use pagination, max is 100                 |
| `Table not found`  | Check table name, create if needed         |
| `Value is dict`    | Use `get_value()` helper                   |
| `JSONDecodeError`  | Wrap in try-except                         |
| `LLM empty`        | Wait 15-30s or use streaming               |
| `WHERE syntax`     | Use `"col" = 'val'` (double/single quotes) |
| `Model overloaded` | Retry with exponential backoff             |

### Error Handling Pattern

```python
import time
from jamaibase import JamAI
from jamaibase.exceptions import JamAIError  # Base exception class

def safe_add_row(jamai, table_id, data, max_retries=3):
    """Add row with retry logic for transient failures."""
    for attempt in range(max_retries):
        try:
            response = jamai.table.add_table_rows(
                'action',
                t.MultiRowAddRequest(
                    table_id=table_id,
                    data=[data],
                    stream=False
                )
            )
            return response.rows[0]
        except Exception as e:
            error_msg = str(e).lower()
            if 'overloaded' in error_msg or 'rate' in error_msg:
                wait_time = (2 ** attempt) * 5  # 5s, 10s, 20s
                print(f"Retry {attempt+1}/{max_retries} in {wait_time}s...")
                time.sleep(wait_time)
            else:
                raise  # Re-raise non-retryable errors
    raise Exception(f"Failed after {max_retries} retries")
```

### Handle Empty LLM Response

```python
```python
import time

def get_llm_output(jamai, response, table_type, table_id, column_name, wait=30):
    """Get LLM output, re-fetching the row if generation is still in progress."""
    output = response.rows[0].columns.get(column_name)
    if output and output.text:
        return output.text

    # If empty, wait and re-fetch the row by ID
    time.sleep(wait)
    row_id = response.rows[0].row_id
    row = jamai.table.get_table_row(table_type, table_id, row_id)
    return get_value(row.get(column_name))
```

### Streaming Error Handling

```python
def safe_stream(completion):
    """Handle streaming with error recovery."""
    try:
        for chunk in completion:
            if hasattr(chunk, 'text'):
                yield chunk.text
    except Exception as e:
        print(f"Stream error: {e}")
        yield "[Stream interrupted]"
```

***

## 16. ASYNC OPERATIONS

### Async Client

```python
import asyncio
from jamaibase import JamAIAsync
import jamaibase.types as t

async def main():
    jamai = JamAIAsync(token="...", project_id="...")

    # Async list tables
    tables = await jamai.table.list_tables('action')

    # Async add row
    response = await jamai.table.add_table_rows(
        'action',
        t.MultiRowAddRequest(
            table_id="my_table",
            data=[{"input": "Hello"}],
            stream=False
        )
    )
    print(response.rows[0].columns["output"].text)

asyncio.run(main())
```

### Parallel Processing with Async

```python
async def process_batch(jamai, table_id, items):
    """Process multiple items in parallel."""
    tasks = []
    for item in items:
        task = jamai.table.add_table_rows(
            'action',
            t.MultiRowAddRequest(
                table_id=table_id,
                data=[item],
                stream=False
            )
        )
        tasks.append(task)

    results = await asyncio.gather(*tasks, return_exceptions=True)
    return results
```

***

## 17. REQUEST TYPES

### Current vs Deprecated Methods

| Current (Use This)        | Deprecated (Avoid)    |
| ------------------------- | --------------------- |
| `t.MultiRowAddRequest`    | `t.RowAddRequest`     |
| `t.MultiRowUpdateRequest` | `t.RowUpdateRequest`  |
| `t.MultiRowDeleteRequest` | `t.RowDeleteRequest`  |
| `jamai.model_ids()`       | `jamai.model_names()` |

{% hint style="warning" %}
Deprecated methods still work but will show warnings. Update your code to use the current methods.
{% endhint %}

***

## 18. COMPLETE EXAMPLE

End-to-end example: Create table, add row, read result, cleanup.

```python
from jamaibase import JamAI
import jamaibase.types as t
from dotenv import load_dotenv

load_dotenv()

# Initialize (auto-loads JAMAI_TOKEN, JAMAI_PROJECT_ID from env)
jamai = JamAI()

# Helper function (use throughout your code)
def get_value(field):
    if isinstance(field, dict) and 'value' in field:
        return field['value']
    return field

# 1. Create table with LLM column
TABLE_ID = "qa_demo"

# Delete if exists
try:
    jamai.table.delete_table('action', TABLE_ID)
except Exception:
    pass

jamai.table.create_action_table(
    t.ActionTableSchemaCreate(
        id=TABLE_ID,
        cols=[
            t.ColumnSchemaCreate(id="question", dtype="str"),
            t.ColumnSchemaCreate(
                id="answer",
                dtype="str",
                gen_config=t.LLMGenConfig(
                    model="openai/gpt-5.2",
                    system_prompt="You are a helpful assistant. Be concise.",
                    prompt="Question: ${question}",
                    max_tokens=100
                )
            )
        ]
    )
)
print(f"Created table: {TABLE_ID}")

# 2. Add row (non-streaming)
response = jamai.table.add_table_rows(
    'action',
    t.MultiRowAddRequest(
        table_id=TABLE_ID,
        data=[{"question": "What is Python?"}],
        stream=False
    )
)
row_id = response.rows[0].row_id
answer = response.rows[0].columns["answer"].text
print(f"Row ID: {row_id}")
print(f"Answer: {answer}")

# 3. Read rows
rows = jamai.table.list_table_rows('action', TABLE_ID, limit=100)
for row in rows.items:
    q = get_value(row.get('question'))
    a = get_value(row.get('answer'))
    print(f"Q: {q}\nA: {a}\n")

# 4. Cleanup
jamai.table.delete_table('action', TABLE_ID)
print("Cleanup complete")
```

***

## QUICK COPY-PASTE

```python
# === IMPORTS ===
from jamaibase import JamAI
import jamaibase.types as t

# === INIT ===
jamai = JamAI()  # Auto-loads from JAMAI_TOKEN, JAMAI_PROJECT_ID env vars

# === HELPER (ALWAYS USE) ===
def get_value(field):
    if isinstance(field, dict) and 'value' in field:
        return field['value']
    return field

# === ADD ROWS ===
response = jamai.table.add_table_rows(
    'action',
    t.MultiRowAddRequest(table_id="TABLE", data=[{"col": "val"}], stream=False)
)
row_id = response.rows[0].row_id
output = response.rows[0].columns["output_col"].text

# === LIST ROWS (MAX 100!) ===
rows = jamai.table.list_table_rows('action', 'TABLE', limit=100)
for row in rows.items:
    value = get_value(row.get('column'))

# === WITH WHERE FILTER ===
rows = jamai.table.list_table_rows('action', 'TABLE', where='"status" = \'active\'')

# === UPDATE ROWS ===
jamai.table.update_table_rows('action', t.MultiRowUpdateRequest(
    table_id="TABLE", data={"row-id": {"col": "new_val"}}
))

# === DELETE ROWS ===
jamai.table.delete_table_rows('action', t.MultiRowDeleteRequest(
    table_id="TABLE", row_ids=["row-id-1", "row-id-2"]
))

# === FILE UPLOAD ===
file_resp = jamai.file.upload_file("/path/to/file.png")
uri = file_resp.uri  # Use in row data

# === STREAMING ===
completion = jamai.table.add_table_rows(
    'action',
    t.MultiRowAddRequest(table_id="TABLE", data=[{"col": "val"}], stream=True)
)
for chunk in completion:
    if chunk.output_column_name == "output_col":
        print(chunk.text, end="", flush=True)
```

***

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.jamaibase.com/developer-reference/python-sdk-documentation.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
