# AI21
LiteLLM supports the following AI21 models:
- jamba-1.5-mini
- jamba-1.5-large
- j2-light
- j2-mid
- j2-ultra
Tip: We support ALL AI21 models. Just add the `ai21/` prefix, i.e. `model=ai21/<any-model-on-ai21>`, when sending LiteLLM requests.
See all LiteLLM-supported AI21 models here.
## API Keys

```python
import os

os.environ["AI21_API_KEY"] = "your-api-key"
```
## LiteLLM Python SDK Usage

### Sample Usage
```python
import os
from litellm import completion

# set env variable
os.environ["AI21_API_KEY"] = "your-api-key"

messages = [{"role": "user", "content": "Write me a poem about the blue sky"}]

completion(model="ai21/jamba-1.5-mini", messages=messages)
```
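Streaming is also supported (see the `stream` parameter in the table below). A minimal sketch, reusing the same model and assuming the OpenAI-style streaming chunks that LiteLLM returns:

```python
import os
from litellm import completion

os.environ["AI21_API_KEY"] = "your-api-key"

messages = [{"role": "user", "content": "Write me a poem about the blue sky"}]

# stream=True returns an iterator of OpenAI-style chunks
response = completion(model="ai21/jamba-1.5-mini", messages=messages, stream=True)

for chunk in response:
    # content can be None on some chunks (e.g. the final chunk)
    print(chunk.choices[0].delta.content or "", end="")
```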
## LiteLLM Proxy Server Usage

Here's how to call an AI21 model with the LiteLLM Proxy Server:
- Modify the config.yaml
```yaml
model_list:
  - model_name: my-model
    litellm_params:
      model: ai21/<your-model-name>  # add ai21/ prefix to route as ai21 provider
      api_key: api-key               # your AI21 API key
```
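Instead of hard-coding the key, the `api_key` field can also reference an environment variable, as in the sketch below (assuming `AI21_API_KEY` is set in the proxy's environment):

```yaml
model_list:
  - model_name: my-model
    litellm_params:
      model: ai21/<your-model-name>
      api_key: os.environ/AI21_API_KEY  # read the key from the AI21_API_KEY environment variable
```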
- Start the proxy
```shell
$ litellm --config /path/to/config.yaml
```
- Send a request to the LiteLLM Proxy Server

OpenAI Python v1.0.0+:
```python
import openai
client = openai.OpenAI(
    api_key="sk-1234",             # pass litellm proxy key, if you're using virtual keys
    base_url="http://0.0.0.0:4000" # litellm-proxy-base url
)
response = client.chat.completions.create(
    model="my-model",
    messages = [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ],
)
print(response)
```

curl:

```shell
curl --location 'http://0.0.0.0:4000/chat/completions' \
    --header 'Authorization: Bearer sk-1234' \
    --header 'Content-Type: application/json' \
    --data '{
    "model": "my-model",
    "messages": [
        {
        "role": "user",
        "content": "what llm are you"
        }
    ]
}'
```
## Supported OpenAI Parameters
| param | type | AI21 equivalent | 
|---|---|---|
| tools | Optional[list] | tools | 
| response_format | Optional[dict] | response_format | 
| max_tokens | Optional[int] | max_tokens | 
| temperature | Optional[float] | temperature | 
| top_p | Optional[float] | top_p | 
| stop | Optional[Union[str, list]] | stop | 
| n | Optional[int] | n | 
| stream | Optional[bool] | stream | 
| seed | Optional[int] | seed | 
| tool_choice | Optional[str] | tool_choice | 
| user | Optional[str] | user | 
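These parameters are passed to `litellm.completion` exactly as you would with OpenAI, and LiteLLM maps them to their AI21 equivalents. A minimal sketch using a few of them:

```python
import os
from litellm import completion

os.environ["AI21_API_KEY"] = "your-api-key"

response = completion(
    model="ai21/jamba-1.5-mini",
    messages=[{"role": "user", "content": "Write me a poem about the blue sky"}],
    max_tokens=256,     # mapped to AI21 max_tokens
    temperature=0.7,    # mapped to AI21 temperature
    stop=["\n\n"],      # mapped to AI21 stop
)
print(response.choices[0].message.content)
```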
## Supported AI21 Parameters
| param | type | AI21 equivalent | 
|---|---|---|
| documents | Optional[List[Dict]] | documents | 
## Passing AI21 Specific Parameters - documents

LiteLLM allows you to pass all AI21-specific parameters to `litellm.completion`. Here is an example of passing the `documents` parameter:
LiteLLM Python SDK:
```python
import litellm

# acompletion is async, so call it from within an async function
response = await litellm.acompletion(
    model="jamba-1.5-large",
    messages=[{"role": "user", "content": "what does the document say"}],
    documents = [
        {
            "content": "hello world",
            "metadata": {
                "source": "google",
                "author": "ishaan"
            }
        }
    ]
)
```

LiteLLM Proxy Server:

```python
import openai
client = openai.OpenAI(
    api_key="sk-1234",             # pass litellm proxy key, if you're using virtual keys
    base_url="http://0.0.0.0:4000" # litellm-proxy-base url
)
response = client.chat.completions.create(
    model="my-model",
    messages = [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ],
    extra_body = {
        "documents": [
            {
                "content": "hello world",
                "metadata": {
                    "source": "google",
                    "author": "ishaan"
                }
            }
        ]
    }
)
print(response)
```
## AI21 Models
| Model Name | Function Call | Required OS Variables | 
|---|---|---|
| jamba-1.5-mini | completion('jamba-1.5-mini', messages) | os.environ['AI21_API_KEY'] | 
| jamba-1.5-large | completion('jamba-1.5-large', messages) | os.environ['AI21_API_KEY'] | 
| j2-light | completion('j2-light', messages) | os.environ['AI21_API_KEY'] | 
| j2-mid | completion('j2-mid', messages) | os.environ['AI21_API_KEY'] | 
| j2-ultra | completion('j2-ultra', messages) | os.environ['AI21_API_KEY'] |
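For example, the first row of the table maps to the following call (a minimal sketch; the `ai21/` prefix shown earlier also works for these models):

```python
import os
from litellm import completion

os.environ["AI21_API_KEY"] = "your-api-key"

messages = [{"role": "user", "content": "Write me a poem about the blue sky"}]
response = completion(model="jamba-1.5-mini", messages=messages)
print(response.choices[0].message.content)
```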