
# External Models

Connect commercial and custom models to Agent Platform.

***

## Overview

External models are AI models hosted outside the platform. Once connected, they can be used across Agent Platform in Agentic Apps, Prompt Studio, Tools, and Evaluation Studio.

**Supported Providers (Easy Integration)**:

| Provider          | Authentication              | Tool Calling |
| ----------------- | --------------------------- | ------------ |
| OpenAI            | API Key                     | ✓            |
| Anthropic         | API Key                     | ✓            |
| Google            | API Key                     | ✓            |
| Cohere            | API Key                     | ✓            |
| Azure OpenAI      | API Key + Endpoint          | ✓            |
| Amazon Bedrock    | IAM Role ARN                | ✓            |
| Vertex AI         | API Key                     | ✓            |
| Microsoft Foundry | API Key / Service Principal | ✓            |

**Custom Models (API Integration)**: Connect any model via REST API endpoint.

For the complete list of supported models, see [Supported Models](/agent-platform/models/supported-models).

## Manage Connected Models

### View Models

* Go to **Models** → **External Models** to see all connected models.

### Manage Connections

Each model can have multiple connections with different API keys, enabling separate usage tracking and billing.

| Action           | Description                                       |
| ---------------- | ------------------------------------------------- |
| Inference Toggle | Enable or disable model availability across the platform |
| Edit             | Update API key or credentials                     |
| Delete           | Remove the connection                             |

When adding multiple API keys for the same model, each connection must have a unique name and API key. In Agentic Apps, you can assign specific connections at the Agent or Supervisor level.

## Add a Model via Easy Integration

Use Easy Integration for commercial providers with API keys or IAM roles.

### Standard Providers (OpenAI, Anthropic, Google, Cohere)

1. Go to **Models** → **External Models** → **Add a model**.
2. Select **Easy Integration** → click **Next**.
3. Choose your provider → click **Next**.
4. Select a model from the supported list.
5. Enter a **Connection name** and your **API key**.
6. Click **Confirm**.

The model is now available across Agent Platform.

### Amazon Bedrock

Bedrock uses IAM role-based authentication instead of API keys.

**Prerequisites**: Create an IAM role in AWS with Bedrock permissions and a trust policy allowing Agent Platform to assume the role. See [Configuring Amazon Bedrock](/agent-platform/models/configuring-aws) for IAM setup.
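The trust policy portion of that setup can be sketched as follows; the AWS principal ARN below is a placeholder, so substitute the Trusted Principal ARN that the connection form pre-populates:

```python
import json

# Sketch of a trust policy that lets Agent Platform assume your Bedrock role.
# The principal ARN below is a placeholder, not the platform's real principal.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/platform-principal"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

Attach this policy as the role's trust relationship, and grant the role itself `bedrock:InvokeModel` permission on the models you plan to connect.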

**Steps**:

1. Go to **Models** → **External Models** → **Add a model**.
2. Select **Easy Integration** → **AWS Bedrock** → **Next**.
3. Configure credentials and model details:

| Field                 | Description                              |
| --------------------- | ---------------------------------------- |
| IAM Role ARN          | Your IAM role with Bedrock permissions   |
| Trusted Principal ARN | Platform's AWS principal (pre-populated) |
| Model Name            | Internal identifier                      |
| Model ID              | Bedrock Model ID or Endpoint ID          |
| Region                | AWS region of the model                  |
| Headers               | Optional custom headers                  |

4. Configure model settings using [Default](#default-mode) or [Existing Provider Structures](#existing-provider-structures-mode).
5. Click **Confirm**.

### Vertex AI

Vertex AI uses API key authentication to access Gemini models (2.5 and 3.0 families) from your Google Cloud account.

**Prerequisites**: Create an API key in your Google Cloud account with Vertex AI API access.

* **New users**: Use the [express mode setup](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/start/express-mode/overview) to generate an API key automatically, then manage keys under **APIs & Services > Credentials**.
* **Existing users**: Enable the Vertex AI API, create a service account (`vertex-ai-runner`) with the **Vertex AI Platform Express User** role, create an API key linked to that service account under **APIs & Services > Credentials**, and store the key securely.

**Steps**:

1. Go to **Models** → **External Models** → **Add a model**.
2. Select **Easy Integration** → **Vertex AI** → **Next**.
3. Choose a configuration method:

**Option A: Manual Setup**

| Field           | Description                                              |
| --------------- | -------------------------------------------------------- |
| Model           | Select a Gemini model from the dropdown                  |
| Connection name | Internal identifier for this connection                  |
| API key         | Your Google Vertex AI API key                            |
| Project ID      | (Optional) Your Google Cloud project identifier          |
| Region          | (Optional) Google Cloud region where models are deployed |

Click **Confirm** to save.

**Option B: Import from cURL**

Enter a **Connection name**, paste a cURL command in the text area, click **Fetch** to extract the configuration, then click **Confirm**.

The model is now available across Agent Platform.

### Microsoft Foundry

Microsoft Foundry supports two authentication methods: entering credentials manually or using an Azure Active Directory Service Principal.

**Steps**:

1. Go to **Models** → **External Models** → **Add a model**.
2. Select **Easy Integration** → **Microsoft Foundry** → **Next**.
3. Choose an authentication method:

**Option A: Enter Manually**

Directly provide credentials from your model's Details page in Microsoft Foundry.

| Field           | Description                                     |
| --------------- | ----------------------------------------------- |
| Connection name | Unique name to identify this connection         |
| Target URI      | Endpoint URI from your model's Details page     |
| Key             | API key from your model's Details page          |
| Deployment name | Deployment name as defined in Microsoft Foundry |

**Option B: Use Service Principal**

Authenticate through an Azure Active Directory Service Principal. Requires a pre-configured Microsoft Foundry connection.

| Field           | Description                             |
| --------------- | --------------------------------------- |
| Connection name | Unique name to identify this connection |

If no connection exists, click **Configure Service Principal** and complete these steps:

1. In the [Azure Portal](https://portal.azure.com), go to **App registrations** → **+ New registration**. Enter a name, select account type, and click **Register**.

2. Copy the **Application (Client) ID** and **Directory (Tenant) ID** from the Overview page.

3. Go to **Certificates & secrets** → **+ New client secret**. Set an expiry and copy the **Value** immediately.

4. In your resource group, go to **Access control (IAM)** → **Add role assignment**. Assign a role (e.g., **Contributor**) and select your registered app.

5. Click **Configure Service Principal** in the Platform, enter a **Connection name**, and fill in Tenant ID, Application (Client) ID, Client Secret, and Subscription ID. Click **Test**, then **Save**.

4. Configure model settings under **Model configurations**. Enable the features your model supports:

| Feature               | Description                                               |
| --------------------- | --------------------------------------------------------- |
| Structured Response   | JSON-formatted outputs for Prompts and Tools              |
| Tool Calling          | Function calling for Agentic Apps and AI nodes            |
| Parallel Tool Calling | Multiple tool calls per request                           |
| Streaming             | Real-time token generation                                |
| Data Generation       | Synthetic data generation in Prompt Studio                |
| Modalities            | Text-to-Text, Text-to-Image, Image-to-Text, Audio-to-Text |

<Note>Tool calling must be enabled for the model to work in Agentic Apps.</Note>

Under **Body**, specify the model name and select a provider to set the API reference:

| Template                                                                              | Use When                                    |
| ------------------------------------------------------------------------------------- | ------------------------------------------- |
| OpenAI [Chat Completions](https://developers.openai.com/api/reference/resources/chat) | Model follows OpenAI chat API format        |
| Anthropic [Messages](https://platform.claude.com/docs/en/api/messages)                | Model follows Anthropic messages API format |

5. Click **Save as draft** to store without activating, or **Confirm** to finalize.

The model is now listed in the **External Models** tab and available in **Prompts**, **Tools**, and **Agentic Apps**.

## Add a Model via API Integration

Use API Integration for custom endpoints or self-hosted models.

<Note>For Agentic Apps compatibility, custom models must support tool calling and follow OpenAI or Anthropic request/response structures.</Note>
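To illustrate that expectation, here is a sketch of an OpenAI-style request carrying a tool definition, together with the matching tool-call response shape; the model name and the `get_weather` tool are hypothetical:

```python
import json

# Request shape a custom model should accept (OpenAI Chat Completions style).
# The model name and get_weather tool are illustrative placeholders.
request_body = {
    "model": "my-custom-model",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# Response shape expected when the model decides to call the tool.
response_body = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "tool_calls": [
                    {
                        "id": "call_1",
                        "type": "function",
                        "function": {
                            "name": "get_weather",
                            "arguments": json.dumps({"city": "Paris"}),
                        },
                    }
                ],
            },
            "finish_reason": "tool_calls",
        }
    ]
}

call = response_body["choices"][0]["message"]["tool_calls"][0]
print(call["function"]["name"])  # get_weather
```

A model that returns tool calls in this structure (or the Anthropic Messages equivalent) can be routed to Agentic Apps once Tool Calling is enabled in its configuration.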

### Steps

1. Go to **Models** → **External Models** → **Add a model**.
2. Select **Custom Integration** → click **Next**.
3. Enter basic configuration:

| Field                 | Description                              |
| --------------------- | ---------------------------------------- |
| Connection Name       | Unique identifier                        |
| Model Endpoint URL    | Full API endpoint URL                    |
| Authorization Profile | Select configured auth profile or *None* |
| Headers               | Optional key-value pairs for requests    |

4. Configure model settings using [Default](#default-mode) or [Existing Provider Structures](#existing-provider-structures-mode).
5. Click **Confirm**.

## Model Configuration Modes

When using API Integration or advanced Bedrock setup, choose one of these configuration modes:

### Default Mode

Manually configure request/response handling for complete control.

**1. Define Variables**

| Variable Type    | Description                                                      |
| ---------------- | ---------------------------------------------------------------- |
| Prompt           | Primary input text (required)                                    |
| System Prompt    | System instructions (optional)                                   |
| Examples         | Few-shot examples (optional)                                     |
| Custom Variables | Additional dynamic inputs with name, display name, and data type |

**2. Configure Request Body**

Create JSON payload using `{{variable}}` placeholders:

```json
{
  "model": "your-model-name",
  "messages": [
    {"role": "system", "content": "{{system.prompt}}"},
    {"role": "user", "content": "{{prompt}}"}
  ],
  "max_tokens": 1000,
  "temperature": 0.7
}
```
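At call time the platform substitutes your defined variables into these placeholders. A simplified sketch of that substitution (not the platform's actual templating engine):

```python
import json

# The request-body template with {{variable}} placeholders, as entered above.
template = """{
  "model": "your-model-name",
  "messages": [
    {"role": "system", "content": "{{system.prompt}}"},
    {"role": "user", "content": "{{prompt}}"}
  ],
  "max_tokens": 1000,
  "temperature": 0.7
}"""

# Example variable values; real values containing quotes would need JSON escaping.
values = {
    "system.prompt": "You are a helpful assistant.",
    "prompt": "Summarize this document.",
}

rendered = template
for name, value in values.items():
    rendered = rendered.replace("{{" + name + "}}", value)

payload = json.loads(rendered)  # the JSON actually sent to your endpoint
print(payload["messages"][1]["content"])
```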

**3. Map Response JSON Paths**

Click **Test** to send a sample request, then configure extraction paths:

| Field         | Description                | Example                      |
| ------------- | -------------------------- | ---------------------------- |
| Output Path   | Location of generated text | `choices[0].message.content` |
| Input Tokens  | Input token count          | `usage.prompt_tokens`        |
| Output Tokens | Output token count         | `usage.completion_tokens`    |
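You can sanity-check these paths against a sample response before saving. The `extract` helper below is an illustrative stand-in for the platform's path lookup, assuming an OpenAI-style response body:

```python
import json

# A sample response body from a hypothetical OpenAI-compatible endpoint.
sample = json.loads("""{
  "choices": [{"message": {"content": "Hello!"}}],
  "usage": {"prompt_tokens": 12, "completion_tokens": 3}
}""")

def extract(obj, path):
    """Follow a path like 'choices[0].message.content' through nested JSON."""
    for part in path.replace("]", "").replace("[", ".").split("."):
        obj = obj[int(part)] if part.isdigit() else obj[part]
    return obj

print(extract(sample, "choices[0].message.content"))  # Hello!
print(extract(sample, "usage.prompt_tokens"))         # 12
```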

### Existing Provider Structures Mode

Automatically apply pre-defined schemas from known providers. Recommended when your model follows a standard API format.

**1. Select Provider Template**

| Template                  | Use When                                    |
| ------------------------- | ------------------------------------------- |
| OpenAI (Chat Completions) | Model follows OpenAI chat API format        |
| Anthropic (Messages)      | Model follows Anthropic messages API format |
| Google (Gemini)           | Model follows Gemini API format             |

**2. Enter Model Name**

Specify the model identifier for request bodies.

**3. Enable Model Features**

Enable only features your model supports:

| Feature               | Description                                               |
| --------------------- | --------------------------------------------------------- |
| Structured Response   | JSON-formatted outputs for Prompts and Tools              |
| Tool Calling          | Function calling for Agentic Apps and AI nodes            |
| Parallel Tool Calling | Multiple tool calls per request                           |
| Streaming             | Real-time token generation for Agentic Apps               |
| Data Generation       | Synthetic data generation in Prompt Studio                |
| Modalities            | Text-to-Text, Text-to-Image, Image-to-Text, Audio-to-Text |

Under **Body**, specify the model name and select a provider to set the API reference:

| Template                                                                              | Use When                                    |
| ------------------------------------------------------------------------------------- | ------------------------------------------- |
| OpenAI [Chat Completions](https://developers.openai.com/api/reference/resources/chat) | Model follows OpenAI chat API format        |
| Anthropic [Messages](https://platform.claude.com/docs/en/api/messages)                | Model follows Anthropic messages API format |

<Warning>Enabling unsupported features may cause unexpected behavior.</Warning>

## Troubleshooting

| Issue                              | Solution                                                                                                                            |
| ---------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| Test fails                         | Verify endpoint URL and authentication                                                                                              |
| Empty response                     | Check JSON path mapping matches response structure                                                                                  |
| Model not in dropdowns             | Ensure Inference toggle is ON                                                                                                       |
| Tool calling not working           | Verify model supports it and feature is enabled                                                                                     |
| Bedrock connection fails           | Check IAM role ARN and trust policy configuration                                                                                   |
| Vertex AI auth error               | Ensure API key is valid and not an OAuth token; check that Vertex AI API is enabled for your project                                |
| Microsoft Foundry connection fails | Verify Target URI, API key, and deployment name; for Service Principal, confirm Tenant ID, Client ID, and Client Secret are correct |

