Setup Model Providers

Configure your model provider credentials in Studio or Control Room. Select your provider below for specific setup instructions.

To use Azure AI Foundry models in Sema4.AI:

  1. Create an Azure AI Foundry resource in the Azure Portal
  2. Deploy a model (e.g. GPT-5, GPT-5.2) in your Azure AI Foundry workspace
  3. Get your credentials (you can sanity-check them with the snippet after this list):
    • Endpoint URL
    • Deployment name
    • API Key
  4. Add credentials in Sema4.AI:
    • Control Room: Configuration → LLMs → + LLM
    • Studio: Settings → Models → + Add New
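
Before adding the credentials, you can optionally verify that the endpoint, deployment name, and API key work together. The snippet below is a minimal sketch using Python's requests library with hypothetical values (my-resource, gpt-5-chat); it assumes your deployment is reachable through the chat completions endpoint.

  import requests

  # Hypothetical values -- replace with the credentials from step 3.
  resource = "my-resource"            # resource name from your endpoint URL
  deployment = "gpt-5-chat"           # your deployment name
  api_version = "2025-04-01-preview"
  api_key = "YOUR_API_KEY"

  url = (
      f"https://{resource}.openai.azure.com/openai/deployments/"
      f"{deployment}/chat/completions?api-version={api_version}"
  )
  response = requests.post(
      url,
      headers={"api-key": api_key, "Content-Type": "application/json"},
      json={"messages": [{"role": "user", "content": "ping"}]},
  )
  # A 200 response means the endpoint, deployment name, and key all line up.
  print(response.status_code, response.json())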

Known issue with endpoint URLs: Some versions of Azure AI Foundry produce an endpoint URL that is missing fields Sema4.AI needs. One example we've seen: https://{yourResourceName}.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview

Sema4.AI expects this format: https://{yourResourceName}.openai.azure.com/openai/deployments/{deploymentId}/chat/completions?api-version={apiVersion}
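
For example, with a hypothetical resource name my-resource, deployment name gpt-5-chat, and API version 2025-04-01-preview, the resulting URL would be: https://my-resource.openai.azure.com/openai/deployments/gpt-5-chat/chat/completions?api-version=2025-04-01-preview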

To construct the correct URL (a scripted version of these steps is sketched below):

  1. Get the resource name from the URL Azure provides
  2. Use the deployment name (the name given to your model deployment) from the model page in the Azure Portal
  3. Use API version 2025-04-01-preview
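
If you would rather script these steps, the helper below is a minimal sketch: it pulls the resource name out of the URL Azure provides and assembles the URL in the format Sema4.AI expects. The deployment name in the example is a hypothetical placeholder.

  from urllib.parse import urlparse

  def sema4_endpoint_url(azure_url: str, deployment: str,
                         api_version: str = "2025-04-01-preview") -> str:
      # Step 1: the resource name is the first label of the hostname in the
      # URL Azure provides (works for both the cognitiveservices.azure.com
      # and openai.azure.com forms).
      resource = urlparse(azure_url).hostname.split(".")[0]
      # Steps 2-3: insert the deployment name and API version into the
      # format Sema4.AI expects.
      return (
          f"https://{resource}.openai.azure.com/openai/deployments/"
          f"{deployment}/chat/completions?api-version={api_version}"
      )

  # Example with hypothetical values:
  sema4_endpoint_url(
      "https://my-resource.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview",
      "gpt-5-chat",
  )
  # -> "https://my-resource.openai.azure.com/openai/deployments/gpt-5-chat/chat/completions?api-version=2025-04-01-preview"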

Note: Although the Sema4.AI URL format shown above uses the chat completions endpoint, OpenAI reasoning models will use the responses endpoint.

Ensure your Azure subscription has access to the reasoning models you want to use. Some models may require requesting access through the Azure Portal. If Azure OpenAI verification for GPT-5 series models is taking time, you can use OpenAI directly as a workaround.

If you must use Azure, ensure you're on invoice-based billing and complete the verification process—expect about a week of lead time. See Troubleshooting for more details.