Prepare your environment
Before you can start deploying agents to Work Room, you need to set up your Control Room and workspaces so that all the required configurations are in place.
While this preparatory phase may seem tedious and you may be tempted to skip it, it's important for your future work. You do it once now and reap the benefits every time after.
For starters, here's an overview of the building blocks that make up a deployed agent. The agent's developer has determined what the agent does (Runbook, Actions, Knowledge) and how it works (LLM model, Agent infrastructure, Reasoning); these are not changeable in Control Room at all.
As an AI Operator, you control the runtime configuration of an agent, including which workspace (or workspaces) it's deployed to, which LLM connections it uses in production, its secrets, and so forth.
Agent Compute and workspaces
Make sure to go through the previous section, which explains how to organize Agent Compute environments and their workspaces.
It's vital to have Agent Compute environments and the corresponding workspaces set up correctly before you deploy your first agents.
Configure LLMs
When agent creators build agents, they build them against a specific LLM from a specific provider; for example, the GPT-4o model provided by Microsoft Azure OpenAI Service. When you deploy the agent, you are required to assign it the same model.
Domain experts build agents in Studio, but the LLM configuration (such as the API key and deployment URL) isn't saved in the agent package they publish to Control Room. That means you need to provide the agent with an appropriate LLM configuration during deployment.
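To make the split concrete, here's a minimal sketch of how an agent might resolve its LLM configuration at runtime. The environment variable names are hypothetical illustrations, not part of the actual package format:

```python
import os

# Hypothetical illustration: the published package fixes the model choice,
# while the endpoint and API key are injected by the deployment environment.
llm_config = {
    "model": "gpt-4o",                       # chosen by the agent's developer
    "endpoint": os.environ["LLM_ENDPOINT"],  # supplied at deployment
    "api_key": os.environ["LLM_API_KEY"],    # supplied at deployment
}
print(f"Using {llm_config['model']} at {llm_config['endpoint']}")
```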
The setup process is similar to what you may know from Studio.
To configure LLMs in Control Room:
- Select Configuration in the left-side navigation menu.
- On the LLMs tab, click LLM.
- Type a name for the LLM configuration.
- Select a model.
- Fill in the required entries.
- Click Create.
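If you want to sanity-check the values before entering them, a quick call with the official openai Python SDK confirms that a plain OpenAI API key works. This is a local sketch, not a Control Room feature; the key is read from your own environment:

```python
import os

from openai import OpenAI  # pip install openai

# One cheap chat completion is enough to confirm the API key is valid.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)
print(response.choices[0].message.content)
```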
Practical tips
- Depending on your LLM provider, different types of secrets may be required.
- For instance, with the OpenAI provider, an API key alone suffices for the whole LLM configuration. With Azure OpenAI Service models, you need separate chat and embeddings endpoints, as well as separate keys for the chat API and the embeddings API (see the verification sketch after this list).
- Ensure your LLM deployments have sufficient tokens-per-minute quota. The exact number varies depending on the use case.
- We recommend you create separate LLM configurations for testing and production. This allows for more precise quota management and monitoring.
- Be mindful of content filtering rules of your LLM deployment. In our experience, agents tend to perform better with fewer filtering rules.
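Because Azure OpenAI Service splits chat and embeddings across separate deployments, it's worth verifying both endpoint and key pairs before saving the configuration. Here's a sketch using the openai SDK; the environment variable names and deployment names are placeholders:

```python
import os

from openai import AzureOpenAI  # pip install openai

# Check the chat deployment with its own endpoint and key.
chat = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_CHAT_ENDPOINT"],
    api_key=os.environ["AZURE_CHAT_API_KEY"],
    api_version="2024-02-01",
)
chat.chat.completions.create(
    model=os.environ["AZURE_CHAT_DEPLOYMENT"],  # e.g. your gpt-4o deployment name
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)

# Check the embeddings deployment with its separate endpoint and key.
embeddings = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_EMBEDDINGS_ENDPOINT"],
    api_key=os.environ["AZURE_EMBEDDINGS_API_KEY"],
    api_version="2024-02-01",
)
embeddings.embeddings.create(
    model=os.environ["AZURE_EMBEDDINGS_DEPLOYMENT"],
    input="ping",
)
print("Both deployments are reachable.")
```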
Create secrets
Some agent actions might require secrets such as API keys, certificates, configurations, or service accounts. These can be configured as reusable secrets stored securely in AWS Secrets Manager or as one-off secrets for temporary use. The latter option is not recommended, but it can be useful when you need to do a quick test.
Secret-based authentication means that all agent users share the same API key or service account to access the target system. However, the API keys are not exposed to the end-users.
To save secrets in Control Room:
- Select Secrets in the left-side navigation menu.
- Click Secret in the top-right corner of the screen.
- Fill in the Name and Description of your secret. These values are not secret.
- Enter the secret itself in Value. This is the secret.
- Click Save.
Saving secrets in Control Room is a one-way street
Once you save a secret, you can't see it in Control Room.
Control Room doesn't have access to your secrets as they're stored in your private VPC. You can only create, overwrite, and delete them in Control Room.
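For context, here's roughly what reading a value from AWS Secrets Manager looks like with boto3. This is a generic illustration of the underlying service, not the platform's own API; the region and secret name are placeholders:

```python
import boto3  # pip install boto3

# Generic Secrets Manager read. The lookup happens inside your private VPC,
# which is why Control Room itself never sees the stored value.
client = boto3.client("secretsmanager", region_name="eu-west-1")
secret = client.get_secret_value(SecretId="my-agent/api-key")  # placeholder name
api_key = secret["SecretString"]
```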
Set up OAuth2 clients
Some agents use actions that require OAuth2 user-level authentication to access enterprise systems, such as Microsoft Entra ID or Google. We continuously add more OAuth2-supported action packages to expand compatibility with various platforms.
User-level authentication requires each end-user to authenticate with their own credentials to access data on the target system.
An example of user-level authentication is the Google Drive action package, which requires each user to authenticate with their own Google account credentials to access Google Drive resources.
To use OAuth2 authentication in production, you must create and configure your organization's OAuth client applications for the required platforms.
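As background on what an OAuth2 client application provides, here's a minimal sketch of the start of the authorization-code flow using the requests-oauthlib library. The client ID, redirect URI, and scope are placeholders you'd obtain from your own client registration (Google in this example):

```python
from requests_oauthlib import OAuth2Session  # pip install requests-oauthlib

# Placeholder values from a hypothetical Google OAuth2 client registration.
CLIENT_ID = "your-client-id.apps.googleusercontent.com"
REDIRECT_URI = "https://your-app.example.com/callback"
SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]

# Each end user is redirected to this URL to sign in with their own account,
# so the resulting token grants access only to that user's data.
session = OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT_URI, scope=SCOPES)
authorization_url, state = session.authorization_url(
    "https://accounts.google.com/o/oauth2/v2/auth"
)
print("Direct the user to:", authorization_url)
```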
Optional integrations
Two optional integrations are available for your agents:
- Email sending for feedback
- Observability using LangSmith
To set up either of the integrations, go to Configuration > Integrations in Control Room.
The email sending capability enables agents to send emails, which in turn lets agent users give you feedback on the agent. You select the recipients during agent deployment, so you can direct feedback to the appropriate people for each agent separately.
To set up the email sending capability, click Integration and select SMTP in the Type menu.
You need to know your organization's SMTP server details. Ask your system administrators if unsure.
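If you want to verify the server details before entering them, a short check with Python's standard smtplib library does the job. The host, port, and credential variables below are placeholders:

```python
import os
import smtplib

# Placeholder SMTP details; substitute your organization's values.
HOST = "smtp.example.com"
PORT = 587

with smtplib.SMTP(HOST, PORT, timeout=10) as server:
    server.starttls()  # most corporate SMTP servers require TLS
    server.login(os.environ["SMTP_USER"], os.environ["SMTP_PASSWORD"])
    print("SMTP login succeeded.")
```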
Observability with LangSmith
LangSmith provides powerful tracing and debugging capabilities, enabling you to monitor and analyze the performance of your agents and LLM interactions. Once configured, you can enable tracing for your agents during deployment, offering deeper insights into agent behavior and helping improve performance and reliability.
You can use LangSmith to determine the following:
- The number of times the LLM was called.
- The inputs, prompts, and responses exchanged with the LLM.
- How actions were called and whether they were executed sequentially or in parallel.
- The total LLM cost of a query.
See a guide on how to create a LangSmith account and API key.
To set up observability, click Integration and select LangSmith in the Type menu.
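Once the integration is configured and tracing is enabled for an agent, runs appear in LangSmith automatically. For your own local experiments, the standard LangSmith pattern looks like this; the project name is a placeholder and the traced function is a stub:

```python
import os

from langsmith import traceable  # pip install langsmith

# Standard LangSmith environment variables; fill in your own key.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your LangSmith API key>"
os.environ["LANGCHAIN_PROJECT"] = "my-agent-tests"  # placeholder project name

@traceable  # each call is recorded as a run in LangSmith
def answer(question: str) -> str:
    # Call your LLM here; this stub keeps the sketch self-contained.
    return f"echo: {question}"

print(answer("ping"))
```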