You are currently on a page documenting the use of Azure OpenAI text completion models. The latest and most popular Azure OpenAI models are chat completion models. Unless you are specifically using `gpt-3.5-turbo-instruct`, you are probably looking for this page instead.

The `openai` Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI, with the exceptions noted below.
API configuration
You can configure the `openai` package to use Azure OpenAI with environment variables. The following is for `bash`:
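A sketch with placeholder values (substitute your own resource's endpoint and key; the variable names are the ones the `openai` package reads for Azure configuration):

```bash
# Tell the openai package to talk to Azure OpenAI (placeholder values).
export OPENAI_API_TYPE=azure
# The API version to use (check your resource for supported versions).
export OPENAI_API_VERSION=2023-05-15
# The base URL for your Azure OpenAI resource.
export OPENAI_API_BASE=https://your-resource-name.openai.azure.com
# The API key for your Azure OpenAI resource.
export OPENAI_API_KEY=your-azure-openai-api-key
```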
Azure Active Directory Authentication
There are two ways you can authenticate to Azure OpenAI:

- API Key
- Azure Active Directory (AAD)

To authenticate with AAD, first run `az login` to log in with the Azure CLI.
Next, add an Azure role assignment for `Cognitive Services OpenAI User`, scoped to your Azure OpenAI resource. This will allow you to get a token from AAD to use with Azure OpenAI. You can grant this role assignment to a user, group, service principal, or managed identity. For more information about Azure OpenAI RBAC roles, see here.
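As a sketch, the role assignment can be created with the Azure CLI; the assignee and scope below are placeholders for your own values:

```bash
# Grant the Cognitive Services OpenAI User role on your Azure OpenAI resource.
# Replace the assignee and the scope path with your own values.
az role assignment create \
  --role "Cognitive Services OpenAI User" \
  --assignee "<user, group, service principal, or managed identity id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<resource-name>"
```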
To use AAD in Python with LangChain, install the `azure-identity` package. Then, set `OPENAI_API_TYPE` to `azure_ad`. Next, use the `DefaultAzureCredential` class to get a token from AAD by calling `get_token` as shown below. Finally, set the `OPENAI_API_KEY` environment variable to the token value.
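Putting those steps together, here is a minimal sketch. The helper name `configure_openai_for_aad` is illustrative, and the token scope URL is the standard Cognitive Services scope; in practice the credential would be an `azure.identity.DefaultAzureCredential`:

```python
import os


def configure_openai_for_aad(credential) -> None:
    """Configure the openai package to authenticate to Azure OpenAI via AAD.

    `credential` is any object with a get_token(scope) method, e.g. an
    azure.identity.DefaultAzureCredential instance.
    """
    # Request a token for the Cognitive Services scope.
    token = credential.get_token("https://cognitiveservices.azure.com/.default")
    # Tell the openai package to use AAD, and pass the token as the API key.
    os.environ["OPENAI_API_TYPE"] = "azure_ad"
    os.environ["OPENAI_API_KEY"] = token.token


# In an Azure environment (requires the azure-identity package):
# from azure.identity import DefaultAzureCredential
# configure_openai_for_aad(DefaultAzureCredential())
```

Note that the token expires, so long-running processes need to refresh it periodically.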
The `DefaultAzureCredential` class is an easy way to get started with AAD authentication. You can also customize the credential chain if necessary. In the example shown below, we first try Managed Identity, then fall back to the Azure CLI. This is useful if you are running your code in Azure but want to develop locally.
Deployments
With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.

Note: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface and can be accessed via the `AzureChatOpenAI` class. For docs on Azure chat, see the Azure Chat OpenAI documentation.
Let’s say your deployment name is `gpt-35-turbo-instruct-prod`. In the `openai` Python API, you can specify this deployment with the `engine` parameter. For example:
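A sketch using the legacy `openai` (pre-1.0) completion API, which is where the `engine` parameter lives; the prompt is illustrative, and the call requires a configured Azure OpenAI resource (see the API configuration section above):

```python
import openai

# `engine` names your Azure *deployment*, not the underlying model.
response = openai.Completion.create(
    engine="gpt-35-turbo-instruct-prod",
    prompt="Write a one-line tagline for a bookstore.",
    max_tokens=32,
)
print(response["choices"][0]["text"])
```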