Overview

Azure OpenAI Service offers industry-leading coding and language AI models that you can fine-tune to your specific needs for a variety of use cases. It provides access to OpenAI's models, including the GPT-4, GPT-4 Turbo with Vision, GPT-3.5-Turbo, DALL·E 3, and Embeddings model series, with the security and enterprise capabilities of Azure.

Currently, Azure OpenAI Service supports a number of base models for deployment, each providing different capabilities.

Refer to Azure OpenAI Models for more details.

In this post, we will walk through how to integrate Azure OpenAI capabilities such as Chat and DALL·E with Azure Functions, which can then be used in a variety of scenarios.

Prerequisites

To use Azure OpenAI with your Azure Functions, you first need to deploy an Azure OpenAI resource in your Azure subscription. Go to the Marketplace and create the Azure OpenAI resource.

For this integration, I will be using the Standard S0 pricing tier and deploying the service in the Australia East region.
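As an alternative to the portal, the resource can also be created with the Azure CLI; a minimal sketch, where the resource group and resource name are placeholders you would substitute with your own:

```shell
# Create an Azure OpenAI resource (Standard S0) in Australia East.
# <resource-group> and <resource-name> are placeholders.
az cognitiveservices account create \
  --name <resource-name> \
  --resource-group <resource-group> \
  --kind OpenAI \
  --sku S0 \
  --location australiaeast
```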

Once deployed, you can navigate to Azure OpenAI Studio directly from the new resource.

You will also need a simple Azure Functions app project, to which we will add HTTP-trigger functions for the /Chat and /DallE endpoints in the next sections.
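If you don't have such a project yet, one can be scaffolded with the Azure Functions Core Tools; a sketch, with the project name being an arbitrary choice:

```shell
# Scaffold a new .NET isolated-worker Functions project
func init OpenAIFunctions --worker-runtime dotnet-isolated
cd OpenAIFunctions

# Add an HTTP-trigger function (repeat with --name DallE for the second endpoint)
func new --name Chat --template "HTTP trigger" --authlevel function
```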

 

Integrate Azure OpenAI Services

Azure OpenAI provides generative AI capabilities for creating text responses and images using a variety of GPT-based and DALL·E models.

To integrate this service with an Azure Function for generating responses, let's first create a deployment using a GPT-4o model.

Deploy a GPT-4o Model

You can easily deploy models in Azure OpenAI Studio.

Navigate to the studio and go to Deployments → Create New Deployment.

Once you click Create New Deployment, you can choose which model to deploy and give the deployment a name. This name is the unique identifier you will use later when integrating with the model. This section can also be used to tweak the base model configuration if required. I will deploy the latest available GPT-4o model with the pre-filled configuration.
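The deployment can also be created from the Azure CLI; a sketch, assuming the deployment name gpt-4o-deploy used in the function code later, with placeholder resource names and a model version that may differ in your region:

```shell
# Create a GPT-4o model deployment named gpt-4o-deploy.
# <resource-name>, <resource-group>, and the model version are assumptions.
az cognitiveservices account deployment create \
  --name <resource-name> \
  --resource-group <resource-group> \
  --deployment-name gpt-4o-deploy \
  --model-name gpt-4o \
  --model-version "2024-05-13" \
  --model-format OpenAI \
  --sku-name Standard \
  --sku-capacity 1
```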

Once the model is successfully deployed, it can be used with Azure Functions. Let's see in the next section how we can generate responses using this model.

Use GPT-4o Model with Azure Functions

Chat responses can be generated using the Azure OpenAI client library for .NET.

Setup

Let's add this library to our Azure Function app project. Ensure that Include prerelease is checked, as the .NET library used in this post (version 2.0.0-beta.2) is currently in prerelease.
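Equivalently, the package can be added from the command line with the version pinned to the prerelease used in this post:

```shell
# Pin to the prerelease version used in this post
dotnet add package Azure.AI.OpenAI --version 2.0.0-beta.2
```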

Once the library is installed, let's use it in our function. First, we need the following details from the Azure OpenAI resource for authentication and connectivity:

  • Azure OpenAI endpoint
  • Azure OpenAI access key

These values can be found on the Azure OpenAI resource in the Azure portal.

Once retrieved, these values can be added to local.settings.json as shown below.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "AZURE_OPENAI_ENDPOINT": "<endpoint>",
    "AZURE_OPENAI_KEY": "<access key>"
  }
}

 

Execution

Once the above setup is complete, you can use the .NET library to generate chat responses with the code below.

[Function("Chat")]
public async Task<HttpResponseData> Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
{
    string? endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
    string? key = Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY");
    var content = await new StreamReader(req.Body).ReadToEndAsync();
    var requestInfo = JsonSerializer.Deserialize<RequestModel>(content);

    try
    {
        if (endpoint != null && key != null && requestInfo != null)
        {
            AzureKeyCredential credential = new(key);
            AzureOpenAIClient azureClient = new(new Uri(endpoint), credential);
            // Use the appropriate deployed model name
            ChatClient chatClient = azureClient.GetChatClient("gpt-4o-deploy");

            ChatCompletion completion = chatClient.CompleteChat(
              new ChatMessage[] {
                 new SystemChatMessage(requestInfo.Text)
              },
              // parameters for tuning
              new ChatCompletionOptions()
              {
                  Temperature = 0.7f,
                  MaxTokens = 800,
                  FrequencyPenalty = 0,
                  PresencePenalty = 0,
              }
            );

            var result = new
            {
                data = new
                {
                    role = completion.Role,
                    text = completion.Content[0].Text
                }
            };
            var response = req.CreateResponse(HttpStatusCode.OK);
            await response.WriteAsJsonAsync(result);
            return response;
        }
        else
        {
            var result = new
            {
                error = "Missing endpoint, access key, or request payload."
            };
            var response = req.CreateResponse(HttpStatusCode.InternalServerError);
            await response.WriteAsJsonAsync(result, HttpStatusCode.InternalServerError);
            return response;
        }
    }
    catch (Exception ex)
    {
        var result = new
        {
            error = ex.Message
        };
        var response = req.CreateResponse(HttpStatusCode.InternalServerError);
        await response.WriteAsJsonAsync(result, HttpStatusCode.InternalServerError);
        return response;
    }
}
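The function above deserializes the request body into a RequestModel, which is not shown in this post; a minimal definition matching the JsonSerializer usage could look like the following (note that System.Text.Json matches property names case-sensitively by default, so the request body would need a "Text" property):

```csharp
using System.Text.Json.Serialization;

// Minimal request model; Text carries the prompt sent by the caller.
public class RequestModel
{
    public string? Text { get; set; }
}
```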

With the above code, the /Chat endpoint can be used to generate responses from the GPT-4o model deployed on Azure OpenAI.
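With the function app running locally (func start), the endpoint can be exercised with curl; the port is the Functions default, and a deployed app would additionally need a function key via ?code=:

```shell
# Call the local /Chat endpoint with a prompt.
curl -X POST "http://localhost:7071/api/Chat" \
  -H "Content-Type: application/json" \
  -d '{"Text": "Write a haiku about serverless computing."}'
```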

Use DALL·E 3 with Azure Functions

Similarly, this library can be used with other Azure OpenAI services such as DALL·E. First, deploy a DALL·E 3 model by following the same steps as before.

Once deployed, this model can be used with the same .NET library, as the code below shows.

[Function("DallE")]
public async Task<HttpResponseData> Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
{
    string? endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
    string? key = Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY");
    var content = await new StreamReader(req.Body).ReadToEndAsync();
    var requestInfo = JsonSerializer.Deserialize<RequestModel>(content);

    try
    {
        if (endpoint != null && key != null && requestInfo != null)
        {
            AzureKeyCredential credential = new(key);
            AzureOpenAIClient azureClient = new(new Uri(endpoint), credential);
            // Use the appropriate deployed model name
            ImageClient client = azureClient.GetImageClient("Dalle3");

            ClientResult<GeneratedImage> imageResult = await client.GenerateImageAsync(requestInfo.Text, new()
            {
                Quality = GeneratedImageQuality.Standard,
                Size = GeneratedImageSize.W1024xH1024,
                ResponseFormat = GeneratedImageFormat.Uri
            });

            // Image generation responses provide URLs you can use to retrieve the requested images
            GeneratedImage image = imageResult.Value;

            var result = new
            {
                data = image.ImageUri
            };
            var response = req.CreateResponse(HttpStatusCode.OK);
            await response.WriteAsJsonAsync(result);
            return response;
        }
        else
        {
            var result = new
            {
                error = "Missing endpoint, access key, or request payload."
            };
            var response = req.CreateResponse(HttpStatusCode.InternalServerError);
            await response.WriteAsJsonAsync(result, HttpStatusCode.InternalServerError);
            return response;
        }
    }
    catch (Exception ex)
    {
        var result = new
        {
            error = ex.Message
        };
        var response = req.CreateResponse(HttpStatusCode.InternalServerError);
        await response.WriteAsJsonAsync(result, HttpStatusCode.InternalServerError);
        return response;
    }
}

With the above function, the /DallE endpoint can now be used to generate images from the provided text.
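As with /Chat, the endpoint can be called locally with curl; the port is the Functions default and the prompt is arbitrary:

```shell
# Call the local /DallE endpoint; the response contains the image URL.
curl -X POST "http://localhost:7071/api/DallE" \
  -H "Content-Type: application/json" \
  -d '{"Text": "A watercolor painting of a lighthouse at dawn"}'

# The returned URL can then be downloaded, e.g.:
# curl -o image.png "<returned image URL>"
```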

The generated URL can be used to download the image produced by the model.

Wrapping Up

In this post, I demonstrated how different Azure OpenAI services can be integrated with Azure Functions using the prerelease Azure OpenAI .NET client library. The library is under active development, which will bring more robust features and additional integration capabilities for the services provided by Azure OpenAI.

 
