Sentinel Playbook and Azure OpenAI 

TL;DR – Sentinel automation playbooks using a custom Logic App connector that calls the new Chat API with gpt-3.5-turbo and gpt-4 – this time with Azure OpenAI instead of OpenAI.

After RSA and a few weeks of DTO, I finally had a chance to document this for anyone who inquired about my previous post, Sentinel and OpenAI Chat API with gpt-3.5-turbo and gpt-4, but now using Azure OpenAI. In this post I am documenting the few differences between using Azure OpenAI vs OpenAI.

The custom connector

I am still using a custom connector, for the same reason as in the previous post: securing the authentication token. The differences with this custom connector are the following:

1. Host – The connection with Azure OpenAI is made to https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions per the documentation. In my case the resource name is azureopenai-afaber, so the host name I used is azureopenai-afaber.openai.azure.com, as shown below
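The endpoint assembly described above can be sketched like this; a minimal sketch, using the resource and deployment names from this post as placeholders (substitute your own):

```python
def chat_endpoint(resource_name: str, deployment_name: str) -> str:
    """Build the Azure OpenAI Chat Completions URL from a resource
    name and a deployment name."""
    host = f"{resource_name}.openai.azure.com"
    return f"https://{host}/openai/deployments/{deployment_name}/chat/completions"

# Example with the names used in this post:
print(chat_endpoint("azureopenai-afaber", "gpt-35-turbo"))
```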

2. Authentication – Azure OpenAI offers two types of authentication, API Key or Azure AD. In my test I am still using an API key, so the HTTP header parameter I need to use for Azure OpenAI is api-key (vs Authorization with OpenAI). If you were using AAD, you would use Authorization with a bearer token, but that’s not the case in my test.
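The two header styles above can be sketched side by side; the key value is a placeholder, not a real credential:

```python
def auth_headers(api_key: str, use_azure_api_key: bool) -> dict:
    """Return the HTTP auth header for the chosen service/auth style."""
    if use_azure_api_key:
        # Azure OpenAI with an API key: the raw key goes in "api-key".
        return {"api-key": api_key}
    # OpenAI (or Azure OpenAI with Azure AD): a bearer token in "Authorization".
    return {"Authorization": f"Bearer {api_key}"}

print(auth_headers("YOUR_API_KEY", use_azure_api_key=True))
```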

3. Request – Two differences here. First, there is no model parameter in the body because, as you can see below, “Models can only have one deployment at a time” in Azure OpenAI.

So, when I specify the deployment in the URL, https://azureopenai-afaber.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions, the deployment is already identified, and each deployment maps to exactly one model. Yes, I named my deployment after the model, but you can use any name that works for you.

I still use Import from sample for both the Request and the Response using the same command I tested, as shown below. I am also adding max_tokens in my request, which is optional, but I like to use it.

The second difference is the api-version, which is required. If you don’t specify the api-version query parameter, then you receive a 404 error.

And the body will not have the model parameter in the Request, as I mentioned above.
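Putting the request pieces above together, here is a hedged sketch: the api-version goes in the query string and the body has no model key. The api-version value shown is an assumption based on versions available at the time of writing; check the documentation for current values.

```python
def build_request(endpoint: str, api_version: str, prompt: str,
                  max_tokens: int = 500) -> tuple:
    """Assemble the URL and body for an Azure OpenAI chat request."""
    # api-version is a required query parameter; omitting it returns a 404.
    url = f"{endpoint}?api-version={api_version}"
    body = {
        # Note: no "model" key - the deployment in the URL selects the model.
        "messages": [{"role": "user", "content": prompt}],
        # max_tokens is optional, but I like to set it.
        "max_tokens": max_tokens,
    }
    return url, body
```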

And the Response as imported, will look like this:

For additional information on the required parameters and the supported api-version values, please see the documentation.

The playbook

For my Azure OpenAI playbook I used the playbook from my previous post as a basis, but I expanded it a little based on a session that I delivered with a partner at RSA. You’ll notice this playbook has a few differences, such as a LOT more detail in the system message and a few additional steps. Overall, though, the playbook is doing pretty much the same thing as in the previous post.

The differences are based on the connector parameters I noted above.

For example, I now have to use the api-version as shown below:

And I don’t need to use the model parameter, for the reasons I mentioned above.

Also, when I stored the api-key connection, I didn’t use “Bearer YOUR_API_KEY“, which I had to do with both the OpenAI connector and my OpenAI custom connector, as I mentioned in my original post testing with the OpenAI connector. If I had used Azure AD, the Bearer prefix would still be required.

Optional

This part is completely optional, and you don’t need to make any changes mentioned here to work with Azure OpenAI. However, you might have noticed that I added a few more steps and a LOT more detail in the system message role with this new playbook, so I wanted to explain why I did that.

This playbook was created to work with a specific connector in Sentinel, so I wanted the answers provided by Azure OpenAI to be a lot more specific and accurate. So, what I did was enrich my system message with as much detail as possible. Yes, I am still restricted by the token limit.

And when I ask it to generate a KQL query, I am feeding it the actual schema of the tables, as shown below:
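The idea of feeding the table schema into the prompt can be sketched like this. This is purely illustrative: the table name and schema fragment below are made-up examples, not the actual schema used in the playbook.

```python
def kql_system_message(table: str, schema: dict) -> str:
    """Embed a table schema in the system message so generated KQL
    only references columns that actually exist."""
    cols = ", ".join(f"{name}:{typ}" for name, typ in schema.items())
    return (
        f"You are a Microsoft Sentinel assistant. When asked for a KQL "
        f"query against the {table} table, use only these columns: {cols}."
    )

# Hypothetical schema fragment for illustration:
print(kql_system_message("SigninLogs",
                         {"UserPrincipalName": "string", "ResultType": "string"}))
```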

The result is tasks that are a lot more accurate and that give the SOC analysts much more relevant information.

Of course, with Azure OpenAI, you also have the ability to fine-tune your model, so you can add your specific dataset to make it even more powerful. For more on that, please review the Azure OpenAI documentation.

Final Note

Yes, I have been testing with gpt-35-turbo and I haven’t been able to test with gpt-4, but I am on the waitlist, so I will let you know if I see any differences when testing with the gpt-4 model. I can see from the API description that the parameters are the same for both these models, but you never know until you try. So, I’ll keep you posted!
