Sentinel and OpenAI Chat API with gpt-3.5-turbo and gpt-4

TL;DR – Sentinel automation playbooks using a custom Logic App connector that calls the new OpenAI Chat API with the gpt-3.5-turbo and gpt-4 models.

I went down the deep end on this one 😊. A few of my partners are testing these integrations to be ready for both current and upcoming AI services. This blog post is about my experience building a new custom connector to use with my Sentinel playbooks (Logic Apps), which are part of the SOAR features within Sentinel. Specifically, I wanted a connector that works with the new API, where I can use the gpt-3.5-turbo and gpt-4 models. Also, I used the public OpenAI service, because that’s what I have access to, but it should be equivalent to Azure OpenAI.

Why a custom connector?

First, the existing Logic App OpenAI connector only supports up to the GPT-3 API, because the API endpoint is different. So, while I can use the text-davinci-003 model with https://api.openai.com/v1/completions, that endpoint doesn’t work with the gpt-3.5-turbo and gpt-4 models. Those models connect to https://api.openai.com/v1/chat/completions.
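To make the difference concrete, here is a rough sketch of the two request shapes (the prompt text is just illustrative). The older completions endpoint takes a single prompt string:

```json
{
  "model": "text-davinci-003",
  "prompt": "Summarize this security incident...",
  "max_tokens": 200
}
```

The chat completions endpoint instead takes a messages array, which is what makes the role-based conversations described later in this post possible:

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    { "role": "user", "content": "Summarize this security incident..." }
  ],
  "max_tokens": 200
}
```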

Yes, I could probably just use the generic HTTP action, but that doesn’t allow me to store the API key safely. That’s especially important for partners that manage customer workspaces, where some of the playbooks exist in the customers’ subscriptions. So, that’s how I ended up with a new custom connector, which was surprisingly straightforward.

My new custom connector

To create the new custom connector, just search for ‘Logic apps custom connector’ in the Azure portal search bar, as shown below.

To create it, you just need to specify a subscription, resource group, name, and region. When you select the region, make sure to select the one where your Logic App (Sentinel playbook) will be created. Once it’s created, just click on Edit to specify the details.

Under General information, specify your host, which in this case is ‘api.openai.com’. You can also select a custom icon and add a description, as shown below.

On the Security tab, you will add an authentication type, which is API Key, as shown below.

In the Definition tab, you will create a new action, which I called Completion. To define the Request, click on Import from sample and then use the payload from the OpenAI API documentation for Create chat completion. Here, I didn’t just copy and paste the documentation sample, because I also wanted to include the max_tokens parameter, so I made sure to add it. The value I used came from my test below:

And it is a POST, so it looks like this. Then you can import it.

This is so the body ends up with that max_tokens parameter:
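If you want a sample to start from, the body I imported looked roughly like this; it’s essentially the sample from the Create chat completion documentation with the max_tokens parameter added:

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" }
  ],
  "max_tokens": 200
}
```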

Then I had to define the Response, which I again copied and pasted from my test above. It looks like this.

And when it imports, the response looks like this:
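If you need a sample for this step too, the chat completions response has this general shape (the IDs and token counts here are just placeholders):

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 9,
    "total_tokens": 22
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}
```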

And that’s it for the custom connector! Now, on to testing my Logic App, i.e., Sentinel playbook.

Using the custom connector

When creating your Sentinel playbook (Azure Logic App), you’ll find the connector under the Custom tab.

Then you’ll find the one and only action I created, called Completion.

And the reason I went with the custom connector is that it lets me store my API key safely. So, I created a connection.

So, if I ever need to update it, I can do so from the navigation shown below.

Notice that I can’t see the value; I can only update it.

To create the steps in my playbook, I need to add the parameters that I included in my definition, as shown below.

And I can specify the model, which in this test is gpt-3.5-turbo, as well as max_tokens, which I set to 200 for this test, but you can adjust as needed.

This is where I went down the deep end 😊. So, this new endpoint uses roles, and these are the definitions ChatGPT gave me for them:

  1. ‘system’: This role is used to set the context and provide high-level instructions at the beginning of the conversation. It helps set the behavior of the assistant.
  2. ‘user’: This role represents you, the person asking questions or providing instructions to the assistant.
  3. ‘assistant’: This role represents the AI model, providing responses and information based on the context and instructions given.
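In the request body, these roles show up as entries in the messages array. A minimal illustrative example (the content values here are made up):

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "What is Microsoft Sentinel?" },
    { "role": "assistant", "content": "Microsoft Sentinel is a cloud-native SIEM with built-in SOAR capabilities." },
    { "role": "user", "content": "How do its playbooks work?" }
  ]
}
```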

And the way you use them makes a difference in the behavior, because the assistant role is there so the model can remember previous responses in a continued conversation. So, you need to include those earlier responses as content for the assistant role. This is what maintains the context and ensures that the AI model can provide coherent answers to follow-up questions. A typical conversation will have one system role content, followed by at least one user role content, and maybe some assistant role content in between. However, this makes a lot more sense with an example, so let’s do that!

A playbook

This playbook gets a set of recommended steps from OpenAI to investigate a Sentinel incident. The steps are then added to the incident as tasks. You can certainly write data back into Sentinel as comments, update the severity, etc., as I’ve done in previous blog posts with the older API. However, I wanted to walk through the more complex scenario of tasks, because that’s where remembering the previous responses and keeping that context makes it that much more powerful.

This is what my playbook looks like all together. I’ll expand each part as I go, but I wanted to show the whole thing first, so you can see the difference between that Initial Completion and the other completions I do within each of the For each sections. Those For each sections are all replicas, just with different steps being added as tasks. Then at the end I update the incident with a tag.

The Initial Completion looks like this.

It has a system role content that just sets that initial context or behavior, which in my case is “You are a helpful Microsoft Sentinel assistant specialized in cybersecurity. Help the user with their security incident questions.” It then has one user role content, which in my case uses the Sentinel incident Description to ask for recommendations, which I specifically request to be 3 steps.
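Assembled, the body for the Initial Completion looks roughly like this; the exact user prompt wording is my paraphrase, and <Incident Description> stands in for the dynamic value from the Sentinel trigger:

```json
{
  "model": "gpt-3.5-turbo",
  "max_tokens": 200,
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful Microsoft Sentinel assistant specialized in cybersecurity. Help the user with their security incident questions."
    },
    {
      "role": "user",
      "content": "Give me 3 recommended steps to investigate this incident: <Incident Description>"
    }
  ]
}
```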

That next For each section is where it gets a bit complex.

Let’s start with the 4 messages (red above). The first message is a system role that again sets the context; it uses the same value as before, because that’s what I still need. It’s followed by a user content, which again matches the same value as the Initial Completion. Now, I add that assistant content; this is where I get to feed it the answer it provided during the Initial Completion, so now it knows what answer it gave me earlier. And then I close with a user role content, which is where I tell it that I want only step 1.
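Put together, the messages for that Step 1 completion look roughly like this, where <Initial Completion content> is the dynamic output from the first completion, and the user prompts are again my paraphrase:

```json
{
  "model": "gpt-3.5-turbo",
  "max_tokens": 200,
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful Microsoft Sentinel assistant specialized in cybersecurity. Help the user with their security incident questions."
    },
    {
      "role": "user",
      "content": "Give me 3 recommended steps to investigate this incident: <Incident Description>"
    },
    {
      "role": "assistant",
      "content": "<Initial Completion content>"
    },
    {
      "role": "user",
      "content": "Give me only step 1."
    }
  ]
}
```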

So, then I can add that task to the Sentinel incident, using the action from the Sentinel connector. And here I add a content variable (blue arrow), which is the response to the 4 messages above. How do I know which one? Because I get these two options, and I chose the one associated with Step 1 Completion OpenAI GPT3.5, which is the one I need.
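For reference, behind that dynamic content is a workflow expression along the lines of body('Step_1_Completion_OpenAI_GPT3.5')?['choices'][0]['message']['content'], where the action name depends on what you called the completion step; the content always comes out of the first element of the choices array in the response.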

The other option here is the one I used above (green arrow), which is the content output from the Initial Completion.

All 3 sections are exactly the same, except the other two ask for steps 2 and 3 from the response.

Running the playbook

This is what it looks like when it runs. I run the playbook at the incident level, because that’s the trigger I used.

I can see it completed.

And here are the tasks it updated:

A little more detail

I can also see the Activity log was updated, including the tag I added, which was the last step in my playbook.

And that’s it!

I hope this information is useful to anyone else testing the possibilities, as I am. I understand these services are not perfect, as they are still very new, but I see the immense potential they have to help us be that much more efficient. I also hope that by understanding how some of these endpoints and integrations work within OpenAI, we are able to see the possibilities and maybe even inspire creativity within the community. At the very least, I believe testing these integrations can get us all ready for upcoming services, such as Security Copilot, which I can’t wait to test!
