Featured

Federating AWS with Azure AD

TL;DR – For an enterprise-level authentication and authorization solution, federate AWS single-account access with Azure AD.

Security best practices dictate that AWS root accounts should be used only on rare occasions. All root accounts should have MFA enabled, have any access keys removed, and have monitoring set up to alert when the root account is used. For day-to-day work, users should access their AWS services with their IAM identities, and the best practice is to federate that access with a reliable identity provider (IdP), such as Azure AD.

There are two main options to federate authentication for AWS accounts. In this blog post I’ll walk through both options and explain why I prefer one over the other.

(1) AWS SSO

The first option is to federate AWS SSO. This is configured with the AWS SSO instance within the AWS Organization. As a reminder, AWS Organizations allows administrators to manage several AWS accounts. The single sign-on integration is done between AWS SSO and the Azure tenant. With this configuration, the users in Azure AD are assigned to the AWS SSO enterprise application, so they are not assigned to a specific AWS account. The assignment of users to the specific permission sets is done within AWS SSO. Those permission sets are what determine the user’s specific role(s) within the specific AWS accounts.

And this is the end-user experience when federating AWS SSO:

From MyApps, the user clicks on the AWS SSO enterprise application that was assigned to them in Azure AD. They are then presented with an AWS SSO menu of the accounts and roles that were assigned to them via AWS SSO, and they click one to access that account with that specific role.

Please keep in mind the following details when using this setup:

  • Users and groups have to exist locally in AWS SSO, so this solution will provision users to AWS SSO when they are assigned to the AWS SSO enterprise application.
  • In a similar manner, users are disabled (not deleted) when they are removed from the AWS SSO enterprise application.
  • Since the roles are assigned within AWS SSO, Azure AD is not aware of which roles are assigned to which users. This becomes important if you need specific Conditional Access policies or specific access reviews and/or access packages within Identity Governance.
  • Supports both SP- and IdP-initiated login, since the users exist locally in AWS SSO.
(2) AWS Single-Account Access

The second option is to federate the AWS Single Account. This is configured with each individual AWS account. The integration is done between the AWS account and the Azure tenant. Therefore, when the users in Azure AD are assigned to the AWS account enterprise application, they are assigned to a specific AWS account. Azure AD is fully aware of the specific account the users are assigned to as well as the specific AWS roles they are assigned to.

And this is the end-user experience when federating a single account:

From MyApps, the user clicks on the specific AWS single-account enterprise application that was assigned to them in Azure AD. They are then presented with the roles that were assigned to them for that account, and they select one to access the account with that specific role.

Please keep in mind the following details when using this setup:

  • Users and groups do NOT exist locally on AWS. That’s right, users and groups do not need to be provisioned or deprovisioned in AWS.
  • The provisioning configuration ensures roles created in AWS are synchronized to Azure AD, so they can be assigned to users.
  • Azure AD is fully aware of which roles are assigned to which users for specific accounts.
  • This configuration allows implementation of Conditional Access policies for the specific AWS accounts.
  • Only supports IdP-initiated login, since the users do not exist locally in AWS.
  • To ensure AWS CloudTrail data accuracy, add the source identity attribute to identify the user responsible for AWS actions performed while assuming IAM roles.
  • When CLI access is required, temporary credentials can be generated using the AssumeRoleWithSAML CLI command (see the sketch after this list). These credentials will last as long as the session is valid (default is 12 hours).
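Here is a minimal sketch of what that CLI flow can look like, assuming you have already captured the base64-encoded SAML response from the Azure AD sign-in; the role ARN, provider ARN, and file path below are placeholders for illustration only:

# Hypothetical ARNs and file path – replace with your own values
$roleArn = "arn:aws:iam::123456789012:role/AzureAD-MyAppRole"
$principalArn = "arn:aws:iam::123456789012:saml-provider/AzureAD"
$samlAssertion = Get-Content -Raw ".\saml-response.b64"   # base64-encoded SAML response

# Exchange the SAML assertion for temporary credentials (access key, secret key, session token)
aws sts assume-role-with-saml --role-arn $roleArn --principal-arn $principalArn --saml-assertion $samlAssertion --duration-seconds 3600

The output includes the temporary AccessKeyId, SecretAccessKey, and SessionToken, which you can then export as environment variables or add to a named profile for subsequent CLI calls.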
Drumroll, please…

By now you have probably guessed which option I lean toward. The AWS Single-Account Access configuration should be selected by enterprises that have specific compliance requirements that include identity governance, or by any organization that wants to implement a zero trust model, since “least privileged access” is at its foundation.

There are several benefits to this configuration.

  • Because users do not exist in AWS, there are no users or entitlements to remove in AWS when employees are terminated.
  • The configuration allows the tracking of the specific AWS roles within Azure AD, which means access packages* can be created and then automatically assigned or be made available to be requested by users with their appropriate approvals.
  • Those access packages can also have associated access reviews* to ensure access is removed when no longer needed.
  • Specific Conditional Access* policies can be created for the specific AWS accounts. For example, you may require access to production AWS accounts only from compliant devices, but maybe the rules are not as tight for development AWS accounts.
  • Having one central identity governance solution means organizations have the ability to meet compliance, auditing, and reporting requirements. This also means that the principles of least privilege and segregation of duties* can realistically be enforced.

Some organizations have tried to manage AWS accounts with AWS SSO and implement some level of identity governance using security groups. However, as those organizations grow, it becomes unnecessarily complex and challenging to meet compliance requirements, for the reasons described in detail in my previous post.

* More on those topics in follow-up posts.

My adventures with Sentinel and the OpenAI Logic App Connector

TL;DR – Sentinel automation playbooks using the OpenAI Logic App connector.

A few of my partners have been brainstorming ways to integrate OpenAI with Microsoft Sentinel, so I set out to do my own research (read: playing). I read a few blogs where people were using the OpenAI connector to update comments and even add incident tasks, which was impressive and inspiring! However, I wanted to go a little further. I had two main goals during my initial testing:

  1. I wanted to separate the tasks. The original testing I saw being done with Sentinel was inspiring and impressive, but the steps that OpenAI recommended were all bundled into one task. I wanted to separate them into distinct tasks because that’s one of the great qualities of the tasks feature: being able to see the progress as each task is completed.
  2. I wanted to update additional information within the incident, not just update comments and add tasks. Specifically, I wanted to adjust the severity depending on the information that OpenAI was able to provide. And I wanted to add a tag that noted this incident had been updated by OpenAI.

By the way, I am using two connectors: the Microsoft Sentinel and the OpenAI Logic App connectors. Please note, when using the OpenAI Logic App connector, the key needs to be entered as “Bearer YOUR_API_KEY“, as noted in the documentation; otherwise, you will get a 401 error.
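To make the header format concrete outside of the connector, here is a minimal sketch of calling the OpenAI completions API directly from PowerShell; the model name and prompt are placeholders, and the key point is that the Authorization value is the word Bearer followed by the API key:

$apiKey = "YOUR_API_KEY"
$headers = @{ "Authorization" = "Bearer $apiKey" }   # omit the 'Bearer ' prefix and you get the same 401
$body = @{ model = "text-davinci-003"; prompt = "Summarize this Sentinel incident description..."; max_tokens = 250 } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "https://api.openai.com/v1/completions" -Headers $headers -ContentType "application/json" -Body $body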

A quick warning before you continue reading, this blog post is about testing what is possible, which may not be perfectly accurate or ideal for production scenarios at this time.

Separating the tasks

My initial goal was to separate the tasks. Here is what I mean by that: when I followed the steps from the initial blogs I read, all of the recommended steps were being added as a single task, as shown below.

So, this is how I separated the tasks. First, my prompt tells OpenAI specifically that I am looking for 3 steps and that I *only* want to see the first step, and I’ll let it know when I am ready for the next steps. I am only using the Incident Description in the prompt, but you can probably add additional information, such as the title, entities, tactics, techniques, etc.
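The exact wording will vary, but a prompt along these lines captures the idea (sketched here as a PowerShell here-string, with the incident description injected as dynamic content by the Logic App):

$prompt = @"
You are assisting a SOC analyst. Based on the incident description below, recommend exactly 3 remediation steps.
Show me ONLY step 1 for now; I will tell you when I am ready for the next step.
Incident description: <Incident Description dynamic content>
"@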

So, once I get the output from that first step, I then feed it to the “Add task to incident“ action, and I name that step “Task no.1 from OpenAI“. Notice that I am passing it within a “For each” container; the reason is that I really just need the text within the response, otherwise I’ll get some of the other information that comes with the output, and it just doesn’t look pretty. 🙂

And then I add similar actions for the next steps. However, this time the prompt says ‘Ready for the 2nd step…’

And finally, the last step in my test.

Now, when I run my playbook, it ends up adding separate tasks that look like this.

Now, instead of having all the steps in one task, I get separate tasks that an analyst can check off as they complete them, which allows me to see the progress as the tasks are completed.

I still have some challenges because the behavior from the ChatGPT UI is not quite the same as when I use the API, but I was able to make it work using the prompts noted above. There’s probably more work to be done in that area.

Updating additional information

I covered the items on the left branch of my Logic App above. Now, let’s move to the right branch.

I am getting information from OpenAI on what the severity for my incident should be for this type of incident. Again, for my test this is based on Incident Description, but you can probably add additional information, such as the title, entities, tactics, techniques, etc.

And again, I am passing the output within a “For each” container, because I really just need the text within it; it wouldn’t work if I passed the raw choices output, because the format of the severity attribute would not be correct. Additionally, I am adding a tag to highlight that the severity has been set according to the information received from OpenAI. Finally, I am also updating the status to ‘Active‘, but that is really just to make my testing easier.

So, when I run the playbook, I can see in my activity log that the severity and status have been updated, and the tag was added, as shown below.

And I can see them updated as well.

Final thoughts

If you want this playbook to trigger automatically, don’t forget to assign your playbook’s managed identity the Microsoft Sentinel Responder Role. You can do this within Logic Apps, as shown below.
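If you prefer to do that role assignment from the command line instead of the portal, a quick sketch with Az PowerShell looks like this; the object ID, subscription, and resource group values are placeholders:

# Object ID of the playbook's system-assigned managed identity (placeholder value)
$playbookIdentity = "00000000-0000-0000-0000-000000000000"

# Grant Microsoft Sentinel Responder at the resource group that holds the Sentinel workspace
New-AzRoleAssignment -ObjectId $playbookIdentity -RoleDefinitionName "Microsoft Sentinel Responder" -Scope "/subscriptions/<subscription-id>/resourceGroups/<sentinel-resource-group>"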

And if you need your SOC analysts to do this for customers, check out my previous blog post on this topic.

I am just getting started testing this integration, but I can already see the potential it has to help SOC analysts. This specific scenario may not be the right playbook for common incidents, but it may provide a head start with uncommon incidents. As usual, I hope this blog post is useful and I hope it sparks some ideas about how to use these features for your own requirements.

MSSPs and Identity: Q&A

TL;DR – Follow-up to the previous blog post to answer common questions

After I published the last blog post on MSSPs and Identity, I received various questions, and I thought it would be useful to answer the most common ones via this follow-up post. Let’s jump right in!

What is the difference between delegating access for Sentinel and/or Defender for Cloud (MDC) vs delegating access for Microsoft 365 Defender?

As I shared in previous posts, you can delegate access to Sentinel and to MDC using Azure Lighthouse. For the list of Azure subscription-level roles, please reference the Azure built-in roles. But you cannot use Azure Lighthouse to delegate access for Microsoft 365 Defender. That’s because both Sentinel and MDC have permissions at the Azure subscription level, whereas Microsoft 365 Defender has permissions at the tenant level. In the diagram below, the Microsoft 365 Defender roles exist in the dark blue area, which is tenant level, while the Sentinel and MDC roles exist in the light blue area, which is subscription level.

For the list of tenant level roles, please reference the Azure AD built-in roles. As an MSSP, your customers can grant you access to their Microsoft 365 Defender tenants using either B2B or GDAP.

What is the difference between B2B and GDAP?

There are quite a few differences, but the one MSSPs probably care most about is that B2B collaboration users are represented in the customer’s directory, typically as guest users. Some partners have compliance requirements that do not allow that type of configuration. Luckily, in the case of GDAP, there is no guest user in the customer’s tenant. However, customers can still view sign-ins from partners by querying for ‘Cross tenant access type: Service provider‘, as shown below.

Also, GDAP is configured via Partner Center, so it’s exclusively for partners that are Cloud Solution Providers (CSPs). There is a great document that includes security best practices for CSPs, and I highly encourage partners to review those. I especially encourage partners to take advantage of the free Azure AD P2 subscription.

Personally, I use B2B for all my testing because I don’t have access to Partner Center. If, like me, you are working on a POC, B2B is a good option to simulate the behavior. While you’re at it, I recommend you try a new feature called cross-tenant synchronization, which is currently in public preview. It allows me to automatically provision users to my customer tenants (as guests) without having to invite them. With the configuration I am using, I just add them to a group, e.g. ‘SOC team’, and that triggers the provisioning to the target tenant (the customer tenant). Again, this is good for POCs; I would not recommend it for production scenarios.

What happens when my SOC team is working on an incident in Sentinel and there’s a link to the Microsoft 365 Defender alerts? How does it know which customer tenant the incident is associated with?

If you hover over the link in Sentinel, you’ll notice that it includes a tid (tenant id) value in the URL.

So, when you click on the link, you are redirected to the correct incident for the correct customer, as shown below. This will work as long as your user has been granted the necessary access on that tenant via B2B or GDAP.

I noticed the MDC documentation references a Security Admin, is that the same as the Security Administrator for Microsoft 365 Defender?

No, the Security Admin that grants permissions to MDC (and Defender for IoT) is at subscription level, whereas the Security Administrator that grants permissions to Microsoft 365 Defender is at tenant level.

Does Azure Lighthouse allow a customer to delegate access to two different partners (or tenants)?

Yes! I get this question because some partners have different tenants for users that manage customer resources for different reasons. For example, there may be an MSSP tenant that exists solely to manage security for customers and a different tenant that exists to manage non-security services. In that case, a single customer may need to delegate different levels of access to those different partner tenants. And, yes, it works as expected. It will just show as two different offers, as shown below:

Do I need a separate subscription for Sentinel?

A separate subscription is recommended for the Microsoft Sentinel workspace, and the main reason is permissions. If you think about it, this subscription will contain highly sensitive data, so you want to implement tight controls over which users can access and make changes to the resources in the security subscription.

Can a partner create a subscription for a customer?

As a CSP, partners can create a subscription for their customers. Keep in mind, this subscription will still need to be associated with the customer’s tenant. This is very important. The billing of the subscription can be via the partner, which is possible for CSPs. However, the tenant associated with that subscription still needs to be the customer’s tenant. This is important because certain features, like ingestion of Microsoft 365 Defender data, UEBA, etc., will need to be configured for that customer tenant. As you know, the data is always ingested into the customer’s subscription.

A customer can have any number of subscriptions associated with their main tenant and not all of them need to be billed in the same manner. That means, you can have a customer with 25 subscriptions associated with their tenant and the customer can be billed directly for 24 of those subscriptions, and one can be billed via the partner, as a CSP.

Is Microsoft 365 Lighthouse an option for MSSPs to gain access to Microsoft 365 Defender?

Microsoft 365 Lighthouse is a solution specifically for managing small- and medium-sized business (SMB) customers. CSPs will need to configure GDAP prior to onboarding customers to Microsoft 365 Lighthouse. It allows CSPs to manage some features within Microsoft 365 Defender and take certain actions. Please check the list of requirements, including the limit on the size of the tenant, which at the time of writing this blog is 2500 licensed users.

That’s it for now!

I hope these answers are useful. Keep those questions coming! As always, if I don’t know the answer, I’ll go find out and then we’ll both learn. 🙂

MSSPs and Identity

TL;DR – Identity configuration recommendations for MSSPs.

I’ve had this conversation with most of my partners at one point or another, which is probably because most of my partners are MSSPs. I’ve discussed this also during my Sentinel Deep Dive sessions for MSSPs, including the latest. It just keeps coming up and so I figured it would be easier to just publish this blog post.

I’ll warn you, dear reader, this blog post is my opinion, based on my personal experience, and I am happy to share my reasons.

The Challenge

MSSPs, or Managed Security Service Providers, have the responsibility to manage security services for their customers. Whether it is accessing customers’ Sentinel workspaces or MDC via Lighthouse, as I’ve described here and here, or managing customers’ Microsoft 365 Defender tenants via GDAP or B2B, the MSSP identities have to exist somewhere. The challenge is deciding if your MSSP identities will exist within your corporate tenant or if you will create a separate MSSP tenant to manage your customers.

The Options

The Microsoft Sentinel Technical Playbook for MSSPs includes a section on “Azure AD tenant topologies” where it goes over the pros and cons of the Single Identity Model vs the Multiple Identities Model. By the way, I encourage all MSSPs to read the entire whitepaper, since it’s a great resource.

Single Identity Model

Single Identity Model is where the MSSP corporate identity is used to access customers’ security services.

I can see this model working for POCs, where you are just testing the configuration, but not for accessing real customers. Yes, it is supported, but I would not recommend this as a final configuration goal. The reason is that attacks on your corporate identities will put at risk not only your corporate data, but also your customers’ data. And vice versa: attacks on the customer side may also spread to your corporate resources.

Multiple Identities Model

Multiple Identities Model is where a new/separate tenant is deployed to manage the identities that have access to customers’ security services.

This model reduces the blast radius associated with any credential, device, or hybrid infrastructure compromise due to the common risks associated with the corporate tenant where employee accounts are used for day-to-day activities, including Microsoft 365 services. Think Zero Trust, specifically the assume breach principle. As a reminder, identity isolation is also one of the published CSP security best practices.

This is the ideal model to protect both your corporate resources as well as your customers’ resources. And if you are supporting or planning to support government organizations or organizations that need to meet government compliance requirements, then this is the model you will need to follow. Keep reading for more information on this. And, yes, it requires more work, but it is possible to configure and to automate as much as possible, including the JML (Joiners-Movers-Leavers) process. For a full overview of JML, please see my posts starting with part 1 here.

In-between?

Potentially there is also a third option: you could have separate identities within the same corporate tenant. This is not an option discussed in the whitepaper, but it is still an option to be considered. This is similar to what is explained in the Microsoft documentation about Protecting Microsoft 365 from on-premises attacks. The scenario described in the linked documentation is specifically targeted at using cloud-only accounts for Azure AD and Microsoft 365 privileged roles, which is exactly what the MSSP SOC analysts will have assigned to them on the customers’ tenants.

I see this option as a bare minimum for an MSSP, maybe one that doesn’t have the resources to manage a separate tenant and that has no plans to support customers that have to meet government compliance requirements. You will still need additional configuration and a process to manage those accounts as user entitlements, though.

Compliance Requirements

Many MSSPs hold authorizations that require them to meet compliance requirements, among them government compliance requirements such as FedRAMP. For those organizations to earn, and continue to hold, those authorizations, they need to have separate identities within the enclave. That means not your regular corporate identity.

Here is a quote from my colleague, Rick Kotlarz, to expand on this topic:

In respect to U.S. Government / FedRAMP Information Systems, networks that have varying levels of security classification/impact levels always require separate identities. Those identities must also then be managed by an identity and access management system which is categorized at the same security classification/impact level.

Complete isolation of identities is not always required. One scenario where this isn’t the case, is when one or more network enclaves of the same security classification/impact level are federated or have a trust. Another scenario is when two or more network enclaves of varying security classification/impact levels implement a data diode. These are sometimes referred to as Cross Domain Solutions, that provide a bridge with limited data capabilities between these two networks. Typically, data is only permitted to travel from a lower security classification/impact level upward and is not bidirectional.

Because both scenarios require multiple levels of security leadership to accept the underlying risk of trusting an external identity provider operating outside of their purview, existence of these two scenarios are few and far between.

Furthermore, while some systems operating within a FedRAMP authorized environment may be Internet connected, they often are only authorized to support inbound data pulled from the Internet via highly restricted sources (e.g., Windows Updates).

Impact level reference: https://www.fedramp.gov/understanding-baselines-and-impact-levels/

I’ve worked with large MSSPs that followed the same entitlements management process for their government and their commercial customer access because of the risk isolation I mentioned above. However, there was one main difference: the automation of the JML process. For commercial, they could just trigger a workflow to create/modify/delete the account on the MSSP tenant. For the government instance, they used a queue-based system, where the source would create a message in a mid-point area that would then be picked up by the gov-side process later. Basically, it would be a pull from the gov side, as opposed to a push to the gov side.

How?

Once MSSPs (and their auditors) come to the conclusion that the best, and sometimes only, option is the multiple identities model, then the “How?” questions begin. I am discussing below the most common questions I receive. I am sharing what I normally share with my partners, but I would love to hear other ideas.

How do we make sure these separate tenant accounts are removed once the employees are terminated?

This is by far the no.1 question I get, and I completely agree, it should be the no.1 concern. This is where I go back to the JML (Joiners-Movers-Leavers) process. These external tenant accounts need to be tracked throughout the employee lifecycle, just like any other entitlement.

This is where Entra Identity Governance solutions, such as Lifecycle Workflows, can make this an automated process. There are a few existing templates, such as “Offboard an employee” or “Pre-Offboarding of an employee“, which trigger based on the employeeLeaveDateTime attribute and which you can configure with rules based on specific attributes, such as department (i.e. ‘SOC’), or jobTitle, or a combination of attributes. This workflow will then execute a set of tasks, such as removing the user from groups and even running a custom task extension, which is basically a Logic App. You can configure that Logic App to take the steps to disable or remove that user from groups or access packages, etc. on the MSSP tenant, or any other steps you need to take to clean up that account.
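As an illustration of the kind of cleanup step such a custom task extension could perform, here is a minimal sketch using the Microsoft Graph PowerShell SDK to disable a matching account in the MSSP tenant; the tenant ID, UPN, and naming convention are hypothetical, and in a real Logic App this would run under the extension’s own identity rather than an interactive sign-in:

# Connect to the MSSP tenant (placeholder tenant ID) with permission to manage users
Connect-MgGraph -TenantId "<mssp-tenant-id>" -Scopes "User.ReadWrite.All"

# Find the analyst's MSSP-tenant account by a hypothetical naming convention
$socAccount = Get-MgUser -Filter "userPrincipalName eq 'jdoe.soc@mssp.example.com'"

# Disable the account as part of the leaver workflow; group and access package removal would follow
Update-MgUser -UserId $socAccount.Id -AccountEnabled:$false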

How do we onboard and audit users on the separate MSSP tenant?

You can take similar actions for provisioning, by using tools such as Lifecycle Workflows joiner templates, as well as Access Packages within Entra IGA Entitlement Management. Access Packages are groups of resources that are packaged together and can be assigned to or requested by users. You can create these within the MSSP tenant.

This just makes it easier to assign only those permissions the analysts will need within the tenant. You can also include in this package the MSSP tenant group membership they will need to access those customers’ resources as configured using Azure Lighthouse. Access packages also allow you to configure an expiration of those permissions, which can be extended upon request.

You can even use PIM, Privileged Identity Management, for managing those privileged groups. Additionally, you can also include access reviews for either the group membership or for the access packages, so you can consistently ensure only the right people have the right access and only for the amount of time they need it. Yes, our goal is still Zero Trust, and specifically here the principle of least privilege.

One thing to keep in mind when provisioning users into an MSSP tenant on Azure Government is the compliance requirement to ensure the enclave remains isolated. Please see my note above on Compliance Requirements.

How will device-based Conditional Access policies be implemented on the MSSP tenant?

The ultimate goal would be to have PAWs, Privileged Access Workstations, and those can be physical or virtual. I’ve seen organizations use jump-boxes (some call them bastions) in the past for this purpose. I am not an expert on cloud PCs, but it may be another option to explore given the ability to configure Conditional Access policies on those. You could then use Conditional Access policies with Identity Protection to protect those users and sign-ins in the same way you do for your corporate resources.

You can also use those conditional access policies to ensure MSSP SOC analysts only authenticate using phishing resistant methods, which in some cases may be a compliance requirement.

Summary

Configuring, maintaining, and monitoring a separate tenant means additional work, but it is the right thing to do in order to protect your customers’ resources as well as your corporate resources. Even if you don’t have compliance requirements that force you to select a more secure configuration, I would still highly encourage you to consider it. I know you will not regret it!

Update: There is a follow-up post to answer some questions that came up after this blog was published. Please reference MSSPs and Identity: Q&A

Sentinel Repositories

TL;DR – A quick introduction to Sentinel Repositories.

There’s a Puerto Rican saying ‘Nadie aprende por cabeza ajena‘, which loosely translates to ‘Nobody learns from someone else’s head (read: brain)‘. I am writing this blog post because I really hope you give this feature a try, so you can see how easy and useful it can be.

I recently attended a call where quite a high number of attendees had not yet tested the Microsoft Sentinel Repositories feature. Anyone that has attended any of my Sentinel Deep Skilling sessions knows that I am a huge fan of this feature. In fact, I show it live at every session. You can catch one of those sessions here. If you don’t want to watch all two wonderful hours of Sentinel fun, and you just want to see the repositories feature, you can skip to 1:13:45.

Configuration

It’s a pretty straightforward concept. As an MSSP or as a team that has a centralized management for several Sentinel workspaces, you can distribute content from a centralized repository. Currently Repositories supports Azure DevOps or GitHub repositories.

All you need to do is create a connection to your repository from each workspace that you want to connect. If you need a sample repository to connect to, please use the Sample Content Repository provided.

You just need to be able to authenticate to that repository in order to create a connection. So, yes, it can be a private repository.

The feature currently supports the six artifacts listed below: Analytic rules, Automation rules, Hunting queries, Parsers, Playbooks, and Workbooks.

There are also customization options. For example, the default configuration will only push new or modified artifacts since the last commit, but the trigger can be modified. You can also modify the specific folder that is synchronized.

Recently, a new feature was added where you can now use configuration files to prioritize some content and maybe exclude other content. This can be very useful for MSSPs that want to configure certain content for all customers, but specific content to specific customers. If you want to read more about this topic, I highly recommend you review the Microsoft Sentinel Technical Playbook for MSSPs, which you can find here: https://aka.ms/mssentinelmssp.

How it works

In this example I have a few analytic rules that are already synchronized to my workspace. I can tell they were synchronized by the Repositories feature because the ‘Source name‘ is ‘Repositories‘.

I want to push a new analytic rule to all the workspaces connected to this repository, so I commit the new file to this repository.

And I can immediately see that a new action has been triggered, as shown below.

I can further click on that workflow run to see how it progresses.

When it’s finally completed, I see the complete job message below. If there are any errors, I can still go and expand any section to get the details from that run.

And when I check back in my workspace(s), I can see the new analytic rule was added as expected.

One last thing: if you are thinking ‘how can I export these artifacts?‘, here is a script created by one of the Sentinel PMs, which has been very useful for a few of the partners I am working with.

I encourage you to give this feature a try, especially if you are a partner that is already managing customers or looking to get started. Even customers that are centrally managing a variety of workspaces from various departments within an organization can find this tool to be highly beneficial. Have fun!

Safely integrate playbooks with custom APIs when there is no pre-built Logic App connector.

TL;DR – How to create a custom logic app connector, so you can store your API key securely and use it within your playbooks, when there is no pre-built connector.

I’ve had this discussion with at least three different partners recently, so I am publishing this blog to share with anyone else that may have the same question, since it seems to be a popular one.

As you know, Logic Apps are used for automation within various Microsoft security services, including Sentinel (playbooks), Defender for Cloud (workflow automation), and others. Most of the time there is an existing connector, but sometimes SOC teams need to connect to custom-developed APIs, and that’s where this scenario comes in. In those cases, we still need to store those API keys in a secure manner. That’s especially the case if you are managing playbooks within your customers’ workspaces (MSSP architecture) and you need to ensure any connection information is stored in a secure manner.

Create the custom connector

Note: “Custom connectors are RESTful APIs that can be hosted anywhere, as long as a well-documented Swagger is available and conforms to OpenAPI standards. A custom connector can also be created for a SOAP API using the WSDL that describes the SOAP service.”

To create the custom connector, go to the Azure portal and search for “Logic apps custom connector“.

You will just need to enter the default information, name, region, resource group, etc.

Note: Make sure your new Logic App Custom Connector exists in the same region your playbooks exist, otherwise you won’t be able to find it.

Once that’s done you will need to ‘Edit‘ your connector to specify the import mode, where you can use an OpenAPI definition, or a Postman collection, as shown below. You can also modify the icon for your connector. The host and base URL will be filled in automatically based on your imported info.

Here is where we’ll choose the authentication type, which in my case is an API key.

And then I can select the various Actions and Triggers I will need for my connector, as shown below.

For additional information and options in this section, please reference the Logic Apps documentation.

Using the connector within a playbook

Now that I have a connector and at least one action, I can create a new Logic App where I can connect to my custom API. I’ll find it under the ‘Custom‘ tab when choosing an operation, as shown below.

Once I select the action, I’ll be presented with the menu to enter my new connection name and the API key, as shown below.

Once I save my creatively named connection, ConnectABC, my API key will be stored securely.

I can also configure any additional parameters that I need for the connection, as shown below.

The connection I created is now stored under the ‘API connections‘ blade within my logic app menu, as shown below.

And it’s not a value that I can see, which is why it’s secure! However, I can update it if I need to do so later on, as shown below:

I am still learning about Logic Apps and the incredible flexibility they offer as part of our SOAR features within the Microsoft security services. I hope you find this information useful and continue to explore with me!

Review any “Don’t know” reviewees prior to the end of an access review

TL;DR – Steps to create access reviews that meet strict compliance requirements by allowing auditors to review any “Don’t know” reviewees prior to the end of a review.

This is a short blog post to document the steps to create an access review that ensures strict compliance requirements around attestation are met. This is a scenario that came up last week while I was delivering one of the Rockstar training sessions and I can’t believe I haven’t documented this yet, so here it is!

The challenge

Many auditors have a requirement to ensure reviewers of access to groups, applications, etc. provide definitive answers. The challenge here is that Access Reviews within Entra Identity Governance provide an option for reviewers to choose “Approve“, “Deny“, or “Don’t know”. So, you can see how “Don’t know” does not meet that definitive answer requirement.

The solution

Fortunately, Access Reviews also provides a feature that allows us to configure multi-stage access reviews, with up to three stages. And within that feature, there is an option to move to the next stage only the reviewees marked as “Don’t know“.

The results

During the first stage, the reviewer gets to review all the members of the group, as shown below. And as you can see, the reviewer, Adele, has approved 5 of them, denied 1, and selected “Don’t know” for 2 of them.

We can see this from the admin portal as well, as shown below.

So, when the first stage is completed and the next stage approver gets the list to review, this is what they see.

They can see that in the previous stage Adele selected “Don’t know” for 2 of them, which are the ones this approver gets to review. So, this is where the auditor would step in to make that final decision, ensuring no reviewee ends the review neither approved nor denied.

At the end of both stages, you end up with all users either being denied or approved, and none of them have “Don’t know” as the outcome.

I hope this post is useful and clarifies the options available to meet this specific requirement to provide definitive answers during access reviews.

Defender for IoT: OT sensor POC

TL;DR – Steps to configure a virtual OT sensor to use for a Defender for IoT POC.

I promised one of my partners that I would document the steps I followed to build my Defender for IoT OT sensor demo instance. I had to build a virtual appliance because (A) I don’t have access to any of the physical appliances and (B) even if I did, I don’t think any of the nearby facilities would allow me to just plug in to their network. So, that’s what I am doing in this post!

My setup and documentation

I am using an Azure VM and within that I create a local Hyper-V VM, because the sensor is an appliance, so it can’t just be an Azure VM on its own. The instructions I followed are really a compilation of steps. I started with the official documentation, which was very helpful, but I also combined it with some information from this hands-on lab. The lab uses the older version 10, but it still includes some screenshots and information that I thought were helpful. I also had help from my colleague, Nick, who brainstormed with me a few issues we hit along the way. Thanks!

Hyper-V Pre-Configuration

The instructions to pre-configure the VM are pretty much straight from the hands-on lab. I am using a 4×8 (4 vCPUs, 8 GB of RAM) because that works well with the Azure VM size I chose. And you’ll see this will map to the Ruggedized sensor option later on. Also, please keep in mind that I am not connecting my sensor to any other management console for this POC.

In my case, my Windows 11 host Ethernet adapter is assigned an IP of 10.7.0.4, therefore I used 192.168.0.0/24 as the network scope of the “NATSwitch”. And I created the two switches as shown below:

New-VMSwitch -SwitchName "NATSwitch" -SwitchType Internal
New-VMSwitch -SwitchName "MySwitch" -SwitchType Internal

I stored the network adapter information in a variable and then assigned an IP address to the NATSwitch (in my case 192.168.0.1).

$s1 = Get-NetAdapter -name "vEthernet (NATSwitch)"

New-NetIPAddress -IPAddress 192.168.0.1 -PrefixLength 24 -InterfaceIndex $s1.ifIndex

Additionally, you will need to set the IP and DNS on that adapter which can be done manually or by using the following:

Set-NetIPAddress -InterfaceIndex $s1.ifIndex -IPAddress 192.168.0.1 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceIndex $s1.ifIndex -ServerAddresses 8.8.8.8

And I created the new NAT network:

New-NetNat -Name MyNATnetwork -InternalIPInterfaceAddressPrefix 192.168.0.0/24
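
Optionally, before moving on, you can verify that the switches, the gateway IP, and the NAT network are all in place with a quick check like this:

# Confirm both internal switches exist
Get-VMSwitch -Name "NATSwitch","MySwitch"

# Confirm the 192.168.0.1 gateway address is bound to the NATSwitch adapter
Get-NetIPAddress -InterfaceIndex $s1.ifIndex -AddressFamily IPv4

# Confirm the NAT network covering 192.168.0.0/24
Get-NetNat -Name "MyNATnetwork"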

The final configuration looks like this:

Create new Hyper-V VM

The steps to create the Hyper-V VM are pretty much straight from the documentation. The Hyper-V VM is then created using the image that you download from the portal, as shown below:

And specifically, step 1 of the onboarding process, as shown below.

You will use that image file when you create your Hyper-V VM, by selecting Install an operating system from a bootable CD/DVD-ROM within Installation Options and then selecting Image file (.iso), pointing to the directory on the Azure VM where you copied the file.

The settings on your Hyper-V VM should end up looking something like this. Please note, I have an extra switch, but you only need two.

Now you can connect to the VM, so you can complete the next steps.

The first screen you will see gives you the option to select a language; after that you will see a selection of installation options. As mentioned earlier, I chose to go with the Ruggedized option based on the size of my POC.

The installation will take a few minutes and then you’ll see the first options to configure your sensor. In my case, for the monitor interface, I chose eth1, based on the configuration of my switches.

And I chose eth0 for the management interface.

A few notes on the other options that followed:

  • I skipped ERSPAN; this is just for Cisco devices.
  • I chose 192.168.0.50 for my sensor IP, based on my network configuration above.
  • I used the default backup directory.
  • I used 255.255.255.0 for subnet mask
  • I used 8.8.8.8 for DNS
  • I used 192.168.0.1 for gateway IP
  • No proxy

And after another few minutes you will see the passwords. Take a picture!! You will not see this again.

Final steps

In order to configure your sensor, you will need to download the registration file that will be generated on step 3 from the portal when you click on “Register“.

Back in the Azure VM, I can now open a new browser session and connect to https://192.168.0.50 (my sensor IP). Please note the first time you connect you can configure a certificate to be used for this connection, or you can just use the default one, which is what I am using and why it shows as Not secure in the browser. Also, the first time you connect you will be required to upload that file you downloaded during the registration process (step 3).

And once that is done you will be able to login with the cyberx user and password from the screenshot you took when the installation was complete.

If you need to connect to the sensor to run CLI commands, you can connect using the support user and password from the screenshot you took when the installation was completed. You can run commands, such as the ones shown below.

Also, if you see this error when connecting to the host, don’t be concerned. I have this error showing up every time I restart the VM and so far, everything is working as expected.

adiot0: can not register netvsc VF receive handler (err = -16)

A few optional steps

This is my demo instance, so I really don’t have anything connected to monitor. Therefore, I need to gather my data from PCAP files. In order to do that, I first need to make a small update to be able to play those files.

Navigate to System settings, and then select Advanced Configurations:

Select Pcaps from the drop down and then update the settings highlighted below.

This will enable the Play PCAP option shown below.

And now you can begin exploring your findings. You can see the Device map, as shown below.

Alerts

And even generate a detailed Risk Assessment report to provide to your customers.

I hope you take the time to explore this very powerful service. In a future blog post I’ll cover the integration with Microsoft Sentinel and the better together story of how these two services come together to bridge the gap between OT and IT.

Azure Lighthouse and Sentinel: Assigning access to managed identities in the customer tenant

TL;DR – MSSP – To trigger playbooks in customer tenants, you sometimes need to grant the managed identities of those playbooks permissions to execute actions within the customer tenant. This post covers the steps to configure the access required to assign those roles, as well as the steps to assign the roles themselves.

After I wrote the previous blog post, “Delegate access using Azure Lighthouse for a Sentinel POC”, I received many questions from current and future partners alike. I updated the previous post with additional clarification in some areas, but other questions were a bit more complex. In this post I want to address one of those questions, which the existing documentation covers without specific examples, so it can be a challenge. Hopefully, this post helps other partners trying to configure the same access.

First, the template modifications

When you configure the initial Azure Lighthouse template to delegate access from the customer workspace to the MSSP, there are some modifications required in order to allow the MSSP users and groups to be able to assign the access. You have to modify the template manually, and you have to specifically add the “User Access Administrator” role (18d7d88d-d35e-4fb5-a5c3-7773c20a72d9) and the “delegatedRoleDefinitionIds” parameter, which is required to specify the roles that the MSSP user or group will be able to assign to the managed identities within the customer workspace.

If you are worried about the very powerful “User Access Administrator” role being assigned to the MSSP, keep in mind that it’s a different version of that role. As the documentation states, it’s “only for the limited purpose of assigning roles to a managed identity in the customer tenant. No other permissions typically granted by this role will apply”.
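As a rough sketch (not a complete template), the authorization entry ends up looking something like the JSON below, shown here as a PowerShell here-string; the group object ID is a placeholder, the User Access Administrator GUID is the one mentioned above, the Microsoft Sentinel Reader GUID (8d289c81-5878-46d4-8554-54e1e3d8b5cb) is referenced later in this post, and the remaining entries should be replaced with the built-in role definition IDs you want to allow:

$authorizationEntry = @"
{
  "principalId": "<object id of the LH Sentinel Contributors group>",
  "principalIdDisplayName": "LH Sentinel Contributors",
  "roleDefinitionId": "18d7d88d-d35e-4fb5-a5c3-7773c20a72d9",
  "delegatedRoleDefinitionIds": [
    "8d289c81-5878-46d4-8554-54e1e3d8b5cb",
    "<Log Analytics Contributor role definition GUID>",
    "<Contributor role definition GUID>",
    "<Reader role definition GUID>"
  ]
}
"@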

Once that template is uploaded, your customer’s delegation will look like this:

As you can see above, my “LH Sentinel Contributors” group now “Can assign access to” 4 different roles: “Microsoft Sentinel Reader“, “Log Analytics Contributor“, “Contributor“, and “Reader“.

Why do I need these permissions?

Why would I even need these permissions? In my case, I have a playbook that I need to trigger on my customer’s workspace, the “SNOW-CreateAndUpdateIncident” playbook. And that playbook needs some permissions to be able to execute correctly.

If I was executing it locally on my MSSP tenant, then I would just assign the roles directly by going to the Identity blade within the specific logic app.

But if I try to do this from the Sentinel portal for one of my customer’s workspaces, I get an error like the one below.

So, I need a way to assign these roles in my customer’s workspace to trigger those playbooks locally from my customer’s workspace.

Now, I can assign roles

Per the documentation, this role assignment needs to be done using the API, and it needs to be done using a specific version of the API: 2019-04-01-preview or later.

You will need the following values to populate the parameters in the body:

delegatedManagedIdentityResourceId

Note: These are both in the customer’s workspace, which you will have access to, as long as you have Logic App Contributor role included in your Azure Lighthouse template.

principalId

You will use those values to populate the parameters within the body of the API call. The following parameters are required:

  • roleDefinitionId” – This is the role definition in the customer’s workspace, so make sure you use the customer subscription id value.
  • principalType” – In our case this will be “ServicePrincipal“, since it’s a managed identity.
  • delegatedManagedIdentityResourceId” – This is the Resource Id of the delegated managed identity resource in the customer’s tenant. This is also the reason we need to use API version 2019-04-01-preview or later. You can just copy and paste from the playbook “Properties” tab, as shown above.
  • principalId” – This is the object id of the managed identity in the customer’s workspace, as shown above.

The file will look like this:

Note: 8d289c81-5878-46d4-8554-54e1e3d8b5cb is the value for Microsoft Sentinel Reader, which is the role I am trying to assign to this managed identity.

As noted in the API documentation, to assign a role, you will need a “GUID tool to generate a unique identifier that will be used for the role assignment identifier“, which is part of the value used in the command below to call the API. The value you see below in the API call, 263f98c1-334b-40c1-adec-0b1527560da1, is the value I generated with a GUID tool. You can use any; I used this one.
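For reference, here is a hedged sketch of what that call can look like with Invoke-AzRestMethod while signed in to the MSSP tenant; the subscription, resource group, and body file name are placeholders, and the role assignment name is the GUID generated above:

# Customer scope where the role is being assigned (placeholders)
$scope = "/subscriptions/<customer-subscription-id>/resourceGroups/<customer-resource-group>"

# Body file containing the parameters described above
$body = Get-Content -Raw ".\roleassignment.json"

# PUT the role assignment using API version 2019-04-01-preview, with the generated GUID as the assignment name
Invoke-AzRestMethod -Method PUT -Path "$scope/providers/Microsoft.Authorization/roleAssignments/263f98c1-334b-40c1-adec-0b1527560da1?api-version=2019-04-01-preview" -Payload $body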

And once you run the command, you’ll see output similar to the one shown below, which means the role was successfully assigned.

And that’s it! Now the managed identity associated with the Sentinel playbook (logic app) in the customer’s workspace has the required permissions to be able to execute within the customer’s workspace.

Delegate access using Azure Lighthouse for a Sentinel POC

TL;DR – Steps to delegate access to users on a different tenant for a Sentinel POC using Azure Lighthouse.

I include this live demo in every webinar I deliver about Microsoft Sentinel, but today a partner asked me for documented step-by-step instructions, which I wasn’t able to find, so I am creating this post.

Most MSSPs need to create a POC to test Microsoft Sentinel, where they configure one workspace as the MSSP and a few other workspaces as customers. To be clear, the documentation is great about the correct way to do this in a real scenario, where partners need access to their customers’ workspaces, but for a POC a partner doesn’t need to publish a managed service offer; they just need to do this using an ARM template.

From the MSSP tenant

Navigate to “My Customers” and click on “Create ARM Template” as shown below:

Name your offer and choose if you want your customers to delegate access at “Subscription” level or “Resource group” level, then “Add authorization“.

You can choose to delegate access for a User, Group, or Service principal. I usually recommend you use Group over User, because the MSSP team members will change with time.

You can choose to assign the role “Permanent” or “Eligible”. If you’ve worked with PIM (Privileged Identity Management) previously, then you are familiar with the process. The eligible option will require activation before the role can be used. For eligible, you can also choose a specific maximum duration, and whether multifactor authentication and/or approval is required to activate.

In order to see your customers in the “Customers” blade later, you will need to include the “Reader” role, as shown below. Click “View template” to be able to download it.

Download the ARM template file.

From the customer tenant

Before you import the template, ensure you have the correct permissions on the subscription. You can follow the steps here to ensure you can deploy the ARM template.
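If you prefer the command line to the portal upload described next, the same template can be deployed at subscription scope with Az PowerShell while signed in to the customer tenant; the deployment name, location, and file name below are placeholders:

# Deploy the Lighthouse delegation template at subscription scope in the customer tenant
New-AzSubscriptionDeployment -Name "LighthouseDelegation" -Location "eastus" -TemplateFile ".\delegatedResourceManagement.json"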

Click “Add offer” and select “Add via template”, as shown below.

Drop the template file you created or upload, as shown below.

Once the file is uploaded, you’ll be able to see it, as shown below:

You can also see the “Role assignments” that were delegated to the MSSP tenant, as shown below.

And if the customer tenant needs to delegate access to new subscriptions, they can do so by clicking on the ‘+’ button, as shown below.

And selecting any other subscriptions or resource groups that need to be delegated.

Back to the MSSP tenant

Now you can see your new customer from the “Customers” blade, as shown below.

Since the delegation included Sentinel Contributors, now you can manage the customer tenant workspace from the Microsoft Sentinel navigation within the MSSP tenant, as shown below.

Bonus: Since you have reader access, you can also see the subscription from Defender for Cloud, Environment settings. You can always delegate additional roles, if you need to manage MDC for this tenant.

Quick note on delegations at Resource Group level. I’ve seen instances with Resource Group delegations, where the ability to update the global filter takes a little while to allow you to select the newly added tenant and subscription that is associated with the resource group. However, after waiting for those updates to kick in, you should be able to modify the filter by selecting the filter from the blue menu bar, as shown below, and updating to include all directories and all subscriptions.

In my opinion, a POC is the best way to experience the wide variety of features within Microsoft Sentinel. You can even use the free trial that is available for 31 days. Another great resource that I always recommend for teams starting to get familiar with Microsoft Sentinel is the Sentinel Training Lab, which is available from the Content Hub blade in Sentinel. Finally, for MSSPs, http://aka.ms/azsentinelmssp is an invaluable resource to get a good overview of the recommended architecture.

A few of my favorite MDCA features

TL;DR – Just a few of my favorite MDCA features, which you may already be paying for.

I previously mentioned my strong belief that Sentinel and MDC are best buddies. Similarly, I firmly believe MDCA (Microsoft Defender for Cloud Apps) is definitely a member of the MDE squad. If you are using MDE (Microsoft Defender for Endpoint) and you haven’t tested MDCA, you may be surprised how well they work together and guess what? You may already be paying for it!

MDCA, which was previously known as MCAS (Microsoft Cloud App Security), is a CASB (I am going for a record number of acronyms in this post!), which stands for Cloud Access Security Broker. In an oversimplified way, the job of a CASB is to enforce security policies. I think MDCA does that and more, and quite honestly, I am continuously discovering new features. In this post I am going over a quick list of some of my favorite features.

Cloud Discovery / Shadow IT

MDCA can discover applications (31,000 on the last count) through various means:

  • As part of the MDE squad, it can integrate with MDE to get data from managed Windows devices, as shown above. This integration also gives you the power to block apps as well. More on that a little later.
  • Log Collector over syslog or FTP and various Docker images are available.
  • Can also natively integrate with leading SWGs (Secure Web Gateways) and proxies, such as Zscaler, iboss, Open Systems, Menlo Security, Corrata, etc. (no need for Log Collectors)
  • You will also see data from the Cloud App Security Proxy, so that means even if it’s not from a managed Windows device, you will get some data from the other devices as well, as shown below.

And I can also create policies that will alert for Shadow IT, such as the ones shown below:

Block Apps

There are a few ways apps can be blocked as well. One of those is through the integration with MDE. I configured a few applications as unsanctioned for testing purposes, as shown below.

So, when I try to access one of those applications from a managed Windows device, I receive the following error:

And it’s not just Edge! See the error message below from Chrome on the same device:

I can also “Generate block script” for various types of appliances, as shown below:

Here is an example based on the applications I’ve set as unsanctioned:

Ban OAuth Apps

Solorigate anyone? MDCA can help you monitor OAuth apps in various ways, as shown below, where you can discover and either ‘approve‘ or ‘ban‘ risky apps.

Once you mark an app as ‘banned’, the Enterprise Application is updated with “Enabled for users to sign-in?” set to “No”. I also noticed that the application disappeared from “MyApps” for those users that were previously assigned to the application.

You can also create policies that will automatically revoke risky apps, as shown below.

Conditional Access App Control

So, technically this is done with another member of the squad, Conditional Access. The same Conditional Access we know and love that controls initial access is also capable of controlling access within a session when it works with MDCA.

I have a few very powerful policies, as shown below.

I won’t cover the first one, “Confidential Project Falcon Sensitivity Label Policy“, because I dedicated a full blog post to that one; you can find it here: Restrict downloads for sensitive (confidential) documents to only compliant devices.

The second “Block sending of messages based on real-time content inspection – wow” is a way to prevent Teams messages based on a specific word and in this case from a non-compliant device. In my example, I want to block the word ‘wow’. Maybe ‘wow’ is my new super secret project and I only want people discussing it from compliant devices. So, if you try to send a message with the word ‘wow‘ from a non-compliant device, you would see the following:

Yes, the message is customizable :). And it prevents the message from being sent, as shown below:

Next, “Block sending of messages based on real-time content inspection – SSN“, it’s very similar to above, except, it’s not just a word, but rather a pattern, an SSN pattern. So, the user would see a similar message and it won’t be sent either.

Note: This is not real data, it’s just sample data used for DLP testing purposes.

Next, “Block upload based on real-time content inspection – CCN and SSN“, it’s similar, but now I am checking for content within files that are uploaded, whether it’s being attached to an email, being uploaded to a SharePoint site, etc.

Finally, “Proxy – Block sensitive files download – SSN”, it’s similar, but upon download.

Information Protection

Ok, so you saw some information protection above, but there’s more!

One of the policies above is “File containing PII detected in the cloud (built-in DLP engine)“, which automatically labeled a file, based on the contents, as shown below:

Threat Protection

There are some pretty powerful possible controls within this area, as shown below:

But I have chosen to show you how this “Mass download by a single user” policy works. Note that I have adjusted some of the values, so I can generate an alert for my test.

Because I know you may be thinking ‘but this is all within Microsoft services‘. So, check this out! This alert was generated by a user that downloaded files from an AWS S3 bucket, as shown below:

Honorary Mention 1 – App Governance

App Governance is technically an add-on, but I think it’s pretty cool, so I am including it. Note that this is now under the new Cloud Apps menu in security.microsoft.com.

App governance uses machine learning and AI to detect anomalies in OAuth apps. It can alert you on applications from an unverified publisher that have been consented to by your users. It can also alert on overprivileged applications, with permissions that are not even used, and various other anomalies.

Honorary Mention 2 – Security Posture for SaaS Apps

Security Posture for SaaS apps is super new, still in preview, but I can see the incredible potential. Currently it is only available for Salesforce and ServiceNow, but I am sure more are to come. It makes recommendations on security best practices within those SaaS applications, as shown below:

More

I’ve only described some of my favorite features within MDCA. MDCA also integrates pretty closely with MDI (Microsoft Defender for Identity) and various other Microsoft and 3rd party security services. There is a lot more to MDCA than I included here, but I hope this post gives you an idea of how this service can help you secure your organization.