
Federating AWS with Azure AD

TL;DR – For an enterprise-level authentication and authorization solution, federate AWS single-account access with Azure AD.

Security best practices dictate that AWS root accounts should be used only on rare occasions. All root accounts should have MFA enabled, any access keys removed, and monitoring set up to alert if the root account is used. For day-to-day work, users should access AWS services with their IAM identities, and the best practice is to federate that access with a reliable identity provider (IdP), such as Azure AD.

There are two main options to federate authentication for AWS accounts. In this blog I will show you the two options and I’ll explain why I prefer one over the other.

(1) AWS SSO

The first option is to federate AWS SSO. This is configured with the AWS SSO instance within the AWS Organization. As a reminder, AWS Organizations allows administrators to manage several AWS accounts centrally. The single sign-on integration is done between AWS SSO and the Azure AD tenant. With this configuration, users in Azure AD are assigned to the AWS SSO enterprise application, so they are not assigned to a specific AWS account. The assignment of users to specific permission sets is done within AWS SSO, and those permission sets determine the user’s role(s) within the individual AWS accounts.

And this is the end-user experience when federating AWS SSO:

From MyApps, the user clicks the AWS SSO enterprise application assigned to them in Azure AD. They are then presented with an AWS SSO menu of the accounts and roles assigned to them via AWS SSO, and they can click any of those to access that account with that specific role.

Please keep in mind the following details when using this setup:

  • Users and groups have to exist locally in AWS SSO, so this solution will provision users to AWS SSO when they are assigned to the AWS SSO enterprise application.
  • In a similar manner, users are disabled (not deleted) when they are removed from the AWS SSO enterprise application.
  • Since the roles are assigned within AWS SSO, Azure AD is not aware of which roles are assigned to which users. This becomes important if you need specific Conditional Access policies or specific access reviews and/or access packages within Identity Governance.
  • Supports both SP-initiated and IdP-initiated login, since the users exist locally in AWS SSO.

(2) AWS Single-Account Access

The second option is to federate the AWS Single Account. This is configured with each individual AWS account. The integration is done between the AWS account and the Azure tenant. Therefore, when the users in Azure AD are assigned to the AWS account enterprise application, they are assigned to a specific AWS account. Azure AD is fully aware of the specific account the users are assigned to as well as the specific AWS roles they are assigned to.

And this is the end-user experience when federating a single account:

From MyApps, the user clicks the specific AWS single-account enterprise application assigned to them in Azure AD. They are then presented with the roles assigned to them for that account, and they can select one to access the account with that specific role.

Please keep in mind the following details when using this setup:

  • Users and groups do NOT exist locally on AWS. That’s right, users and groups do not need to be provisioned or deprovisioned in AWS.
  • The provisioning configuration ensures roles created in AWS are synchronized to Azure AD, so they can be assigned to users.
  • Azure AD is fully aware of which roles are assigned to which users for specific accounts.
  • This configuration allows implementation of Conditional Access policies for the specific AWS accounts.
  • Only supports IdP-initiated login, since the users do not exist locally in AWS.
  • To ensure AWS CloudTrail data accuracy, add the source identity attribute to identify the user responsible for AWS actions performed while assuming IAM roles.
  • When CLI access is required, temporary credentials can be generated with the AssumeRoleWithSAML API (aws sts assume-role-with-saml in the CLI). The credentials last for the requested session duration, up to the role’s maximum session duration (configurable up to 12 hours); see the sketch after this list.
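For reference, here is a rough sketch of what that CLI flow can look like from PowerShell. The role and identity provider ARNs, and the way you capture the base64-encoded SAML response, are placeholders for your own environment, so treat this as a starting point rather than a finished script.

# Minimal sketch (PowerShell wrapping the AWS CLI) of exchanging an Azure AD SAML assertion
# for temporary AWS credentials. The ARNs and the way you capture the base64-encoded
# SAMLResponse are placeholders for your own environment.
$samlAssertion = Get-Content -Path .\samlresponse.b64 -Raw   # captured from the Azure AD sign-in

$credentials = aws sts assume-role-with-saml `
    --role-arn "arn:aws:iam::111111111111:role/AzureAD-SentinelReader" `
    --principal-arn "arn:aws:iam::111111111111:saml-provider/AzureAD" `
    --saml-assertion $samlAssertion `
    --duration-seconds 3600 | ConvertFrom-Json

# Export the temporary credentials for subsequent AWS CLI calls in this session.
$env:AWS_ACCESS_KEY_ID     = $credentials.Credentials.AccessKeyId
$env:AWS_SECRET_ACCESS_KEY = $credentials.Credentials.SecretAccessKey
$env:AWS_SESSION_TOKEN     = $credentials.Credentials.SessionToken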
Drumroll, please…

By now you have probably guessed which option I lean towards. The AWS Single-Account Access configuration should be selected for enterprises with compliance requirements that include identity governance, or for any organization that wants to implement a zero trust model, since “least privileged access” is at its foundation.

There are several benefits to this configuration.

  • Because no users exist in AWS, there are no AWS users or entitlements to clean up when employees are terminated.
  • The configuration allows the specific AWS roles to be tracked within Azure AD, which means access packages* can be created and then automatically assigned, or made available for users to request with the appropriate approvals.
  • Those access packages can also have associated access reviews* to ensure access is removed when no longer needed.
  • Specific Conditional Access* policies can be created for the specific AWS accounts. For example, you may require access to production AWS accounts only from compliant devices, but maybe the rules are not as tight for development AWS accounts.
  • Having one central identity governance solution means organizations have the ability to meet compliance, auditing, and reporting requirements. This also means that the principles of least privilege and segregation of duties* can realistically be enforced.

Some organizations have tried to manage AWS accounts with AWS SSO and implement some level of identity governance using security groups. However, as organizations grow, this becomes unnecessarily complex and makes it challenging to meet compliance requirements, for the reasons described in detail in my previous post.

* More on those topics in follow-up posts.

Sentinel Repositories

TL;DR – A quick introduction to Sentinel Repositories.

There’s a Puerto Rican saying ‘Nadie aprende por cabeza ajena‘, which loosely translates to ‘Nobody learns from someone else’s head (read: brain)‘. I am writing this blog post because I really hope you give this feature a try, so you can see how easy and useful it can be.

I recently attended a call where quite a high number of attendees had not yet tested the Microsoft Sentinel Repositories feature. Anyone who has attended any of my Sentinel Deep Skilling sessions knows that I am a huge fan of this feature. In fact, I show it live at every session. You can catch one of those sessions here. If you don’t want to watch all two wonderful hours of Sentinel fun and just want to see the Repositories feature, you can skip to 1:13:45.

Configuration

It’s a pretty straightforward concept. As an MSSP, or as a team that centrally manages several Sentinel workspaces, you can distribute content from a centralized repository. Currently, Repositories supports Azure DevOps and GitHub repositories.

All you need to do is create a connection to your repository from each workspace that you want to connect. If you need a sample repository to test with, please use the Sample Content Repository provided.

You just need to be able to authenticate to that repository in order to create a connection. So, yes, it can be a private repository.

The feature currently supports six artifact types: analytics rules, automation rules, hunting queries, parsers, playbooks, and workbooks.

There are also customization options. For example, the default configuration will only push new or modified artifacts since the last commit, but the trigger can be modified. You can also modify the specific folder that is synchronized.

Recently, support was added for configuration files that let you prioritize some content and exclude other content. This can be very useful for MSSPs that want to push certain content to all customers, but only push specific content to specific customers. If you want to read more about this topic, I highly recommend the Microsoft Sentinel Technical Playbook for MSSPs, which you can find here: https://aka.ms/mssentinelmssp.
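If you want to experiment with this, the sketch below shows roughly what such a configuration file can look like, written out from PowerShell. I’m quoting the file name and key names from memory, so please verify them against the current documentation (or the MSSP playbook above) before relying on them.

# Hypothetical sketch of a repository deployment configuration file. The file name and key
# names below are from memory and may differ; verify against the Repositories documentation.
$config = @"
{
  "prioritizedcontentfiles": [ "AnalyticRules/CriticalRuleForAllCustomers.json" ],
  "excludecontentfiles": [ "Workbooks/CustomerSpecificWorkbook.json" ]
}
"@

# Placed at the root of the repository that the workspaces are connected to.
Set-Content -Path .\sentinel-deployment.config -Value $config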

How it works

In this example I have a few analytic rules that are already synchronized to my workspace. I can tell they were synchronized by the Repositories feature because the ‘Source name‘ is ‘Repositories‘.

I want to push a new analytic rule to all the workspaces connected to this repository, so I commit the new file to this repository.

And I can immediately see that a new action has been triggered, as shown below.

I can further click on that workflow run to see how it progresses.

When it’s finally completed, I see the complete job message below. If there are any errors, I can still go and expand any section to get the details from that run.

And when I check back in my workspace(s), I can see the new analytic rule was added as expected.

One last thing: if you are thinking ‘how can I export these artifacts?‘, here is a script created by one of the Sentinel PMs, which has been very useful for a few of the partners I am working with.
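I won’t reproduce that script here, but just to show the general idea, here is a minimal sketch (not the PM’s script) that pulls the analytics rules from a workspace with Invoke-AzRestMethod and writes each one to a JSON file. The subscription, resource group, and workspace values are placeholders, the api-version may need to be updated, and the exported resource JSON still needs to be wrapped in an ARM template before you commit it to your repository.

# Minimal sketch (not the PM's script): export the analytics rules from a workspace to JSON
# files. Subscription, resource group, and workspace names are placeholders, and the
# api-version may need to be updated to a current one.
Connect-AzAccount

$workspacePath = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
                 "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"

$response = Invoke-AzRestMethod -Method GET `
    -Path "$workspacePath/providers/Microsoft.SecurityInsights/alertRules?api-version=2023-02-01"

$rules = ($response.Content | ConvertFrom-Json).value

New-Item -ItemType Directory -Force -Path .\AnalyticRules | Out-Null
foreach ($rule in $rules) {
    # One file per rule, named after the rule's GUID; wrap these in ARM templates before committing.
    $rule | ConvertTo-Json -Depth 20 | Set-Content -Path ".\AnalyticRules\$($rule.name).json"
}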

I encourage you to give this feature a try, especially if you are a partner that is already managing customers or looking to get started. Even customers that are centrally managing a variety of workspaces from various departments within an organization can find this tool to be highly beneficial. Have fun!

Safely integrate playbooks with custom APIs when there is no pre-built Logic App connector.

TL;DR – How to create a custom logic app connector, so you can store your API key securely and use it within your playbooks, when there is no pre-built connector.

I’ve had this discussion with at least three different partners recently, so I am publishing this blog post to share with anyone else who may have the same question, since it seems to be a popular one.

As you know, Logic Apps are used for automation within various Microsoft security services, including Sentinel (playbooks), Defender for Cloud (workflow automation), and others. Most of the time there is an existing connector, but sometimes SOC teams need to connect to custom-developed APIs, and that’s where this scenario comes in. In those cases, we still need to store the API keys in a secure manner. That’s especially true if you are managing playbooks within your customers’ workspaces (MSSP architecture) and you need to ensure any connection information is stored securely.

Create the custom connector

Note: “Custom connectors are RESTful APIs that can be hosted anywhere, as long as a well-documented Swagger is available and conforms to OpenAPI standards. A custom connector can also be created for a SOAP API using the WSDL that describes the SOAP service.”

To create the custom connector, go to the Azure portal and search for “Logic apps custom connector“.

You will just need to enter the usual information: name, region, resource group, etc.

Note: Make sure your new Logic App Custom Connector is created in the same region as your playbooks; otherwise you won’t be able to find it.

Once that’s done, you will need to ‘Edit‘ your connector to specify the import mode: an OpenAPI definition or a Postman collection, as shown below. You can also modify the icon for your connector. The host and base URL will be filled in automatically based on your imported definition.

Here is where we’ll choose the authentication type, which in my case is an API key.
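To make that import step more concrete, here is a hypothetical, minimal OpenAPI (Swagger 2.0) definition with an API-key security definition, written to a file from PowerShell so it can be imported into the connector. The host, path, operation, and header name are placeholders for your own API.

# Hypothetical minimal OpenAPI (Swagger 2.0) definition with an API-key security definition.
# The host, path, operation, and header name are placeholders for your own API.
$swagger = @"
{
  "swagger": "2.0",
  "info": { "title": "ABC Custom API", "version": "1.0" },
  "host": "api.contoso-abc.example",
  "basePath": "/v1",
  "schemes": [ "https" ],
  "securityDefinitions": {
    "apiKeyHeader": { "type": "apiKey", "name": "X-API-Key", "in": "header" }
  },
  "security": [ { "apiKeyHeader": [] } ],
  "paths": {
    "/lookup": {
      "get": {
        "operationId": "LookupIndicator",
        "summary": "Look up an indicator in the custom API",
        "parameters": [ { "name": "value", "in": "query", "required": true, "type": "string" } ],
        "responses": { "200": { "description": "OK" } }
      }
    }
  }
}
"@

# Save the definition to a file that can be imported in the connector's Edit menu.
Set-Content -Path .\abc-custom-api.swagger.json -Value $swagger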

And then I can select the various Actions and Triggers I will need for my connector, as shown below.

For additional information and options in this section, please reference the Logic Apps documentation.

Using the connector within a playbook

Now that I have a connector and at least one action, I can create a new Logic App where I can connect to my custom API. I’ll find it under the ‘Custom‘ tab when choosing an operation, as shown below.

Once I select the action, I’ll be presented with the menu to enter my new connection name and the API key, as shown below.

Once I save my creatively named connection, ConnectABC, my API key will be stored securely.

I can also configure any additional parameters that I need for the connection, as shown below.

The connection I created is now stored under the ‘API connections‘ blade within my logic app menu, as shown below.

And it’s not a value that I can see, which is why it’s secure! However, I can update it if I need to do so later on, as shown below:

I am still learning about Logic Apps and the incredible flexibility they offer as part of our SOAR features within the Microsoft security services. I hope you find this information useful and continue to explore with me!

Review any “Don’t know” reviewees prior to the end of an access review

TL;DR – Steps to create access reviews that meet strict compliance requirements by allowing auditors to review any “Don’t know” reviewees prior to the end of a review.

This is a short blog post to document the steps to create an access review that ensures strict compliance requirements around attestation are met. This is a scenario that came up last week while I was delivering one of the Rockstar training sessions and I can’t believe I haven’t documented this yet, so here it is!

The challenge

Many auditors have a requirement to ensure reviewers of access to groups, applications, etc. provide definitive answers. The challenge here is that Access Reviews within Entra Identity Governance provide an option for reviewers to choose “Approve”, “Deny”, or “Don’t know”. So, you can see how “Don’t know” does not meet that definitive answer requirement.

The solution

Fortunately, Access Reviews also provides a feature that allows us to configure multi-stage access reviews, with up to three stages. And within that feature, there is an option to move only the reviewees marked as “Don’t know” to the next stage.
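For anyone who prefers automating this instead of clicking through the portal, below is a from-memory sketch of what creating such a two-stage review could look like with Microsoft Graph. I’m recalling the property names (stageSettings, dependsOn, decisionsThatWillMoveToNextStage) and the “DontKnow” value from the accessReviewScheduleDefinition schema, so double-check them against the Graph documentation before using this; the group and user IDs are placeholders.

# From-memory sketch of creating a two-stage access review with Microsoft Graph, where only the
# "Don't know" decisions from stage 1 move on to stage 2. Property names and the "DontKnow"
# value are my recollection of the accessReviewScheduleDefinition schema; verify them against
# the Graph documentation. Group and user object IDs are placeholders.
Connect-MgGraph -Scopes "AccessReview.ReadWrite.All"

$definition = @{
    displayName = "Group membership review with auditor follow-up"
    scope       = @{
        "@odata.type" = "#microsoft.graph.accessReviewQueryScope"
        query         = "/groups/<group-id>/transitiveMembers"
        queryType     = "MicrosoftGraph"
    }
    settings = @{
        recurrence = @{
            pattern = @{ type = "absoluteMonthly"; interval = 3 }
            range   = @{ type = "noEnd"; startDate = "2025-01-01" }
        }
    }
    stageSettings = @(
        @{
            stageId        = "1"
            durationInDays = 7
            # Per my recollection, this controls which of stage 1's decisions continue to stage 2.
            decisionsThatWillMoveToNextStage = @("DontKnow")
            reviewers      = @(@{ query = "/users/<reviewer-object-id>"; queryType = "MicrosoftGraph" })
        }
        @{
            stageId        = "2"
            dependsOn      = @("1")
            durationInDays = 7
            reviewers      = @(@{ query = "/users/<auditor-object-id>"; queryType = "MicrosoftGraph" })
        }
    )
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions" `
    -Body ($definition | ConvertTo-Json -Depth 10)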

The results

During the first stage, the reviewer gets to review all the members of the group, as shown below. And as you can see, the reviewer, Adele, has approved 5 of them, denied 1, and selected “Don’t know” for 2 of them.

We can see this from the admin portal as well, as shown below.

So, when the first stage is completed and the next stage approver gets the list to review, this is what they see.

They can see that in the previous stage Adele selected “Don’t know” for 2 of them, and those are the ones this approver gets to review. This is where the auditor steps in to make that final decision, ensuring no reviewee is left neither approved nor denied.

At the end of both stages, every user is either approved or denied, and none of them have “Don’t know” as the outcome.

I hope this post is useful and clarifies the options available to meet this specific requirement to provide definitive answers during access reviews.

Defender for IoT: OT sensor POC

TL;DR – Steps to configure a virtual OT sensor to use for a Defender for IoT POC.

I promised one of my partners that I would document the steps I followed to build my Defender for IoT OT sensor demo instance. I had to build a virtual appliance because (A) I don’t have access to any of the physical appliances and (B) even if I did, I don’t think any of the nearby facilities would let me just plug into their network. So, that’s what I am doing in this post!

My setup and documentation

I am using an Azure VM and, within that, a local Hyper-V VM, because the sensor is an appliance and can’t just be an Azure VM on its own. The instructions I followed are really a compilation of steps. I started with the official documentation, which was very helpful, but I also combined it with some information from this hands-on lab. The lab uses the older version 10, but it still includes some screenshots and information that I found helpful. My colleague Nick also helped me brainstorm through a few issues we hit along the way. Thanks!

Hyper-V Pre-Configuration

The instructions to pre-configure the VM are pretty much straight from the hands-on lab. I am using a 4×8 (4 vCPUs, 8 GB RAM) because that works great with the Azure VM size I chose, and you’ll see this maps to the Ruggedized sensor option later on. Also, please keep in mind that I am not connecting my sensor to any other management console for this POC.

In my case, my Windows 11 host Ethernet adapter is assigned an IP of 10.7.0.4, so I used 192.168.0.0/24 as the network scope of the “NATSwitch”. I created the two switches as shown below:

New-VMSwitch -SwitchName "NATSwitch" -SwitchType Internal
New-VMSwitch -SwitchName "MySwitch" -SwitchType Internal

I stored the network adapter information in a variable and then assigned an IP address to the NATSwitch (in my case 192.168.0.1).

$s1 = Get-NetAdapter -name "vEthernet (NATSwitch)"

New-NetIPAddress -IPAddress 192.168.0.1 -PrefixLength 24 -InterfaceIndex $s1.ifIndex

And I created the new NAT network:

New-NetNat -Name MyNATnetwork -InternalIPInterfaceAddressPrefix 192.168.0.0/24

The final configuration looks like this:
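If you want to double-check that everything landed as expected before creating the VM, these read-only cmdlets show the switches, the NAT network, and the IP assignment:

# Read-only checks of the networking configuration created above.
Get-VMSwitch | Select-Object Name, SwitchType                 # NATSwitch and MySwitch, both Internal
Get-NetNat -Name MyNATnetwork                                 # NAT scope 192.168.0.0/24
Get-NetIPAddress -InterfaceAlias "vEthernet (NATSwitch)"      # 192.168.0.1/24 on the NAT switch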

Create new Hyper-V VM

The steps to create the Hyper-V VM are pretty much straight from the documentation. The Hyper-V VM is then created using the image that you download from the portal, as shown below:

And specifically, step 1 of the onboarding process, as shown below.

You will use that image file when you create your Hyper-V VM by selecting Install an operating system from a bootable CD/DVD-ROM within Installation Options and then selecting Image file (.iso), pointing to the directory in the Azure VM where you copied the file.

The settings on your Hyper-V VM should end up looking something like this. Please note, I have an extra switch, but you only need two.
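If you prefer scripting the VM creation instead of the wizard, here is a rough PowerShell equivalent of what I clicked through. The VM name, ISO path, disk size, and generation are placeholders/assumptions based on my 4×8 setup, so adjust them to match the official sizing guidance.

# Rough scripted equivalent of the wizard steps; names, paths, sizes, and the generation are
# placeholders based on my 4 vCPU / 8 GB setup - check the official sizing guidance.
$vmName  = "D4IoT-Sensor"
$isoPath = "C:\ISOs\defender-for-iot-sensor.iso"   # wherever you copied the downloaded image

New-VM -Name $vmName -Generation 1 -MemoryStartupBytes 8GB -SwitchName "NATSwitch" `
       -NewVHDPath "C:\VMs\$vmName.vhdx" -NewVHDSizeBytes 100GB
Set-VMProcessor -VMName $vmName -Count 4
Set-VM -Name $vmName -StaticMemory

# Second adapter for the monitoring interface (my second internal switch).
Add-VMNetworkAdapter -VMName $vmName -SwitchName "MySwitch"

# Attach the installation image and make sure the VM boots from it.
Add-VMDvdDrive -VMName $vmName -Path $isoPath
Set-VMBios -VMName $vmName -StartupOrder @("CD", "IDE", "LegacyNetworkAdapter", "Floppy")

Start-VM -Name $vmName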

Now you can connect to the VM, so you can complete the next steps.

The first screen gives you the option to select a language; after that, you will see a selection of installation options. As mentioned earlier, I chose Ruggedized based on the size of my POC.

The installation will take a few minutes and then you’ll see the first options to configure your sensor. For the monitor interface, I chose eth1, based on the configuration of my switches.

And I chose eth0 for the management interface.

A few notes on the other options that followed:

  • I skipped ERSPAN; this is only needed for Cisco devices.
  • I chose 192.168.0.50 for my sensor IP, based on my network configuration above.
  • I used the default backup directory.
  • I used 255.255.255.0 for the subnet mask.
  • I used 8.8.8.8 for DNS.
  • I used 192.168.0.1 for the gateway IP.
  • No proxy.

And after another few minutes you will see the passwords. Take a picture!! You will not see these again.

Final steps

In order to configure your sensor, you will need to download the registration file that is generated in step 3 of the portal onboarding when you click “Register”.

Back in the Azure VM, I can now open a new browser session and connect to https://192.168.0.50 (my sensor IP). Please note that the first time you connect, you can configure a certificate for this connection or just use the default one, which is what I am doing and why it shows as Not secure in the browser. Also, the first time you connect, you will be required to upload the file you downloaded during the registration process (step 3).

And once that is done, you will be able to log in with the cyberx user and the password from the screenshot you took when the installation completed.

If you need to connect to the sensor to run CLI commands, you can connect using the support user and the password from the screenshot you took when the installation completed. You can run commands such as the ones shown below.

Also, if you see this error when connecting to the host, don’t be concerned. I have this error showing up every time I restart the VM and so far, everything is working as expected.

adiot0: can not register netvsc VF receive handler (err = -16)

A few optional steps

This is my demo instance, so I really don’t have anything connected to monitor. Therefore, I need to gather my data from PCAP files. In order to do that, I first need to make a small update to be able to play those files.

Navigate to System settings, and then select Advanced Configurations:

Select Pcaps from the drop down and then update the settings highlighted below.

This will enable the Play PCAP option shown below.

And now you can begin exploring your findings. You can see the Device map, as shown below.

Alerts

And even generate a detailed Risk Assessment report to provide to your customers.

I hope you take the time to explore this very powerful service. In a future blog post I’ll cover the integration with Microsoft Sentinel and the better together story of how these two services come together to bridge the gap between OT and IT.

Azure Lighthouse and Sentinel: Assigning access to managed identities in the customer tenant

TL;DR – MSSP – To trigger playbooks in customer tenants, you sometimes need to grant the playbooks’ managed identities permissions to execute actions within the customer tenant. This post covers the steps to configure the access required to assign those roles, as well as the steps to assign the roles themselves.

After I wrote the previous blog post, “Delegate access using Azure Lighthouse for a Sentinel POC”, I received many questions from current and future partners alike. I updated the previous post with additional clarification in some areas, but other questions were a bit more complex. In this post I want to address one of those questions, which the existing documentation covers without specific examples, so it can be a challenge to follow. Hopefully, this post helps other partners trying to configure the same access.

First, the template modifications

When you configure the initial Azure Lighthouse template to delegate access from the customer workspace to the MSSP, some modifications are required to allow the MSSP users and groups to assign that access. You have to modify the template manually, specifically adding the “User Access Administrator” role (18d7d88d-d35e-4fb5-a5c3-7773c20a72d9) and the “delegatedRoleDefinitionIds” parameter, which specifies the roles that the MSSP user or group will be able to assign to managed identities within the customer tenant.

If you are worried about the very powerful “User Access Administrator” role being assigned to the MSSP, keep in mind that it’s a different version of that role. As the documentation states, it’s “only for the limited purpose of assigning roles to a managed identity in the customer tenant. No other permissions typically granted by this role will apply”.
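To make the shape of that change concrete, here is roughly what the extra authorization entry looks like, expressed as a PowerShell hashtable for readability rather than the raw template JSON. The group object ID is a placeholder, and only the Microsoft Sentinel Reader role ID below is taken from this post; look up the IDs of any other roles you want the group to be able to assign.

# Rough shape of the extra authorization entry in the Lighthouse template, shown as a
# PowerShell hashtable for readability. The principalId is a placeholder; only the
# Microsoft Sentinel Reader role ID is taken from this post.
$userAccessAdminAuthorization = @{
    principalId                = "<object-id-of-the-LH-Sentinel-Contributors-group>"
    principalIdDisplayName     = "LH Sentinel Contributors"
    roleDefinitionId           = "18d7d88d-d35e-4fb5-a5c3-7773c20a72d9"   # User Access Administrator
    delegatedRoleDefinitionIds = @(
        "8d289c81-5878-46d4-8554-54e1e3d8b5cb"    # Microsoft Sentinel Reader
        # "<role-definition-id>"                  # Log Analytics Contributor, Contributor, Reader, ...
    )
}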

Once that template is uploaded, your customer’s delegation will look like this:

As you can see above, my “LH Sentinel Contributors” group now “Can assign access to” 4 different roles: “Microsoft Sentinel Reader“, “Log Analytics Contributor“, “Contributor“, and “Reader“.

Why do I need these permissions?

Why would I even need these permissions? In my case, I have a playbook that I need to trigger on my customer’s workspace, the “SNOW-CreateAndUpdateIncident” playbook. And that playbook needs some permissions to be able to execute correctly.

If I was executing it locally on my MSSP tenant, then I would just assign the roles directly by going to the Identity blade within the specific logic app.

But if I try to do this from the Sentinel portal for one of my customer’s workspaces, I get an error like the one below.

So, I need a way to assign these roles in my customer’s workspace to trigger those playbooks locally from my customer’s workspace.

Now, I can assign roles

Per the documentation, this role assignment needs to be done using the REST API, and specifically with API version 2019-04-01-preview or later.

You will need the following values to populate the parameters in the body:

delegatedManagedIdentityResourceId

Note: These are both in the customer’s workspace, which you will have access to as long as the Logic App Contributor role is included in your Azure Lighthouse template.

principalId

You will use those values to populate the parameters within the body of the API call. The following parameters are required:

  • “roleDefinitionId” – This is the role definition in the customer’s workspace, so make sure you use the customer subscription ID value.
  • “principalType” – In our case this will be “ServicePrincipal”, since it’s a managed identity.
  • “delegatedManagedIdentityResourceId” – This is the resource ID of the delegated managed identity resource in the customer’s tenant. This is also the reason we need to use API version 2019-04-01-preview or later. You can just copy and paste it from the playbook “Properties” tab, as shown above.
  • “principalId” – This is the object ID of the managed identity in the customer’s workspace, as shown above.

The file will look like this:

Note: 8d289c81-5878-46d4-8554-54e1e3d8b5cb is the value for Microsoft Sentinel Reader, which is the role I am trying to assign to this managed identity.

As noted in the API documentation, to assign a role you will need a “GUID tool to generate a unique identifier that will be used for the role assignment identifier”, which is part of the value used in the command below to call the API. The value you see below in the API call, 263f98c1-334b-40c1-adec-0b1527560da1, is the value I generated with a GUID tool. You can use any tool; I used this one.
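If it helps to see the whole call in one place, here is a sketch using Invoke-AzRestMethod (the original command may well have used a different tool, such as az rest). The subscription, resource group, scope, and managed identity values are placeholders; the role definition GUID and the role assignment GUID are the ones mentioned in this post.

# Sketch of the role assignment call with Invoke-AzRestMethod. Subscription, resource group,
# scope, and managed identity values are placeholders; the role definition GUID (Microsoft
# Sentinel Reader) and the role assignment GUID are the ones mentioned in this post.
$customerSub  = "<customer-subscription-id>"
$scope        = "/subscriptions/$customerSub/resourceGroups/<customer-resource-group>"
$assignmentId = "263f98c1-334b-40c1-adec-0b1527560da1"   # generated with a GUID tool

$body = @{
    properties = @{
        roleDefinitionId = "/subscriptions/$customerSub/providers/Microsoft.Authorization/roleDefinitions/8d289c81-5878-46d4-8554-54e1e3d8b5cb"
        principalId      = "<object-id-of-the-playbook-managed-identity>"
        principalType    = "ServicePrincipal"
        delegatedManagedIdentityResourceId = "/subscriptions/$customerSub/resourceGroups/<customer-resource-group>/providers/Microsoft.Logic/workflows/SNOW-CreateAndUpdateIncident"
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT `
    -Path "$scope/providers/Microsoft.Authorization/roleAssignments/$assignmentId?api-version=2019-04-01-preview" `
    -Payload $body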

And once you run the command, you’ll see output similar to the one shown below, which means the role was successfully assigned.

And that’s it! Now the managed identity associated with the Sentinel playbook (logic app) in the customer’s workspace has the required permissions to be able to execute within the customer’s workspace.

Delegate access using Azure Lighthouse for a Sentinel POC

TL;DR – Steps to delegate access to users on a different tenant for a Sentinel POC using Azure Lighthouse.

I include this live demo in every webinar I deliver about Microsoft Sentinel, but today a partner asked me for documented step-by-step instructions, which I wasn’t able to find, so I am creating this post.

Most MSSPs need to create a POC to test Microsoft Sentinel, where they configure one workspace as the MSSP and a few other workspaces as customers. To be clear, the documentation is great about the correct way to do this in a real scenario, where partners need access to their customers’ workspaces. But for a POC, a partner doesn’t need to publish a managed service offer; they just need to do this using an ARM template.

From the MSSP tenant

Navigate to “My Customers” and click on “Create ARM Template” as shown below:

Name your offer and choose whether you want your customers to delegate access at the “Subscription” level or the “Resource group” level, then click “Add authorization”.

You can choose to delegate access to a User, Group, or Service principal. I usually recommend using a Group rather than a User, because MSSP team members change over time.

You can choose to assign the role as “Permanent” or “Eligible”. If you’ve worked with PIM (Privileged Identity Management) previously, then you are familiar with the process: the eligible option requires activation before the role can be used. For eligible assignments, you can also choose a specific maximum duration, and whether multifactor authentication and/or approval is required to activate.

In order to see your customers in the “Customers” blade later, you will need to include the “Reader” role, as shown below. Click “View template” to be able to download it.

Download the ARM template file.

From the customer tenant

Before you import the template, ensure you have the correct permissions on the subscription. You can follow the steps here to ensure you can deploy the ARM template.

Click “Add offer” and select “Add via template”, as shown below.

Drag and drop the template file you created, or upload it, as shown below.

Once the file is uploaded, you’ll be able to see it, as shown below:

You can also see the “Role assignments” that were delegated to the MSSP tenant, as shown below.

And if the customer tenant needs to delegate access to new subscriptions, they can do so by clicking on the ‘+’ button, as shown below.

And selecting any other subscriptions or resource groups that need to be delegated.

Back to the MSSP tenant

Now you can see your new customer from the “Customers” blade, as shown below.

Since the delegation included Sentinel Contributors, now you can manage the customer tenant workspace from the Microsoft Sentinel navigation within the MSSP tenant, as shown below.

Bonus: Since you have reader access, you can also see the subscription from Defender for Cloud, Environment settings. You can always delegate additional roles, if you need to manage MDC for this tenant.

A quick note on delegations at the resource group level: I’ve seen instances where it takes a little while before the global filter lets you select the newly added tenant and subscription associated with the resource group. However, after waiting for those updates to kick in, you should be able to modify the filter from the blue menu bar, as shown below, and update it to include all directories and all subscriptions.

In my opinion, a POC is the best way to experience the wide variety of features within Microsoft Sentinel. You can even use the free trial that is available for 31 days. Another great resource that I always recommend for teams starting to get familiar with Microsoft Sentinel is the Sentinel Training Lab, which is available from the Content Hub blade in Sentinel. Finally, for MSSPs, http://aka.ms/azsentinelmssp is an invaluable resource to get a good overview of the recommended architecture.

A few of my favorite MDCA features

TL;DR – Just a few of my favorite MDCA features, which you may already be paying for.

I previously mentioned my strong belief that Sentinel and MDC are best buddies. Similarly, I firmly believe MDCA (Microsoft Defender for Cloud Apps) is definitely a member of the MDE squad. If you are using MDE (Microsoft Defender for Endpoint) and you haven’t tested MDCA, you may be surprised how well they work together. And guess what? You may already be paying for it!

MDCA, which was previously known as MCAS (Microsoft Cloud App Security), is a CASB (I am going for a record number of acronyms in this post!), which stands for Cloud Access Security Broker. In an oversimplified way, the job of a CASB is to enforce security policies. I think MDCA does that and more, and quite honestly, I am continuously discovering new features. In this post I’ll go over a quick list of some of my favorites.

Cloud Discovery / Shadow IT

MDCA can discover applications (31,000 at last count) through various means:

  • As part of the MDE squad, it can integrate with MDE to get data from managed Windows devices, as shown above. This integration also gives you the power to block apps; more on that a little later.
  • A log collector over syslog or FTP; various Docker images are available.
  • Can also natively integrate with leading SWGs (Secure Web Gateways) and proxies, such as Zscaler, iboss, Open Systems, Menlo Security, Corrata, etc. (no need for Log Collectors)
  • You will also see data from the Cloud App Security proxy, which means that even traffic that doesn’t come from a managed Windows device still yields some data, as shown below.

And I can also create policies that will alert for Shadow IT, such as the ones shown below:

Block Apps

There are a few ways apps can be blocked as well. One of those is through the integration with MDE. I configured a few applications as unsanctioned for testing purposes, as shown below.

So, when I try to access one of those applications from a managed Windows device, I receive the following error:

And it’s not just Edge! See the error message below from Chrome on the same device:

I can also “Generate block script” for various types of appliances, as shown below:

Here is an example based on the applications I’ve set as unsanctioned:

Ban OAuth Apps

Solorigate anyone? MDCA can help you monitor OAuth apps in various ways, as shown below, where you can discover and either ‘approve‘ or ‘ban‘ risky apps.

Once you mark an app as ‘banned’, the Enterprise Application is updated with “Enabled for users to sign-in?” set to “No”. I also noticed that the application disappeared from “MyApps” for those users that were previously assigned to the application.

You can also create policies that will automatically revoke risky apps, as shown below.

Conditional Access App Control

So, technically this is done with another member of the squad, Conditional Access. The same Conditional Access we know and love that controls initial access is also capable of controlling access within a session when it works with MDCA.

I have a few very powerful policies, as shown below.

I won’t cover the first one, “Confidential Project Falcon Sensitivity Label Policy”, because I dedicated a full blog post to it; you can find it here: Restrict downloads for sensitive (confidential) documents to only compliant devices.

The second, “Block sending of messages based on real-time content inspection – wow”, is a way to block Teams messages containing a specific word, in this case when sent from a non-compliant device. In my example, I want to block the word ‘wow’. Maybe ‘wow’ is my new super secret project and I only want people discussing it from compliant devices. So, if you try to send a message with the word ‘wow‘ from a non-compliant device, you would see the following:

Yes, the message is customizable :). And it prevents the message from being sent, as shown below:

Next, “Block sending of messages based on real-time content inspection – SSN” is very similar to the above, except it matches a pattern (an SSN pattern) rather than a specific word. The user would see a similar message, and the message won’t be sent either.

Note: This is not real data, it’s just sample data used for DLP testing purposes.

Next, “Block upload based on real-time content inspection – CCN and SSN” is similar, but now I am checking the content of files being uploaded, whether they are attached to an email, uploaded to a SharePoint site, etc.

Finally, “Proxy – Block sensitive files download – SSN” is similar, but applies on download.

Information Protection

Ok, so you saw some information protection above, but there’s more!

One of the policies above is “File containing PII detected in the cloud (built-in DLP engine)”, which automatically labeled a file based on its contents, as shown below:

Threat Protection

There are some pretty powerful possible controls within this area, as shown below:

But I have chosen to show you how this “Mass download by a single user” policy works. Note that I have adjusted some of the values, so I can generate an alert for my test.

I know you may be thinking ‘but this is all within Microsoft services‘. So, check this out! This alert was generated by a user that downloaded files from an AWS S3 bucket, as shown below:

Honorary Mention 1 – App Governance

App Governance is technically an add-on, but I think it’s pretty cool, so I am including it. Note that this is now under the new Cloud Apps menu in security.microsoft.com.

App governance uses machine learning and AI to detect anomalies in OAuth apps. It can alert you on applications from an unverified publisher that have been consented to by your users. It can also alert on overprivileged applications, with permissions that are not even used, and various other anomalies.

Honorary Mention 2 – Security Posture for SaaS Apps

Security Posture for SaaS apps is super new and still in preview, but I can see the incredible potential. It’s currently only available for Salesforce and ServiceNow, but I am sure more are to come. It makes recommendations on security best practices within those SaaS applications, as shown below:

More

I’ve only described some of my favorite features within MDCA. MDCA also integrates pretty closely with MDI (Microsoft Defender for Identity) and various other Microsoft and 3rd party security services. There is a lot more to MDCA than I included here, but I hope this post gives you an idea of how this service can help you secure your organization.

With a little help from MDC

TL;DR – Testing the new MDC governance rules to automatically assign and track owners for recommendations.

I was telling one of my partners this week that Sentinel and Microsoft Defender for Cloud (MDC) are best buddies. I have written about some of that nice integration in a previous blog post. This week I read about a new MDC feature that I think is going to be a huge help, especially to those security professionals tracking pending remediations, recommendations, and security exceptions (hi Roberto!).

Governance rules

This new feature was included in the Microsoft Defender for Cloud RSA announcements, and it is very well documented in our official documentation. To configure, navigate to the “Environment settings” blade and select either an Azure subscription (as shown below), an AWS account, or a GCP project (more on those a little later).

Then you can see the new “Governance rules (preview)” blade, as shown below. For this test I configured a rule that will assign all the “MFA” recommendations for this subscription to a specific user.

I selected the user I want to own those specific recommendations, and I also set a remediation timeframe. I could also choose a grace period, which means the recommendation won’t affect the secure score for that amount of time, but I didn’t enable it for my test. And the icing on the cake is the notifications for items that are open or overdue.

Now, when I look at those recommendations, I can see the owner, the due date, and whether it’s on time or not. Neat!

But wait, there’s more!

It’s not just for Azure subscriptions; you can do the same for AWS accounts and GCP projects that are connected to MDC. In the example below, I have chosen to assign all ‘CloudFront’ recommendations to a specific user:

Also, when you first create the rule, it will ask you if you want to apply the rule to any existing recommendations, as shown below. For my example, I chose to apply it.

In the same manner, I can now see the recommendation, the owner, the due date, and the status, which is currently ‘On time‘.

And you can also update the owner and the ETA, because sometimes life happens.

And if you have an extension, you can see that information as well, as shown below.

I know this new feature will be very useful and will automate some of the hassle associated with tracking the security recommendations. It’s simple to configure, but a huge help for all those security teams working to protect their organizations.

Disguising data

TL;DR – Testing the new ingestion time transformation features in Microsoft Sentinel.

When I read the word “transform“, I immediately think of Optimus Prime and Bumblebee, you know, “robots in disguise” (I can hear the song too!), doesn’t everyone? But that’s not the transformation I am blogging about today. This week I’ve been testing a new feature in Microsoft Sentinel that allows you to configure rules to transform data upon ingestion. It’s a feature many of my partners have requested previously, so I gave it a shot, and I was really amazed at how easy it was to configure.

Transforming AWSCloudTrail

AWSCloudTrail is the largest data source I have in my tenant, so I figured I would test the scenario where I filter out some data that may not be as useful for security purposes. While I was at it, I figured I might as well test adding some custom fields, because that may come in handy as well. If you are looking for the current list of tables that support this feature, it’s here.

You can find this new feature by navigating to the shiny new “Tables” blade within the Log Analytics workspace menu, as shown below:

In my case I am testing with the AWSCloudTrail table, so I chose to “Create transformation” within the menu of the table, and then I gave my new rule a very creative name “TransformationDCR“.

I then specified the columns I don’t want to ingest and the ones I want to add within the “Transformation editor“. In my case, I don’t want to see “OperationName” because all the rows have the same value, “CloudTrail“, so I use project-away to filter that out. But just for testing purposes, I am adding two new columns, “MFAUsed_CF” and “User_CF“. The ‘_CF‘, which I *think* stands for custom field, is only needed for non-custom tables.

The KQL query I am using above is:

source
| project-away OperationName
| extend MFAUsed_CF = tostring(parse_json(AdditionalEventData).MFAUsed)
| extend User_CF = iif(UserIdentityUserName == "", UserIdentityType, UserIdentityUserName)

By the way, if you don’t add the “_CF”, you will see the following error: “New columns added to the table must end with ‘_CF’ (for example, ‘Category_CF’),” as shown below:

And that’s it!

Once the change takes effect, I can start to see where the data in “OperationName” is no longer showing and I can start seeing values in my new columns, “MFAUsed_CF” and “User_CF“, as shown below:

Masking data during ingestion

You can also configure Data Collection Rules for custom tables, by using the “New custom log” option under “Create“, as shown below:

I am choosing the ingenious name of “SampleData” for my custom table, and the DCR will have the innovative name of “SampleDataMaskDCR“, as shown below:

Please note, the data I am using here is just sample data I use for DLP tests, it’s not valid data.

When I upload the json file, it automatically warns me about the missing timestamp.

It then automatically adds that first line (TimeGenerated), which I will leave as is, since I just need it for testing purposes. Then I added one of the masking samples found in the library of transformations to mask the SSN values, as shown below:

And that’s it!

I can see my new table is created, as shown below:

Also, if I want to check the DCRs created, I can see that information from the Monitor portal, as shown below:

More info

If you want to give this a shot, there are a couple of tutorials in the Microsoft documentation that I found very useful. There is also a Tech Community blog and a library of transformations that give you a head start with some of the most common scenarios, not just for filtering but also for masking data, which some of my partners have previously requested. Happy testing!

My adventures (so far) with verifiable credentials.

TL;DR – Sharing my initial experience with verifiable credentials.

Update: Verifiable Credentials is now Microsoft Entra Verified ID and is part of the Microsoft Entra family.

This week I’ve been playing around with verifiable credentials. Yes, playing – that’s exactly what it felt like, because learning about verifiable credentials has been a lot of fun! I’ve been following developments related to decentralized identifiers (DIDs) for a while because I can see that this is the technology that will finally allow people to have full control of their identities.

This week I’ve been following the instructions in the Microsoft documentation, which I have to say are pretty detailed and helpful. But I did run into a few issues, likely due to my lack of experience with verifiable credentials and decentralized identities in general. In this blog post I am documenting my experience and the few issues I ran into, hoping that if someone else runs into the same issues, they can find a possible solution. Please note, I am not going into details about the concepts, since those are well documented already. However, if you are interested in learning more, there is also a fantastic whitepaper and an FAQ that can be very helpful.

First error

The first error I encountered was when I was creating the credentials. The error was “We couldn’t read your files. Make sure your user has the ‘Storage Blob Data Reader’ role assigned in the selected container”, as shown below:

In my case I had two problems. The first problem was that when I created the storage account, I had chosen to enable network access only from my IP, which does not work. It has to be enabled from all networks at storage account level, even though the container is private.

The second issue I had was that I originally assigned the ‘Storage Blob Data Reader‘ at container level, but it had to be at storage account level.

And with that it will be inherited by the container as well, as shown below.

Second error

My second error was when I was building the project, where I received these errors:

error NU1100: Unable to resolve 'Microsoft.AspNetCore.Session (>= 2.2.0)' for 'net5.0'.
error NU1100: Unable to resolve 'Microsoft.Identity.Web (>= 1.18.0)' for 'net5.0'.
error NU1100: Unable to resolve 'Newtonsoft.Json (>= 13.0.1)' for 'net5.0'.

The fix was to add a new package source to my NuGet configuration files:

dotnet nuget add source --name nuget.org https://api.nuget.org/v3/index.json

Third and final error

Finally, I thought I had everything just right, but when I tried to scan the QR code with the Microsoft Authenticator, which is my credential repository, I received this message: “Sorry, you can’t go any further. Tap OK to go back”.

Selecting “View Details” gave me the option to “View logs”, which had a TON of details. Among them the following error (thanks Matthijs!) “Unable to access KeyVault resource with given credentials“. The solution was to add permissions that were missing from my key vault, as shown below:

The key permissions here are ‘get’ and ‘sign’.
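If you’d rather fix this from PowerShell than the portal, something like the following grants those key permissions. The vault name and the object ID are placeholders; use the principal that the tutorial tells you to authorize.

# Grant the 'get' and 'sign' key permissions on the vault used by the verifiable credentials
# setup. The vault name and object ID are placeholders; use the principal the tutorial calls out.
Set-AzKeyVaultAccessPolicy -VaultName "<my-vc-keyvault>" `
    -ObjectId "<object-id-of-the-principal-from-the-tutorial>" `
    -PermissionsToKeys get, sign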

Now it’s working!

After those issues were addressed, I could follow the entire flow as described in the tutorial.

When I scan the QR code to issue a credential, I see the following:

And in the authenticator app, I see the following:

This is where I enter the PIN that is shown in the application where I scanned the QR code. One quick note: the reason I see the little “Verified” blue box above is that I followed the steps to verify my domain, as noted below:

So, after that I can see my credential has been issued, as shown below.

And I can also verify my credential. Once I scan the QR code to verify the credential, I see the following:

And the authenticator gives me (as the holder of this credential) the option to ‘allow‘ this application to confirm that I am indeed an “AFABER PowerPuff VC Expert“, as shown below:

And later I can also see the activity associated with this credential, as shown below:

There are many reasons to become familiar with Verifiable Credentials. They are private because they allow holders to share only the information they want to share about themselves. They are tamper-proof, so they are secure. They are available when needed and they are portable, because they are stored in the holder’s credential repository. It’s the best of all worlds! I hope you are inspired to go test these new features, so I am not the only one having all the fun!