Federating AWS with Azure AD

TL;DR – For an enterprise-level authentication and authorization solution, federate AWS single accounts with Azure AD.

Security best practices dictate that AWS root accounts should be used only on rare occasions. All root accounts should have MFA enabled, access keys removed, and monitoring set up to alert when the root account is used. For day-to-day work, users should access their AWS services with their IAM identities, and the best practice is to federate that access with a reliable identity provider (IdP), such as Azure AD.

There are two main options for federating authentication to AWS accounts. In this post I will walk through both options and explain why I prefer one over the other.


(1) AWS SSO

The first option is to federate AWS SSO. This is configured with the AWS SSO instance within the AWS Organization. As a reminder, AWS Organizations allows administrators to manage several AWS accounts. The single sign-on integration is done between AWS SSO and the Azure tenant. With this configuration, users in Azure AD are assigned to the AWS SSO enterprise application, so they are not assigned to a specific AWS account. The assignment of users to specific permission sets is done within AWS SSO. Those permission sets determine the user’s role(s) within the individual AWS accounts.

And this is the end-user experience when federating AWS SSO:

From MyApps, the user clicks the AWS SSO enterprise application assigned to them in Azure AD. They are then presented with an AWS SSO menu of the accounts and roles assigned to them, and clicking one takes them into that account with that specific role.

Please keep in mind the following details when using this setup:

  • Users and groups have to exist locally in AWS SSO, so this solution will provision users to AWS SSO when they are assigned to the AWS SSO enterprise application.
  • In a similar manner, users are disabled (not deleted) when they are removed from the AWS SSO enterprise application.
  • Since the roles are assigned within AWS SSO, Azure AD is not aware of which roles are assigned to which users. This becomes important if you need specific Conditional Access policies or specific access reviews and/or access packages within Identity Governance.
  • Supports both SP- and IdP-initiated login, since the users exist locally in AWS SSO.

(2) AWS Single-Account Access

The second option is to federate the AWS Single Account. This is configured with each individual AWS account. The integration is done between the AWS account and the Azure tenant. Therefore, when the users in Azure AD are assigned to the AWS account enterprise application, they are assigned to a specific AWS account. Azure AD is fully aware of the specific account the users are assigned to as well as the specific AWS roles they are assigned to.

And this is the end-user experience when federating a single account:

From MyApps, the user clicks the specific AWS single-account enterprise application assigned to them in Azure AD. They are then presented with the roles assigned to them for that account, and selecting one takes them into the account with that specific role.

Please keep in mind the following details when using this setup:

  • Users and groups do NOT exist locally on AWS. That’s right, users and groups do not need to be provisioned or deprovisioned in AWS.
  • The provisioning configuration ensures roles created in AWS are synchronized to Azure AD, so they can be assigned to users.
  • Azure AD is fully aware of which roles are assigned to which users for specific accounts.
  • This configuration allows implementation of Conditional Access policies for the specific AWS accounts.
  • Only supports IdP-initiated login, since the users do not exist locally in AWS.
  • To ensure AWS CloudTrail data accuracy, add the source identity attribute to identify the user responsible for AWS actions performed while assuming IAM roles.
  • When CLI access is required, temporary credentials can be generated with the AssumeRoleWithSAML API (the aws sts assume-role-with-saml CLI command). The credentials last for the role’s session duration, which can be configured up to a maximum of 12 hours.
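As a sketch of what that looks like in practice — the ARNs, assertion file, and duration below are placeholders; substitute values from your own environment:

```
# Hypothetical values – replace the role ARN, IdP ARN, and SAML assertion file
aws sts assume-role-with-saml \
  --role-arn arn:aws:iam::123456789012:role/AzureAD-Admins \
  --principal-arn arn:aws:iam::123456789012:saml-provider/AzureAD \
  --saml-assertion file://assertion.b64 \
  --duration-seconds 43200
```

The call returns a temporary AccessKeyId, SecretAccessKey, and SessionToken, which can be exported as environment variables for subsequent CLI commands.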

Drumroll, please…

By now you probably guessed which option I lean towards. The AWS Single-Account access configuration should be selected for enterprises that have specific compliance requirements that include identity governance or any organization that wants to implement a zero trust model, since “least privileged access” is at the foundation.

There are several benefits to this configuration.

  • Because users do not exist locally in AWS, there are no users or entitlements to remove in AWS when employees are terminated.
  • The configuration allows the tracking of the specific AWS roles within Azure AD, which means access packages* can be created and then automatically assigned or be made available to be requested by users with their appropriate approvals.
  • Those access packages can also have associated access reviews* to ensure access is removed when no longer needed.
  • Specific Conditional Access* policies can be created for the specific AWS accounts. For example, you may require access to production AWS accounts only from compliant devices, but maybe the rules are not as tight for development AWS accounts.
  • Having one central identity governance solution means organizations have the ability to meet compliance, auditing, and reporting requirements. This also means that the principles of least privilege and segregation of duties* can realistically be enforced.

Some organizations have tried to manage AWS accounts with AWS SSO and implement some level of identity governance using security groups. However, as organizations grow, this becomes unnecessarily complex and makes it challenging to meet compliance requirements, for the reasons described in detail in my previous post.

* More on those topics in follow-up posts.

Azure Lighthouse and Sentinel: Assigning access to managed identities in the customer tenant

TL;DR – For MSSPs: to trigger playbooks in customer tenants, you sometimes need to grant the playbooks’ managed identities permissions to execute actions within the customer tenant. This post covers how to configure the access required to assign those roles, as well as the steps to assign them.

After I wrote the previous blog post, “Delegate access using Azure Lighthouse for a Sentinel POC”, I received many questions from current and prospective partners. I updated that post with additional clarification in some areas, but other questions were a bit more complex. In this post I want to address one of those questions, which the existing documentation covers without specific examples, so it can be a challenge. Hopefully, this post helps other partners trying to configure the same access.

First, the template modifications

When you configure the initial Azure Lighthouse template to delegate access from the customer workspace to the MSSP, some modifications are required to allow the MSSP users and groups to assign access. You have to modify the template manually: specifically, add the “User Access Administrator” role (18d7d88d-d35e-4fb5-a5c3-7773c20a72d9) and the “delegatedRoleDefinitionIds” parameter, which specifies the roles that the MSSP user or group will be able to assign to managed identities within the customer workspace.
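As a sketch, the relevant authorization entry in the template ends up looking something like this. The principalId is a placeholder, and the four delegatedRoleDefinitionIds shown are the built-in role definition IDs for Microsoft Sentinel Reader, Log Analytics Contributor, Contributor, and Reader:

```
{
  "principalId": "00000000-0000-0000-0000-000000000000",
  "principalIdDisplayName": "LH Sentinel Contributors",
  "roleDefinitionId": "18d7d88d-d35e-4fb5-a5c3-7773c20a72d9",
  "delegatedRoleDefinitionIds": [
    "8d289c81-5878-46d4-8554-54e1e3d8b5cb",
    "92aaf0da-9dab-42b6-94a3-d43ce8d16293",
    "b24988ac-6180-42a0-ab88-20f7382dd24c",
    "acdd72a7-3385-48ef-bd42-f606fba81ae7"
  ]
}
```

The roleDefinitionId is the special-purpose User Access Administrator assignment, and delegatedRoleDefinitionIds lists the roles that principal will be allowed to grant to managed identities.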

If you are worried about the very powerful “User Access Administrator” role being assigned to the MSSP, keep in mind that it’s a different version of that role. As the documentation states, it’s “only for the limited purpose of assigning roles to a managed identity in the customer tenant. No other permissions typically granted by this role will apply”.

Once that template is uploaded, your customer’s delegation will look like this:

As you can see above, my “LH Sentinel Contributors” group now “Can assign access to” 4 different roles: “Microsoft Sentinel Reader“, “Log Analytics Contributor“, “Contributor“, and “Reader“.

Why do I need these permissions?

Why would I even need these permissions? In my case, I have a playbook that I need to trigger on my customer’s workspace, the “SNOW-CreateAndUpdateIncident” playbook. And that playbook needs some permissions to be able to execute correctly.

If I was executing it locally on my MSSP tenant, then I would just assign the roles directly by going to the Identity blade within the specific logic app.

But if I try to do this from the Sentinel portal for one of my customer’s workspaces, I get an error like the one below.

So, I need a way to assign these roles in my customer’s workspace to trigger those playbooks locally from my customer’s workspace.

Now, I can assign roles

Per the documentation, this role assignment needs to be done using the API, and specifically with API version 2019-04-01-preview or later.

You will need the following values to populate the parameters in the body:


Note: These are both in the customer’s workspace, which you will have access to as long as the Logic App Contributor role is included in your Azure Lighthouse template.


You will use those values to populate the parameters within the body of the API call. The following parameters are required:

  • “roleDefinitionId” – This is the role definition in the customer’s workspace, so make sure you use the customer subscription ID value.
  • “principalType” – In our case this will be “ServicePrincipal”, since it’s a managed identity.
  • “delegatedManagedIdentityResourceId” – This is the resource ID of the delegated managed identity resource in the customer’s tenant. This is also the reason we need API version 2019-04-01-preview or later. You can copy and paste it from the playbook’s “Properties” tab, as shown above.
  • “principalId” – This is the object ID of the managed identity in the customer’s workspace, as shown above.

The file will look like this:

Note: 8d289c81-5878-46d4-8554-54e1e3d8b5cb is the value for Microsoft Sentinel Reader, which is the role I am trying to assign to this managed identity.
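For reference, here is a sketch of that body file — the subscription ID, object ID, and resource group are placeholders, and the playbook path is illustrative:

```
{
  "properties": {
    "roleDefinitionId": "/subscriptions/<customer-subscription-id>/providers/Microsoft.Authorization/roleDefinitions/8d289c81-5878-46d4-8554-54e1e3d8b5cb",
    "principalId": "<managed-identity-object-id>",
    "principalType": "ServicePrincipal",
    "delegatedManagedIdentityResourceId": "/subscriptions/<customer-subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/SNOW-CreateAndUpdateIncident"
  }
}
```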

As noted in the API documentation, to assign a role you will need a “GUID tool to generate a unique identifier that will be used for the role assignment identifier”, which is part of the value used in the command below to call the API. The value you see below in the API call, 263f98c1-334b-40c1-adec-0b1527560da1, is the one I generated with a GUID tool. You can use any GUID generator; I used this one.
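One way to make the call is with az rest — a sketch, where the scope targets a resource group in the customer subscription, the IDs other than the role assignment GUID are placeholders, and the body is assumed to be saved as body.json:

```
az rest --method put \
  --url "https://management.azure.com/subscriptions/<customer-subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Authorization/roleAssignments/263f98c1-334b-40c1-adec-0b1527560da1?api-version=2019-04-01-preview" \
  --body @body.json
```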

And once you run the command, you’ll see output similar to the one shown below, which means the role was successfully assigned.

And that’s it! Now the managed identity associated with the Sentinel playbook (logic app) in the customer’s workspace has the required permissions to be able to execute within the customer’s workspace.

Delegate access using Azure Lighthouse for a Sentinel POC

TL;DR – Steps to delegate access to users on a different tenant for a Sentinel POC using Azure Lighthouse.

I include this live demo in every webinar I deliver about Microsoft Sentinel, but today a partner asked me for documented step-by-step instructions, which I wasn’t able to find, so I am creating this post.

Most MSSPs need to create a POC to test Microsoft Sentinel, where they configure one workspace as the MSSP and a few other workspaces as customers. To be clear, the documentation does a great job of covering the correct way to do this in a real scenario, where partners need access to their customers’ workspaces; but for a POC, a partner doesn’t need to publish a managed service offer, they can just do this using an ARM template.

From the MSSP tenant

Navigate to “My Customers” and click on “Create ARM Template” as shown below:

Name your offer and choose if you want your customers to delegate access at “Subscription” level or “Resource group” level, then “Add authorization“.

You can choose to delegate access to a User, Group, or Service principal. I usually recommend a Group over a User, because the MSSP team members will change over time.

You can choose to assign the role as “Permanent” or “Eligible”. If you’ve worked with PIM (Privileged Identity Management) previously, then you are familiar with the process. The eligible option will require activation before the role can be used. For eligible roles, you can also choose a specific maximum duration, and whether multifactor authentication and/or approval is required to activate.

In order to see your customers in the “Customers” blade later, you will need to include the “Reader” role, as shown below. Click “View template” to be able to download it.

Download the ARM template file.

From the customer tenant

Before you import the template, ensure you have the correct permissions on the subscription. You can follow the steps here to ensure you can deploy the ARM template.

Click “Add offer” and select “Add via template”, as shown below.

Drag and drop the template file you created, or upload it, as shown below.

Once the file is uploaded, you’ll be able to see it, as shown below:

You can also see the “Role assignments” that were delegated to the MSSP tenant, as shown below.

And if the customer tenant needs to delegate access to new subscriptions, they can do so by clicking on the ‘+’ button, as shown below.

And selecting any other subscriptions or resource groups that need to be delegated.
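If you prefer the CLI to the portal, the same template can be deployed from the customer tenant at subscription scope — a sketch, assuming the Azure CLI, an account with the required permissions on the subscription, and the template saved as lighthouse.json:

```
az deployment sub create \
  --name LighthousePOC \
  --location eastus \
  --template-file lighthouse.json
```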

Back to the MSSP tenant

Now you can see your new customer from the “Customers” blade, as shown below.

Since the delegation included Sentinel Contributors, now you can manage the customer tenant workspace from the Microsoft Sentinel navigation within the MSSP tenant, as shown below.

Bonus: Since you have reader access, you can also see the subscription from Defender for Cloud, Environment settings. You can always delegate additional roles, if you need to manage MDC for this tenant.

A quick note on delegations at the Resource Group level: I’ve seen instances where, after a Resource Group delegation, it takes a little while before the global filter lets you select the newly added tenant and subscription associated with the resource group. Once those updates kick in, you should be able to modify the filter from the blue menu bar, as shown below, and update it to include all directories and all subscriptions.

In my opinion, a POC is the best way to experience the wide variety of features within Microsoft Sentinel. You can even use the free trial that is available for 31 days. Another great resource that I always recommend for teams starting to get familiar with Microsoft Sentinel is the Sentinel Training Lab, which is available from the Content Hub blade in Sentinel. Finally, for MSSPs, http://aka.ms/azsentinelmssp is an invaluable resource to get a good overview of the recommended architecture.

A few of my favorite MDCA features

TL;DR – Just a few of my favorite MDCA features, which you may already be paying for.

I previously mentioned my strong belief that Sentinel and MDC are best buddies. Similarly, I firmly believe MDCA (Microsoft Defender for Cloud Apps) is definitely a member of the MDE squad. If you are using MDE (Microsoft Defender for Endpoint) and you haven’t tested MDCA, you may be surprised how well they work together, and guess what? You may already be paying for it!

MDCA, which was previously known as MCAS (Microsoft Cloud App Security), is a CASB (I am going for a record number of acronyms in this post!), which stands for Cloud Access Security Broker. In an oversimplified way, the job of a CASB is to enforce security policies. I think MDCA does that and more, and quite honestly, I am continuously discovering new features. In this post I’ll go over a quick list of some of my favorites.

Cloud Discovery / Shadow IT

MDCA can discover applications (31,000 at last count) through various means:

  • As part of the MDE squad, it can integrate with MDE to get data from managed Windows devices, as shown above. This integration also gives you the power to block apps as well. More on that a little later.
  • A Log Collector can gather logs over syslog or FTP, and various Docker images are available.
  • It can also natively integrate with leading SWGs (Secure Web Gateways) and proxies, such as Zscaler, iboss, Open Systems, Menlo Security, and Corrata (no need for Log Collectors).
  • You will also see data from the Cloud App Security Proxy, so even if traffic doesn’t come from a managed Windows device, you will get some data from the other devices as well, as shown below.

And I can also create policies that will alert for Shadow IT, such as the ones shown below:

Block Apps

There are a few ways apps can be blocked as well. One of those is through the integration with MDE. I configured a few applications as unsanctioned for testing purposes, as shown below.

So, when I try to access one of those applications from a managed Windows device, I receive the following error:

And it’s not just Edge! See the error message below from Chrome on the same device:

I can also “Generate block script” for various types of appliances, as shown below:

Here is an example based on the applications I’ve set as unsanctioned:

Ban OAuth Apps

Solorigate anyone? MDCA can help you monitor OAuth apps in various ways, as shown below, where you can discover and either ‘approve‘ or ‘ban‘ risky apps.

Once you mark an app as ‘banned’, the Enterprise Application is updated with “Enabled for users to sign-in?” set to “No”. I also noticed that the application disappeared from “MyApps” for those users that were previously assigned to the application.

You can also create policies that will automatically revoke risky apps, as shown below.

Conditional Access App Control

So, technically this is done with another member of the squad, Conditional Access. The same Conditional Access we know and love that controls initial access is also capable of controlling access within a session when it works with MDCA.

I have a few very powerful policies, as shown below.

I won’t cover the first one “Confidential Project Falcon Sensitivity Label Policy“, because I dedicated a full blog post to that one, you can find it here: Restrict downloads for sensitive (confidential) documents to only compliant devices.

The second, “Block sending of messages based on real-time content inspection – wow”, is a way to prevent Teams messages containing a specific word, in this case when sent from a non-compliant device. In my example, I want to block the word ‘wow’. Maybe ‘wow’ is my new super secret project and I only want people discussing it from compliant devices. So, if you try to send a message with the word ‘wow’ from a non-compliant device, you would see the following:

Yes, the message is customizable :). And it prevents the message from being sent, as shown below:

Next, “Block sending of messages based on real-time content inspection – SSN” is very similar to the above, except it’s not just a word but a pattern, an SSN pattern. So, the user would see a similar message, and it won’t be sent either.

Note: This is not real data, it’s just sample data used for DLP testing purposes.

Next, “Block upload based on real-time content inspection – CCN and SSN” is similar, but now I am checking for content within files that are uploaded, whether they are attached to an email, uploaded to a SharePoint site, etc.

Finally, “Proxy – Block sensitive files download – SSN” is similar, but applies upon download.

Information Protection

Ok, so you saw some information protection above, but there’s more!

One of the policies above is “File containing PII detected in the cloud (built-in DLP engine)“, which automatically labeled a file, based on the contents, as shown below:

Threat Protection

There are some pretty powerful possible controls within this area, as shown below:

But I have chosen to show you how this “Mass download by a single user” policy works. Note that I have adjusted some of the values, so I can generate an alert for my test.

Now, I know you may be thinking ‘but this is all within Microsoft services‘. So, check this out! This alert was generated by a user that downloaded files from an AWS S3 bucket, as shown below:

Honorary Mention 1 – App Governance

App Governance is technically an add-on, but I think it’s pretty cool, so I am including it. Note that this is now under the new Cloud Apps menu in security.microsoft.com.

App governance uses machine learning and AI to detect anomalies in OAuth apps. It can alert you on applications from an unverified publisher that have been consented to by your users. It can also alert on overprivileged applications, with permissions that are not even used, and various other anomalies.

Honorary Mention 2 – Security Posture for SaaS Apps

Security Posture for SaaS apps is super new, still in preview, but I can see the incredible potential. Currently it’s only available for Salesforce and ServiceNow, but I am sure more are to come. It makes recommendations on security best practices within those SaaS applications, as shown below:


I’ve only described some of my favorite features within MDCA. MDCA also integrates pretty closely with MDI (Microsoft Defender for Identity) and various other Microsoft and 3rd party security services. There is a lot more to MDCA than I included here, but I hope this post gives you an idea of how this service can help you secure your organization.

With a little help from MDC

TL;DR – Testing the new MDC governance rules to automatically assign and track owners for recommendations.

I was telling one of my partners this week that Sentinel and Microsoft Defender for Cloud (MDC) are best buddies. I have written about some of that nice integration in a previous blog. This week I read about a new MDC feature that I think is going to be a huge help, especially to those security professionals tracking pending remediations, recommendations, and security exceptions (hi Roberto!).

Governance rules

This new feature was included in the Microsoft Defender for Cloud RSA announcements, and it is very well documented in our official documentation. To configure, navigate to the “Environment settings” blade and select either an Azure subscription (as shown below), an AWS account, or a GCP project (more on those a little later).

Then you can see the new “Governance rules (preview)” blade, as shown below. For this test I configured a rule that will assign all the “MFA” recommendations for this subscription to a specific user.

I selected the user I want to own those specific recommendations, and I also set a remediation timeframe. I could also choose a grace period, which means the item won’t affect the secure score for that amount of time, but I didn’t enable it for my test. And the icing on the cake is the notifications for the items that are open and overdue.

Now, when I look at those recommendations, I can see the owner, due date, and whether it’s on time or not. Neat!

But wait, there’s more!

It’s not just for Azure subscriptions; you can do the same for AWS accounts and GCP projects that are connected to MDC. In the example below I have chosen to assign all ‘CloudFront’ recommendations to a specific user:

Also, when you first create the rule, it will ask you if you want to apply the rule to any existing recommendations, as shown below. For my example, I chose to apply it.

In the same manner, I can now see the recommendation, the owner, due date, and the status, which is currently, ‘On time‘.

And you can also update the owner and the ETA, because sometimes life happens.

And if you have an extension, you can see that information as well, as shown below.

I know this new feature will be very useful and will automate some of the hassle associated with tracking the security recommendations. It’s simple to configure, but a huge help for all those security teams working to protect their organizations.

Disguising data

TL;DR – Testing the new ingestion time transformation features in Microsoft Sentinel.

When I read the word “transform“, I immediately think of Optimus Prime and Bumblebee, you know “robots in disguise” (I can hear the song too!), doesn’t everyone? But that’s not the transformation I am blogging about today. This week I’ve been testing a new feature in Microsoft Sentinel that allows you to configure rules to transform data upon ingestion. It’s a feature many of my partners have requested previously, so I gave it a shot and I was really amazed how easy it was to configure.

Transforming AWSCloudTrail

AWSCloudTrail is the largest data source I have in my tenant, so I figured I would test the scenario where I filter out some data that may not be as useful for security purposes. While I was at it, I might as well test adding some custom fields, because that may come in handy as well. If you are looking for the current list of tables that support this feature, it’s here.

You can find this new feature by navigating to the shiny new “Tables” blade within the Log Analytics Workspaces menu, as shown below:

In my case I am testing with the AWSCloudTrail table, so I chose to “Create transformation” within the menu of the table, and then I gave my new rule a very creative name “TransformationDCR“.

I then specified the columns I don’t want to ingest and the ones I want to add within the “Transformation editor“. In my case, I don’t want to see “OperationName” because all the rows have the same value, “CloudTrail“, so I use project-away to filter that out. But just for testing purposes, I am adding two new columns, “MFAUsed_CF” and “User_CF“. The ‘_CF‘, which I *think* stands for custom field, is only needed for non-custom tables.

The KQL query I am using above is:

source
| project-away OperationName
| extend MFAUsed_CF = tostring(parse_json(AdditionalEventData).MFAUsed)
| extend User_CF = iif(UserIdentityUserName == "", UserIdentityType, UserIdentityUserName)

By the way, if you don’t add the “_CF”, you will see the following error: “New columns added to the table must end with ‘_CF’ (for example, ‘Category_CF’),” as shown below:

And that’s it!

Once the change takes effect, I can start to see where the data in “OperationName” is no longer showing and I can start seeing values in my new columns, “MFAUsed_CF” and “User_CF“, as shown below:

Masking data during ingestion

You can also configure Data Collection Rules for custom tables, by using the “New custom log” option under “Create“, as shown below:

I am choosing the ingenious name of “SampleData” for my custom table, and the DCR will have the innovative name of “SampleDataMaskDCR”, as shown below:

Please note, the data I am using here is just sample data I use for DLP tests, it’s not valid data.

When I upload the JSON file, it automatically warns me about the missing timestamp.

It then automatically adds that first line (TimeGenerated), which I will leave as is, since I just need it for testing purposes. Then I added one of the masking samples found in the library of transformations to mask the SSN values, as shown below:
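The masking samples follow this general shape — a sketch, assuming the SSN value lands in a column named SSN and uses the 123-45-6789 format, so the last four digits start at offset 7:

```
source
| extend SSN = strcat("XXX-XX-", substring(SSN, 7, 4))
```

The transformation replaces the first five digits with a fixed mask while keeping the last four, so the masked value is what actually gets ingested into the table.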

And that’s it!

I can see my new table is created, as shown below:

Also, if I want to check the DCRs created, I can see that information from the Monitor portal, as shown below:

More info

If you want to give this a shot, there are a couple tutorials in the Microsoft documentation that I found very useful. There is also a Tech Community blog and a library of transformations that gives you a head start with some of the most common scenarios, not just for filtering, but also for masking data, which some of my partners have previously requested. Happy testing!

My adventures (so far) with verifiable credentials.

TL;DR – Sharing my initial experience with verifiable credentials.

Update: Verifiable Credentials is now Microsoft Entra Verifiable ID and is now part of the Microsoft Entra family.

This week I’ve been playing around with verifiable credentials. Yes, playing – that’s exactly what it felt like, because learning about verifiable credentials has been a lot of fun! I’ve been following developments related to decentralized identifiers (DIDs) for a while because I can see that this is the technology that will finally allow people to have full control of their identities.

This week I’ve been following the instructions in the Microsoft documentation, which I have to say are pretty detailed and helpful. But I did run into a few issues, likely due to my lack of experience with verifiable credentials and decentralized identities in general. In this blog post I am documenting my experience and the few issues I ran into, hoping that if someone else runs into the same issues, they can find a possible solution. Please note, I am not going into details about the concepts, since those are well documented already. However, if you are interested in learning more, there is also a fantastic whitepaper and an FAQ that can be very helpful.

First error

The first error I encountered was when I was creating the credentials. The error was “We couldn’t read your files. Make sure your user has the ‘Storage Blob Data Reader’ role assigned in the selected container”, as shown below:

In my case I had two problems. The first was that when I created the storage account, I had chosen to enable network access only from my IP, which does not work. Access has to be enabled from all networks at the storage account level, even though the container is private.

The second issue I had was that I originally assigned the ‘Storage Blob Data Reader‘ at container level, but it had to be at storage account level.

And with that it will be inherited by the container as well, as shown below.

Second error

My second error was when I was building the project, where I received these errors:

error NU1100: Unable to resolve 'Microsoft.AspNetCore.Session (>= 2.2.0)' for 'net5.0'.
error NU1100: Unable to resolve 'Microsoft.Identity.Web (>= 1.18.0)' for 'net5.0'.
error NU1100: Unable to resolve 'Newtonsoft.Json (>= 13.0.1)' for 'net5.0'.

The fix was to add a new package source to my NuGet configuration files:

dotnet nuget add source --name nuget.org https://api.nuget.org/v3/index.json
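Equivalently, you can add the entry directly to a NuGet.config file:

```
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>
```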

Third and final error

Finally, I thought I had everything just right, but when I tried to scan the QR code with the Microsoft Authenticator, which is my credential repository, I received this message: “Sorry, you can’t go any further. Tap OK to go back”.

Selecting “View Details” gave me the option to “View logs”, which had a TON of details. Among them was the following error (thanks Matthijs!): “Unable to access KeyVault resource with given credentials”. The solution was to add permissions that were missing from my key vault, as shown below:

The key permissions here are ‘get’ and ‘sign’.
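If you’re scripting it, the equivalent access policy can be granted with the Azure CLI — a sketch, where the vault name and object ID are placeholders for your own values:

```
az keyvault set-policy \
  --name <vault-name> \
  --object-id <service-principal-object-id> \
  --key-permissions get sign
```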

Now it’s working!

After those issues were addressed, I could follow the entire flow as described in the tutorial.

When I scan the QR code to issue a credential, I see the following:

And in the authenticator app, I see the following:

This is where I enter the pin that is shown in the application, where I scanned the QR code. One quick note here, the reason I see the little “Verified” blue box above is because I followed the steps to verify my domain, as noted below:

So, after that I can see my credential has been issued, as shown below.

And I can also verify my credential. Once I scan the QR code to verify the credential, I see the following:

And the authenticator gives me (as the holder of this credential) the option to ‘allow‘ this application to confirm that I am indeed an “AFABER PowerPuff VC Expert“, as shown below:

And later I can also see the activity associated with this credential, as shown below:

There are many reasons to become familiar with Verifiable Credentials. They are private because they allow holders to share only the information they want to share about themselves. They are tamper-proof, so they are secure. They are available when needed and they are portable, because they are stored in the holder’s credential repository. It’s the best of all worlds! I hope you are inspired to go test these new features, so I am not the only one having all the fun!

No, really, you don’t need that access

TL;DR – CloudKnox initial setup and the incredible value it brings to organizations and the security professionals working hard to keep them secure.

Update: CloudKnox is now Microsoft Entra Permissions Management and is now part of the Microsoft Entra family.

If you’ve been working in security for longer than you care to admit, or just a month, then at some point you’ve found yourself trying to implement least privilege and doing your very best to explain to ‘Overpermissioned Dave’ that really, he doesn’t need ALL those permissions. Ultimately, no one wants to be the owner of the account that is misused by an attacker. I spent a portion of this week learning about CloudKnox, a Cloud Infrastructure Entitlement Management (CIEM) solution, and I can already see the huge value it can bring to any conversation around permissions.


The Microsoft documentation (and videos!) walks you through the initial configuration, which is clear and very helpful, so I won’t spend any time covering that. However, I will say that if you are planning to give this a try, the time is now! CloudKnox is free while it’s in preview, so you’ll have some time to onboard your Azure subscriptions, AWS accounts, and GCP projects without associated costs.

When you initially go to enable CloudKnox, you will see the following:

As shown above, the preview will stop working after it goes GA, unless you contact your sales team to license the released version.

Once your configuration is done you can just come back to the Azure AD Overview blade to find the link to CloudKnox, as shown below.

You will also notice a new service principal, as shown below:

Onboarding Azure, AWS, and GCP

Again, the Microsoft documentation does an excellent job of walking you through the Azure, AWS, and GCP onboarding process. However, in my case I wasn’t fully paying attention when I did the configuration for one of my AWS accounts, so the controller was showing as disabled, as shown below.

Note: I also noticed that when you do the initial setup, it will show up as Disabled until it syncs up completely. In my case I forgot to flip the default flag when creating the stack in AWS, so I had to update it after the initial configuration.

I was able to update it by creating a change set for my stack in AWS, specifically the one for the Member Account Role, for which I just used the default name of mciem-collection-role, as you can see below. I want the EnableController to be set to true because I want to be able to trigger remediations from CloudKnox into my AWS account, but this is up to the organization.
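For reference, the same change set can be sketched from the AWS CLI. The stack name matches the default mciem-collection-role mentioned above, but the exact parameter list depends on the template version, so treat this as an illustration rather than the exact commands I ran:

```shell
# Create and execute a change set that flips EnableController to true,
# keeping the existing template (if your template version has additional
# parameters, add ParameterKey=...,UsePreviousValue=true entries for them):
aws cloudformation create-change-set \
  --stack-name mciem-collection-role \
  --change-set-name enable-controller \
  --use-previous-template \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters ParameterKey=EnableController,ParameterValue=true

aws cloudformation execute-change-set \
  --stack-name mciem-collection-role \
  --change-set-name enable-controller
```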

Then I came back to CloudKnox, selected “Edit Configuration” under the “Data Collectors” tab for my AWS account, and clicked on “Verify Now & Save”, as shown below.

After that, when I go into the “Authorization Systems” tab, my controller status shows “Enabled” for both my accounts.

I also ran into an odd issue when onboarding GCP that I think may be related to recent authentication flow security changes, which causes “gcloud auth login” to fail with this error: “Error 400: invalid_request Missing required parameter: redirect_uri”. The fix for me was to use “gcloud auth login --no-launch-browser”.

So, what information do I get from CloudKnox?

After you onboard your various Azure subscriptions, AWS accounts, and GCP projects, the first thing you will get is access to the PCI or Permission Creep Index. PCI is basically your risk level based on how overpermissioned your identities and resources are. Below you can see the PCI for my various subscriptions, accounts, and projects.


I don’t have a lot of resources in my various subscriptions and accounts, but I can already see the potential to restrict those permissions. For example, in one of my AWS accounts I have this user I creatively named Administrator that has been granted 10793 permissions and has used exactly 0 of those!

This type of information would clearly show ‘Overpermissioned Dave’ that really, he doesn’t need ALL those permissions.


In this blog I just wanted to share an initial overview of the potential of CloudKnox. There is a LOT more you can do with this tool, including the ability to take immediate remediation actions to right size permissions by removing policies, creating new custom roles, even scheduling permissions for users only when they need them. You can also create autopilot rules to automatically delete unused roles and permissions. I hope you give it a try soon and let me know how it goes!

Cross-tenant workload identities with a single secret

TL;DR – You can have cross-tenant workload identities authenticating using the secret or certificate from their home tenant.

File this one under ‘you learn something new every day’. I always thought that, since application registrations are global and service principals are local, any multi-tenant application would have to authenticate locally. The documentation describes multi-tenant applications as those with an audience of “accounts in any Azure Active Directory”, which makes perfect sense for the typical multi-tenant interactive application that allows end-users to access applications in their home tenant, where they also authenticate.

Well, it turns out multi-tenant also means an application ID can authenticate in another tenant (tenant B) with the credentials that were originally created in the home tenant (tenant A). This is useful for background services or daemon applications that require application permissions (as opposed to delegated permissions). See my previous blog on this subject.

Register the application in tenant A

For example, I create an application, creatively named ‘MultiTenantApp’, in tenant A (Contoso).

I then add some permissions. In this case I am just testing to verify the domains in the tenant, using the domains endpoint, since that will later confirm I am connected to the right tenant. So, I am using Domain.Read.All.
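The same application permission can also be added from the CLI. The Microsoft Graph app ID below is the well-known value, but the role GUID is a placeholder I've introduced; look up the actual Domain.Read.All permission ID rather than trusting it:

```shell
# Add Domain.Read.All as an application permission (=Role) to the app registration.
# 00000003-0000-0000-c000-000000000000 is Microsoft Graph's well-known app ID;
# <domain-read-all-role-id> is a placeholder for the real permission GUID.
az ad app permission add \
  --id <app-id> \
  --api 00000003-0000-0000-c000-000000000000 \
  --api-permissions <domain-read-all-role-id>=Role
```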

And then I add a secret to this application, which will only ever exist in Tenant A (Contoso).

So, to verify, I can log in using the app-id via the following Azure CLI command.

az login --service-principal -u <app-id> -p <password-or-cert> --tenant <tenant>

And I can list the domains as shown below. I just want to verify I am connected to the correct tenant, which for now is the domain of tenant A (Contoso).
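One way to list the domains after logging in is a direct Microsoft Graph call via az rest; the JMESPath query here just trims the output down to the domain IDs:

```shell
# List the domains of the tenant the CLI is currently logged into;
# az rest attaches the current token for graph.microsoft.com automatically.
az rest --method get \
  --url https://graph.microsoft.com/v1.0/domains \
  --query "value[].id"
```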

Create the service principal on tenant B

So, now to create the service principal (without any secrets) in tenant B (PowerPuff), I use the browser to navigate to:
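The exact URL from my setup is not reproduced here, but the standard admin consent endpoint follows this pattern; the tenant domain and app ID below are placeholders I've made up for illustration:

```shell
# Build the standard admin-consent URL; browsing to it as a Global Administrator
# of tenant B creates the service principal and consents to the permissions.
TENANT_B="powerpuff.example.com"                  # placeholder tenant domain
APP_ID="11111111-2222-3333-4444-555555555555"     # placeholder application ID
echo "https://login.microsoftonline.com/${TENANT_B}/adminconsent?client_id=${APP_ID}"
```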


I am logging in as the Global Administrator, so I can consent to the permissions, as shown below:

I then verify the service principal (Enterprise Applications) is created with the correct permissions, since I consented as the Global Administrator above.

The step above is only done once, because I just need to create the service principal and consent to the permissions in tenant B (PowerPuff) one time.

Now I can test

I log in with the app-id and the secret that was created in tenant A (Contoso), but I will access tenant B (PowerPuff).
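In CLI terms, this is the same az login command as before, just pointed at tenant B; all values are placeholders, and --allow-no-subscriptions is useful when the app has no Azure subscription access in tenant B:

```shell
# Authenticate in tenant B with the credential created in tenant A:
az login --service-principal \
  -u <app-id> \
  -p <secret-from-tenant-A> \
  --tenant <tenant-B-id> \
  --allow-no-subscriptions
```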

And it works! I can authenticate with the app-id and secret from Contoso, as long as the service principal exists in the other tenant. This means I don’t need to keep secrets for each and every tenant that I need to access to run this background application.

Leave it open and they will come

TL;DR – A story of how I left an RDP port wide open (oops!) and MDC and Sentinel came to my rescue when my resource was attacked.

Nope! I didn’t do this on purpose. Normally, I do a lot of testing to generate alerts and incidents in MDC, Sentinel, Defender 365 etc., but this one was not planned, not part of any test to generate alerts. This was a real attack.

Recently, I launched some resources as part of a lab to test Azure Purview, because… well, data governance. One of the resources included in the lab was a SQL Server on Azure VM and, unfortunately, the network configuration left the RDP port open. I was lucky to have previously configured Microsoft Defender for Cloud (MDC) and integrated it with Microsoft Sentinel.

The first thing I noticed

The Sentinel incidents started coming in; this is how I first noticed something was wrong. These were incidents and alerts that I did not expect within this workspace.

Digging in a little deeper on one of the alerts (‘Traffic detected from IP addresses recommended for blocking’), I can see the entity associated was my Purview lab server and there was inbound traffic from IPs that may be malicious.

I could also see a link to go to Defender for Cloud and get more info, so I did. I could see that 32 different IPs tried to connect to my resource on port 3389.

I can see more details about those IPs as well.

And I could see actions I could take to “Mitigate the threat” and to “Prevent future attacks” as well. I could also trigger an automated response, which would mean running a Logic App, but in this case the available “Enforce rule” action did not require one.

When I clicked on “Enforce rule”, it gave me the option to immediately update the network configuration to block port 3389 for that server. This recommendation came from the adaptive network hardening recommendations, so it is based on a machine learning algorithm that “factors in actual traffic, known trusted configuration, threat intelligence, and other indicators of compromise, and then provides recommendations to allow traffic only from specific IP/port tuples”. That means the recommended rule will take into account any legitimate sources.

In my case I didn’t need any of the ports open from the Internet, so I updated it to deny all traffic on that port from the Internet, which is what clicking on “Enforce” did, as shown below.
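As a rough equivalent of what “Enforce” produced, a deny rule for RDP from the Internet could be sketched like this; the resource group, NSG, and rule names are illustrative, not the actual names from the lab:

```shell
# Deny all inbound TCP 3389 (RDP) traffic from the Internet on the VM's NSG
# (names below are illustrative placeholders):
az network nsg rule create \
  --resource-group purview-lab-rg \
  --nsg-name sqlvm-nsg \
  --name DenyRdpFromInternet \
  --priority 100 \
  --direction Inbound \
  --access Deny \
  --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-port-ranges 3389
```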

Giving a SOC analyst the ability to stop an attack right there with a single click makes them that much more effective. Also, because the recommendation they are following comes from an evaluation of legitimate traffic by the machine learning algorithm, it is likely how the resource should have been configured initially for its specific requirements. And if you want to automate this response in the future, you can do that from Sentinel (playbooks), MDC (workflow automation), or both, using Logic Apps.

The other alert

You probably noticed there were two different types of alerts in Sentinel; the other one was ‘Suspicious authentication activity’, as shown below.

Digging in deeper on the MDC side (by following the links from Sentinel above), I could see all the details about the various users the attacker tried to use. I could also see the actions recommended by MDC, which include the various recommendations to prevent future attacks. Because MDC is both a Cloud Workload Protection (CWP) solution and a Cloud Security Posture Management (CSPM) solution, it can alert during an attack as well as prevent attacks. The same policies can even prevent resources from being configured incorrectly in the first place, but that’s a subject for another blog.

The original setup

You are probably wondering ‘where are all these alerts and recommendations coming from and why are they showing up in Microsoft Sentinel’? They are coming from Microsoft Defender for Cloud because I enabled the enhanced security services for my resources, as shown below.

This enables these workload protections and various other features, as described in the documentation, and that allows it to alert me when these attacks happen. You can also see the same alerts shown here to the right, under “Most prevalent security alerts”.

And they are showing up in Microsoft Sentinel because I have the Defender for Cloud connector enabled for the subscription where my resource is located and some analytic rules enabled as well.

Multi-cloud and Hybrid

And if you are thinking ‘but this only works on Azure resources’, you will be pleasantly surprised to know that MDC is multi-cloud. So, everything I just shared here can apply natively to resources on AWS and GCP, and to other clouds as well as on-prem resources using Azure Arc.

Just a little further

Since I was able to see these alerts in Sentinel, I was also able to run some playbooks to go to RiskIQ and find out if they knew anything about these IPs that were trying to connect to my resource. If you want to know more about this setup, please reference my previous post, RiskIQ Illuminate Content hub solution within Microsoft Sentinel. And here’s how that information shows up in Sentinel for this specific case.

As you can see above, after running that playbook, the incident was updated to ‘Active‘, the Severity was raised from ‘Low‘ to ‘Medium‘, a tag was added ‘RiskIQ Suspicious‘, and 33 comments were added with more information about the various IPs that were included in the incident. In this case, I had already resolved the issue from MDC, but in other situations a SOC analyst can also make use of this data to correlate to other open incidents.

While this incident was a complete surprise to me, it was a great opportunity to see the power of these Microsoft Security services working together to make resolving these types of attacks that much easier for any SOC analyst.

Sorting out the Azure Activity Connector in Microsoft Sentinel

TL;DR – Just a few tips and tricks for configuring the Azure Activity Connector in Microsoft Sentinel.

Personally, I learn by doing, so whenever anyone asks me what’s the best way to learn Microsoft Sentinel, I point them to the Training Lab, which is available right from the Sentinel Content hub.

It’s a wonderful tool to learn the basics and get comfortable with Microsoft Sentinel. As people move through the training lab modules, one of the most common questions I get comes when they reach module 2 and need to configure the Azure Activity connector: they follow the steps, but the connector still remains ‘not connected’ (not green, as shown below).

Normally, it is a straightforward configuration; however, if you are using a subscription that was previously configured to send logs to another source, or if you set the scope to a higher level (e.g., the root management group), then it may not be updated immediately as expected.

First, you need to ensure you check the box to ‘Create a remediation task’ so that when the new policy is assigned to your existing subscriptions, they are updated to send logs to the specified Log Analytics workspace, the one that is configured for Microsoft Sentinel.

Then, to verify the subscription was updated to send logs to the correct workspace, navigate to your subscription, select the ‘Activity log’ blade, and then click on ‘Export Activity Logs’, as shown below:

Ensure the ‘Log Analytics workspace’ value is the same workspace you configured with Microsoft Sentinel:
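The same check can be done from the CLI by listing the subscription-level diagnostic settings and looking at which workspace each one targets; note the output shape can vary slightly between Azure CLI versions:

```shell
# List diagnostic settings that export this subscription's activity log;
# confirm one of them points at the Sentinel-enabled Log Analytics workspace.
az monitor diagnostic-settings subscription list \
  --subscription <subscription-id>
```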

If it isn’t, go ahead and update it. Once you update it and the workspace starts receiving data, you should see the status of the connector change to green, while also showing the time the last log was received, as shown below:

You should still check the Azure Policy assignment to ensure only the expected policies are assigned and configured at the correct level, which may be the management group, subscription, or resource group level. Also, ensure the parameters specify the workspace that is configured with Microsoft Sentinel.

If you haven’t tried out the Training Lab, I highly recommend you do. You can use the free trial that is available for the first 31 days. Have fun learning!