RiskIQ Illuminate Content hub solution within Microsoft Sentinel

TL;DR – An overview of the RiskIQ Illuminate solution available through the Microsoft Sentinel Content hub.

For the last few months I have been spending quite a bit of time with Microsoft Sentinel, to the point that a day hasn’t gone by without me at least mumbling the word ‘Sentinel’. It’s truly an impressive service, and it’s quite intuitive.

We have been receiving questions on RiskIQ and Microsoft Sentinel, specifically around the new RiskIQ Illuminate solution that is available in the Microsoft Sentinel Content hub. This blog will go through the process to configure and test this solution.

Install RiskIQ Illuminate Content hub solution

Installing the solution is just a click away: click ‘Install’. Yes, it’s that easy!

As you can see above, the solution currently comes with 27 playbooks. These playbooks check whether RiskIQ has intelligence on the entities (hosts or IPs) associated with a specific incident. If so, they enrich the incident by increasing its severity, adding useful tags, and posting comments with links to the information found on RiskIQ. This ensures the SOC analysts working those incidents have this very valuable information readily available when they need it.
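The decision logic those playbooks apply can be sketched roughly as follows. This is a simplified illustration of the enrichment flow, not the solution’s actual playbook code; the classification values, tag text, and field names are all assumptions:

```python
# Simplified sketch of the enrichment decision a RiskIQ playbook makes.
# The classification values and tag names are illustrative assumptions,
# not the solution's actual implementation.

def enrich_incident(incident: dict, riskiq_result: dict) -> dict:
    """Raise severity, tag, and comment on an incident when RiskIQ
    reports the associated entity as malicious or suspicious."""
    classification = riskiq_result.get("classification", "unknown")
    if classification == "malicious":
        incident["severity"] = "High"
        incident.setdefault("tags", []).append("RiskIQ Malicious")
        incident["status"] = "Active"
    elif classification == "suspicious":
        incident.setdefault("tags", []).append("RiskIQ Suspicious")
    # Link back to any RiskIQ article so the analyst can pivot quickly.
    if "articleUrl" in riskiq_result:
        incident.setdefault("comments", []).append(
            f"RiskIQ article: {riskiq_result['articleUrl']}")
    return incident
```

The ‘malicious’ branch mirrors the behavior shown later in this post: the severity goes to ‘High’, a ‘RiskIQ Malicious’ tag is added, and a comment links back to the RiskIQ article.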

RiskIQ Community Account

In order for the playbooks to have access to RiskIQ, you will need a RiskIQ community account with access to Illuminate. Follow these steps to configure it:

  1. Register to create an account on the RiskIQ community, if you don’t already have one.
  2. Activate the Illuminate trial. Click on ‘Upgrade’, then follow the steps to activate the trial.
  3. Once you activate the trial, you need to get your API key through the Account Settings page.

Configure Playbooks

After installation of the solution, you’ll see the RiskIQ playbooks through the Automation blade as shown below.

To give the playbooks access to both RiskIQ and Sentinel, verify that the associated API connections show as “connected”.

We’ll first start with the API connection to RiskIQ. Click on the “RiskIQ-Base” playbook:

Then select the API connection ‘riskiq-shared’.

Then enter the API key information you got from the RiskIQ community account settings page and save.

Now, for the rest of the playbooks you need to authorize the associated API connections. Click on the playbook, for example, “RiskIQ-Automated-Triage-Incident-Trigger”, then click on the associated API Connection as shown below:

Click ‘Authorize’, which will prompt you to sign in with a user that has the required permissions. And don’t forget to click ‘Save’.

Repeat those steps for the API connections associated with the remaining playbooks.


To test the playbooks, I created a watchlist that included some of the IPs listed as IOCs in the RiskIQ ‘UNC1151/GhostWriter Phishing Attacks Target Ukrainian Soldiers’ report.

And then I created an analytic rule that just reads from the watchlist, as shown below.

I also configured entity mapping for the IP address as shown below:

While I am here, notice that this incident (150) was automatically created with a ‘medium’ severity, since that’s what I configured in the analytic rule. Now I can run the playbooks from the incidents blade as shown below.

Or I can create an automation rule that triggers the playbooks to run automatically based on a set of conditions, as shown below:

For this test I am going to run the playbook manually, so I can show the incident updates.

After the playbook runs, the severity is now raised to ‘High’, there is a new tag added ‘RiskIQ Malicious’, and the status changes to ‘Active’.

Additionally, these useful comments are added to the incident:

Including a link to the associated RiskIQ article:

In the same way that I can run these playbooks at the incident level, I can also run them at the alert level, for any alert associated with the incident, as shown below. This is because the solution includes both incident-trigger and alert-trigger playbooks.

As with any other playbook (logic app), I can also look at the history of the runs:

And just like any other playbook (logic app) I can troubleshoot in case of issues:


Playbooks in Microsoft Sentinel are used for many different SOAR tasks. This RiskIQ Illuminate solution makes great use of these playbooks to enrich incident data that can make a SOC analyst’s life just a little bit easier. Because we know that these days every little bit counts!

Joiners – Movers – Leavers (JML) Part 4

This post is part of a series.

Part 1 – The basics
Part 2 – Lifecycle Management and Provisioning/Deprovisioning
Part 3 – RBAC/ABAC, Entitlements Management, and Requests & Approvals
Part 4 – Separation of Duties, Certification / Attestation, and Privileged Access Management (this post)

Separation of Duties

SoD is sometimes referred to as ‘segregation of duties’. The typical example used for SoD is Accounts Payable and Accounts Receivable, because having access to both allows a single user to intentionally or unintentionally commit fraud and cover it up. This concept of checks and balances goes hand in hand with the principle of least privilege, which is imperative for enforcing security policies for data access. Using SoD rules or policies when defining roles and entitlements is essential to prevent, or at least limit, a single user’s ability to negatively impact our systems. These policies not only protect users from making mistakes, but also limit how much damage an intruder can do if they are able to impersonate a user. SoD rules and policies should ideally be preventative measures, and a good identity governance solution should provide the means to enforce them during the access request and approval process.
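Conceptually, a preventative SoD check at request time boils down to a set intersection. Here is a minimal sketch with made-up package names; this is not how Azure AD implements it internally:

```python
# Sketch of a preventative SoD check: a request is blocked when the user
# already holds any access declared incompatible with the requested
# package. Package names are made up for illustration.

INCOMPATIBLE = {
    "accounts-payable": {"accounts-receivable"},
    "accounts-receivable": {"accounts-payable"},
}

def can_request(requested: str, current_access: set) -> bool:
    """Return False if the user's current access conflicts with the
    requested package under the SoD policy."""
    conflicts = INCOMPATIBLE.get(requested, set())
    return not (conflicts & current_access)
```

The key point is that the check runs *before* the access is granted, which is exactly where a preventative control belongs.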

Azure AD currently offers this feature in preview. You can add specific incompatible access packages:

Or specific incompatible groups:

This means that if the user already has the incompatible access package assigned, or is a member of the incompatible group, then they cannot request this access package.

Certification / Attestation

We couldn’t possibly talk about access without mentioning access reviews. Because no process is ever perfect, including the JML process, we need to certify or confirm access. Access reviews are typically part of the identity governance solution, and their purpose is to certify that the privileges a user is assigned are still required. There are two main parts to the user access review (UAR) process:

  • Reviewers, who are typically the LOB owners for those privileges, review the users that are assigned and either approve or deny the access going forward.
  • The privileges are automatically removed for any users who were denied by the reviewers.

Azure AD also offers options to send notifications and reminders to the reviewers to ensure they provide feedback within the allotted time. There are also options to automatically remove access for ‘denied’ or ‘not reviewed’ outcomes, basically anything that wasn’t explicitly approved by a person.
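That auto-apply step can be pictured as a simple filter over the review outcomes. A sketch, with assumed outcome labels:

```python
# Sketch of the auto-apply step of an access review: everything that was
# not explicitly approved gets its access removed. Outcome labels are
# assumptions for illustration.

def access_to_remove(review_outcomes: dict) -> list:
    """Given {user: outcome}, return the users whose access should be
    removed (denied or never reviewed)."""
    return sorted(user for user, outcome in review_outcomes.items()
                  if outcome != "approved")
```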

Continuous access reviews for privileged access group membership and privileged role assignments are essential to re-certify, attest, and audit users’ access to resources.

Azure AD offers access reviews for:

  • Access Packages
  • Teams/Groups and Applications
  • Privileged roles and groups managed via PIM (see next section)

Privileged Access Management / Privileged Identity Management

Privileged roles are top-priority targets for attackers, which is why they have to be protected with an equivalent level of urgency. Microsoft has done a fantastic job of documenting recommendations to protect privileged users, especially when it comes to protecting those highly privileged users from on-prem attacks.

Azure AD offers Privileged Identity Management (PIM), which provides the ability to assign privileged roles as either active or eligible. This means that if a user doesn’t need access for their daily job, they can be assigned the role as eligible, meaning they have to activate it when they need to use it. That activation can then enforce additional requirements, such as:

  • Azure MFA
  • Justification for activation
  • Ticket – if a support ticket is required for auditing purposes
  • Approval

As you can see above, the activation can also be restricted to only a certain number of hours, and it can be scheduled. So, if someone is expecting to work on a Saturday morning, they can get their approvals earlier in the week and schedule the activation for the hours they plan to work.
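Activation can also be driven programmatically through Microsoft Graph. The sketch below only builds the request body for a self-activation; the field names follow my reading of the Graph `roleAssignmentScheduleRequests` API and should be verified against the official documentation, and the ticket system and IDs are placeholders:

```python
# Sketch of a PIM self-activation request body (Microsoft Graph,
# roleManagement/directory/roleAssignmentScheduleRequests). Field names
# should be verified against the Graph docs; IDs are placeholders.

def build_pim_activation(principal_id: str, role_definition_id: str,
                         justification: str, ticket_number: str,
                         start: str, hours: int = 8) -> dict:
    """Build an activation request with justification, ticket info,
    a scheduled start, and a bounded duration."""
    return {
        "action": "selfActivate",
        "principalId": principal_id,
        "roleDefinitionId": role_definition_id,
        "directoryScopeId": "/",  # tenant-wide scope
        "justification": justification,
        "ticketInfo": {
            "ticketNumber": ticket_number,  # support ticket for auditing
            "ticketSystem": "ServiceNow",   # assumed ticketing system
        },
        "scheduleInfo": {
            "startDateTime": start,         # activation can be scheduled ahead
            "expiration": {"type": "AfterDuration",
                           "duration": f"PT{hours}H"},
        },
    }
```

This mirrors the Saturday-morning scenario above: the request carries an approved justification and ticket, starts at the scheduled time, and expires after the bounded number of hours.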

Another great feature is the ability to expire those role assignments. Privileged roles are often assigned for a specific project, or sometimes in an emergency situation, and there’s really no need for those users to keep the roles forever.

In this case PIM allows the assignments to expire and the users can then request extensions if they still need the roles.

Azure AD PIM currently supports the following groups and roles:

  • Azure AD roles
  • Azure resources (RBAC roles for subscriptions)
  • Privileged access groups (preview) – groups that can be assigned roles and that have privileged access enabled.

Finally, one of my favorite features of PIM is the email notifications, which help ensure that once you implement PIM the assignments remain within the guidelines the enterprise has deemed necessary. Users assigned the Global Administrator, Security Administrator, or Privileged Role Administrator roles receive these notifications.

These administrators will be able to see if users are assigned roles outside PIM and/or assigned permanently, and they are provided with links to adjust the assignments as needed.

There is a LOT more to identity governance, but hopefully I have given you an idea of what an enterprise-level solution should include and the tools that Azure AD provides to build on top of its robust offerings. On the road to implementing zero trust, establishing a solid JML process is a big step forward for any enterprise. I know it’s called ‘governance’, but it’s much more than governance; it’s preventive security.

Joiners – Movers – Leavers (JML) Part 3

This post is part of a series.

Part 1 – The basics
Part 2 – Lifecycle Management and Provisioning/Deprovisioning
Part 3 – RBAC/ABAC, Entitlements Management, and Requests & Approvals (this post)
Part 4 – Separation of Duties, Certification / Attestation, and Privileged Access Management


You may ask, how does this all relate to the access control models we’ve heard about? Well, I could present a full dissertation on ABAC, RBAC, MAC, and DAC, but I’m just going to mention the differences and note that the JML process is closely related to these terms because they are the most common access control policy models.

  • DAC – Discretionary Access Controls
  • MAC – Mandatory Access Controls
  • RBAC – Role Based Access Controls
  • ABAC – Attribute Based Access Controls

DAC is based on ACLs or a matrix of rules, such as OS permissions. MAC is mostly used in government, as it’s based on security labels or clearance levels. RBAC is probably the one most people are familiar with, and it’s essentially the foundation of ABAC; you can consider RBAC the ‘birthright’ or ‘default’ role privileges, based on job responsibilities, as I mentioned in the previous post. ABAC, as the name implies, is based on attributes or combinations of attributes, and for that reason it is the most granular. It is also the most flexible, and quite common in modern SaaS services, including most cloud resource services.

In the ABAC example below, I created custom security attributes (currently in preview). In my example I am using clearance levels of ‘Confidential’, ‘Secret’, and ‘Top Secret (TS)’, and I’ve assigned a different level to three different users.

So, when I go to assign the Storage Blob Data Reader role, I can assign it to a group that includes all three users, as members of the ‘My Super Secret Project’ group:

But I can add a condition that only allows them to read if they have a SecurityClearanceLevel attribute value of ‘Top Secret (TS)’.
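For reference, the condition behind that assignment looks roughly like the string built below. The attribute set name (`Project`) is an assumption from this demo, and the exact syntax should be checked against the Azure ABAC condition-format documentation:

```python
# Sketch of what an Azure role-assignment condition on a principal's
# custom security attribute looks like. The attribute set name
# ("Project") is an assumption; verify the syntax against the docs.

READ_ACTION = ("Microsoft.Storage/storageAccounts/blobServices/"
               "containers/blobs/read")
CLEARANCE_ATTR = ("@Principal[Microsoft.Directory/CustomSecurityAttributes"
                  "/Id:Project_SecurityClearanceLevel]")

def build_condition(required_level: str) -> str:
    """Allow blob reads only when the principal's clearance attribute
    matches the required level."""
    return (
        f"((!(ActionMatches{{'{READ_ACTION}'}})) OR "
        f"({CLEARANCE_ATTR} StringEquals '{required_level}'))"
    )
```

Reading the condition: if the action is *not* a blob read it is allowed by the role as usual; if it *is* a blob read, the principal’s clearance attribute must equal the required level.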

As you can see it can get very granular. That’s why the approach is typically to start with RBAC, which includes the bare minimum privileges a specific role will need, and then expand to ABAC, to be able to control granular access privileges.

Entitlements Management

So, to start with RBAC, enterprise teams typically work with the identity management operations team to create specific access packages for the various roles within their team. These packages are just groups of privileges that may include membership in security groups, target application roles, and even access to sites where files are stored. The packages exist within catalogs from which end users can request them, based on certain criteria. However, some packages will be deemed “birthright” for a specific position and/or department combination, so those packages can be automatically assigned to those end users. The automatic assignment may be triggered by values, or combinations of values, on the user record, such as department, job position, etc. The automation of the assignment and removal of these privileges can be achieved using existing tools, such as Logic Apps that communicate with the Microsoft Graph API to trigger assignment of access packages.
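As a rough sketch of that automation, a Logic App (or any script) could map user attributes to a birthright package and build the corresponding Graph entitlement-management request. The department-to-package mapping and the request body shape are illustrative assumptions; verify the exact schema against the Microsoft Graph documentation:

```python
# Hypothetical mapping of (department, jobTitle) to birthright access
# packages; the package IDs and request shape are illustrative only.

BIRTHRIGHT_PACKAGES = {
    ("Finance", "Analyst"): "pkg-finance-analyst",
    ("Engineering", "Developer"): "pkg-eng-developer",
}

def birthright_request(user: dict):
    """Return an entitlement-management assignment request body for the
    user's birthright package, or None if no rule matches."""
    package_id = BIRTHRIGHT_PACKAGES.get(
        (user.get("department"), user.get("jobTitle")))
    if package_id is None:
        return None
    return {
        "requestType": "adminAdd",       # admin-driven direct assignment
        "assignment": {
            "objectId": user["id"],
            "accessPackageId": package_id,
        },
    }
```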

In the example below, I am making the access package available to be requested by “all members (excluding guests)”. That way I can isolate a specific set of permissions from my guests, and perhaps create a separate access package just for guests, with the permissions the enterprise has deemed appropriate for them.

Notice above that “For users not in your directory” is greyed out. This is because this access package was created within a catalog that is not enabled for external users, as you can see below. This flag can be controlled and can be very useful for isolating permissions.

In the example below I am creating an access package that will include all the access that any person joining a specific team will require on day 1. One thing to notice here is that I have the ability to include more than just security groups. I am able to include security groups, teams, SharePoint sites, and application roles, which is a huge Azure AD benefit that is not possible with other identity providers, as I discussed in detail in my previous post, Groups vs Roles.

I can also add specific questions to be answered during the request, data that may be required for auditing and compliance purposes:

Another fantastic feature is the ability to collect data that is required for specific resources. In the example below, the ServiceNow resource requires an additional attribute, perhaps because provisioning has been configured to populate that value on the target application, or because I need it to trigger additional Logic Apps. I am able to add that attribute so it is collected during the request process, as shown below:

The ability to enforce least privilege goes hand in hand with the ability to remove access when it is no longer needed. Normally, the minimum any identity provider should offer is access reviews, which I cover in part 4. However, Azure AD goes above and beyond by providing the ability to expire access packages. I’ve seen this be a hard-to-meet requirement for some compliance frameworks, especially those related to government compliance requirements.

Finally, one of the newest features that is currently in preview is the ability to trigger other actions during a specific stage of the access package.

This is the ultimate flexibility because these custom extensions are used to trigger custom Logic Apps, which many Azure developers are already familiar with. This is something where Microsoft partners can build on top of the Azure AD solutions to enhance the JML process for enterprise customers.

One important note here. As I mentioned in part 2 of this series, it is highly recommended to use solutions that support the SCIM protocol for provisioning/deprovisioning. Logic apps are great for additional changes that may be required on target applications, additional flags that need to be set, etc., but the actual provisioning/deprovisioning of the users and their access should use SCIM where possible. I’ve seen other identity providers use tools to provision that do not rely on the SCIM protocol and it has been the source of many headaches.

Access Requests and Approvals

Other packages, such as those that include administrative privileges, may have to be requested and approved at various levels. Keep in mind that the same package that may be deemed “birthright” for a member of the security team may require approvals for members of a different engineering team. Azure AD provides the ability to create different policies for different requirements:

The benefit of creating access packages is that they are typically associated with a specific role, so the owners of those roles can determine the level of access required in every single application. That access can then be assigned to, or requested by, members of teams without having to request dozens, sometimes hundreds, of different permissions.

As noted above, within each policy, not only can I designate which users or groups can request the specific access packages, but I can also designate who will approve the access for each of the levels and if there is a backup action to be taken, in case someone doesn’t approve/deny the request within a specific amount of time.

And don’t forget about connected organizations, which allow me to control specific access to my tenant from specific external tenants.

With connected organizations I can designate some access packages to be requested only by a specific external tenant, and I can also set an expiration on those! So, when the access expires, the guests can be removed automatically from my tenant once they no longer require the access.

In the final part of the series, part 4, I cover the final identity governance requirements an enterprise should expect from their identity solution.

Joiners – Movers – Leavers (JML) Part 2

This post is part of a series.

Part 1 – The basics
Part 2 – Lifecycle Management and Provisioning/Deprovisioning (this post)
Part 3 – RBAC/ABAC, Entitlements Management, and Requests & Approvals
Part 4 – Separation of Duties, Certification / Attestation, and Privileged Access Management

Lifecycle Management

Lifecycle Management targets the creation of users and their privileges as they join, move through, or leave an organization. It must be done from a central solution, both to ensure accurate reporting and auditing and because that’s the only way to guarantee there are no lingering accounts or access once they are no longer needed.

User Lifecycle – Provisioning / Deprovisioning of users

User identities go through various main stages in the lifecycle. The typical stages are non-existent, active, disabled, and finally deleted. There are also possible transitions between those stages, as depicted below:

Each of these transitions is typically triggered either by the initial record creation in the “source of truth” or by updates to the attributes of existing user records. This is the first component of the JML process: the provisioning and deprovisioning of users. When those transitions occur within the sources of truth, the provisioning solution receives that data and processes the equivalent required transitions on the target applications. For example, the hiring of a new user in an HR solution will trigger the creation of a user in various target applications, including the main enterprise directory, an email solution, etc. A target application on which all users get accounts is sometimes referred to as a “birthright” application; email is a common example, because typically all users require an email account. Other applications will only trigger the creation of users based on a specific attribute; for example, only users hired into the finance department will need accounts created on a financial application. A provisioning automation solution can use the value of ‘department’ to trigger the provisioning of users on a target application.
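The attribute-based scoping described above can be pictured as a set of rules evaluated per user. Here is a sketch with made-up application names; real Azure AD provisioning uses scoping filters configured per application, not code like this:

```python
# Sketch of attribute-based provisioning scope rules. App names and the
# rule shape are illustrative, not Azure AD's scoping-filter syntax.

PROVISIONING_RULES = {
    "email": lambda u: True,  # birthright: every user gets a mailbox
    "finance-app": lambda u: u.get("department") == "Finance",
    "hr-app": lambda u: u.get("department") == "HR",
}

def target_apps(user: dict) -> list:
    """Return the target applications this user should be provisioned to,
    based on the user's attributes."""
    return sorted(app for app, rule in PROVISIONING_RULES.items()
                  if rule(user))
```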

Some target applications do not require users to exist locally at all in order to grant access, because they use a type of federation that doesn’t need a local user. You can see an example of that setup in my AWS Single Account Access blog post. You still need the user to exist on the identity provider (IdP), but they don’t need to exist on the target application; the privileges granted to the user are tracked on the identity provider only. This is a great setup because it eliminates the need to update users on those target applications.

However, simply provisioning or deprovisioning users on target applications does not constitute a full JML process. Normally, when a user is provisioned to a target application, birthright or otherwise, they can be assigned “default” privileges, but most users will need different privileges, so defaults are not always sufficient. That’s where the access lifecycle explained below comes into play.

Access Lifecycle – Provisioning / Deprovisioning of privileges

In the same way that users are provisioned and deprovisioned on the target applications, when required, they may also need specific privileges assigned and removed as they move through the lifecycle. For instance, an engineer who is part of a cloud implementation team will need access to specific accounts and specific resources within those accounts. In her first job role, she may initially need privileges such as Key Vault Contributor for a specific vault for a specific project. Later on, the same engineer moves to a second position in a different department; she will no longer need access to the previous account, but now needs a different level of access on a different subscription, such as Azure Kubernetes Service RBAC Admin. This is the second component of the JML process: the provisioning and deprovisioning of privileges.

This second component of JML in itself has two main options:

  • Some privileges can be automatically assigned and removed as the user moves through the lifecycle.  The automation of the assignment and removal of these privileges can be achieved using existing tools, such as Logic Apps that communicate with the Microsoft Graph API to trigger assignment of access packages. More on that on part 3.
  • Other privileges have to be requested because they require additional approvals.

Azure Active Directory & SCIM

As mentioned in part 1, SCIM (System for Cross-domain Identity Management) is the standard protocol used for provisioning and deprovisioning. Azure AD supports the SCIM protocol for a large number of applications and services, including well-known enterprise solutions, custom apps, and on-prem applications.
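As a reminder of what actually travels over the wire, here is a sketch of a minimal SCIM 2.0 User resource (per the RFC 7643 core schema) that a provisioning job would POST to a target application’s `/Users` endpoint; the user name is a placeholder:

```python
# Minimal SCIM 2.0 User resource, per the RFC 7643 core schema. A SCIM
# provisioning job would POST this JSON to the target's /Users endpoint
# and later PATCH "active": false to disable the user.

def scim_user(user_name: str, given: str, family: str,
              active: bool = True) -> dict:
    """Build a minimal SCIM 2.0 User resource."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "active": active,
    }
```

Deprovisioning is the mirror image: when I remove Patti from the assigned users/groups, the provisioning job sends an update with `"active": false` (or deletes the resource, depending on configuration).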

For example, on my demo tenant I configured a few solutions to provision users and their roles to target applications. This example is for ServiceNow, where I configured provisioning and the job runs every 40 minutes.

So, when I assign Patti to ServiceNow with the role of User in Azure AD:

The user is then provisioned automatically to the ServiceNow instance with that role:

And when I remove Patti from the assigned users/groups, she is also disabled in ServiceNow:

I will expand on how roles for various applications can be packaged together for specific users in the follow-up posts in this series.

I have only scratched the surface of what is possible with Azure AD and SCIM. The Azure AD App Gallery continues to increase the number of applications and services that support provisioning of users and roles as our partners continue to develop their own SCIM connectors. Additionally, there are solutions for on-prem applications that support the SCIM protocol.

The ability to use the SCIM protocol for all types of provisioning requirements is a huge Azure AD benefit for performance, reliability, and security reasons. Unfortunately, I have seen what happens when identity providers create their own provisioning solutions that do not follow the SCIM protocol: the results are often unpredictable, which in turn causes unnecessary headaches for IT Operations, Governance, and Security teams.

In the next post, part 3, I cover the concepts of RBAC/ABAC, Entitlements Management, and Requests & Approvals.

Joiners – Movers – Leavers (JML) Part 1

TL;DR – An overview of the Joiners-Movers-Leavers process and how it can be implemented using Microsoft Azure Active Directory.

When we read about the zero trust model, and specifically the principle of least-privileged access, most people think about just the authentication and authorization process. Although that is a huge part, we cannot forget about the identity governance processes that control the specific access those identities possess and how long they have it. The ultimate goal for any enterprise should be to have one central identity governance solution, because that is the only way to guarantee an auditable joiners/movers/leavers (JML) process for all employees. That is the topic of this series of posts.

I grew up in the Caribbean in a house that didn’t have A/C, needless to say it was HOT! When we go back to visit, my sons can’t understand how I survived all those years without A/C. The truth is I didn’t know there were better options. That’s what comes to mind when I see enterprise customers depending on identity providers that don’t provide all the identity governance tools to implement a full JML process. So, this post is an attempt to share what enterprise customers should expect from their identity providers when they refer to identity governance solutions to accomplish an enterprise level JML process. All of which is available with Azure Active Directory.

What is JML?
  • Joiners – The joiners process covers the identities that are created for the employees that join the company. The joiners process will also include providing the minimum required access on a variety of applications and services for that user to be able to do their job.
  • Movers – The movers process covers the removal and addition of access to the identities of employees that move to a different position, department, location, project, etc. For example, if the employee transfers from the accounting department to the sales department, then they should no longer have access to accounting applications and services, now they should be granted access to sales applications and services.
  • Leavers – As the employees retire or are terminated, the access they had should be removed, and their users should be disabled and/or deleted.

A simple goal, an enormous challenge.

The goal of a JML process is to provision, and eventually deprovision, user identities and their privileges to the target applications and services only for users that need it and only for the time they need it. It is a simple goal, but it is an enormous challenge to achieve for all identities in an enterprise. The challenge comes from the number of target applications and services and the number of identities. The higher those numbers are, the higher the complexity. Keep in mind, identities can be the individual user identities as well as their non-user accounts, such as accounts used for administrative tasks.

This complexity is why any enterprise should aim to automate the JML process as much as possible. And that is where a good Identity Governance solution comes in. Partners can then build upon the available solutions to automate the identity governance requirements for enterprises. I’ll cover the details of how that automation can be achieved in the follow-up posts of this series.

What about workload identities?

Even workload identities for machine-to-machine access should be included in some of the identity governance processes, such as access reviews, because they also have permissions associated with them, and therefore we also want to enforce the principle of least privilege on those identities.

A quick note on sources and targets

I mentioned “target applications and services” above, but let’s cover the source first, because for a target to exist, there must be a source. What is commonly referred to as the “source of truth” in identity management is typically an HR system or some directory where users are created initially when they are hired as employees or contingent workers. Sometimes you have more than one “source of truth”, for example, one system for employees and another for contingent workers, or additional sources that supply certain attributes. Keep in mind, the “source of truth” is not necessarily the identity provider; it is simply the initial location where the user data exists, from which any identity provisioning solution will pull information about users and their attributes. It is not to be confused with the identity provider (IdP), where users authenticate for an SSO solution, but that’s a topic for a different day.

What are the targets, then? The identity provisioning solution gets the data from the “source(s) of truth” and, based upon the values of those attributes, provisions and deprovisions user identities and privileges to those target applications and services. You can see another oversimplified diagram of the solution above. I am not going too deep, but just keep in mind we can also reconcile specific attribute data from target applications back to the provisioning solution, making the targets also sources for specific attributes, because other target applications may need those attributes as well.

As you can imagine, the values of those attributes don’t remain the same for the entire lifecycle. Attributes such as last name, department, job position, and many others are updated constantly. This is why a central identity management solution integrated with all target applications and services is essential: it ensures all those dependent values are updated on the target applications as changes occur. More importantly, the updates to those attributes are what trigger the stage changes and privilege assignment/removal in target applications.
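That propagation can be pictured as a diff between two snapshots of the user record; whatever changed is what the provisioning solution pushes out to the targets that depend on those attributes. A conceptual sketch, not any product’s implementation:

```python
# Sketch of change detection between two snapshots of a user record.
# The changed (or newly added) attributes are what the provisioning
# solution pushes to dependent target applications.

def changed_attributes(before: dict, after: dict) -> dict:
    """Return the attributes whose values changed (or appeared) between
    the two snapshots of a user record."""
    return {key: after[key] for key in after
            if before.get(key) != after[key]}
```

A department change detected this way is exactly the kind of event that would trigger the movers part of the JML process: deprovision the old department’s access and provision the new one.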

What does Identity Governance include?

A full identity solution should include identity governance, and what that covers can be represented in different ways by different identity service providers, depending on what they offer. Some identity service providers do not provide identity governance solutions, and those that do sometimes provide only a portion of them. Fortunately, Microsoft, and specifically Azure Active Directory, provides the full range of identity governance capabilities required for a successful joiners-movers-leavers process at an enterprise level. Here is a simple list of what an enterprise should require:

  • Lifecycle Management – We need to be able to manage both the user lifecycle as well as the access lifecycle. It is very important that we do this from *one* central location, because that is the only way to ensure we know what access a specific identity has on any target application/service. This is also the only way to ensure accurate auditing.
  • Provisioning/Deprovisioning (SCIM) – Some applications and services require local identities to be created, so we need to be able to provision those users to the target applications when they are onboarded and then deprovision them once they no longer require access to those applications and services. SCIM (System for Cross-domain Identity Management) is the standard protocol used for provisioning/deprovisioning.
  • RBAC / ABAC – Role Based Access Controls and Attribute Based Access Controls. Basically, we need to assign identities the access that is appropriate for the job they are doing, which can be based on their role, the project they are working on, the location they work from, etc., and only while they need it.
  • Entitlements Management – Entitlements are the access permissions that can be assigned to users. They can be in the form of group memberships, application roles, OAuth scopes, etc. We need to be able to manage or group these permissions to enforce RBAC/ABAC. By now you know my opinion on groups vs roles and I also gave you some basic info on OAuth and its scopes.
  • Access Requests and Approvals – Users need to be able to request additional access that was not automatically assigned to them for valid reasons. And that access should be approved by the proper LOB owners or managers, etc.
  • Separation of Duties (SoD) – We need to be able to mitigate and reduce risk by isolating privileges that when combined can cause significant errors or intentional fraud. Think of this as ‘checks and balances’. SoD is sometimes referred to as ‘segregation of duties’.
  • Certification / Attestation – We need someone to be able to certify or attest that the permissions those identities have at that time are in fact required. This is normally achieved via scheduled access reviews.
  • Privileged Access Management – The access to roles that are considered highly privileged should have controls in place that reduce the risks by enforcing JIT (Just-In-Time) access.
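To make the provisioning/deprovisioning bullet concrete, here is a minimal sketch of the SCIM 2.0 user payload a provisioning engine would POST to a target application's `/Users` endpoint. The endpoint URL and account values are hypothetical placeholders, not from any specific product:

```python
import json

# Hypothetical target-application SCIM endpoint (placeholder only).
SCIM_USERS_ENDPOINT = "https://example.com/scim/v2/Users"

def build_scim_user(user_name: str, given_name: str, family_name: str,
                    active: bool = True) -> dict:
    """Build a minimal SCIM 2.0 (RFC 7643) user-provisioning payload."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given_name, "familyName": family_name},
        # Deprovisioning is often a PATCH flipping this to False (soft delete),
        # or a DELETE of the resource, depending on the target application.
        "active": active,
    }

payload = build_scim_user("jdoe@example.com", "Jane", "Doe")
print(json.dumps(payload, indent=2))
```

A real connector would send this body with an authenticated POST and handle the `201 Created` response, but the payload shape is the heart of the protocol.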

I’ll cover how an enterprise can use Microsoft Azure Active Directory security solutions and how partners can build upon them to address all these requirements in parts 2, 3, and 4 in this series.

Building secure applications using modern authentication (part 4)

This post is a part of a series.

Additional Security Features

Finally, I just want to share a few additional security features to keep in mind when creating your applications.

  • In an effort to abide by the principle of least privilege, it is highly recommended to implement RBAC (role-based access controls) within your application. You can technically achieve this with either roles or groups, however I prefer roles for the reasons I detailed in my previous blog, Roles vs Groups. Organizations can further restrict the time and expiration of these roles by implementing PIM (Privileged Identity Management).
  • Use the Integration Assistant, which is a handy tool that generates specific security recommendations based on the type of application and whether or not it calls APIs.
  • A recently announced feature offers a secure option to use a federated identity, allowing GitHub Actions (currently the only supported scenario) to exchange tokens from GitHub for access tokens in AAD. These tokens then allow access to Azure resources without having to manage secrets or certificates. I think of these as external managed identities, for GitHub only (for now).

I hope this series of posts is helpful as you embark to create your own secure applications! I will do my best to keep these pages updated as more security features become available.

Building secure applications using modern authentication (part 3)

This post is a part of a series.

Whether you are building something as complex as a SCIM connector or a mobile application, or just a simple SPA application, chances are you want to share your application with customers or other partners. If so, there are a few security features you should be aware of.

Publisher Verification

In part 2 I discussed the concept of consent and I briefly mentioned that organizations can control how and who can consent to the various permissions by updating the tenant settings. Well, those settings can be found here:

With the setting above, organizations can prevent users from consenting to share their data, and they can also configure a workflow that lets users request consent from their administrators.

With the settings below organizations can allow end-users to consent to specific privileges they deem to be low impact only from verified publishers.

Microsoft recommends “restricting user consent to allow users to consent only for apps from verified publishers, and only for permissions you select.” This is due to known risks associated with the abuse of application permissions that have either been forgotten by organizations or simply not secured well enough.

But what are verified publishers? Applications associated with a verified publisher give end-users a trust factor, because becoming a verified publisher means the partner has a valid MPN (Microsoft Partner Network) account that has been verified as a legitimate business. Once the process is completed, any consent prompts presented to users for applications associated with that MPN will show the blue verified publisher badge.

And as you can see above, it can also expedite the consent process if the tenants have the recommended settings that allow end-users to consent only for applications from a verified publisher.


App Gallery

App Gallery is a “catalog of thousands of apps that make it easy to deploy and configure single sign-on (SSO) and automated user provisioning.” The nice thing about the App Gallery is that you can make your app available to all your customers in a secure way via Enterprise Apps. Although offering SSO for your application is highly recommended for security reasons, you don’t necessarily need to have both SSO and user provisioning enabled; it could be one or the other, so you can start with some features and add others later.

Keep in mind there is a review process associated with publishing your app via the App Gallery; you can find the checklist and the steps here.

In part 4 of the series I’ll cover a few additional security recommendations for your custom applications.

Building secure applications using modern authentication (part 2)

This post is a part of a series.

Azure Active Directory, which is the Identity Provider (IdP) or OpenID Provider (OP) behind Azure and Office 365, supports OpenID Connect (OIDC) and OAuth 2.0, for authentication and authorization, respectively. It does this via Application Registrations and Service Principals (Enterprise Applications), which in turn are assigned permissions (scopes) for a variety of APIs, including Microsoft Graph API, as well as custom APIs exposed by applications on AAD. 

OAuth and OIDC are supported both for applications and services where AAD is the IdP/OP and within the AAD tenant itself, and they work in the same manner. I state this because unfortunately, that’s not the case for all IdPs. Some IdPs support OAuth for registered applications, but not for tenant/organization level access, such as unlocking a user or resetting MFA, etc.

Azure AD offers different types of permissions (scopes) for the various flows, as described in part 1 of this series. As we talk about permissions, please keep in mind that organizations can control who can consent to the various permissions, and how, by updating the tenant settings. More about this in part 3 of the series.

There are different types of permissions mostly because they are meant for different types of applications and processes. Here is a quick summary of the types of permissions, their intended use, the consent required, and the effective permissions.

Interactive Application (signed-in user)

For interactive applications with a signed-in user, the application gets access on behalf of that user, which is the case for mobile, web, or SPAs (Single Page Applications). These interactive applications should use delegated permissions since they act as the signed-in user when making calls to the API.

By default, users can consent to delegated permissions; however, admins must consent to some higher-privileged permissions and whenever the permissions are being granted for all users. Consent is usually requested automatically when the user first accesses an application that requires permissions protected by OAuth, or when the application specifically requests consent. It can also happen if the permissions have changed, if the user or an admin revoked the consent, or if the application is using incremental consent to ask for some permissions now and more later as needed, perhaps for optional features. Incremental consent is a great way to abide by the principle of least privilege.

The effective permissions of interactive applications are essentially the intersection of the delegated permissions assigned to the application and the permissions the user has been granted within the system, which essentially prevents elevation of privilege.
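A tiny sketch of that intersection rule, using a couple of real Graph scope names and a made-up set of user rights for illustration:

```python
# Illustration only: with delegated permissions, the app acting on behalf of
# a signed-in user can never do more than the user can already do.
def effective_permissions(app_delegated: set[str], user_rights: set[str]) -> set[str]:
    """Effective access = delegated permissions ∩ the user's own rights."""
    return app_delegated & user_rights

app_scopes = {"User.Read", "Mail.Read", "Group.Read.All"}
user_can = {"User.Read", "Mail.Read"}  # this user lacks directory-wide group read

# Even though the app was granted Group.Read.All, this user's session can't use it.
print(effective_permissions(app_scopes, user_can))
```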

Background Service or Daemon Process

For background services or daemon processes, the application can only log in as itself, a Service Principal (SPN). Only administrators can consent to application permissions, since there is no associated user. These applications require application permissions because there is no signed-in user; they make calls to the API as the SPN with the associated credentials, which can be a secret or a certificate. These credentials should be stored in a password vault, such as Azure Key Vault (AKV). The great thing about using AKV is that you can use a managed identity to access the vault where the secret is kept, only when needed. Internally, managed identities are service principals that can be locked down to only be used with specific Azure resources. Additionally, there are no credentials in the code; Azure takes care of rolling the credentials that are used. When the managed identity is deleted, the corresponding service principal is automatically removed. Permissions for managed identities are assigned via PowerShell (New-AzureADServiceAppRoleAssignment) or the CLI, not the portal.
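For illustration, a minimal sketch of a daemon acquiring an app-only Graph token with MSAL for Python. This assumes `pip install msal`, a real tenant/app registration, and network access; the tenant ID, client ID, and secret are placeholders, and in production the secret would come from Key Vault rather than a parameter:

```python
# Sketch of a daemon (no signed-in user) acquiring an app-only Graph token.

def authority_url(tenant_id: str) -> str:
    """AAD authority URL for a single tenant."""
    return f"https://login.microsoftonline.com/{tenant_id}"

def acquire_graph_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    """Client-credentials token via MSAL; needs `pip install msal` and network access."""
    import msal  # third-party; imported lazily so the sketch loads without it

    app = msal.ConfidentialClientApplication(
        client_id,
        authority=authority_url(tenant_id),
        client_credential=client_secret,  # in production, fetch from Azure Key Vault
    )
    # The ".default" scope requests all application permissions already
    # consented to for this app registration.
    result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
    if "access_token" not in result:
        raise RuntimeError(result.get("error_description", "token acquisition failed"))
    return result["access_token"]
```

The returned bearer token is then sent in the `Authorization` header of Graph API calls, and MSAL caches it so repeated calls don't hit the token endpoint unnecessarily.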

The effective permissions of these applications are the full application permissions that were granted and consented to for this application, since there is no associated user signed-in.

NOTE: Granting application permissions to interactive applications can significantly increase the risk associated due to the possibility of inadvertently elevating privileges for a signed-in user that can circumvent any permission guardrails directly associated with the user. For example, the Mail.Read permission, when assigned as a delegated permission “Allows the app to read the signed-in user’s mailbox“, but when assigned as an application permission, it “Allows the app to read mail in all mailboxes without a signed-in user“.

The permissions referenced above are assigned via the Application Registration menu, within the API permissions blade:

Even Exchange?

And in case you are wondering: yes, even accessing mail endpoints can be accomplished using OAuth. For additional details, please reference the links below:


MSAL

One of the best benefits AAD offers is MSAL. The Microsoft Authentication Library (MSAL) is a set of libraries that authenticate and authorize users and applications. They are OAuth 2.0 and OIDC libraries built to handle protocol-level details for developers. They stay up to date with the latest security updates and cache and refresh tokens automatically, so developers don’t have to worry about token expiration within custom applications. Basically, MSAL gives developers a safe head start with OAuth 2.0 and OIDC for custom applications.

MSAL supports CAE (Continuous Access Evaluation), a new feature that allows tokens to be revoked as needed, based on specific risks, events (e.g., a user is disabled), policy updates (e.g., a new location), etc. This feature allows tokens to have a longer life because they can be revoked whenever an action dictates the access must be removed, and MSAL will proactively refresh the tokens as needed. So, not only is your application safer, but it’s also more efficient.

MSAL also supports PIM and Conditional Access, including authentication context, which allows you to protect specific sensitive resources within your custom application. For an example of how Conditional Access works, please reference my previous blogs (Restrict downloads for sensitive (confidential) documents to only compliant devices and Passwordless Azure VM SSH login using FIDO2 security keys).

In part 3 of the series I’ll cover the App Gallery and the concept of publisher verification.

Building secure applications using modern authentication (part 1)

TL;DR – You don’t need to disable MFA for users in the name of “automation”. Basic authentication is considered legacy authentication because there are safer options available. Keep reading to learn about OAuth, OIDC, modern authentication and how to use the valet key to create secure applications.

As scary as it sounds, I have worked with too many third party tools (even security tools!) that rely on basic authentication to integrate services. Any IdP that offers only an API key that never expires as an option to authenticate apps used for automation is not offering you the safest option available for authentication.

With the upcoming deprecation of basic auth for Exchange, I figured this is a good time to talk about modern authentication, why it is a safer option, and how Microsoft makes it easier to implement. That’s the topic of the posts in this series:

Let’s start with a quick summary of the basics…

What is OAuth and OIDC?

OAuth is an authorization (authz) framework that was developed to allow application clients to be delegated specific access to services or resources. 

The client gets access via an access token with a specific lifetime that is granted with specific permissions, referred to as scopes, which are included in the claims (contents) within the access token. These access tokens are granted by an authorization server based on the approval of the owner of that resource or service. 

OpenID Connect, or OIDC, is the authentication (authn) profile built on top of OAuth to authenticate and obtain information about the end user, which is stored in an id_token once the user authenticates.

Is OIDC the same as SAML? No, it is not the same, but it is similar because it is used for federated authentication. If you are familiar with SAML, the Identity Provider (IdP) would be referred to as the OpenID Provider (OP) for OIDC and the Service Provider (SP) would be referred to as the Relying Party (RP) for OIDC.  And what you normally see within the assertion in SAML, you will see in the id_token in OIDC and the SAML attributes are the user claims in OIDC. SAML is mostly used for websites, whereas OIDC is mostly used for APIs, machine-to-machine, and mobile applications, so far.

In summary:

  • OAuth is for authorization through an access token.
  • OIDC is for authentication through an id token.

Both id_tokens and access tokens are represented as JSON Web Tokens (JWTs). Also, please keep in mind there is another type of token, a refresh token, that allows the clients to get a new access token after expiration. 

Note: When referring to OAuth on this page we are specifically referring to OAuth 2.0 and above. 

Why OAuth?

One word: granularity.

The typical analogy used to describe OAuth is valet parking, where the valet key is like the OAuth access token: it can’t do everything the regular key can do, only what’s needed to park the car. The valet key cannot open the trunk or the glove box, two places where you may keep valuables you don’t want the valet to access.

Those scopes (permissions) that we mentioned above are included in the claims within the token. Therefore, the actions allowed using that token are limited to the range specifically noted in the scope. For example, the JWT below can only perform actions permitted for these scopes: “Group.Read.All”, “User.Read.All”, and/or “AccessReview.Read.All”. Additionally, these access tokens have a predetermined lifetime, as you can see from the expiration time (‘exp’) claim below. Consequently, the permissions on that token are only valid for the stated amount of time.
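You can also inspect those claims locally. Here is a minimal sketch that decodes the (unverified) payload segment of a JWT; the sample token is built inline and unsigned so the example is self-contained, whereas a real AAD access token would of course be signed by the authority:

```python
import base64
import json
import time

def decode_jwt_payload(token: str) -> dict:
    """Decode (without verifying the signature!) the payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample unsigned token inline: header {"alg":"none"} + payload + empty signature.
claims = {"scp": "Group.Read.All User.Read.All", "exp": int(time.time()) + 3600}
seg = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
sample = f"eyJhbGciOiJub25lIn0.{seg}."

decoded = decode_jwt_payload(sample)
print(decoded["scp"])                # space-delimited scopes granted to the token
print(decoded["exp"] > time.time())  # whether the token is still within its lifetime
```

This is exactly what jwt.ms does in the browser; never trust a decoded token for authorization decisions without verifying its signature.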

I used jwt.ms below to decode the token, so you can see the claims included.

Some of the claims in this token are explained in detail below. To simplify the discussion I am just focusing on the most relevant claims in the description below. The areas highlighted in red are (1) the expiration and (2) the specific permissions associated with that access token.

This makes it much easier to deliver effective applications while abiding by the principle of least privilege.

Different flows for different requirements

Client Types – OAuth defines two client types based on whether they can keep a secret or not. 

  • Confidential clients can keep a secret because they have a safe process to store secrets. For example, machine-to-machine, web application with a secure backend, etc.
  • Public clients cannot keep a secret. For example, mobile applications, SPAs, etc.

The type of client used for the connection combined with other factors determines the OAuth flow that is recommended for the solution, as explained below.


Legacy: Authorization Code – This flow is similar to the PKCE flow explained below, but it does not include the code verifier and challenge. It should only be used by confidential clients because it is susceptible to authorization code injection. OAuth 2.1 will require PKCE for all OAuth clients using the authorization code flow.

Authorization Code with PKCE (Proof Key for Code Exchange) – PKCE (pronounced ‘pixie’) is an improvement over the legacy Authorization Code grant type described above, created because vulnerabilities were discovered: specifically, a malicious application could intercept the authorization code and exchange it for an access token. This is the recommended flow for public clients. Originally it was intended for mobile applications, but it is now the recommended flow for browser apps as well. See a detailed description of this flow below:

Authorization Code with PKCE
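Illustration only: the verifier/challenge pair that makes PKCE work can be generated per RFC 7636 (S256 method) with nothing but the standard library:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate an RFC 7636 code_verifier and its S256 code_challenge."""
    # 32 random bytes -> 43-char base64url verifier (valid range is 43-128 chars).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).decode().rstrip("=")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).decode().rstrip("=")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` (with code_challenge_method=S256) on the authorize
# request, then proves possession by sending `verifier` on the token request.
# An interceptor who steals the authorization code still lacks the verifier.
print(verifier, challenge)
```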

Client Credentials – This flow allows the client to get a token without the context of an end user. In other words, you just need a client id and a client secret. This flow is recommended only for confidential clients, where there is no end-user, e.g., machine-to-machine. Additionally, secrets and certificates used must be stored in a secure vault, such as Azure Key Vault, and the credentials should be rotated. Keep in mind there are safer options within this same OAuth flow for machine-to-machine authentication and authorization within the various clouds, e.g., managed identities (MSIs) in Azure AD, IAM roles in AWS, etc., and you should use those where possible. They are safer because the system generates and rotates the secrets automatically; a developer doesn’t even need to know the secret.
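Under the hood, the client credentials grant is a single form-encoded POST to the token endpoint (RFC 6749 §4.4). Here is a sketch of the request shape against the AAD v2.0 endpoint; the tenant/app IDs and secret are placeholders:

```python
from urllib.parse import urlencode

def client_credentials_request(tenant_id: str, client_id: str,
                               client_secret: str) -> tuple[str, str]:
    """Build the URL and form body for a client-credentials token request."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,  # in practice, fetched from a vault
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body

url, body = client_credentials_request("tenant-id", "app-id", "secret-value")
print(url)
print(body)
```

The JSON response contains the `access_token` (and its lifetime), which the client then presents as a bearer token; libraries like MSAL wrap this exchange for you.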

Legacy: Implicit Flow – When this flow initially became available there were no other options to implement cross-domain requests in a secure manner; however, that is no longer the case. Currently there are secure alternatives, such as Authorization Code with PKCE, which is the recommended option. This flow is expected to be removed in OAuth 2.1.

Legacy: Password Grant or Resource Owner Password Flow – It’s just a way for the client to exchange a username and the user’s password for an access token from the authorization server. Given the obvious security risks associated with exposing a user’s credentials to a client, the IETF states that “the resource owner password credentials grant MUST NOT be used”. This flow is expected to be removed in OAuth 2.1.

Note: Some flows above have been marked Legacy because the upcoming OAuth 2.1 release will not support those. 

In part 2 of the series I’ll discuss the various types of scopes or permissions and the wonderful MSAL.

Restrict downloads for sensitive (confidential) documents to only compliant devices

TL;DR – Yes, you can restrict file access within a folder. Keep reading to see how you can restrict downloads or other actions for specific files to only allow certain access from compliant devices.

This specific scenario came up during a session and I wanted to document and share how this is possible. The question was whether you could restrict downloads for specific files, not just at the folder level, to only allow the download of those files on compliant devices. Yes, it’s possible!

It’s a real better together story, where the following services work together to deliver:

  • Microsoft 365 Information Protection – for the sensitivity label configuration.
  • Azure Active Directory Conditional Access – for the App Control within the sessions controls to enforce the policy on specific cloud apps and/or specific users. After detecting the signals, AAD forwards the evaluation to Defender for Cloud Apps.
  • Microsoft Defender for Cloud Apps (previously MCAS) – for the conditional access policy that evaluates whether the sensitivity-labeled document is being downloaded from a compliant device and, if not, blocks it.
  • Microsoft Endpoint Manager / Intune – for the compliance policies that determine if the device is compliant. Intune passes information about device compliance to Azure AD.
  • Microsoft Defender for Endpoint – also helps because my compliance policies require devices to be at or under a specific machine risk score.

Information Protection Configuration

On this tenant I just have these three sensitivity labels with Highly Confidential – Project – Falcon being the highest level.

Azure Active Directory Conditional Access Configuration

This is the conditional access policy that will trigger the evaluation:

It will trigger for these specific cloud apps:

And it will then pass the baton over to Defender for Cloud Apps (previously MCAS) by selecting to “Use Conditional Access App Control” within Session controls.

Defender for Cloud Apps Configuration

In Defender for Cloud Apps (previously MCAS), I have a Conditional access policy:

And the following settings were used to configure this policy:

  • Session control type: Control file download (with inspection)
  • Filters: Device Tag does not equal Intune Compliant
  • Filters: Sensitivity label equal Highly Confidential – Project – Falcon

Additionally under Actions, I have selected to Block and notify, and I also customized the message the user will see.

Finally, I configured an additional alert to my email, which can be an administrator email, if needed.

I should also note that once you have completed the setup for the conditional access policy, you’ll notice these apps will slowly start showing in Defender for Cloud Apps under “Connected apps“, specifically under “Conditional Access App Control apps“.

Microsoft Endpoint Manager Configuration

Intune manages my endpoints and as you can see I have some compliant and some that are not compliant based on the compliance policies applied to them.

The compliance policies also require the device to be at or under the machine risk score:

I have connected Microsoft Endpoint Management to Defender for Endpoint as shown below:

Defender for Endpoint Configuration

From the Defender for Endpoint side, I have also connected it to Intune to share device information:


The final result is that any end user who tries to download files labeled Highly Confidential – Project – Falcon from a non-compliant device will be blocked, while downloads of other files in the same folder are still allowed, as you can see in this video:

The document that was blocked from downloading was labeled Highly Confidential – Project – Falcon and that is why the user is not allowed to download it to a non-compliant device.

Finally, since I configured an alert email, I also received this alert:

This is just one very specific scenario; there is so much more you can modify and tweak depending on the requirements. However, I hope it gives you an idea of what is possible and inspires you to create your own scenario.