Always Threat Model Your Applications

The Why

The motivation of a cyber attacker may fall into one or more of these categories: financial gain, political motives, revenge, espionage or terrorism. Once an attacker gets in, they may install malware such as a virus or ransomware. This can disrupt operations, lock users out of their systems, or even cause physical damage to infrastructure. Understanding the motives behind cyber attacks helps to prevent and respond to them effectively.

The primary goal of threat modelling is to identify vulnerabilities and risks before they can be exploited by attackers, allowing organizations to take proactive measures to mitigate these risks.

The threat model should include a list of the critical assets and services that need to be protected. Examine the points of data entry and extraction to determine the attack surface area, and check whether user roles have varying levels of privileges.

Threat modelling should begin in the early stages of the SDLC, when the requirements and the design of the system are being established. The earlier in the SDLC that potential security risks are identified and addressed, the easier and less expensive they are to mitigate. It is also advisable to engage your security operations team early in the design phase. Due to the sensitive nature of the threat modelling document, it should be labelled as confidential and not distributed freely within the organisation.

It is important to revisit the threat model whenever new functionality is added to the system, or as the system is updated or changed, to identify any new security risks that may have emerged.

Threat modelling is typically viewed from an attacker’s perspective instead of a defender’s viewpoint. Remember the most likely goal or motive of an attacker is information theft, espionage or sabotage. When modelling your threats, ensure you include both externally and internally initiated attacks.

To assist in determining the possible areas of vulnerability, a data flow diagram (DFD) is beneficial. This gives a visual representation of how the application processes data and highlights any persistence points. It helps to identify the potential entry points an attacker may use and the paths they could take through the system to reach critical data or functionality.

The How

Let’s go through the process of modelling a simple CRM website which maintains a list of customers stored in a database hosted in Azure.

The hypothetical solution uses a Web Application Gateway to provide high availability (HA) across two web services. There are also two microservices: one manages all the SQL Database CRUD operations for customers, and the other manages the customer images that are persisted to a blob store. Connection strings to the database are stored in Azure Key Vault. Customers use their social or enterprise identities to gain access to the website via Azure AD B2C.

To help with the process of threat modelling the application, a DFD (Data Flow Diagram) is used to show the flow of information between the processes and stores. Once the DFD is completed, add the different trust boundaries to the DFD drawing.

DFD Diagram of the CRM solution

Next, we will start the threat modelling process to expose any potential threats in the solution. For this we use the STRIDE model developed by Microsoft: https://learn.microsoft.com/en-us/azure/security/develop/threat-modeling-tool-threats#stride-model. There are other modelling methodologies available, such as PASTA (Process for Attack Simulation and Threat Analysis), LINDDUN (linkability, identifiability, nonrepudiation, detectability, disclosure of information, unawareness, noncompliance), the Common Vulnerability Scoring System (CVSS) and Attack Trees.

The STRIDE framework provides a structured approach to identifying and addressing potential threats during the software development lifecycle.

STRIDE is an acronym for Spoofing, Tampering, Repudiation, Information disclosure, Denial of Service, and Elevation of privilege.

The method I use involves creating a table that outlines each of the STRIDE categories, documenting potential threats and the corresponding mitigation measures. Use the DFD to analyse the security implications of data flows within a system and identify potential threats for each of the STRIDE categories. The mitigation may include implementing access controls, encryption mechanisms, secure authentication methods, data validation, and monitoring systems for suspicious activities.

Spoofing (Can a malicious user impersonate a legitimate entity or system to deceive users or gain unauthorized access?)

Id | Threat | Risk | Mitigation
SF01 | Unauthorized user attempts to impersonate a legitimate customer or administrator account | Moderate | Users authenticate using industry-standard authentication mechanisms and administrators are required to use MFA. Azure AD monitoring is used to detect suspicious login activities.
SF02 | Forgery of authentication credentials to gain unauthorized access | Low | Token-based authentication is used to protect against forged credentials.

Tampering (Can a malicious user modify data or components used by the system?)

Id | Threat | Risk | Mitigation
TP01 | Unauthorized modification of CRM data in transit or at rest | Moderate | TLS transport is used between all the components. SQL Database and the Blob store are encrypted at rest by default.
TP02 | Manipulation of form input fields to submit malicious data | High | Data validation and sanitisation is performed at every system process. Web Application Firewall (WAF) rules are configured to mitigate potential attacks.
TP03 | SQL injection attacks targeting the SQL Database | Moderate | Secure coding practices to prevent SQL injection vulnerabilities. Web Application Firewall (WAF) rules are configured to mitigate potential attacks.
TP04 | Unauthorized changes to the CRM website's code or configuration | Low | Source code is maintained in Azure DevOps and deployed using pipelines which incorporate code analysis and testing.

Repudiation (Can a malicious user deny that they performed an action to change system state?)

Id | Threat | Risk | Mitigation
RP01 | Users deny actions performed on the CRM website | Low | Logging and audit mechanisms are implemented to capture user actions and system events.
RP02 | Attackers manipulate logs or forge identities to repudiate their actions | Moderate | Access to log files is protected by RBAC roles.

Information Disclosure (Can a malicious user extract information that should be kept secret?)

Id | Threat | Risk | Mitigation
IL01 | Customer data exposed due to misconfigured access controls | High | Follow the principle of least privilege and enforce proper access controls for CRM data.
IL02 | Insecure handling of sensitive information during transit or storage | High | Sensitive data is encrypted at rest and TLS is used in transit.
IL03 | Improperly configured Blob Store permissions leading to unauthorized access to customer images | Moderate | Regularly assess and update Blob Store permissions to ensure proper access restrictions.

Denial of Service (Can a malicious user disrupt or degrade system functionality or availability?)

Id | Threat | Risk | Mitigation
DS01 | Application Gateway targeted with a high volume of requests, overwhelming its capacity | High | Configure appropriate rate limiting and throttling rules on the Application Gateway.
DS02 | SQL Database subjected to resource-intensive queries, causing service disruption | Low | Implement SQL Database performance tuning and query optimization techniques.
DS03 | Azure Blob Store overwhelmed with excessive image upload requests, impacting availability | Moderate | Exponential back-off is implemented to ease out spikes in traffic to the store.

Elevation of Privilege (Can a malicious user escalate their privileges to gain unauthorized access to restricted resources or perform unauthorized actions?)

Id | Threat | Risk | Mitigation
EP01 | Unauthorized users gaining administrative privileges and accessing sensitive functionalities | Moderate | Sensitive resource connection strings are stored in Key Vault. Implement network segmentation and access controls to limit lateral movement within Azure resources. The SQL Database is hosted inside a VNet with access controlled by an NSG.
EP02 | Exploiting vulnerabilities to escalate user privileges and gain unauthorized access to restricted data or features | Low | Regularly apply security patches and updates to the CRM website and underlying Azure components. Conduct regular security assessments and penetration testing to identify and address vulnerabilities.

Alternative Approach

Instead of going through this process yourself, Microsoft offers a threat modelling tool which allows you to draw the Data Flow Diagram that represents your solution and then generate an HTML report of all the potential security issues and suggested mitigations. The tool can be downloaded from here: https://learn.microsoft.com/en-us/azure/security/develop/threat-modeling-tool

Conclusion

With the rising sophistication of attacks and the targeting of critical infrastructure, cyber threats have become increasingly imminent and perilous. The vulnerabilities present in the Internet of Things (IoT) further contribute to this escalating threat landscape. Additionally, insider threats, risks associated with cloud and remote work, and the interconnected nature of the global network intensify the dangers posed by cyber threats.

To combat these threats effectively, it is crucial for organisations and individuals to prioritise cybersecurity measures, including robust defences, regular updates, employee training, and strong encryption techniques.

I hope this article has emphasised the significance of incorporating threat modelling into your application development process.

DevOps Pipeline Token Replacement Template

There are times when you simply need to replace tokens with actual values in text files during the deployment phase of a solution. These text files may be parameter or app setting files, or even infrastructure-as-code files such as ARM/Bicep templates.

Here I will be building a reusable template to insert the pipeline build number as a tag on some Logic Apps every time the resources are deployed from the release pipeline. The process described below can easily be reused to replace user-defined tokens in other types of text files by supplying the path of the file to search, the token signature and the replacement value.

I used the PS script below to read in the text file contents and then search and replace the token with the required value. The PS script uses several parameters so it may be reused throughout the release pipeline for many different text file types and tokens.
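The full script is in the repository linked at the end of this post; as a minimal sketch (the parameter names are illustrative and may differ from the actual script), it boils down to something like this:

# Replaces every occurrence of a token in a text file with the supplied value
param (
    [string]$filePath,    # path of the text file to search
    [string]$tokenName,   # token signature to look for, e.g. %_buildNumber_%
    [string]$tokenValue   # value to replace the token with
)

# Read the whole file, replace the token and write the result back
$content = Get-Content -Raw -Path $filePath
$content = $content.Replace($tokenName, $tokenValue)
Set-Content -Path $filePath -Value $content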

The PS script is called from a pipeline template file which takes a list of files to search as one of its parameters. This allows me to search for the token across multiple files in one hit. The template iterates through each file, calling the PS script and passing the file path and the other required parameters.

The YAML release pipeline calls this template to replace the tokens in the LogicApp.parameters.json files. In this scenario, the token name to search for is passed in as a template parameter. You need to ensure the chosen token name does not clash with any other valid text.

The last step of the pipeline deploys the Logic Apps to the specified resource group.

Executing the pipeline will read the following LogicApp parameter file from the source code folder on the build agent and then replace the buildNumber token with the actual value.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environment":{
      "value": null
    },
    "businessUnitName":{
      "value": "n/a"
    },
    "buildNumber":{
      "value": "%_buildNumber_%"
    }
  }
}

After running the token replacement step, the buildNumber is updated with the desired value.
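For example, if the build number were 20230316.1 (a hypothetical value), the buildNumber parameter would end up as:

    "buildNumber":{
      "value": "20230316.1"
    }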

The full source code for this article is available on Github here: https://github.com/connectedcircuits/tokenreplacement. I tend to use this template with my other blog about using global parameter files here: https://connectedcircuits.blog/2022/11/17/using-global-parameter-files-in-a-ci-cd-pipeline/

Enjoy…

Using Global Parameter Files in a CI/CD Pipeline

When developing a solution that has multiple projects and parameter files, more than likely these parameter files will have some common values shared between them. Examples of common values are the environment name, connection strings, configuration settings etc.

A good example of this scenario is a Logic App solution that has multiple projects. These are typically structured so that each project has several parameter files, one for each environment. Each of these parameter files has different configuration settings for each of the three environments, but many of the values are common across all the projects.

Keeping track of the multiple parameter files can be a maintenance issue and prone to misconfiguration errors. An alternative is to use a global parameter file which contains all the common values used across the projects. This global file will overwrite the matching parameter value in each of the referenced projects when the projects are built inside a CI/CD pipeline.  

By using global parameter files, all the common values for each environment are placed in a single global parameter file. This simplifies the solution as there is now only one parameter file under each project, and all the shared parameter values live in the global parameter file. The default values in the parameter file under each Logic App project will typically be set to the development environment values.


The merging of the global parameter files is managed by the PowerShell script below.

# First parameter is the source param and the second is the destination param file.
param ($globalParamFilePath,$baseParamFilePath)

# Read configuration files
$globalParams = Get-Content -Raw -Path $globalParamFilePath | ConvertFrom-Json
$baseParams = Get-Content -Raw -Path $baseParamFilePath | ConvertFrom-Json

foreach ($i in $globalParams.parameters.PSObject.Properties)
{
  $baseParams.parameters | Add-Member -Name $i.Name -Value $i.Value -MemberType NoteProperty -Force
}

# Output to console and overwrite base parameter file
$baseParams | ConvertTo-Json -Depth 100 | Tee-Object $baseParamFilePath

The script is implemented in the release pipeline to merge the parameter files during the build stage. A full working CI/CD pipeline sample project can be downloaded from my GitHub repo here https://github.com/connectedcircuits/globalParams
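As a usage sketch, the script can be invoked like this from a pipeline PowerShell task (the script name and file paths are hypothetical; the parameter names match the script above):

# Merge the global SIT parameters into the Logic App project's parameter file
.\mergeParams.ps1 `
    -globalParamFilePath ".\GlobalParameters\parameters.sit.json" `
    -baseParamFilePath ".\LogicApp1\LogicApp.parameters.json"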

In the solution mentioned above, I have two Logic App projects where the parameter files have the following content.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environment":{
      "value": dev
    },
    "businessUnitName":{
      "value": "n/a"
    }
  }
}

And the contents of the global parameter file are listed here:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "environment":{
        "value": "sit"
      },
      "businessUnitName":{
        "value": "Accounting Department"
      }
    }
  }

Running the release pipeline produces the following merged file which is used by the pipeline to deploy the Logic Apps to Azure.
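Based on the two files above, the merged result should look like this:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environment": {
      "value": "sit"
    },
    "businessUnitName": {
      "value": "Accounting Department"
    }
  }
}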

The Logic App resource file uses these parameters to create some tags and to append the environment value to the Logic App name.

Using environment variables available in Azure DevOps is another option, but I like to keep the parameter values in code rather than have them scattered across the repo and environment variables.

Enjoy…

Azure APIM Content Based Routing Policy

Content-based routing is where the message routing endpoint is determined by the contents of the message at runtime. Instead of using an Azure Logic App workflow to determine where to route the message, I decided to use Azure APIM and a custom policy. The main objective of using an APIM policy was to allow me to add or update the routing information through configuration rather than code, and to add a proxy API for each of the providers' API endpoints.

The high-level design of the solution is based on Azure APIM and custom policies. The backend ERP system sends a generic PO message to the PO Routing API, which then forwards the message on to the supplier's proxy API based on the SupplierId contained in the message. The proxy API then transforms the generic PO into the correct format before it is sent to the supplier's API.

The main component of this solution is the internal routing API that accepts the PO message and uses a custom APIM Policy to read the SupplierId attribute in the message. The outbound URL of this API is then set to the internal supplier proxy API by using a lookup list to find the matching reroute URL for the SupplierId. This lookup list is a JSON array stored as a Name-Value pair in APIM. This allows us to add or update the routing information through configuration rather than code.

Here is the structure of the URL lookup list which is an array of JSON objects stored as a Name-value pair in APIM.

[
   {
     "Name": "Supplier A",
     "SupplierId": "AB123",
     "Url": "https://myapim.azure-api.net/dev/purchaseorder/suppliera"
   },
   {
     "Name": "Supplier B",
     "SupplierId": "XYZ111",
     "Url": "https://myapim.azure-api.net/dev/purchaseorder/supplierb"
   },
   {
     "Name": "SupplierC",
     "SupplierId": "WAK345",
     "Url": "https://myapim.azure-api.net/dev/purchaseorder/supplierc"
   }
]

PO Routing API

The code below is the custom policy for the PO Routing API. This loads the above JSON URL lookup list into a variable called "POList". The PO request body is then loaded into a JObject type and the SupplierId attribute value is found. Here you can use more complex queries if the required routing values are scattered within the message.

Next, the URL lookup list is converted into a JArray type and then searched for the matching SupplierId found in the request message. The variable "ContentBasedUrl" is then set to the Url attribute of the matching object. If no errors are found, the backend URL is set to the internal supplier's proxy API and the PO message is forwarded on.


<inbound>
    <base />
    <!-- Get the URL lookup list from the Name-value pair in APIM -->
    <set-variable name="POList" value="{{dev-posuppliers-lookup}}" />


    <set-variable name="ContentBasedUrl" value="@{ 

        JObject body = context.Request.Body.As<JObject>(true);

        // Look up the supplier's routing URL from the PO message
        try
        {

            // Get the SupplierId from the inbound PO message
            var supplierId = body["Purchase"]?["SupplierId"];

            var jsonList = JArray.Parse((string)context.Variables.GetValueOrDefault<string>("POList"));

            // Find the matching SupplierId in the URL lookup array
            var arr = jsonList.Where(m => m["SupplierId"].Value<string>() == (string)supplierId);

            if(arr.Count() == 0 )
            {
                return "ERROR - No matching account found for :- " + supplierId;
            }

            var url = ((JObject)arr.First())["Url"];
            if( url == null )
            {
                return "ERROR - Cannot find key 'Url'";
            }          

            return url.ToString();
        }
        catch( Exception ex)
        {
            return "ERROR - Invalid message received.";
        }

    }" />
    <choose>
        <when condition="@(((string)context.Variables["ContentBasedUrl"]).StartsWith("ERROR"))">
            <return-response>
                <set-status code="400" />
                <set-header name="X-Error-Description" exists-action="override">
                    <value>@((string)context.Variables["ContentBasedUrl"])</value>
                </set-header>
            </return-response>
        </when>
    </choose>
    <set-backend-service base-url="@((string)context.Variables["ContentBasedUrl"])" />
</inbound>
<backend>
    <base />
</backend>
<outbound>
    <base />
</outbound>
<on-error>
    <base />
</on-error>

Supplier Proxy API

A proxy API is created for each of the suppliers in APIM, where the backend is set to the supplier's API endpoint. The proxy typically contains policies for mapping the message and handling authentication.

The primary purpose of this proxy API is to map the generic PO message to the required supplier's format and to manage any authentication requirements. See my other blog about using a mapping function within a policy: https://connectedcircuits.blog/2019/08/29/using-an-azure-apim-scatter-gather-policy-with-a-mapping-function/. There is no reason why you cannot use any other mapping process in this section.

The map in the outbound section of the policy takes the response message from the supplier's API and maps it to the generic PO response message, which is then passed back to the PO Routing API and finally to the ERP system as the response message.

Adding a proxy for each supplier's API also decouples it from the main PO Routing API, allowing us to test the supplier's API directly through the proxy.

The last step is to add all these APIs to a new Product in APIM so the subscription key can be shared and passed down from the Routing API to the supplier proxy APIs.

Enjoy…

Enterprise API for Advanced Azure Developers

For the last few months I have been busy developing the above course for EC-Council. This is my first endeavour doing something like this, and I wanted to experience the process of creating an online course.

You can view my course here at https://codered.eccouncil.org/course/enterprise-api-for-advanced-azure-developers and would be keen to get your feedback on any improvements I can make if I do create another course in the future.

The course will take you through the steps of designing and developing an enterprise-ready API from the ground up, covering both the functional and non-functional design aspects of an API. There is a total of 7 chapters equating to just over 4 hours of video content covering the following topics:

  • Learn the basics of designing an enterprise level API.

  • Learn about the importance of an API schema definition.

  • Learn about the common non-functional requirements when designing an API.

  • Develop synchronous and asynchronous APIs using Azure API Apps and Logic Apps.

  • Understand how to secure your API using Azure Active Directory and custom roles.

  • Set up Azure API Management and customize the Developer Portal.

  • Create custom APIM policies.

  • Configure monitoring and alerting for your API.

Enjoy…

Always Linter your API Definitions

An API definition may be thought of as the electronic equivalent of a receptionist for an organisation. A receptionist is normally the first person a visitor sees when they arrive, and their duties may include answering enquiries about products and services.

This is similar to an API definition: when someone wants to do business electronically, they use the API definition to obtain information about the available services and operations.

My view is that APIs should be built customer first and developer second, meaning an API needs to be treated as a product; to make it appealing to a consumer, it needs to be intuitive and usable with very little effort.

Having standardised API definition documents across all your APIs allows a consumer to easily navigate around your APIs, as they will all have a consistent look and feel. This especially holds true when there are multiple teams developing microservices for your organisation. You don't want your consumers to think they are dealing with several different organisations because the APIs all look and behave differently. Also, when generating client object models from several different API definitions from the same organisation, you want the property names to all follow the same naming conventions.

To ensure uniformity across all API schema definitions, a linter tool should be used. One such open-source tool is Spectral (https://github.com/stoplightio/spectral) from Stoplight; however, there are many other similar products available.

Ideally the validation should be added as a task to your CI/CD pipelines when committing the changes to your API definition repository.

Using Spectral in MS DevOps API Projects

Spectral provides the capability to develop custom rules as a yaml or json file. I typically create a separate DevOps project to store these rules inside a repository which may be shared across other API projects.

The repositories section of the pipeline is set up to include both the linter-rules repository and the repository containing the API definition.


After downloading the Spectral npm package in the pipeline, I then run the CLI command to start the linter on the schema specification document, which is normally stored in a folder called 'specification' in the corresponding API repository.
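As a sketch, the steps boil down to commands like these (the package name, folder paths and ruleset file name are assumptions based on a typical Spectral setup and may differ from the actual pipeline):

# Install the Spectral CLI on the build agent
npm install -g @stoplight/spectral-cli

# Lint the API definition against the shared ruleset; a non-zero exit code fails the pipeline job
spectral lint ./specification/openapi.yaml --ruleset ./linter-rules/.spectral.yaml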


When the linter finds rule violations, the pipeline job fails and each violation is reported in the build output.


The build pipeline and sample rules can be found here https://github.com/connectedcircuits/devops-api-linter

Hopefully this article persuades you to start using a linter across all your API definitions.

Enjoy…

Using Azure Event Grid for Microservice Data Synchronisation

This blog is about using the event-driven architecture pattern to synchronise data across multiple services. I typically use this when several microservices have their own database (database per service) and are required to either cache or locally persist slow-changing, read-only data retrieved from another source.

I mentioned slow changing data because using this pattern provides eventual data consistency. If data changes too rapidly, the subscribers may never keep up with the changes.

This other source would typically be an authoritative microservice that sends out an event to any interested parties whenever its data has been modified. The events are typically lightweight and there is no strong contract between publisher and subscribers. Normally these events do not contain the updated data; they are simply a notification to say something has been updated. If you want the actual changes, you come and get them using another mechanism, such as an HTTP request/response on another endpoint belonging to the authoritative service.

Microservices that cache or persist external data locally have the following benefits:

  • Removes the dependency from any authoritative Microservices.
  • Avoids API Composition by not having to query across multiple services.
  • Improved overall performance of the Microservice.
  • Promotes loose coupling between Microservices.
  • Reduces network traffic and chatter between Microservices.

The high-level solution uses Azure Event Grid to publish events and Logic Apps as the subscribers.


Whenever specific data is updated in the authoritative Microservice, the service will also publish an event to Azure Event Grid as it provides a lightweight HTTP call-back function to the registered subscribers.

A Logic App with an HTTP trigger is used as the event handler and acts as the call-back endpoint for the Event Grid subscription. Using a Logic App abstracts the data synchronisation process away from the actual downstream microservice logic. The Logic App also provides a number of out-of-the-box connectors for different data stores if pushing the data directly to the database is desired; the alternative is for the Logic App to call an API endpoint on the downstream microservice to update its repository.

Use-case

A typical scenario would be an e-commerce website which manages online orders from customers. This type of application would normally comprise multiple microservices. This may involve a CRM service, which is the authoritative service for the customer information, and a Delivery service that manages the dispatch of orders.

To improve the performance of the Delivery service, a copy of the customer's address is persisted in its local database rather than querying CRM each time an order is to be dispatched. This removes the dependency on the CRM service being available when dispatching orders.

This e-commerce solution may also incorporate an Accountancy service which publishes an event whenever the customer's account status changes from active to on-hold or vice versa. The Cart and Delivery microservices would subscribe to this event and persist the status of the customer locally. Again, it's the same architecture pattern as before, except the event data may contain the customer's account status as this data is relatively small.

The sequence of events between the components is as follows.


Whenever the website calls the CRM service to update the customer's information, the service also publishes a custom event to the Azure Event Grid topic.

The data payload has several properties: the customer Id, the event source, and the CRM endpoint URL used to retrieve the updated data. The value in the CallbackUrl property is what the Logic App will use to retrieve the full address dataset from the CRM service.
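As a sketch, a published event might look like the following (the Event Grid envelope fields are standard, but the property names and values inside data are illustrative apart from CallbackUrl mentioned above):

{
  "id": "d9c1f2a0-0000-0000-0000-000000000000",
  "eventType": "customerUpdated",
  "subject": "crm/customers/update",
  "eventTime": "2021-06-01T10:15:00Z",
  "dataVersion": "1.0",
  "data": {
    "CustomerId": "C1234",
    "Source": "CRM Service",
    "CallbackUrl": "https://crm-service.example.com/api/customers/C1234"
  }
}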


For the event handler, a Logic App with an HTTP request trigger is used. This provides the necessary workflow logic and built-in connectors to query the CRM service via HTTP and then update the Delivery microservice database using the SQL Db connector.

Simple POC

To simulate the CRM service publishing events, I created a simple console app to publish several events onto the Event Grid. The code for the console app can be found here: https://github.com/connectedcircuits/azureeventgridpub

The Logic App workflow iterates through each event with a 'For each Event' loop, calling the CRM endpoint defined in the event data. A switch task then evaluates the event subject to determine the CRUD operation and which database stored procedure to call to update the local database.


The event subscription filter is based on the eventType and subject values of the HTTP requests sent by Azure Event Grid.


Final thoughts

Enabling the dead-lettering feature of Azure Event Grid is recommended to track which consumers are not responding to the events; remember to set up alerting and monitoring around this.

If a subscriber misses a notification event, what kind of compensation transaction should occur? If events are received out of order, what should happen? These types of questions are typically answered by the business analyst.

Some form of event tracking mechanism should also be incorporated into the solution. Imagine several hundred events per minute; trying to trace an event to a subscriber would be quite difficult. This could involve using a session Id that can be tracked across all services involved in the event.

When there are multiple subscribers interested in the same event and they all call the same endpoint to retrieve the full dataset, the authoritative Microservice should use some form of caching mechanism on the response object to improve performance.

Enjoy….

Extracting Claims from an OAuth Token in an Azure Function

This is a follow up from my previous blog on adding custom application claims to an App registered in AAD (https://connectedcircuits.blog/2020/08/13/setting-up-custom-claims-for-an-api-registered-in-azure-ad/). In this article I will be developing an Azure Function that accepts an OAuth token which has custom claims. The function will validate the token and return all the claims found in the bearer token as the response message.

Instead of using the built-in Azure AD authentication and authorization support in Azure Functions, I will be using the NuGet packages Microsoft.IdentityModel.Protocols and System.IdentityModel.Tokens.Jwt to validate the JWT token. This will allow decoding of bearer tokens from other authorisation providers.

These packages use the JSON Web Key Set (JWKS) endpoint of the authorisation server to obtain the public key, which is then used to validate the token to ensure it has not been tampered with. The main advantage of this approach is that you don't need to worry about storing the issuer's public key and remembering to update the certs before they expire.


Function Code

The full code for this solution can be found on my GitHub repository: https://github.com/connectedcircuits/azOAuthClaimsFunct.

Below is a function which returns the list of signing keys from the jwks_uri endpoint. Ideally the response should be cached, as downloading the keys from the endpoint can take some time.

// Get the public signing keys from the jwks_uri endpoint
private static async Task<ICollection<SecurityKey>> GetSecurityKeysAsync(string idpEndpoint)
{
    var openIdConfigurationEndpoint = $"{idpEndpoint}.well-known/openid-configuration";
    var configurationManager = new ConfigurationManager<OpenIdConnectConfiguration>(openIdConfigurationEndpoint, new OpenIdConnectConfigurationRetriever());
    var openIdConfig = await configurationManager.GetConfigurationAsync(CancellationToken.None);
    return openIdConfig.SigningKeys;
}

The next part of the code is to configure the TokenValidationParameters properties with the authorisation server address, the audiences and the signing keys obtained from the GetSecurityKeysAsync function mentioned above.

TokenValidationParameters validationParameters = new TokenValidationParameters
{
    ValidIssuer = issuer,
    ValidAudiences = new[] { audiences },
    IssuerSigningKeys = keys
};

Next is to validate the token and acquire the claims found in it, which are assigned to the ClaimsPrincipal object.

// Grab the claims from the token
JwtSecurityTokenHandler handler = new JwtSecurityTokenHandler();
SecurityToken validatedToken;
ClaimsPrincipal principal;
try
{
    principal = handler.ValidateToken(token, validationParameters, out validatedToken);
}
catch (SecurityTokenExpiredException ex)
{
    log.LogError(ex.Message);
    req.HttpContext.Response.Headers.Add("X-Error-Message", $"Token expired at {ex.Expires}");
    return new UnauthorizedResult();
}
catch (Exception ex)
{
    log.LogError(ex.Message);
    return new UnauthorizedResult();
}

Once you have the principal object instantiated, you can use the IsInRole("<role_name>") method to check if the token contains the role. This method returns true if the role is found.


Runtime results

First, a token is requested for an app registered in Azure AD that has the crm.read and crm.write claims assigned.


Calling the Azure Function with the bearer token obtained from AAD returns the two custom application claims, crm.read and crm.write, listed amongst the default claims.


This is another example using Auth0 (https://auth0.com/) as an alternative authorisation server. The API was registered with full CRUD permissions, whilst the client was only granted the read and update roles.


Calling the Azure Function with the bearer token from Auth0 returns the two custom claims crm:read and crm:update along with the other default claims.


Conclusion

As you can see, you can use any authorisation server that supports a jwks_uri endpoint to acquire the public signing keys, provided the generated token uses RS256 as the algorithm for signing and verifying the token signature.

Enjoy…

Setting up custom Claims for an API registered in Azure AD

One way to further reduce the attack surface of an API when using the Client Credentials OAuth flow is to pass claims in the token. This adds another layer of security and provides more granular control over your API. By adding custom claims to the token, the API can inspect them and restrict what operations the client may perform within the API service. This may be just to limit basic CRUD-type operations or some other type of grant requirement. More information about optional claims can be found here: https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-optional-claims

Just to recall, the Client Credentials flow is normally used for client-to-service requests and has no users involved in the flow. A Service Principal (SP) object is associated with the Client App when making calls to a backend service API. The value of this Id can be found under AAD Enterprise Applications and is the ObjectId of the application name. To ensure the SP used to call your API is the one you are expecting, use the oid or sub claims to validate the SP as another security check.

Also the Client Credentials flow is meant to be used with confidential clients. These are clients that are running on the server as opposed to those running on user devices (which are often referred to as ‘public clients’) as they are able to store the secret securely. Confidential clients provide their client ID and client secret in the requests for access tokens.


Scenario

A typical scenario is where a Client App needs to call a backend API Service. Here the Client App is a CRM Website which calls a CRM Service (the backend API Service) to either get a user's profile or update their details using the Client Credentials flow. To limit operational functionality, two roles will be sent with the bearer token to allow only reading or updating the CRM contact information.


In this article I will go through the process of adding a list of Application claims to an App registered as an API service (CRM Service) in Azure Active Directory (AAD). Another App registration is also required for the client (CRM Website) which has a set of restricted claims. These restricted claims will be inserted into the Access token where the API service (CRM Service) would interrogate the claims and perform the required operations if the claims are valid.


Configuration Walkthrough

Below are the steps required to create an AAD App Registration for the CRM Service API and to add the required application claims (roles). The claims are added by modifying the manifest file after the App has been registered as there is no UI available to manage this.


Registering the Service API

1. Register the CRM Service API – Under the Azure Active Directory page, click on App registrations and New registration


2. Add a name (CRM Service) for the application and then click the Register button at the bottom of the page.


3. Adding Claims – Once the App registration has been saved, click on the Manifest menu option to view and update the manifest appRoles via the portal.


Now we can add a collection of custom roles under the "appRoles" attribute using the following JSON structure for the claims. This is where you define the complete list of custom roles that your API service will support. The claims I will be adding for the CRM service are basic CRUD-type operations. Note the value of the id for each role is just a GUID to keep it unique within the tenant.
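As a sketch, the appRoles section for the crm.read and crm.write roles would look something like this (the GUIDs and descriptions are placeholders; additional CRUD roles follow the same pattern):

"appRoles": [
  {
    "allowedMemberTypes": [ "Application" ],
    "description": "Allows the client to read CRM contact information",
    "displayName": "crm.read",
    "id": "5f0dd02c-9a4f-4f3a-8a2e-0a1b2c3d4e5f",
    "isEnabled": true,
    "value": "crm.read"
  },
  {
    "allowedMemberTypes": [ "Application" ],
    "description": "Allows the client to update CRM contact information",
    "displayName": "crm.write",
    "id": "a3b9f7e1-4c2d-4e8f-9b6a-1c2d3e4f5a6b",
    "isEnabled": true,
    "value": "crm.write"
  }
]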


Remember to click the Save button at the top to update the manifest.


4. Set Application ID URI – Under the Expose an API menu option, click Set for the Application ID URI. This value will be used as the scope parameter when requesting an OAuth token for the Client App. You can also view this value from the app registration Overview page.


That is all that is required to register the API Service and add custom application claims. Next is registering the client.


Registering the Client App

1. Create a new App registration as before, but for the client (CRM Website)


2. Add the required permissions – This section is where you define the roles required for the client app. To add the permissions, click on API Permissions and then Add a permission.


Then click on My APIs in the top menu to list all your registered apps, and select the CRM Service app that was added in the first section.


Select Application permissions and then expand the permissions. This will list all the roles that were added when registering the CRM Service API. For the CRM Website, I only require the read and update roles. Once checked, click on the Add permissions button.


Once the permissions have been added, click on Grant admin consent for your tenant and then press Yes in the dialogue box.


The status of the permissions should then be all green as highlighted below.


3. Add a client secret – Click on the Certificates & secrets menu option to add a new client secret. Remember to take note of this value as it is required when obtaining an OAuth token later.


Requesting an OAuth Token

Before the Client App (CRM Website) can call the API Service (CRM Service), a bearer token needs to be obtained from the OAuth 2.0 token endpoint of AAD. For this, I am going to use Postman to act as the Client App by sending the following parameters. You should be able to get all these values from the Overview page of the Client App (CRM Website), and the scope value from the API Service (CRM Service).

  • grant_type – Must be set to client_credentials.
  • client_id – The application (client) ID found on the Client App (CRM Website) overview page.
  • client_secret – The secret that was generated for the Client App (CRM Website), which must be URL encoded.
  • scope – The Application ID URI of the API Service (CRM Service), which must be URL encoded, with /.default appended to the end, e.g. scope=api://189f0961-ba0f-4a5e-93c1-7f71a10b1a13/.default


Here is an example of the token request. Remember to replace {your-tenant-id} with your own tenant Id.
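As a rough equivalent of the Postman request (a sketch; the tenant Id, client Id and secret values are placeholders), the same call can be made from PowerShell:

# Placeholder values - replace with your own tenant, client and secret details
$tenantId     = "{your-tenant-id}"
$clientId     = "{crm-website-client-id}"
$clientSecret = "{crm-website-client-secret}"

$body = @{
    grant_type    = "client_credentials"
    client_id     = $clientId
    client_secret = $clientSecret
    scope         = "api://189f0961-ba0f-4a5e-93c1-7f71a10b1a13/.default"
}

# Request the token from the AAD v2.0 token endpoint (Invoke-RestMethod URL-encodes the form values)
$response = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -ContentType "application/x-www-form-urlencoded" `
    -Body $body

$response.access_token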


When you send the request and all the parameters are correct, you should receive a response containing a JWT access token.


Now if you take the value of the “access_token” and use a tool like https://jwt.ms/ to decode it, you should see the custom application claims.


In conclusion, use custom claims to provide granular control over the operations in your backend APIs and to limit the impact of any security breaches from hijacked client applications.

Keep a watch out for my next blog where I will show you how to access these claims from within an Azure Function.

Enjoy…

Connecting an Azure WebApp to a SQL Server VM inside a VNet

This article is about connecting an Azure WebApp to a SQL Server VM hosted inside an Azure Virtual Network. Typically a SQL Server VM would be hosted inside an Azure Virtual Network (VNet) to isolate it from the internet by blocking all inbound and outbound internet traffic using a Network Security Group (NSG). In this scenario, connectivity to the SQL database is achieved by using the new VNet Integration feature found on the App Service component. Using this feature removes the requirement of an App Service Environment (ASE) for the WebApp, thus reducing overall hosting costs.

Using VNet integration provides private outbound access from your App Service to resources in your VNet using the RFC1918 internal IP address allocation range (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) by default.


Scenario

A web application is hosted in a WebApp which requires a connection to the SQL Database hosted inside a VNet.

The network topology of this scenario uses the Regional VNet Integration option, where both the WebApp and the SQL VM are in the same region. Here we have a VNet called Backend which has two subnets: one for the VNet Integration delegation called IntegDeleg, and the other, called DataStore, to host the SQL Server VM.


Configuration Walkthrough

The following is the sequence of steps used to set up VNet Integration between a Web App and SQL Server, with the assumption that the SQL Server VM is already hosted inside a VNet.

1. Adding a VNet subnet

2. Provisioning an App Service Plan

3. Provisioning a Web App

4. Setting up the VNet Integration

5. Validating SQL Server Security Settings

6. Testing connectivity to the SQL Server

7. Updating the Web App SQL Connection String


1. Adding a VNet Subnet

A dedicated subnet used by the VNet Integration feature needs to be added to the existing VNet hosting the SQL Server VM. The IP range should match the maximum number of App Service plan instances when fully scaled out, as each instance requires an IP address. For this scenario I will be using a /27 prefix, giving a total range of 32 addresses; however, 5 addresses are reserved by Azure, leaving 27 usable addresses for the plan instances.


2. Provisioning App Svc Plan

To use VNet Integration, you will need to provision an App Service plan using the newer V2 scale units. Note that if you are currently using V1 App Services, you will need to provision a new plan using V2 and migrate your apps to this new plan.

To confirm you have selected the newer V2 App Services, the Premium plans should be shown as P1V2, P2V2 and P3V2. Here I will be using a Standard S1 plan for this scenario.


3. Provisioning Web App

Create a new Web App and ensure it is in the same region as the VNet. Also ensure you have selected the App Service Plan created above.


4. Enable VNet Integration

Under the Web App that was provisioned above, click on the Networking menu item to view the networking options and then click on “Click here to configure” under the VNet Integration heading.


Once the VNet Configuration form opens, click on the “Add VNet” to open the Network Feature Status blade. Select the VNet that hosts the SQL Server and then the Subnet that was created earlier for the VNet Integration. Then press OK to save the changes.


After you hit OK, the VNet Integration should be connected and ready for testing the connectivity. Remember the VNet Integration will route all outbound RFC1918 traffic from the WebApp into your VNet.


5. Validating SQL Server Security Settings

To reduce the surface area of attack, ensure the SQL Server can only accept connections within the VNet. This is done by setting the “SQL connectivity” option to Private (within Virtual Network) under the Security menu of the SQL Virtual Machine.


Also check the NSG attached to the SQL Server VM to ensure there is a rule to disable all outbound internet traffic. If there is an inbound rule called default-allow-sql, it is advisable to delete this rule if it is not required. This inbound rule is normally created when the security on the SQL Server VM allows SQL connectivity via Public (Internet) connections.


6. Testing connectivity

To check connectivity between the Web App and the SQL Server, we can use the tcpping command from a console window. Go to the Web App that was created previously and click on the Console menu item to open a console window.


In the console window, type the command tcpping <sql vm private ip address>:1433. All going well, you should get a successful reply; in my case 10.0.2.4 was the private IP address of the SQL Server VM, using the default port 1433.
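For example, with the private IP address from this scenario:

# Run from the Web App's console - tests TCP connectivity to the SQL VM on port 1433
tcpping 10.0.2.4:1433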


7. Updating the Web App SQL Connection String

Once the connectivity has been verified, the next step is to update the connection string on the Web App to use the private IP address of the SQL Server VM. Typically the connection string should look something like this:- Server=10.0.2.4;Database=coreDb;User Id=myusername;Password=mypassword;MultipleActiveResultSets=true

After the connection string has been updated to use the private IP address, you should be able to test your application. Here I am just adding some new tasks in the TodoList web application and confirming the records are written to the database.


Conclusion

VNet Integration provides an easy and cost-effective solution to access databases hosted within a VNet without resorting to a dedicated ASE. Also, using rules on the NSG applied to the SQL Server VM provides the capability to block all internet traffic and allow only RFC1918 internal addresses to have access.

More information about VNet Integration can be found on the MS docs site https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet.

Enjoy…