Ensuring Ordered Delivery of Messages using Azure Functions

Typically, when using Azure Functions to consume messages from a Service Bus (SB), ordering is not guaranteed even though the SB is First-In-First-Out (FIFO). This is due to the competing-consumers pattern, where multiple instances of the function compete for messages from the service bus.

An example of out-of-order processing can occur when one function instance takes longer to process a message than the other instances, thereby affecting the processing order. This is represented in the sequence diagram below, where function instance 1 takes longer than instance 2 to update the same record in a database.

image

One option to enforce ordered delivery is to configure the Azure Function to spin up only one instance. The problem with this solution is that it won’t scale very well. A more scalable option is to use sessions, which allows multiple instances of a function to execute, giving you higher message throughput.

To enforce message ordering, several properties must be set. The Requires Session property must be enabled on the SB queues and topic subscriptions. Messages sent onto the SB must set the SessionId context property to a value shared only by related messages; some examples of a session Id are the account number, customer number, or batch Id. Finally, the Azure Function needs the IsSessionsEnabled property set to true on the SB input binding.

Support for SB sessions in Azure Functions only became generally available in mid-2019. Enabling sessions on the Azure Function places a lock on all messages that share the same session Id, so those messages are consumed only by the one function instance that placed the lock.

Typical Scenario

A warehouse needs to track the progress of an order from when it’s first received to when it gets dispatched. Throughout each stage (Ordered, Picked, Packaged, Dispatched) of the ordering process, the status of the order must be updated. This involves placing a new message onto the service bus every time the order status needs to be updated. An Azure Function then pulls the messages from the service bus and updates the order status in a database where the customer can view the current state of their order.

To simulate the warehouse tracking system, a console app will be used to create messages for each status change (Ordered, Picked, Packaged, Dispatched) for several hundred orders. The session Id of each status message will be set to the order number. The app will then send the messages to a SB Topic that has two subscriptions, one with sessions enabled and the other with sessions disabled, so we can compare the ordering of messages received with and without sessions.

Order message generator
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

class Program
{
    private static string connectionString = ConfigurationManager.AppSettings["ServiceBusConnectionString"];
    private static string topicName = ConfigurationManager.AppSettings["TopicName"];
    private static int orders = 100;
    private static int messagePerSession = 4;

    static async Task Main(string[] args)
    {
        Console.WriteLine("Creating Service Bus sender....");
        var taskList = new List<Task>();
        var sender = new MessageSender(connectionString, topicName);
        //create an order
        for (int order = 0; order < orders; order++)
        {
            var orderNumber = $"OrderId-{order}";
            var messageList = new List<Message>();
            //simulate a status update in the correct order
            for (int m = 0; m < messagePerSession; m++)
            {
                var status = string.Empty;
                switch (m)
                {
                    case 0:
                        status = "1 - Ordered";
                        break;
                    case 1:
                        status = "2 - Picked";
                        break;
                    case 2:
                        status = "3 - Packaged";
                        break;
                    case 3:
                        status = "4 - Dispatched";
                        break;
                }
                var message = new Message(Encoding.UTF8.GetBytes($"Status-{status}"))
                {
                    //set the service bus SessionId property to the current order number
                    SessionId = orderNumber
                };
                messageList.Add(message);
            }
            //send the list of status update messages for the order to the service bus
            taskList.Add(sender.SendAsync(messageList));
        }
        Console.WriteLine("Sending all messages...");
        await Task.WhenAll(taskList);
        Console.WriteLine("All messages sent.");
    }
}

Two Azure Functions will be created, one with sessions enabled and the other with sessions disabled. The functions will add a random delay of 1 to 10 seconds to simulate some business logic which may be calling out to an external service before updating the order status. Instead of the function writing to a database, each status update message received will be written to Azure Table storage to create an audit log of when each status update message was processed.

Below is the source code for the function which processes the messages on the service bus using sessions. Note the IsSessionsEnabled property is set to true on the ServiceBusTrigger input binding. The randomiser simulates business logic that can vary in the time taken to process a message.

Azure Function using sessions
using System;
using System.Text;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class MsgOrderingSessions
{
    [FunctionName("MsgOrderingSessions")]
    [return: Table("OrdersSession", Connection = "StorageConnectionAppSetting")]
    public static OrderEntity Run([ServiceBusTrigger("orders", "OrdersSession", Connection = "SbConnStr", IsSessionsEnabled = true)] Message sbMessage, ILogger log)
    {
        log.LogInformation($"C# ServiceBus topic trigger function processed message: {Encoding.UTF8.GetString(sbMessage.Body)}");
        Random random = new Random();
        int randNumb = random.Next(1000, 10000);
        //simulate business logic that varies in processing time
        System.Threading.Thread.Sleep(randNumb);
        return new OrderEntity { PartitionKey = $"{sbMessage.SessionId} - {DateTime.Now.Ticks}", RowKey = Guid.NewGuid().ToString(), Text = Encoding.UTF8.GetString(sbMessage.Body) };
    }
}
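For completeness, the OrderEntity returned by both functions can be a simple POCO; below is a minimal sketch based purely on the properties used in the return statements above (the Table output binding accepts a POCO with string PartitionKey and RowKey properties).

public class OrderEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Text { get; set; }
}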

Below is the source code for the function which does not use sessions. Here IsSessionsEnabled is set to false.

Azure Function no sessions
public static class MsgOrderingNoSession
{
    [FunctionName("MsgOrderingNoSessions")]
    [return: Table("OrdersNoSession", Connection = "StorageConnectionAppSetting")]
    public static OrderEntity Run([ServiceBusTrigger("orders", "OrdersNoSession", Connection = "SbConnStr", IsSessionsEnabled = false)] Message sbMessage, ILogger log)
    {
        log.LogInformation($"C# ServiceBus topic trigger function processed message: {Encoding.UTF8.GetString(sbMessage.Body)}");
        Random random = new Random();
        int randNumb = random.Next(1000, 10000);
        System.Threading.Thread.Sleep(randNumb);
        return new OrderEntity { PartitionKey = $"{sbMessage.SessionId} - {DateTime.Now.Ticks}", RowKey = Guid.NewGuid().ToString(), Text = Encoding.UTF8.GetString(sbMessage.Body) };
    }
}

Below are the settings for the service bus topic, which has two subscriptions, one of which has Requires Session checked.

image

Running the console app creates 400 messages on each of the two subscriptions: four status update messages per order.

image

Conclusion

The Azure Function whose ServiceBusTrigger had IsSessionsEnabled = false inserted the rows out of order, due to multiple function instances competing for the next message on the service bus.

image

For the Azure Function with IsSessionsEnabled = true, reading from a service bus subscription which also had the Requires Session flag enabled, the messages were processed in the same sequence as they were placed onto the service bus.

image

When using sessions, there is a slight performance hit depending on the number of function instances executing. In this example both functions were running under the consumption plan, which spun up six instances. As you can see from the number of messages waiting on each of the subscriptions below, the subscription with sessions disabled processed its messages a lot faster.

When sessions are used, each function instance places a lock on all messages having the same session Id, and those messages are processed one after another. As there were only six instances available, a maximum of six orders could be processed at any one time.

image
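If you need to tune how many sessions each instance will process concurrently, the Service Bus extension (v3.x and later) exposes session handler options in host.json. Below is a minimal sketch, assuming the documented sessionHandlerOptions settings; the values shown are illustrative only.

{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "sessionHandlerOptions": {
        "maxConcurrentSessions": 8,
        "messageWaitTimeout": "00:00:30"
      }
    }
  }
}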

Enjoy…

Always subscribe to Dead-lettered messages when using an Azure Service Bus

I often come across solutions that incorporate a service bus where monitoring of the dead-letter queues is simply ignored. Upon questioning, the usual excuse for not monitoring the DLQ (dead-letter queue) is: “I don’t require it because I have added exception handling, therefore no messages will end up on the DLQ.” From experience, there will be scenarios where the exception logic did not capture an edge case and the message ends up on the DLQ without anyone knowing about it.

A simple solution is to have a single process monitor all the DLQ messages and raise an alert when one occurs. Below is one of the building blocks I typically incorporate when there is a service bus involved in a solution.

image

To use the DLQ building block to centrally capture any DLQ message, simply set the “ForwardDeadLetteredMessagesTo” property on each of the messaging queues to a common DLQ handler queue, as shown below.

image
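If you prefer to script this rather than use the portal, the ManagementClient in the Microsoft.Azure.ServiceBus NuGet package exposes the same property. Below is a minimal sketch, where "orders" is a placeholder queue name and "dlq-processor" is the common handler queue monitored by the function further down.

using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus.Management;

public static class DlqForwardingSetup
{
    //point an existing queue's dead-lettered messages at the common DLQ handler queue
    public static async Task EnableDlqForwardingAsync(string connectionString)
    {
        var managementClient = new ManagementClient(connectionString);
        QueueDescription queue = await managementClient.GetQueueAsync("orders");
        queue.ForwardDeadLetteredMessagesTo = "dlq-processor";
        await managementClient.UpdateQueueAsync(queue);
        await managementClient.CloseAsync();
    }
}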

Now when a message gets dead-lettered, it will end up in this common queue, which is monitored by an Azure Function. Note the DeadLetterSource property is available in the current NuGet version (v3.0.4) of the Service Bus library, but is not currently available in the Logic App Service Bus connector. The function writes the DLQ metadata, any custom properties, and the message payload to a blob store file. By using an Azure Storage Account V2 for the blob store, a blob-creation event will be fired to any interested subscribers, which in this case is a Logic App.

Azure Function Code

Below is the code for the Azure Function. Here I am using the IBinder interface to allow me to set the folder path and file name imperatively. The connection strings (ServiceBusConn, StorageAccountConn) are defined in the Application settings of the Azure App Service.

Code Snippet
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

namespace ServiceBusDLQMonitoring
{
    public static class Function1
    {
        [FunctionName("Function1")]
        public static async Task RunAsync([ServiceBusTrigger("dlq-processor", Connection = "ServiceBusConn")] Message dlqQueue,
            Binder blobBinder,
            ILogger log)
        {
            log.LogInformation($"C# ServiceBus queue trigger function processed message: {dlqQueue.MessageId}");

            //set the filename and blob path
            var blobFile = Guid.NewGuid().ToString() + ".json";
            var path = string.Concat("dlq/", DateTime.UtcNow.ToString("yyyy/MM/dd"), "/", dlqQueue.SystemProperties.DeadLetterSource, "/", blobFile);

            var dlqMsq = new DLQMessage
            {
                DeadletterSource = dlqQueue.SystemProperties.DeadLetterSource,
                MessageId = dlqQueue.MessageId,
                SequenceNumber = dlqQueue.SystemProperties.SequenceNumber,
                SessionId = dlqQueue.SessionId,
                UserProperties = dlqQueue.UserProperties,
                DeadLetterReason = dlqQueue.UserProperties["DeadLetterReason"].ToString(),
                EnqueuedDttmUTC = dlqQueue.SystemProperties.EnqueuedTimeUtc,
                ContentType = dlqQueue.ContentType,
                DeliveryCount = dlqQueue.SystemProperties.DeliveryCount,
                Label = dlqQueue.Label,
                MsgBase64Encoded = Convert.ToBase64String(dlqQueue.Body, Base64FormattingOptions.None)
            };

            var blobAttributes = new Attribute[]
            {
                new BlobAttribute(path),
                new StorageAccountAttribute("StorageAccountConn")
            };

            using (var writer = await blobBinder.BindAsync<TextWriter>(blobAttributes))
            {
                writer.Write(JsonConvert.SerializeObject(dlqMsq, Formatting.Indented));
            }
        }
    }

    public class DLQMessage
    {
        public string DeadletterSource { get; set; }
        public long SequenceNumber { get; set; }
        public string MessageId { get; set; }
        public string SessionId { get; set; }
        public string Label { get; set; }
        public string DeadLetterReason { get; set; }
        public DateTime EnqueuedDttmUTC { get; set; }
        public int DeliveryCount { get; set; }
        public string ContentType { get; set; }
        public IDictionary<string, object> UserProperties { get; set; }
        public string MsgBase64Encoded { get; set; }
    }
}

Logic App Implementation

A Logic App is used to retrieve the message from the blob store when it is triggered by the Event Grid HTTP webhook. The basic workflow is shown below and can be expanded to suit your own requirements.

image

The expression for the ‘Get blob content using path’ action is @{replace(triggerBody()[0]['data']['url'],'https://dqlmessages.blob.core.windows.net/','')}. Here I am just replacing the domain name with an empty string, as I only want the resource location.

The Parse JSON action has the following schema. This makes it easier to reference the properties downstream.

{
    "properties": {
        "ContentType": {
            "type": ["string", "null"]
        },
        "DeadLetterReason": {
            "type": "string"
        },
        "DeadletterSource": {
            "type": "string"
        },
        "DeliveryCount": {
            "type": "integer"
        },
        "EnqueuedDttmUTC": {
            "type": "string"
        },
        "Label": {
            "type": ["string", "null"]
        },
        "MessageId": {
            "type": "string"
        },
        "MsgBase64Encoded": {
            "type": ["string", "null"]
        },
        "SequenceNumber": {
            "type": "integer"
        },
        "SessionId": {
            "type": ["string", "null"]
        },
        "UserProperties": {
            "type": "any"
        }
    },
    "type": "object"
}

The last action, ‘Set variable MsgBody’, has its value set to: "@{base64ToString(body('Parse_JSON')?['MsgBase64Encoded'])}"

Blob creation event configuration

Next is to set up a subscription to the blob-creation event. Click on Events under the Storage account for the DLQ messages as shown below.

image

Then click on +Event Subscription to add a new subscription.

image

Set up the subscription Basic details with a suitable subscription name and the blob storage account resource name. Uncheck ‘Subscribe to all event types’ and select the Blob Created event. Set the endpoint details to Web Hook and the URL to the Logic App HTTP trigger endpoint address.

image

Under Filters, enable subject filtering. Prefix the name of the DLQ container (in this example it’s called ‘dlq’) with ‘/blobServices/default/containers/’ and add this to the ‘Subject Begins With’ textbox. Then set ‘Subject Ends With’ to the filename extension ‘.json’. Now click the Create button at the bottom of the page to create the subscription.

image

Sample Logic App output

Once everything is wired up, and if any messages have been placed onto the DLQ, you should see some Logic App runs. Below is an example of the outputs from the last two actions.

image

All you need to do now is extend the Logic App to deliver the DLQ alert somewhere.

Enjoy…

Providing DR By Using An Azure Service Bus In Two Regions

This is a disaster recovery (DR) option I used on one of my projects. A client required a DR solution guaranteed not to lose any messages during the DR failover process. In essence, no messages were allowed to be lost mid-flight whilst the DR process was taking place.

Solution

The approach I took was to use an Azure Service Bus in a primary Azure region and another service bus in a secondary Azure DR region. The publisher would then send the same message to both service bus endpoints. For my solution I used Azure APIM to post the same message to both the primary and secondary regions, thereby simplifying the code for the publisher.

image

The primary region processes the messages as normal, using a Logic App to poll for new messages on the service bus. However, in the DR region the equivalent Logic App is set to a disabled state. The messages in the secondary region remain on the bus until the Time-To-Live (TTL) threshold is reached, before either being moved onto the dead-letter queue or dropped from the queue altogether by setting the EnableDeadLetteringOnMessageExpiration property to false. This technique provides automatic pruning of older messages that would definitely have been processed by the primary region before failing over.

The value chosen for the TTL property is determined by how long it would take to fail over to the DR region, plus the time taken to detect an issue in the primary region.
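As a sketch, the secondary-region queue could be provisioned with these settings using the ManagementClient from the Microsoft.Azure.ServiceBus package; the queue name and the two-hour TTL are placeholder values to be tuned to your own failover window.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus.Management;

public static class DrQueueSetup
{
    public static async Task CreateDrQueueAsync(string drConnectionString)
    {
        var managementClient = new ManagementClient(drConnectionString);
        await managementClient.CreateQueueAsync(new QueueDescription("transfers")
        {
            //expired messages are dropped rather than dead-lettered, automatically
            //pruning anything already processed by the primary region
            DefaultMessageTimeToLive = TimeSpan.FromHours(2),
            EnableDeadLetteringOnMessageExpiration = false
        });
        await managementClient.CloseAsync();
    }
}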

Failing over simply involves enabling the Logic App in the DR region, and failing back involves disabling this Logic App again.

Pros

  • No messages are lost during the failover process.
  • Low transaction cost for the DR environment, as no Logic Apps are triggered during the normal process flow through the primary region.
  • Simple DR design which involves just another queue.
  • Simple failover process.

Cons

  • There is a delay before any new messages are processed, while the older messages on the service bus are reprocessed first.
  • The backend system processing the messages must be idempotent, meaning the same message may be processed by the backend system multiple times and produce the same outcome.
  • Requires the publisher to send the same message to two different endpoints. This may be mitigated by using APIM to manage sending the messages to both service bus endpoints.

Enjoy…

Exposing Azure Service Bus through APIM, generating a SAS Token and setting the Session Id

On one of my recent projects, a client application was required to place a message onto an Azure Service Bus using an HTTP endpoint rather than the Service Bus SDK, with the following constraints.

  • The client is unable to generate the Service Bus SAS token.
  • Service Bus Session Id needs to be set to the customer number found in the message to ensure ordered delivery by the consumer.
  • All messages are to be sent to a Service Bus Topic, and a custom Service Bus property called ‘MsgType’ must be added.
  • Custom HTTP Request Headers may be used.

I decided upon a solution that uses Azure APIM to expose the Service Bus endpoint. A custom policy is used to generate the Service Bus SAS token and to parse the message for the customer Id, which is then used to set the SessionId on the Service Bus message. The client then only has to register for a subscription key in the Azure Developer portal and pass this key with each HTTP request in the header.

image

The first step of this solution is to create an Azure Service Bus with a Topic called ‘transactions’ and a Shared access policy called ‘Sender’ which has only the Send claim.

image

Next, add a subscription to the topic with ‘Require Session’ enabled and the following rule: MsgType = 'Deposits'

image

Before we start developing the custom policies, we need to set up three Name Values in the APIM blade as shown below. These values are used for generating the Service Bus SAS token and are obtained from the Service Bus properties.

image

Where,

  • SB_Key – the primary key for the ‘Sender’ shared access policy.
  • SB_KeyName – the name of the shared access policy.
  • SB_Uri – the topic URL, which can be found by clicking on the topic name under the ‘Topics’ blade shown below.

image

Next, create an API using the ‘Blank API’ template, similar to what I have done below. Note the ‘Web service URL’ value is the base address of the Service Bus topic URL (i.e. without the topic name resource location).

image

Then add an operation to the service as below. Note the URL should be the name of your topic with the resource ‘messages’ appended to the end.

image

Once the operation has been created, we can now add the custom policy to the ‘Inbound processing’ stage.

image

The APIM policy consists of several code blocks inside the inbound processing stage, which are described in detail below. To improve performance I will be caching the generated SAS token.

The first code block looks up the cache for a value. If nothing is found, the variable "cachedSasToken" is assigned null, as there is no default value specified.

<cache-lookup-value key="crmsbsas" variable-name="cachedSasToken" />

Next, a control flow is used to check the variable "cachedSasToken" for null; if it is null, a SAS token is generated using the values stored in the APIM Name-Value pairs. The token is given a 120-second lifetime and stored in the cache for slightly less (100 seconds). The cache is then read again to assign the variable "cachedSasToken" with the generated SAS token.

Both the signature and the resource URL are required to be URL-encoded. My first choice was to use the System.Web.HttpUtility.UrlEncode function to encode the values. Unfortunately this function is not available in APIM policy expressions, as they expose only a subset of the .NET Framework types. To work around this issue, I used the System.Uri.EscapeDataString method instead.

<choose>
    <when condition="@(context.Variables.GetValueOrDefault&lt;string>("cachedSasToken") == null)">
        <cache-store-value key="crmsbsas" value="@{
                string resourceUri = "{{SB_Uri}}";
                string keyName = "{{SB_KeyName}}";
                string key = "{{SB_Key}}";
                TimeSpan sinceEpoch = DateTime.UtcNow - new DateTime(1970, 1, 1);
                var expiry = Convert.ToString((int)sinceEpoch.TotalSeconds + 120);
                string stringToSign = System.Uri.EscapeDataString(resourceUri) + "\n" + expiry;
                HMACSHA256 hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key));
                var signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
                var sasToken = String.Format("SharedAccessSignature sr={0}&amp;sig={1}&amp;se={2}&amp;skn={3}",
                                System.Uri.EscapeDataString(resourceUri),
                                System.Uri.EscapeDataString(signature), expiry, keyName);
                return sasToken;
            }" duration="100" />
        <cache-lookup-value key="crmsbsas" variable-name="cachedSasToken" />
    </when>
</choose>

The SAS token is then added to the Authorization header using the value from the variable "cachedSasToken".

<set-header name="Authorization" exists-action="override">
    <value>@(context.Variables.GetValueOrDefault<string>("cachedSasToken"))</value>
</set-header>

I then set the message content type to application/json and remove the APIM subscription key header so it is not sent to the Service Bus endpoint.

<set-header name="Content-type" exists-action="override">
    <value>application/json</value>
</set-header>
<set-header name="Ocp-Apim-Subscription-Key" exists-action="delete" />

The last part is to extract the customer number from the message and assign it to the SessionId property of the Service Bus message. The standard set of Service Bus properties must be added to a custom header called ‘BrokerProperties’.

<set-header name="BrokerProperties" exists-action="override">
    <value>@{
        string reqData = context.Request.Body?.As<string>(true);
        dynamic data = JsonConvert.DeserializeObject(reqData);
        string order = data?.CustomerNumber;
        return string.Format("{{\"SessionId\":\"{0}\"}}", order);
    }</value>
</set-header>
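For example, if the request body contained a hypothetical "CustomerNumber" of "C1234", the policy above would emit a header like this:

BrokerProperties: {"SessionId":"C1234"}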

The full code for the custom policy is here:

<!--
    IMPORTANT:
    - Policy elements can appear only within the <inbound>, <outbound>, <backend> section elements.
    - Only the <forward-request> policy element can appear within the <backend> section element.
    - To apply a policy to the incoming request (before it is forwarded to the backend service), place a corresponding policy element within the <inbound> section element.
    - To apply a policy to the outgoing response (before it is sent back to the caller), place a corresponding policy element within the <outbound> section element.
    - To add a policy position the cursor at the desired insertion point and click on the round button associated with the policy.
    - To remove a policy, delete the corresponding policy statement from the policy document.
    - Position the <base> element within a section element to inherit all policies from the corresponding section element in the enclosing scope.
    - Remove the <base> element to prevent inheriting policies from the corresponding section element in the enclosing scope.
    - Policies are applied in the order of their appearance, from the top down.
-->

<policies>
    <inbound>
        <base />
        <cache-lookup-value key="crmsbsas" variable-name="cachedSasToken" />
        <choose>
            <when condition="@(context.Variables.GetValueOrDefault&lt;string>("cachedSasToken") == null)">
                <cache-store-value key="crmsbsas" value="@{
                        string resourceUri = "{{SB_Uri}}";
                        string keyName = "{{SB_KeyName}}";
                        string key = "{{SB_Key}}";
                        TimeSpan sinceEpoch = DateTime.UtcNow - new DateTime(1970, 1, 1);
                        var expiry = Convert.ToString((int)sinceEpoch.TotalSeconds + 120);
                        string stringToSign = System.Uri.EscapeDataString(resourceUri) + "\n" + expiry;
                        HMACSHA256 hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key));
                        var signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
                        var sasToken = String.Format("SharedAccessSignature sr={0}&amp;sig={1}&amp;se={2}&amp;skn={3}",
                                        System.Uri.EscapeDataString(resourceUri),
                                        System.Uri.EscapeDataString(signature), expiry, keyName);
                        return sasToken;
                    }" duration="100" />
                <cache-lookup-value key="crmsbsas" variable-name="cachedSasToken" />
            </when>
        </choose>
        <set-header name="Authorization" exists-action="override">
            <value>@(context.Variables.GetValueOrDefault<string>("cachedSasToken"))</value>
        </set-header>
        <set-header name="Content-type" exists-action="override">
            <value>application/json</value>
        </set-header>
        <set-header name="Ocp-Apim-Subscription-Key" exists-action="delete" />
        <set-header name="BrokerProperties" exists-action="override">
            <value>@{
                string reqData = context.Request.Body?.As<string>(true);
                dynamic data = JsonConvert.DeserializeObject(reqData);
                string order = data?.CustomerNumber;
                return string.Format("{{\"SessionId\":\"{0}\"}}", order);
            }</value>
        </set-header>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

Now let’s use Postman to send a message to the URL endpoint exposed by APIM to test the policy. The headers contain the APIM subscription key and a custom header for ‘MsgType’, which will be used for the Service Bus subscription filter.

image

The message body simply contains the Customer number and the amount to deposit.

image

After posting the message to the APIM endpoint URL, we can see the message was successfully forwarded to the Service Bus by using Service Bus Explorer to view the message properties and content.

Notice the message’s custom properties include the MsgType, and the ‘SessionId’ is populated with the customer number.

image

Enjoy…

Content based message routing using Azure Logic Apps, Function and Service Bus

Content-Based Routing (CBR) is another pattern used in the integration world, where the contents of the message determine the endpoint it is sent to.

This article will describe one option to develop this pattern using an Azure Service Bus, an Azure Function and a Logic App.

Basically, the service bus is used to route the message to the correct endpoint using topics and subscriptions. The Azure Function hosts and executes the business rules that inspect the message contents and set the routing properties. A Logic App accepts the message and calls the function, passing the received message as an argument. Once the function executes the business rules, it returns the routing properties in the response body. The routing information is then used to set the properties on the Service Bus API connector in the Logic App before publishing the message onto the bus.

image

Scenario

To demonstrate a typical use case of this pattern, we have two message types: Sales Orders (SO) and Purchase Orders (PO). For an SO, I want to send the order to a priority queue if the total sales amount is over a particular value. For a PO, it should be sent to a pre-approval queue if the order value is under a specified amount.

Here is an example of a SO message to be routed:

image

And an example of a PO being sent:

image

Solution

The real smarts of this solution is the function, which returns a common JSON response message used to set the Topic name and the custom properties on the service bus connector. The fields of the response message are described below.

  • TopicName – the name of service bus topic to send the message to.
  • CBRFilter_1 – used by the subscription rule to filter on. Depending on your own requirements you may need more fields for more granular filtering.
  • RuleSetVersion – also used by the subscription rule to filter on. It’s a good idea to have this field, as you may have several versions of this rule in play at any one time.

Let’s start with provisioning the service bus topics and subscriptions for the two types of messages. First create two topics called purchaseorder and salesorder.

image

Now add the following subscriptions and rules for each of the topics.

Topic Name    | Subscription Name | Rule
purchaseorder | Approved_V1.00    | CBRFilter_1 = 'Approved' and RuleSetVersion = '1.00'
purchaseorder | NotApproved_V1.00 | CBRFilter_1 = 'ApprovedNot' and RuleSetVersion = '1.00'
salesorder    | HighPriority_V1.00 | CBRFilter_1 = 'PriorityHigh' and RuleSetVersion = '1.00'
salesorder    | LowPriority_V1.00 | CBRFilter_1 = 'PriorityLow' and RuleSetVersion = '1.00'
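These subscriptions and rules can also be scripted instead of created in the portal. Below is a sketch of the first rule using the ManagementClient from the Microsoft.Azure.ServiceBus package; the remaining three follow the same pattern, and the rule name is an arbitrary choice of mine.

using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Management;

public static class CbrSubscriptionSetup
{
    public static async Task CreateApprovedSubscriptionAsync(string connectionString)
    {
        var managementClient = new ManagementClient(connectionString);
        //create the subscription together with its default SQL filter rule
        await managementClient.CreateSubscriptionAsync(
            new SubscriptionDescription("purchaseorder", "Approved_V1.00"),
            new RuleDescription
            {
                Name = "ApprovedRule",
                Filter = new SqlFilter("CBRFilter_1 = 'Approved' and RuleSetVersion = '1.00'")
            });
        await managementClient.CloseAsync();
    }
}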

Next is the development of the Azure Function. This is best developed in Visual Studio, where you can include a unit test project for each of the rules. Add a new Azure Function project to your solution.

image

After the project has been created, right click on the project and click Add -> New Item. Choose Azure Function, give it a name, and select the Http trigger option.

image

Below is the code for the HTTP trigger function, which includes the class definition for the RoutingProperties object. I am checking for the specific elements SalesOrderNumber and PurchaseOrderNumber in the JSON message to determine the type of message, which in turn determines which rule code block to execute. Each rule code block first sets the TopicName and RuleSetVersion properties.

using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using Newtonsoft.Json.Linq;

public static class CBRRule
{
    [FunctionName("CBRRule")]
    public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");
        var routingProperties = new RoutingProperties();

        // Get request body
        JObject data = await req.Content.ReadAsAsync<JObject>();

        //Is this a sales order message type
        if (data != null && data["SalesOrderNumber"] != null)
        {
            routingProperties.CBRFilter_1 = "PriorityLow";
            routingProperties.RuleSetVersion = "1.00";
            routingProperties.TopicName = "SalesOrder";

            var lineItems = data["Lines"];
            var totalSaleAmount = lineItems.Sum(x => (decimal)x["UnitPrice"] * (decimal)x["Qty"]);

            //if the total sales is greater than $1000 send the message to the high priority queue
            if (totalSaleAmount > 1000)
                routingProperties.CBRFilter_1 = "PriorityHigh";
        }

        //Is this a purchase order message type
        if (data != null && data["PurchaseOrderNumber"] != null)
        {
            routingProperties.CBRFilter_1 = "ApprovedNot";
            routingProperties.RuleSetVersion = "1.00";
            routingProperties.TopicName = "PurchaseOrder";

            var lineItems = data["Lines"];
            var totalSaleAmount = lineItems.Sum(x => (decimal)x["UnitPrice"] * (decimal)x["Qty"]);

            //Approve PO if the total order price is less than $500
            if (totalSaleAmount < 500)
                routingProperties.CBRFilter_1 = "Approved";
        }

        return req.CreateResponse(HttpStatusCode.OK, routingProperties);
    }

    /// <summary>
    /// Response message to set the custom routing properties of the service bus
    /// </summary>
    public class RoutingProperties
    {
        public string TopicName { get; set; }
        public string CBRFilter_1 { get; set; }
        public string RuleSetVersion { get; set; }

        public RoutingProperties()
        {
            this.CBRFilter_1 = "Unknown";
            this.RuleSetVersion = "Unknown";
            this.TopicName = "Unknown";
        }
    }
}

The business rule for an SO aggregates all the line items and checks if the total amount is greater than 1000; if it is, the property CBRFilter_1 is set to “PriorityHigh”.

The business rule for a PO also aggregates all the line items and checks if the total amount is less than 500; if it is, CBRFilter_1 is set to “Approved”.

With the following input message sent to the function:

clip_image001

The output of the function should look similar to this:

clip_image001[6]

Now we need to publish the function from Visual Studio to your Azure resource group using the publishing wizard.

clip_image002

The last component of this solution is the Logic App, which is triggered by an HTTP Request trigger and then calls the Azure Function created above. The basic flow looks like this below.

clip_image004

The HTTP Request trigger has no request body JSON schema created. The trigger must accept any type of message.

clip_image006

Add an Azure Function action after the trigger and select the method called “CBRRule”.

clip_image008

Set the Request Body to the trigger body content and the Method to “POST”.

clip_image010

Next add a Service Bus action and set the properties as shown. Both the Queue/Topic name and Properties are set from the function response message.

clip_image012

Here is the code view of the Send Message action showing how the properties are set.

clip_image014

Testing

Using Postman we can send sample messages to the Logic App and then see them sitting on the service bus waiting to be processed by some other method.

At the moment the service bus should have no messages waiting to be processed, as shown below.

clip_image015

Using Postman to send the following PO, we should see this message end up in the purchaseorder/NotApproved subscription.

clip_image016

All going well, the message will arrive at the correct subscription waiting to be processed as shown below using Service Bus Explorer.

clip_image018

Sending the following SO will also route the message to the correct subscriber.

clip_image019

clip_image021

Conclusion

CBR is easily achievable using an Azure Function to execute your business rules on a message and set the routing properties. Taking this a step further, I would abstract the business rules for each message type into its own class for manageability.

It is also advisable to set up a unit test project for each of the classes holding the business rules to ensure full code coverage.

Enjoy…

Itinerary based message routing using Azure Logic Apps and Service Bus Actions

This is another pattern used quite extensively in the integration world. It is used when a message must be routed to several endpoints in a particular order using some form of routing list. Depending on your business requirements, the message being routed may be enriched or replaced before being sent to the next service endpoint in the list.

image

This is probably one of the easiest integration patterns to implement using Logic Apps and Service Bus actions. There are other methods to implement this pattern, but I wanted to abstract the routing logic away from the Logic App itself, leaving it to focus on the business process rather than setting up the next routing endpoint.

By using the Actions feature of the service bus, I can define the routing order of the next service endpoint, or in this case the next service bus subscription, by setting properties to match the next subscriber’s filter condition. A service bus action is executed after the filter condition has been met and is used to set the value of either a system or custom property before the message is consumed from the service bus. It is set in a similar way to the T-SQL “SET” command, where you set a field to a value.

For this scenario I have a sales order message that must be passed to several Logic Apps in a particular sequence. There is a sales order header Logic App, a sales order lines Logic App, a sales order payments Logic App, and finally a sales order completion Logic App. The sales order message will look similar to this below.

image

Provisioning Azure Service Bus

The method involves creating a service bus topic called “salesorder” with the following subscriptions.

servicebus

The filter and action rules are set up as in the following table. There are two custom properties, “MsgType” and “ItineraryLeg”, used by the subscriptions. The MsgType is just used to group the messages, and the ItineraryLeg defines the order of the subscribers.

The action is used to specify the next subscriber to send the message to after the current process, by setting the ItineraryLeg property.

Seq | Subscription Name | Rule Filter | Rule Action
1 | SOHeader | MsgType = 'salesorder' and ItineraryLeg = '' | SET ItineraryLeg = 'solines'
2 | SOLines | MsgType = 'salesorder' and ItineraryLeg = 'solines' | SET ItineraryLeg = 'sopayments'
3 | SOPayments | MsgType = 'salesorder' and ItineraryLeg = 'sopayments' | SET ItineraryLeg = 'socompletion'
4 | SOCompletion | MsgType = 'salesorder' and ItineraryLeg = 'socompletion' | (none)

A sample of the SOHeader rules is shown below:

image
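The same rule can also be expressed in code. Below is a sketch of the SOHeader rule using the ManagementClient from the Microsoft.Azure.ServiceBus package, where the SqlRuleAction stamps the next leg onto the message once the filter matches; the rule name is an arbitrary choice of mine.

using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Management;

public static class ItinerarySubscriptionSetup
{
    public static async Task CreateSoHeaderSubscriptionAsync(string connectionString)
    {
        var managementClient = new ManagementClient(connectionString);
        await managementClient.CreateSubscriptionAsync(
            new SubscriptionDescription("salesorder", "SOHeader"),
            new RuleDescription
            {
                Name = "SOHeaderRule",
                Filter = new SqlFilter("MsgType = 'salesorder' and ItineraryLeg = ''"),
                //after the filter matches, set the next leg before the message is consumed
                Action = new SqlRuleAction("SET ItineraryLeg = 'solines'")
            });
        await managementClient.CloseAsync();
    }
}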

Creating the Logic Apps

Next we will provision the four Logic Apps to create the SO header, process the line items, apply the payments, and complete the sales order.

image

The Logic Apps (SalesOrderHeaderProcessor, SalesOrderLinesProcessor, SalesOrderPaymentsProcessor) are constructed in a similar manner, as shown below; the only difference is the topic subscription name in the service bus trigger. The Delay action is where you would implement your business process logic on the received message.

image

Expanding the service bus trigger shape shows the following properties set. The other Logic Apps will have different topic subscription names set.

image

The real smarts of these workflows is the “Republish message” service bus action shape. You will need to add this shape to every Logic App involved in processing the message. Here we set the ItineraryLeg and MsgType property values to the same values received by the trigger service bus action shape. I am also adding another custom property called “Tracking”, used to record the name of each Logic App the message was routed through, in chronological order; this can be useful for debugging. The content property is just set to the received message from the service bus trigger. Depending on your business requirements, you may be required to publish a totally different message.

image

Below is the code behind the Republish message shape, showing how to set up the properties section.

"body": {
    "ContentData": "@{triggerBody()?['ContentData']}",
    "Properties": {
        "ItineraryLeg": "@triggerBody()?['Properties']['ItineraryLeg']",
        "MsgType": "@triggerBody()?['Properties']['MsgType']",
        "Tracking": "@concat(coalesce(triggerBody()?['Properties']?['Tracking'],'Begin'),',',workflow()?['name'])"
    }
},

The other Logic App, SalesOrderCompletionProcessor, simply pulls the message from the last subscription and sends the Tracking property value to RequestBin.

image

Messages are placed onto the service bus using the MessagePublisher Logic App, which accepts a JSON message and initialises the service bus custom properties “ItineraryLeg” and “MsgType”. Here the ItineraryLeg is set to an empty string and the MsgType is set to “salesorder”. The body of the HTTP request trigger is used as the message content for the Send message action shape.

image

Testing

Using Postman we can POST a message to the MessagePublisher Logic App. If you set the delays long enough in the Logic Apps, you can see the message being published and consumed by the subscribers in the correct order as defined in the service bus actions.

Here is the result of the message being posted to the MessagePublisher Logic App, and the output from the SalesOrderCompletionProcessor Logic App which sends it to RequestBin.

Begin,SalesOrderHeaderProcessor,SalesOrderLinesProcessor,SalesOrderPaymentsProcessor,SalesOrderCompletionProcessor

Now say I wanted to change the order of the message processing, for example processing the payments before the sales order lines. I would simply update the actions on the service bus subscriptions as follows:

Seq | Subscription Name | Rule Filter | Rule Action
1 | SOHeader | MsgType = 'salesorder' and ItineraryLeg = '' | SET ItineraryLeg = 'sopayments'
2 | SOPayments | MsgType = 'salesorder' and ItineraryLeg = 'sopayments' | SET ItineraryLeg = 'solines'
3 | SOLines | MsgType = 'salesorder' and ItineraryLeg = 'solines' | SET ItineraryLeg = 'socompletion'
4 | SOCompletion | MsgType = 'salesorder' and ItineraryLeg = 'socompletion' | (none)

Now the output of the SalesOrderCompletionProcessor looks like this below. You can clearly see the payments logic app being executed after the sales header process.

Begin,SalesOrderHeaderProcessor,SalesOrderPaymentsProcessor,SalesOrderLinesProcessor,SalesOrderCompletionProcessor

In Summary

Use service bus actions to manage the itinerary list to abstract the routing logic away from the normal business process of the logic apps.

Typically you would use Azure Resource Manager templates to set up the service bus subscriptions and rules under TFS source control.

Keep a watch out for my next article on Content Based Routing using Azure Logic Apps and Service Bus.

Enjoy…

Integration Scatter-Gather pattern using Azure Logic Apps and Service Bus

Recently I took on the role of integration architect and tech lead for a project to integrate messaging between MS Dynamics 365 AX/CRM and 3rd-party systems. This was a huge solution that took over 10 months to design and develop, consisting of over 60 Logic Apps.

With a large integration project like this, where multiple entities must be updated across multiple systems in a transactional manner, I required a process to correlate and aggregate all the responses into one composite message before proceeding to the next task in the workflow. The scatter-gather pattern was chosen as a good candidate for this type of scenario.

The next problem was how to implement this pattern using Logic Apps. Using what I had learnt from my previous blog on enforcing ordered delivery of messages (https://connectedcircuits.blog//wp-content/uploads/2017/08/26/enforcing-ordered-delivery-using-azure-logic-apps-and-service-bus/), I was able to design the solution below.

image

It is based on a single Logic App that publishes the message onto a service bus topic and retrieves all the responses from another service bus queue. I also wanted the flexibility to decide which services the message should be scattered to, by passing in a routing list with the request that determines who receives it.

The main component of the solution is the Scatter/Gather Logic App, which sends the request message to the service bus topic after setting the subscription property values and a unique batch Id. In this example I have three sub-processor Logic Apps which each process the message in some form and then write a response message onto the service bus queue. Because sessions have been enabled on this queue, and I am setting the session Id to the same BatchId that was sent with the message, we are able to correlate all the responses from the sub-processors within the same Logic App instance that initially published the message onto the service bus topic.

After the Scatter/Gather Logic App sends the message onto the service bus topic, it goes into a loop until all the response messages from the sub-processor Logic Apps are received. It then returns the aggregated responses.

The whole solution can be broken down into three sections: setting up the Azure Service Bus, developing the Scatter/Gather Logic App workflow, and the sub-processor Logic Apps subscribing to the topic.

Setting up the service bus

Let’s start by setting up the Azure Service Bus. A topic is provisioned to publish the request message, with rules created for each subscriber. In this scenario I have three subscriptions, with default rules filtering on a property called MsgProcessor set to ‘1’, ‘2’ and ‘3’ respectively.

image

Next, a service bus queue is provisioned to receive all the response messages from the subscribers. Sessions must be enabled on this queue to allow a single consumer, in our case the Scatter/Gather Logic App, to process all messages with the same session Id.

image

Developing the Scatter/Gather Logic App

The next part to develop is the scatter-gather Logic App which is the real smarts of the solution. Below is the full workflow with all the actions collapsed. We will go through and expand each of the shapes.

image

This workflow is triggered by an HTTP request, but it could also be triggered by other connectors.

image

Next is a series of variables required by the workflow listed below:

  • BatchId – used to group all the messages together and used as the session Id; initialised with a random GUID.
  • CompositeMsg – holds the aggregated response messages from the sub-processor Logic Apps.
  • IntermediateMsg – used in the loop to temporarily store the aggregated message.
  • ScatterCount – the number of sub-processors the request body was published to.
  • ResponseCount – holds the current number of responses received from the sub-processor Logic Apps.

image

Once the variables have been set up, a parallel branch is used to scatter the HTTP request body contents to the subscribing systems. For this demo I have the option to send the request body content to three different sub-processes.

image

To determine which sub-processes to send the request body to, I pass a list of the process names as one of the HTTP request header properties, as shown here:

"X-ScatterList": "'Process1','Process2','Process3'"

The filters on the parallel conditions are set up as below, where each checks if the list in the HTTP header contains its process name.

image
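As a sketch, the condition expression for the Process1 branch might look like the following, assuming the header is read via triggerOutputs(); the other two branches check their own names the same way.

@contains(triggerOutputs()['headers']['X-ScatterList'], 'Process1')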

Expanding one of the condition tasks shows an action to send the request onto the service bus topic and to increment the ScatterCount variable by one.

image

Below is the “Send message processor 1” service bus connector expanded to show the properties. The two important pieces of information are the custom properties BatchId and MsgProcessor. The BatchId will be used to set the session Id later, and the MsgProcessor is used by the subscription filter on the service bus topic.

image

The next step is to increment the ScatterCount variable. This is how we keep track of the number of sub-processors the message was scattered to.

image

Once the request body content has been published to the service bus Topic, we cycle in a loop until we have received all the response messages from the service bus queue or the Until loop times out.

image

The first step in the loop is to get one new message at a time from the queue, using the BatchId variable as the session Id. This ensures we only get messages off the queue matching this Id.

image

Then we check if a message was found, using the Condition action with this filter: @equals(length(body('Get_messages_from_a_queue_(peek-lock)')), 1)

image

If a message was found on the service bus queue, the “If true” branch is executed; if no message was found, the “If false” branch is executed, which adds a delay of a few seconds before iterating again.

Below are the actions inside of the “If true” branch.

image

First we check if this is the first message received off the queue, using the following filter: @equals(variables('ResponseCount'), 0)

image

If it is, then we just set the variable “CompositeMsg” to the service bus queue content data.

image

As we are using the service bus connector that is capable of returning multiple messages, we need to use an indexer to get to the first message. Below is the code view of the above action. Note that when the sub-processor Logic App puts a message onto the service bus, it is base64 encoded, so we need to decode it back to a string value.

image

If this was not the first message received off the service bus, we need to append it to the other messages received. This is done with the following actions in the “If false” branch shown below.

image

First we copy the contents of the CompositeMsg variable into the “IntermediateMsg” variable. Then we concatenate the message received off the service bus with the contents of the “IntermediateMsg” variable and assign the result to the “CompositeMsg” variable. The syntax of the value for the “Append intermediate to composite” action is shown below:

"value": "@{concat(variables('IntermediateMsg'),',',base64ToString(body('Get_messages_from_a_queue_(peek-lock)')?[0]['ContentData']))}"

After the message from the queue has been added or appended to the CompositeMsg variable, it is completed to remove it from the queue and the ResponseCount variable is incremented.

image

Once all the response messages have been received from the sub-processor Logic Apps, or the Until loop times out, we need to close the service bus queue session using the BatchId and return the composite response message.

image

Here I am just sending the composite response message to a RequestBin endpoint. Ideally you would place this onto another service bus, perhaps together with the initial HTTP request body, to tie everything together.

Sub-Processor Logic Apps

The last part of this solution is the sub-processor Logic Apps, which subscribe to the service bus topic and process the message in some form before returning a response message onto the service bus queue. Using a separate Logic App to manage the post-processing of the initial request message provides scalability and separation from the messaging orchestration components.

image

The properties of the trigger’s service bus connector are shown below; this is a typical setup.

image

Next, the Delay task is there just to simulate processing of the message received from the topic. This is where you would normally call another API endpoint or process the message in some other fashion. Remember the maximum lock duration is 5 minutes; if your process will take longer than this, you will need to renew the lock.

image

After the message has been processed, you will need to compose a response message to put back onto the service bus queue. The schema of the response message should be generic across all the sub-processor Logic Apps to make it easier to parse later on. For the demo, a relatively simple schema is used consisting of the received BatchId and the sub-processor Logic App name.

image
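Based on the aggregated output shown in the testing section below, the composed response can be as simple as this (the BatchId value is illustrative):

{
  "BatchId": "5e1df244-2153-4529-aaba-12a82aa788fd",
  "ResponseMsg": "Process1"
}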

Once the response message has been composed, it is placed onto the service bus queue, ensuring the session Id is set to the BatchId that was sent with the message. Remember this queue has been provisioned with “Sessions Enabled”.

image

The code view for the connector looks like this:

image

The last task of the workflow is to complete the message from the topic subscription, as shown here:

image

Testing the solution

We can use Postman to send the following request to the Scatter/Gather Logic App. I am setting the scatter list header property to publish the message to all three sub-processor Logic Apps.

POST /workflows/…/triggers/manual/paths/invoke?api-version=2016-10-01&sp=%2Ftriggers%2Fmanual HTTP/1.1
Host: prod-11.australiasoutheast.logic.azure.com:443
Content-Type: application/json
X-ScatterList: 'Process1','Process2','Process3'
Cache-Control: no-cache
Postman-Token: 644c5cdb-fd60-4903-b3ec-2d3d6febfe7e

{
    "OrderId": "12",
    "Message": "Hello1"
}

Using RequestBin we can see the aggregated response messages from all 3 sub-processors.

[{"BatchId":"5e1df244-2153-4529-aaba-12a82aa788fd","ResponseMsg":"Process1"},{"BatchId":"5e1df244-2153-4529-aaba-12a82aa788fd","ResponseMsg":"Process2"},{"BatchId":"5e1df244-2153-4529-aaba-12a82aa788fd","ResponseMsg":"Process3"}]

What’s next

Further enhancements could include making the timeout of the Until loop configurable, as some responses may take hours or days to come back. When waiting for very long periods, you may also need to extend the delay period in the Until loop to avoid reaching the 5,000 iteration limit.
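For reference, the Until loop bounds live in the workflow code view under its limit element; below is a sketch with illustrative values.

"limit": {
    "count": 5000,
    "timeout": "PT24H"
}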

Keep a watch for my next blog, where I will show you how to implement the itinerary-based routing integration pattern using Azure Logic Apps and Service Bus.

Enjoy…

Enforcing Ordered Delivery using Azure Logic Apps and Service Bus

When consuming messages from an Azure service bus, the order may not be guaranteed due to the brokered messaging scheme, where multiple consumers can consume messages from the bus. Sure, you can force the Logic App to execute as a single instance, but then you sacrifice performance and scalability. You can also use ReceiveAndDelete, but then you lose the transactional nature of the bus. Ultimately, to ensure messages are consumed in the correct order while keeping the transactional nature of the bus, you would add a sequence number to each message and use this to enforce the ordering.

To achieve ordered delivery using Logic Apps, you need to ensure all related messages are consumed by the same Logic App instance, and for this we use the session Id property on the service bus. Below is the full workflow process to enforce ordered delivery using Logic Apps and session Ids on the service bus subscription.

image

This scenario is based on a financial institution which requires all monetary transfers to be processed in an ordered fashion. The key is choosing a suitable session identifier, and with this in mind the account number was the most suitable candidate, as we want a single consumer to process all the transactions for a particular account number.

Here we have created a subscription for a topic called AccountTransfers. Note that “Enable sessions” is checked.

image

Once the service bus has been configured, we can now dissect the workflow to see how we can achieve ordered delivery.

The workflow is initiated by a polling service bus connector. The properties of this connector are shown below. The key point here is to set the Session id to “Next Available”. This forces the Logic App to create a new instance for each unique session Id value found on the service bus.

image

The next action, “ProcessSBMessage”, calls another Logic App which does the processing of the message found on the bus. Here I am just passing the raw base64-encoded message from the service bus trigger action. Using the “separation of concerns” pattern moves the business logic away from the process of ensuring ordered delivery.

image

Once the message has been sent to the chained Logic App and a response has been returned, we can complete the message on the bus with the following action.

image

Next, we go into a loop until the exit condition has been satisfied. I am using a counter that is incremented if no messages are found on the service bus. If no more messages are found on the service bus after 30 seconds, the loop exits.

image

Inside the loop, we start with another service bus action which gets the messages from the topic subscription. Here we only want to retrieve one message at a time from the service bus, using a peek-lock action and the session Id from the initial service bus trigger “When a message is received in a topic subscription”. We then check if a message was found in the output body using the expression @not(equals(length(body('Get_messages_from_a_topic_subscription_(peek-lock)')), 0))

image

If a message is found, the “If true” branch is executed, which again calls the same Logic App as before to process the message. Note the indexer used to get to the content data, as the service bus action above returns a collection.

image

Once a successful response is received from the ProcessSBMessage Logic App, the message is completed and the LoopCounter variable is reset to zero. Note the lock token comes from the service bus action within the loop, while the session Id comes from the initial service bus trigger which started the workflow.

image

Below is the code view for setting the lockToken and SessionId of the “Complete the message” action inside the loop.  Take note of the indexer “[0]” before the LockToken element.

image

If no messages are found on the service bus, the “If false” branch is executed. This simply has a delay action, so as not to poll too quickly, and increments the LoopCounter.

image

The last step is to close the session when the Until loop exits, using the session Id from the initial service bus trigger which started the workflow.

image

Now you are ready to send messages to the service bus. You should see a Logic App instance spin up for each unique session Id. Remember to set the session Id property on the message to some value before sending it.

Enjoy…

Testing an Azure Relay Service Bus using Postman

One of my favourite tools for testing RESTful services is Postman. In this blog I will describe how to use this tool for testing an Azure Relay service bus. Remember, if you are hosting the service in IIS you will need to ensure the service is already warm-started. Here is a blog by Jeroen Maes describing how to do this.

First we need to generate the authorization token using the SAS keys, as described in this Microsoft article: Shared Access Signature Authentication with Service Bus.

Below is some sample code to create a SAS token to be passed in the request header.

Code Snippet
public static string GenerateToken(string resourceUri, string sasKeyName, string sasKey)
{
    //set the token lifespan to 1 hour
    TimeSpan sinceEpoch = DateTime.UtcNow - new DateTime(1970, 1, 1);
    var expiry = Convert.ToString((int)sinceEpoch.TotalSeconds + 3600);
    string stringToSign = System.Web.HttpUtility.UrlEncode(resourceUri) + "\n" + expiry;

    HMACSHA256 hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sasKey));
    var signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

    //format the sas token
    var sasToken = String.Format(CultureInfo.InvariantCulture, "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
        HttpUtility.UrlEncode(resourceUri), HttpUtility.UrlEncode(signature), expiry, sasKeyName);

    return sasToken;
}

The GenerateToken() function takes three parameters:
  • resourceUri – the URL of the web service endpoint in Azure, e.g. https://mywebservice.servicebus.windows.net/RequestAvailability
  • sasKeyName – the name given to the authorization rule in Azure Service Bus.
  • sasKey – the generated SAS key from Azure Service Bus.

Calling the GenerateToken() function will return a string value similar to this: SharedAccessSignature sr=https%3a%2f%2fmywebservice.servicebus.windows.net%2fRequestAvailability&sig=UUEIhOkWX3FzrtBsRia9WFeYxbhMQ9FdppmFMjuJv7U%3d&se=1443495583&skn=MyKeyName
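A quick usage sketch (the endpoint, key name and key are placeholder values):

//generate a token for the relay endpoint
var token = GenerateToken(
    "https://mywebservice.servicebus.windows.net/RequestAvailability", //resourceUri
    "MyKeyName",                                                       //sasKeyName
    "<your SAS key>");                                                 //sasKey
//in Postman, add a header named Authorization with this token as its value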

Now we can set up the headers for the request in Postman. Add a header called Authorization and set its value to the string returned from GenerateToken(). Now you are ready to send your request to the relay service bus.

image

Enjoy.