Robust Cloud Integration with Azure

For the last year I have been busy co-authoring this book on Azure cloud integration with my fellow co-authors Abhishek Kumar, Martin Abbott, Gyanendra Kumar Gautam, James Corbould and Ashish Bhambhani.


It is available on the Packt website here: https://www.packtpub.com/virtualization-and-cloud/robust-cloud-integration-azure

This book will teach you how to design and implement cloud integration using Microsoft Azure. It starts by showing you how to build, deploy, and secure the API app. Next, it introduces you to Logic Apps and helps you quickly start building your integration applications. We’ll then go through the different connectors available for Logic Apps to build your automated business process workflow. It’s packed with information, spanning just under 700 pages.

Don’t forget to check out another publication I co-authored back in 2015 with Mark Brimble, Johann Cooper and Colin Dijkgraaf called SOA Patterns with BizTalk Server 2013 and Microsoft Azure.

SOA Patterns with BizTalk Server 2013 and Microsoft Azure - Second Edition Book Cover

And it is still available from the Packt website here: https://www.packtpub.com/networking-and-servers/soa-patterns-biztalk-server-2013-second-edition

Hope you enjoy reading it, just as I enjoyed writing the content.

Using Azure Application Insights with BizTalk. Part 1

One of the great features of Application Insights is the capability to monitor IIS websites and web services. It uses an agent installed on the IIS server to send telemetry data to Application Insights in Azure.

In this blog I will set up monitoring of a BizTalk schema published as a WCF service, hosted on the server running BizTalk.

First you will need a Microsoft Azure subscription. Log on to Azure and create a new Application Insights service as shown below. Set the Application Type to ASP.NET web application. This will install the default IIS template on the dashboard for monitoring websites and web services. You also have the option to add monitoring for the other available metrics or to delete the existing ones added by the template.

image

Note the location of the data store can only be set to Central US at this time. When the blade is created, the default template will look very similar to the one below.

image

Once the service has been created, you will need the generated Instrumentation Key, which is available from the Properties blade.

image

Next, download the Status Monitor from https://azure.microsoft.com/en-us/documentation/articles/app-insights-monitor-performance-live-website-now/ and install it on your BizTalk server that has IIS installed. Run Microsoft.Diagnostics.Agent.StatusMonitor.exe from the default install location, C:\Program Files\Microsoft Application Insights\Status Monitor.

When it comes to selecting the published WCF service and the telemetry subscription, you will get the error shown below.

image

This is because the monitor application cannot read the web.config file in the web service directory created by BizTalk. The namespace added to the Configuration element, highlighted below, must be temporarily removed and the monitor application executed again so it can read the web.config file correctly.

image
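For reference, the generated web.config carries a namespace on the root configuration element. From memory it looks something like the snippet below; verify the exact namespace against your own file before removing it:

<?xml version="1.0" encoding="utf-8"?>
<!-- The xmlns attribute below is what the Status Monitor cannot handle;
     remove it temporarily, run the monitor, then add it back. -->
<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
  ...
</configuration>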

After the Monitor application has been executed with the namespace removed, it will successfully update the web.config file and add the ApplicationInsights.config to the web service folder. Remember to add the namespace back into the web.config file.

image

If you open the ApplicationInsights.config file you will see the instrumentation key already inserted under the element called InstrumentationKey.
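For reference, the relevant section of ApplicationInsights.config looks something like this (the key below is a placeholder for the one generated for your resource):

<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <!-- The instrumentation key from the portal's Properties blade -->
  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
</ApplicationInsights>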

Now when you start sending requests to the WCF web service endpoint, you will start seeing metrics being captured under the Performance investigation option as shown below.

image

You also have the capability to create alerts and to set up monitoring of other metrics if you so desire.

Enjoy.

Error Code: 0x800706D9, There are no more endpoints available from the endpoint mapper.

On one of our Windows clustered BizTalk 2013 installations, we got this error when failing over to a second node. The nodes had previously failed over without any issues, except for this one occasion. It seems the Master Secret service was not able to start for some reason.

With the help of this blog post http://blogs.msdn.com/b/biztalkcpr/archive/2008/06/23/basic-troubleshooting-for-enterprise-single-sign-on-sso.aspx I was able to diagnose the issue with SSO after executing the following SSOCONFIG commands (an example session is shown after the list):

  • -showDB : show the SQL Server and SSO database names
  • -discover : discover SSO servers
  • -status : display SSO server status
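For example, the commands can be run from the SSO installation directory on each node (the default path shown is from memory, so adjust as needed):

cd "C:\Program Files\Common Files\Enterprise Single Sign-On"
ssoconfig -showDB
ssoconfig -discover
ssoconfig -status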

It seems the SSO service on the second node was not part of the clustered resource any more. As a long shot, I decided to restart the MSDTC cluster resource first and then the Enterprise Single Sign-On service. And it worked: SSO on the second node was now clustered again.

After some more digging around, I found the article “The BizTalk, master ENTSSO, and MSDTC resources may require a long time to be brought online or are not brought online in BizTalk Server 2006 R2 if the BizTalk group moves to another node”. The article can be found here: https://support.microsoft.com/en-us/kb/2400571

Moving forward, we made the SSO cluster resource dependent on the MSDTC cluster resource, and the BizTalk services dependent on the SSO resource, as sketched below.
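As a sketch, this dependency chain can be scripted with the FailoverClusters PowerShell module; the resource names below are examples only, so check Get-ClusterResource for the actual names in your cluster:

Import-Module FailoverClusters

# Make SSO depend on MSDTC, so MSDTC is brought online first
Add-ClusterResourceDependency -Resource "Enterprise Single Sign-On Service" -Provider "MSDTC"

# Make the BizTalk host service depend on SSO
Add-ClusterResourceDependency -Resource "BizTalk Service BizTalkServerApplication" -Provider "Enterprise Single Sign-On Service"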

Using XSLT to map from many-to-one

This blog is just a quick lesson on how to use XSLT to append many rows into one output node, using a map with an Inline XSLT scripting functoid.

The source message looked something like the one below, where the Lines nodes are the repeated rows that need to be appended into one string value.

<SitaBDS xmlns="http://ECL.Online.BT.JobBooking.SITAAirNZBDS_FF">
  <BagageClaim xmlns="">
    <DS>
      <Lines>
        <Field>This is the first line.</Field>
      </Lines>
      <Lines>
        <Field>This is the second line.</Field>
      </Lines>
      <Lines>
        <Field>This is the third line.</Field>
      </Lines>
    </DS>
  </BagageClaim>
</SitaBDS>

The final output should look like this:

<ns0:Root xmlns:ns0="http://FFSchema.Schema1"><Field>This is the first line. This is the second line. This is the third line. </Field></ns0:Root>

Below are the steps I normally follow to work out the syntax of the XSL to be placed inside an Inline XSLT scripting functoid.

1. Create a BizTalk map for the source and destination schemas and add a link between the root nodes as shown below.

image

I initially use this technique of linking the root elements together to obtain the correct namespaces and some template XSL code to start from.

2. Validate the map (right-click the map file and select Validate Map) to get Visual Studio to generate the XSLT file. The Output window will contain hyperlinks to the generated files, as shown.

image

3. Clicking on the link to the XSLT output file will open it in Visual Studio. Right-click on the page, select View Source, and take a copy of the code.

image

4. In Visual Studio, add a new XSLT file to your project and replace the sample XSL with the generated XSL from the step above. It should look something like the code below. Pasting the generated XSL code provides the required namespaces and prefixes to use.

<?xml version="1.0" encoding="utf-16"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt"
                xmlns:var="http://schemas.microsoft.com/BizTalk/2003/var"
                exclude-result-prefixes="msxsl var s0" version="1.0"
                xmlns:ns0="http://FFSchema.Schema1"
                xmlns:s0="http://FFSchema.FlatFileSchema1">
  <xsl:output omit-xml-declaration="yes" method="xml" version="1.0" />
  <xsl:template match="/">
    <xsl:apply-templates select="/s0:SitaBDS" />
  </xsl:template>
  <xsl:template match="/s0:SitaBDS">
    <ns0:Root>
      <xsl:value-of select="./text()" />
    </ns0:Root>
  </xsl:template>
</xsl:stylesheet>

5. Now we need to modify the XSL between the <ns0:Root> tags to concatenate the field values from the source message. This is the point where I start to construct the XSL syntax. To append the field values, the following XSL was added between the root elements.

<?xml version="1.0" encoding="utf-16"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt"
                xmlns:var="http://schemas.microsoft.com/BizTalk/2003/var" exclude-result-prefixes="msxsl var s0" version="1.0"
                xmlns:ns0="http://FFSchema.Schema1"
                xmlns:s0="http://ECL.Online.BT.JobBooking.SITAAirNZBDS_FF">
  <xsl:output omit-xml-declaration="yes" method="xml" version="1.0" />
  <xsl:template match="/">
    <xsl:apply-templates select="/s0:SitaBDS" />
  </xsl:template>
  <xsl:template match="/s0:SitaBDS">
    <ns0:Root>

      <!-- variable to hold the appended values -->
      <xsl:variable name="AppendedValue">
        <!-- iterate through each line -->
        <xsl:for-each select="BagageClaim/DS/Lines/Field">
          <xsl:value-of select="./text()" />
          <!-- add a space -->
          <xsl:value-of select="' '" />
        </xsl:for-each>
      </xsl:variable>

      <!-- output the concatenated value -->
      <xsl:element name="Field">
        <xsl:value-of select="$AppendedValue"/>
      </xsl:element>
    </ns0:Root>
  </xsl:template>
</xsl:stylesheet>

6. You can test the XSL syntax by pointing the XSLT file's properties at a sample XML file. Right-click anywhere in the XSLT file and select Properties from the context menu, then set the Input property to a valid sample XML source file. Once you have done that, press Ctrl + Alt + F5 to start the transformation without debugging.

This should produce the following output.

<ns0:Root xmlns:ns0="http://FFSchema.Schema1"><Field>This is the first line. This is the second line. This is the third line. </Field></ns0:Root>

7. Once we have tested the XSL code, we can add it to our map using the Inline XSLT scripting functoid, as shown below.

image
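Since the functoid contents are only visible in the screenshot, here is a sketch of what I would paste into it: essentially the section we tested between the <ns0:Root> tags. (Exactly which destination elements the functoid must create depends on the node it is linked to in the map; this assumes it is linked to a node where emitting the Field element is enough.)

<xsl:variable name="AppendedValue">
  <xsl:for-each select="BagageClaim/DS/Lines/Field">
    <xsl:value-of select="./text()" />
    <xsl:value-of select="' '" />
  </xsl:for-each>
</xsl:variable>
<xsl:element name="Field">
  <xsl:value-of select="$AppendedValue"/>
</xsl:element>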

Remember, when using XSL to populate an element, you are responsible for creating the destination node and all of its child nodes in your XSL. The final map looks like the one below:

image

Now testing the map with a real sample source message produces the following output.

image

Enjoy.

Concatenating XML elements in BRE without using .Net classes

Here is a solution I use to concatenate several XML elements in the BRE without having to use static .NET classes. I used this method to avoid having to assert an instance of a .NET class. More information about asserting .NET classes can be found here: http://msdn.microsoft.com/en-us/library/aa950269.aspx

This method simply embeds the XPath concat() string function in the XPath query. For example, to concatenate two elements, the following expression would be used: concat(xpath to element 1, xpath to element 2).

In the sample message below, I am required to append the CorrelationID and JobNo values together and return the concatenated value as a vocabulary definition in the BRE.

image

 

The XPath statement to concatenate the two elements is shown below:

concat(/*[local-name()='FaultEnvelope']/*[local-name()='CustomException']/*[local-name()='CorrelationID'],/*[local-name()='FaultEnvelope']/*[local-name()='CustomException']/*[local-name()='JobNo'])

This produces the following output. However, the only issue now is that I require a space between the two values.

image

A space can be added by appending another concat statement in the XPath, as shown.

image
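Since the final expression is only shown in the screenshot, here is roughly what it looks like; it is wrapped over several lines for readability, but it is entered as a single line in the XPath field:

concat(
  /*[local-name()='FaultEnvelope']/*[local-name()='CustomException']/*[local-name()='CorrelationID'],
  concat(' ',
    /*[local-name()='FaultEnvelope']/*[local-name()='CustomException']/*[local-name()='JobNo']))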

Now that the XPath statement is correctly structured, it can be added to the XPath field in the Vocabulary Definition Wizard. Note the XPath Selector field must be set to the root path "/".

clip_image002

In the past I have also used other XPath functions inside the XPath field with great success.

Enjoy.

Using User Defined Table parameters of the same name in a stored procedure

When using a stored procedure that takes UDT (user-defined table) parameters, you may get the error "An item with the same key has already been added" from the Consume Adapter Service wizard, as shown below.

image

This error occurred because I was using the same UDT ([NameTVP]) for two of the parameters in the stored procedure, as shown below for the @Region and @Town parameters.

CREATE PROCEDURE dbo.bts_LoadAddress
    -- Add the parameters for the stored procedure here
    @Region [NameTVP] READONLY,
    @Town [NameTVP] READONLY

It appears the adapter wizard cannot handle several parameters that use the same user-defined table type. The simple resolution is to use distinct type names for the UDTs to avoid this error, as sketched below.
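As a sketch of the fix, each parameter gets its own identically structured table type (the type names and the [Name] column here are hypothetical):

-- Two identically structured table types with distinct names
CREATE TYPE [RegionNameTVP] AS TABLE ([Name] nvarchar(100));
CREATE TYPE [TownNameTVP] AS TABLE ([Name] nvarchar(100));
GO

CREATE PROCEDURE dbo.bts_LoadAddress
    @Region [RegionNameTVP] READONLY,
    @Town [TownNameTVP] READONLY
AS
BEGIN
    SET NOCOUNT ON;
    SELECT [Name] FROM @Region;
    SELECT [Name] FROM @Town;
END

With distinct type names, the Consume Adapter Service wizard generates the metadata without the duplicate key error.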

Sending JSON messages from a BizTalk 2013 adaptor

I had a requirement where I needed to send a message to a web API that only accepted messages as JSON.

BizTalk 2013 provides a WCF-WebHttp adaptor for sending XML messages to a REST endpoint, but not in JSON format. There are rumours this will be remedied in BizTalk 2013 R2; unfortunately, I required a solution now.

So in the meantime I will use the Json.NET component to convert the XML message to JSON. I also found this blog post, http://blog.quicklearn.com/2013/09/06/biztalk-server-2013-support-for-restful-services-part-45/, which converts XML messages to JSON using Json.NET in a custom pipeline component; I ended up using it with some modifications.

When using this pipeline remember to set the Outbound HTTP Headers as shown below:

image
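The important setting in that screenshot is the Content-Type header. On the WCF-WebHttp send port, the Outbound HTTP Headers property should contain something like:

Content-Type: application/json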

 

This all worked a treat until, during the unit testing phase, I created a sample XML document with only one repeating element. This caused the following error from the Web API.

Can not deserialize instance of java.util.ArrayList out of VALUE_STRING token at [Source: java.io.StringReader@6b270743; line: 1, column: 152] (through reference chain: model.ExtContact["names"])

As it turned out, the Web API required all unbounded elements to be sent as an array, which should have looked like this: "names": ["Mahindra Morar"]. Instead it was being sent as "names": "Mahindra Morar".

After reading the documentation on the Json.NET component, I found there is an option to always force a JSON array for an element. This is done by adding the attribute json:Array="true" to the unbounded element.

Here is a code snippet from the documentation that shows how it is done.

image
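From memory, the documentation example looks roughly like the snippet below; the json:Array="true" attribute tells the serializer to emit the element as a JSON array even when only one instance is present:

<person xmlns:json="http://james.newtonking.com/projects/json" id="1">
  <name>Alan</name>
  <url>http://www.google.com</url>
  <role json:Array="true">Admin</role>
</person>

This serializes role as "role": ["Admin"] rather than "role": "Admin".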

 

Now I thought I could just import a schema with the element and attribute defined, which produced the following instance output.

<ns0:Root xmlns:ns0="http://BizTalk_Server_Project1.People">
  <ns1:Name Array="true" xmlns:ns1="http://james.newtonking.com/projects/json">Tom Lee</ns1:Name>
</ns0:Root>

However, the Json.NET component did not like having the namespace declared inline on the unbounded element when it was generated by BizTalk. The workarounds were to add the attribute to the elements in the custom pipeline component, use the XmlTranslatorStream to remove the inline namespace declarations, or debug the Json.NET source code. I ended up using the first option, as I already had the XML document loaded into an XmlDocument. To keep the solution a bit more flexible, I exposed a property on the pipeline component to accept a comma-separated list of XPath expressions for the elements that need to be output as JSON arrays.

The Execute method in the code from http://blog.quicklearn.com/2013/09/06/biztalk-server-2013-support-for-restful-services-part-45/ was modified as shown below.

        public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
        {

            #region Handle CORS Requests
            // Detect if the incoming message is an HTTP CORS request
            // http://www.w3.org/TR/cors/

            object httpMethod = null;
            httpMethod = inmsg.Context.Read(HTTP_METHOD_PROPNAME, WCF_PROPERTIES_NS);

            if (httpMethod != null && (httpMethod as string) == OPTIONS_METHOD)
            {
                // Remove the message body before returning
                var emptyOutputStream = new VirtualStream();
                inmsg.BodyPart.Data = emptyOutputStream;

                return inmsg;
            }
            #endregion


            // Make message seekable
            if (!inmsg.BodyPart.Data.CanSeek)
            {
                var originalStream = inmsg.BodyPart.Data;
                Stream seekableStream = new ReadOnlySeekableStream(originalStream);
                inmsg.BodyPart.Data = seekableStream;
                pc.ResourceTracker.AddResource(originalStream);
            }

            // Here again we are loading the entire document into memory
            // this is still a bad plan, and shouldn't be done in production
            // if you expect larger message sizes

            XmlDocument xmlDoc = new XmlDocument();
            xmlDoc.Load(inmsg.BodyPart.Data);
            
            //Get the list of XPath expressions for the elements that must be output as JSON arrays from the exposed pipeline property.
            string[] elementList = arrayXpathElementList.Split(',');

            //Add the namespace declaration required for the json:Array attribute.
            xmlDoc.DocumentElement.SetAttribute("xmlns:json", "http://james.newtonking.com/projects/json");

            if (xmlDoc.FirstChild.LocalName == "xml")
                xmlDoc.RemoveChild(xmlDoc.FirstChild);

            
            for (int indexElement = 0; indexElement < elementList.Length; indexElement++)
            {
                XmlNodeList dataNodes = xmlDoc.SelectNodes(elementList[indexElement]);
                foreach (XmlNode node in dataNodes)
                {
                    //Add the attribute required on the element to force the creation of a JSON array.
                    string ns = node.GetNamespaceOfPrefix("json");
                    XmlNode attr = xmlDoc.CreateNode(XmlNodeType.Attribute, "Array", ns);
                    attr.Value = "true";
                    node.Attributes.SetNamedItem(attr);
                }
            }
            // Remove any root-level attributes added in the process of creating the XML
            // (Think xmlns attributes that have no meaning in JSON)
            

            string jsonString =  JsonConvert.SerializeXmlNode(xmlDoc, Newtonsoft.Json.Formatting.Indented, true);

            #region Handle JSONP Request
            // Here we are detecting if there has been any value promoted to the jsonp callback property
            // which will contain the name of the function that should be passed the JSON data returned
            // by the service.

            object jsonpCallback = inmsg.Context.Read(JSONP_CALLBACK_PROPNAME, JSON_SCHEMAS_NS);
            string jsonpCallbackName = (jsonpCallback ?? (object)string.Empty) as string;

            if (!string.IsNullOrWhiteSpace(jsonpCallbackName))
                jsonString = string.Format("{0}({1});", jsonpCallbackName, jsonString);
            #endregion

            var outputStream = new VirtualStream(new MemoryStream(Encoding.UTF8.GetBytes(jsonString)));
            inmsg.BodyPart.Data = outputStream;

            return inmsg;
        }

        #endregion        
    }

After making the changes above, the component is now able to convert an unbounded element containing only one value to a JSON array correctly.

Enjoy.

Handling SQL Errors in Try-Catch blocks using the WCF SQL Adaptor

At times I am required to raise business-rule type errors or T-SQL errors back to the BizTalk application from a stored procedure. This blog is about issues I encountered when using a typed stored procedure schema.

The basic structure of the stored procedure is similar to the one below. The SQL statements are normally wrapped inside a TRY...CATCH block, with any business errors raised by the RAISERROR statement.

CREATE PROCEDURE [dbo].[MySproc]
    @RequestID int
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    BEGIN TRY
        SELECT @RequestID

        --RAISERROR ('This is my raised error', 16, 1)
        --SELECT 1/0

        RETURN 0
    END TRY
    BEGIN CATCH
        -- error handler
        SELECT ERROR_NUMBER() AS ExceptionErrorNumber
              ,ERROR_MESSAGE() AS ExceptionErrorMessage;

        RETURN 1
    END CATCH;
END

Below is the typed schema generated by the Consume Adapter Service wizard. The first element, StoredProcedureResultSet0, is the returned dataset, and the second element, StoredProcedureResultSet1, is the dataset representing the error structure returned by the CATCH block.

image

 

When dropping a test message in, the response message looks like the one below:

image
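As a rough sketch of its shape (the element names come from the generated schema, and the exact layout may differ in your solution):

<MySprocResponse xmlns="http://schemas.microsoft.com/Sql/2008/05/TypedProcedures/dbo">
  <StoredProcedureResultSet0>
    <!-- rows returned by "select @RequestID" -->
  </StoredProcedureResultSet0>
  <StoredProcedureResultSet1>
    <!-- ExceptionErrorNumber / ExceptionErrorMessage rows from the CATCH block -->
  </StoredProcedureResultSet1>
  <ReturnValue>0</ReturnValue>
</MySprocResponse>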

As expected, StoredProcedureResultSet0 contains the returned recordset. Now I will modify the stored procedure to raise an error, as highlighted below.

ALTER PROCEDURE [dbo].[MySproc]
    @RequestID int
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    BEGIN TRY
        SELECT @RequestID
        RAISERROR ('This is my raised error', 16, 1)
        --SELECT 1/0
        RETURN 0
    END TRY
    BEGIN CATCH
        -- error handler
        SELECT ERROR_NUMBER() AS ExceptionErrorNumber
              ,ERROR_MESSAGE() AS ExceptionErrorMessage;

        RETURN 1
    END CATCH;
END

The following message will be received:

image

Notice StoredProcedureResultSet0 has the recordset value returned from the select statement, and StoredProcedureResultSet1 has the error message that was thrown from the RAISERROR statement. This is what you would expect.

Now look what happens when you cause a SQL error. I generated an error by dividing one by zero, as highlighted below, which is caught by the CATCH block.

ALTER PROCEDURE [dbo].[MySproc]
    @RequestID int
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    BEGIN TRY
        SELECT @RequestID
        --RAISERROR ('This is my raised error', 16, 1)
        SELECT 1/0
        RETURN 0
    END TRY
    BEGIN CATCH
        -- error handler
        SELECT ERROR_NUMBER() AS ExceptionErrorNumber
              ,ERROR_MESSAGE() AS ExceptionErrorMessage;

        RETURN 1
    END CATCH;
END

The response message gets suspended with the following error:

The adapter failed to transmit message going to send port "WcfSendPort_SqlAdapterBinding_TypedProcedures_dbo_Custom" with URL "mssql://.//Prototype?". It will be retransmitted after the retry interval specified for this Send Port. Details:"Microsoft.ServiceModel.Channels.Common.AdapterException: The ResultSet returned as part of the Typed Stored Procedure or Typed Polling invocation did not match the metadata available. If this Stored Procedure or Polling Statement can return a variable number of result sets, consider using the un-typed Stored Procedure or un-typed Polling operation instead

The reason for the suspended message is that when a SQL error is raised, the error recordset is placed in another StoredProcedureResultSet element whose index starts at 2, as shown below:

image

From this point on, any further requests that generate SQL errors will be placed in StoredProcedureResultSet2 as shown, until the send port is unenlisted and restarted.

image

image

 

To solve this issue there are two options available. The first option is described below.

1. Modify the generated WCF-SQL schema to include an additional StoredProcedureResultSet2 element, as shown, to capture the SQL errors.

image

 

2. Create a map with an inline XSLT functoid to map any one of the error elements to the destination schema.

image

The XSLT script is shown here; it uses wildcards in the path leading up to the exception error elements.

<Error>
  <Number><xsl:value-of select="/*/*/*/*[local-name()='ExceptionErrorNumber' and namespace-uri()='http://schemas.microsoft.com/Sql/2008/05/ProceduresResultSets/dbo/CatchExceptions']"/></Number>
  <Message><xsl:value-of select="/*/*/*/*[local-name()='ExceptionErrorMessage' and namespace-uri()='http://schemas.microsoft.com/Sql/2008/05/ProceduresResultSets/dbo/CatchExceptions']"/></Message>
</Error>

 

The second option is to develop a custom pipeline component using the XmlTranslatorStream to rename any result-set elements with an index greater than 1. I will continue this in part 2 of this blog.

Enjoy.

Referenced nodes not showing in map with 2 input schemas

I had a scenario today where some source elements defined by reference were not showing in a map with two input schemas. Creating maps with just a single input schema seemed fine.

The highlighted element below did not show the child elements for some reason.

image

Yet the source schema looks like this.

image

And when using the schema in a single input map it was fine as shown below. The child elements were shown as expected.

image

After some investigative work I found the issue: the schema location paths in the imports were specified as just the file name, as shown here: <xsd:import schemaLocation="base_cust_3_5.xsd" …

Changing the paths to an explicit relative path did the trick: <xsd:import schemaLocation=".\base_cust_3_5.xsd" …

Enjoy.