Windows App Certification Kit – Error: Another user has installed an unpackaged version

After testing my new Windows Store app in debug mode, I prepared to test the application using the Windows App Certification Kit.

After the code compiled successfully and the certification kit launched automatically, I received the following error:

image

This occurred because I launched the certification kit after adding my account to the local Administrators group. I could not remove the existing application as it was not listed on the desktop.

By following the steps below I successfully removed the app and created another package to test with the certification kit.

1. Launch Windows PowerShell using Run as administrator

image

2. Execute the following command in the PowerShell console window:

Get-AppxPackage -Name *<app name>* -AllUsers

where <app name> is the name of your Windows Store app. In my scenario the command was Get-AppxPackage -Name *mycontents* -AllUsers

image

3. Using the information returned above, you can determine which users have installed the app. In my case it was the user primary\mm

4. There are two options to remove this package: either run the following PowerShell command as the administrator, or log on as the user specified in the PackageUserInformation property. In this case the user was “primary\mm”

Get-AppxPackage -Name *mycontents* -AllUsers | Remove-AppxPackage

After executing the command, I could run the certification kit. As a side note, you must be a member of the local Administrators group on the computer when creating the app package and performing the certification check.

Enjoy.

Could not load assembly “Microsoft.BizTalk.Interop.SSOClient, Ver=5.0.1.0” after BTS2013 upgrade

After successfully upgrading an existing BTS2010 and SSO installation to BTS2013, we found a custom pipeline component failing with the following error in the event log during some initial testing.

image

Reviewing the component source code revealed a reference to the Microsoft.BizTalk.Interop.SSOClient v5.0.1.0 assembly. Instead of recompiling the solution with VS2012 and redeploying, I decided to redirect the assembly version in the BizTalk configuration files BTSNTSvc.exe.config and BTSNTSvc64.exe.config.

Below is the assembly binding information that was placed between the existing <runtime> tags.

<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1" appliesTo="v4.0.30319">
   <dependentAssembly>
      <assemblyIdentity name="Microsoft.BizTalk.Interop.SSOClient"
                        publicKeyToken="31bf3856ad364e35"
                        culture="neutral" />
      <bindingRedirect oldVersion="5.0.1.0"
                       newVersion="7.0.2300.0"/>
   </dependentAssembly>
</assemblyBinding>
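For context, the redirect sits inside the existing <runtime> element of each configuration file, so the surrounding structure looks roughly like this (abbreviated):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1" appliesTo="v4.0.30319">
      <!-- dependentAssembly/bindingRedirect entries as shown above -->
    </assemblyBinding>
  </runtime>
</configuration>
```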

See this link http://msdn.microsoft.com/en-us/library/7wd6ex19.aspx for more information on redirecting assembly versions.

 

Enjoy.

In-place upgrading of BTS2010 to BTS2013 missing DB permissions

After completing an in-place upgrade of BTS2010 to BTS2013 successfully, I found the ESB Portal no longer worked and displayed the following error:

image

Some adapters were also failing with the following error:

The type initializer for ‘Microsoft.BizTalk.Adapters.ODBC.Runtime.ODBCTransmitAdapter.ODBCAdapterProperties’ threw an exception.

The issue is that the upgrade to BTS2013 creates two new tables, bts_dynamic_sendport_handlers and bts_dynamicport_subids, in the BizTalkMgmtDb. However, no permissions are applied to these new tables to allow the BizTalk roles to access them.

After the upgrade you will need to execute the following T-SQL script to apply the correct object permissions.

use BizTalkMgmtDb
go
GRANT DELETE ON [bts_dynamic_sendport_handlers] TO [BTS_ADMIN_USERS] AS [dbo]
GO
GRANT INSERT ON [bts_dynamic_sendport_handlers] TO [BTS_ADMIN_USERS] AS [dbo]
GO
GRANT SELECT ON [bts_dynamic_sendport_handlers] TO [BTS_ADMIN_USERS] AS [dbo]
GO
GRANT UPDATE ON [bts_dynamic_sendport_handlers] TO [BTS_ADMIN_USERS] AS [dbo]
GO
GRANT SELECT ON [bts_dynamic_sendport_handlers] TO [BTS_HOST_USERS] AS [dbo]
GO
GRANT SELECT ON [bts_dynamic_sendport_handlers] TO [BTS_OPERATORS] AS [dbo]
GO
GRANT DELETE ON [bts_dynamicport_subids] TO [BTS_ADMIN_USERS] AS [dbo]
GO
GRANT INSERT ON [bts_dynamicport_subids] TO [BTS_ADMIN_USERS] AS [dbo]
GO
GRANT SELECT ON [bts_dynamicport_subids] TO [BTS_ADMIN_USERS] AS [dbo]
GO
GRANT UPDATE ON [bts_dynamicport_subids] TO [BTS_ADMIN_USERS] AS [dbo]
GO
GRANT SELECT ON [bts_dynamicport_subids] TO [BTS_HOST_USERS] AS [dbo]
GO
GRANT SELECT ON [bts_dynamicport_subids] TO [BTS_OPERATORS] AS [dbo]
GO
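To verify the grants after running the script, you can list the permissions on each table, for example with the legacy but convenient sp_helprotect procedure:

```sql
USE BizTalkMgmtDb
GO
EXEC sp_helprotect @name = 'bts_dynamic_sendport_handlers'
GO
EXEC sp_helprotect @name = 'bts_dynamicport_subids'
GO
```

Each call returns one row per grantee/action pair, so you should see the BTS_ADMIN_USERS, BTS_HOST_USERS and BTS_OPERATORS grants listed.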

Enjoy.

Update: Microsoft has since released a patch to fix this issue: http://support.microsoft.com/kb/2832136

Truncating data in the EsbException database

Once you have had the BizTalk ESB Toolkit running for a while, you will notice that the EsbException database begins to grow in size if left unchecked. The main culprit is the MessageData table, which holds the entire message received by BizTalk.

Unfortunately there is no out-of-the-box maintenance script to remove old records from this database. To resolve this, I wrote the stored procedure below and execute it every night from a SQL Agent job.

The parameter @DaysToKeep defines how many days' worth of exception data you wish to keep. I also delete the rows in batches so as not to bloat the transaction log, as this database can grow quite large. At the end of the script I also shrink the database.

If you wish to deploy this script in a production environment, please test it thoroughly.

USE [EsbExceptionDb]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

/*********************************************************************************************
Description:    Deletes records from the tables in a batch style and shrinks the database.

Params:            @DaysToKeep - number of days from the current date to keep alerts.
*********************************************************************************************/
CREATE procedure [dbo].[maint_TruncateOldAlerts]

@DaysToKeep int = 7

as

declare @error int, @rowcount int
declare @DateToDelete datetime
set  @DateToDelete = DATEADD(dd,@DaysToKeep*-1,GETDATE())
print @DateToDelete

--using the fault table as the key table for the insertion date

--dbo.ProcessedFault
print 'Deleting ProcessedFault'
while 1 = 1
begin
    
    delete top(1000) pf from dbo.Fault f with (nolock) inner join dbo.ProcessedFault pf
    on f.FaultID = pf.ProcessedFaultID where f.InsertedDate < @DateToDelete
    
    If @@rowcount < 1 break
end

--dbo.MessageData
print 'Deleting MessageData'
while 1 = 1
begin
    
    delete top(1000) md from dbo.Fault f with (nolock) inner join dbo.[Message] m with (nolock) 
    on f.FaultID = m.FaultID inner join dbo.MessageData md
    on m.MessageID = md.MessageID where f.InsertedDate < @DateToDelete
    
    If @@rowcount < 1 break
end

--dbo.ContextProperty
print 'Deleting ContextProperty'
while 1 = 1
begin
    
    delete top(1000) cp from dbo.Fault f with (nolock) inner join dbo.[Message] m with (nolock) 
    on f.FaultID = m.FaultID inner join dbo.ContextProperty cp
    on m.MessageID = cp.MessageID where f.InsertedDate < @DateToDelete
    
    If @@rowcount < 1 break
end

--dbo.AuditLogMessageData
print 'Deleting AuditLogMessageData'
while 1 = 1
begin
    
    delete top(1000) almd from dbo.Fault f with (nolock) inner join dbo.[Message] m with (nolock) 
    on f.FaultID = m.FaultID inner join dbo.AuditLog al on al.MessageID = m.MessageID
    inner join dbo.AuditLogMessageData almd on al.AuditLogID = almd.AuditLogID
    where f.InsertedDate < @DateToDelete
    
    If @@rowcount < 1 break
end


--dbo.AuditLog
print 'Deleting AuditLog'
while 1 = 1
begin
    delete top(1000) al from  dbo.Fault f with (nolock) inner join dbo.[Message] m with (nolock) 
    on f.FaultID = m.FaultID inner join dbo.AuditLog al on al.MessageID = m.MessageID
    where f.InsertedDate < @DateToDelete

    If @@rowcount < 1 break
end

--dbo.AlertSubscriptionHistory
print 'Deleting AlertSubscriptionHistory'
while 1 = 1
begin
    delete top(1000) ash from dbo.Fault f with (nolock) inner join dbo.AlertHistory ah with (nolock)
    on f.FaultID = ah.FaultID inner join dbo.AlertSubscriptionHistory ash
    on ah.AlertHistoryID = ash.AlertHistoryID
    where f.InsertedDate < @DateToDelete

    If @@rowcount < 1 break
end

--dbo.AlertHistory
print 'Deleting AlertHistory'
while 1 = 1
begin
    delete top(1000) ah from dbo.Fault f with (nolock) inner join dbo.AlertHistory ah
    on f.FaultID = ah.FaultID where f.InsertedDate < @DateToDelete

    If @@rowcount < 1 break
end

--dbo.AlertEmail
print 'Deleting AlertEmail'
while 1 = 1
begin
    delete top(1000) ae from dbo.AlertEmail ae where ae.InsertedDate < @DateToDelete
    If @@rowcount < 1 break
end


--dbo.Message
print 'Deleting Message'
while 1 = 1
begin
    delete top(1000) m from dbo.Fault f with (nolock) inner join dbo.[Message] m
    on f.FaultID = m.FaultID where f.InsertedDate < @DateToDelete

    If @@rowcount < 1 break
end

--dbo.Fault
print 'Deleting Fault'
while 1 = 1
begin
    delete top(1000) f from dbo.Fault f where f.InsertedDate < @DateToDelete

    If @@rowcount < 1 break
end


--shrink the database 
DBCC SHRINKDATABASE (EsbExceptionDb, 10);

go
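Assuming the procedure has been created, it can be run ad hoc, or wrapped in the nightly SQL Agent job mentioned above. A sketch of the job setup is below (the job name and schedule time are illustrative; adjust to suit and test outside production first):

```sql
-- ad hoc run, keeping one week of exception data
EXEC EsbExceptionDb.dbo.maint_TruncateOldAlerts @DaysToKeep = 7

-- nightly SQL Agent job wrapping the same call
USE msdb
GO
EXEC dbo.sp_add_job @job_name = N'Purge EsbExceptionDb'
EXEC dbo.sp_add_jobstep @job_name = N'Purge EsbExceptionDb',
    @step_name = N'Run maint_TruncateOldAlerts',
    @subsystem = N'TSQL',
    @database_name = N'EsbExceptionDb',
    @command = N'EXEC dbo.maint_TruncateOldAlerts @DaysToKeep = 7'
EXEC dbo.sp_add_jobschedule @job_name = N'Purge EsbExceptionDb',
    @name = N'Nightly 1am',
    @freq_type = 4,        -- daily
    @freq_interval = 1,
    @active_start_time = 010000
EXEC dbo.sp_add_jobserver @job_name = N'Purge EsbExceptionDb'
GO
```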

Enjoy.

What to use: SOAP or REST?

In my opinion, choosing between SOAP and REST style web services for an architectural solution should depend on the consumers of the service. To help make the right decision, I drew up the comparison below showing the pros and cons of each service style.

Development effort
SOAP: Comprehensive toolkits make development easier.
REST: Toolkits are not required, but additional work is needed to map URI paths to specific handlers.

Describing available interface definitions
SOAP: A WSDL is generally available to describe the contracts, and client tools can easily generate proxies from it.
REST: Documentation is normally written by hand and published as a web page. A machine-readable equivalent (WADL) exists but is not widely used.

Message format
SOAP: XML-based, with SOAP and WS-* specific markup, which can make the payload quite large.
REST: You can craft your own format, though XML and JSON are common. Does not require XML parsing, and the results are human readable.

Message transport
SOAP: Can use a number of transport protocols such as HTTP/S, TCP, SMTP, UDP and JMS.
REST: Normally HTTP/HTTPS; other protocols are possible with extra development effort.

Message contracts
SOAP: Requires a formal contract between the provider and consumer; use SOAP if rigid type checking is required. The focus is on accessing named operations.
REST: Has a form of dynamic contract and relies on documentation. The focus is on accessing named resources.

Handling of complex domain objects
SOAP: Complex domain models can be easily represented.
REST: Less suited to complex models, but an excellent choice if you only require CRUD operations over an RDBMS.

Transactional support
SOAP: The WS-* protocols support transactions.
REST: No built-in support; the HTTP protocol cannot provide two-phase commit across distributed transactional resources.

Reliable messaging
SOAP: Built into the WS-* protocols, including success/retry logic.
REST: Clients need to deal with communication failures themselves.

State management
SOAP: Supports both contextual information and conversational state management.
REST: The server maintains no state; state management must be handled by the client.

Caching
SOAP: Not supported.
REST: HTTP GET responses can be cached.

Message encoding
SOAP: Supports text and binary encoding.
REST: Limited to text only.

Testing of services
SOAP: Requires unit tests to be developed, or 3rd-party test tools.
REST: Can often be tested simply with a web browser.

Security
SOAP: Supports enterprise security via the WS-Security protocol; use SOAP if intermediary devices are not trusted.
REST: Use SSL for point-to-point security. The intent of a request can also be easily identified from the HTTP verb.

Client-side development complexity
SOAP: Toolkits are required to easily consume the service.
REST: Can be consumed by any client, even a web browser using Ajax and JavaScript.

Maintainability
SOAP: Easier to maintain due to tight data contracts and standards.
REST: Can be much more expensive to maintain in the long run due to the lack of standards.

Popularity
SOAP: Mainly enterprise applications that require WS-* features.
REST: Used by most publicly available web services.

My conclusion is that there is no right or wrong approach to building web services with either SOAP or REST; it depends on the requirements of the consumers.

I tend to lean towards REST for CRUD-style web services that integrate with websites, and SOAP for integration between critical enterprise systems that require WS-* features such as transaction support and reliable messaging.
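To make the contrast concrete, here is the same hypothetical "get order" operation in both styles (the service endpoint, namespace and URI are purely illustrative). The SOAP 1.2 request wraps the operation in an envelope:

```xml
POST /OrderService.asmx HTTP/1.1
Content-Type: application/soap+xml; charset=utf-8

<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetOrder xmlns="http://example.com/orders">
      <orderId>42</orderId>
    </GetOrder>
  </soap:Body>
</soap:Envelope>
```

whereas the REST equivalent addresses the resource directly and lets the HTTP verb convey the intent:

```
GET /orders/42 HTTP/1.1
Accept: application/json
```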

I hope anyone reading this will find it helpful in making the correct architectural decision, and please let me know if I have left anything out.

XMLTranslatorStream to the rescue

This utility class, found in the Microsoft.BizTalk.Streaming namespace, has saved me on many occasions where I had to modify namespaces, elements, attributes and XML declarations inside a pipeline, or from a referenced custom assembly inside an orchestration. The class wraps a stream reader and writer, exposing virtual methods representing the components of an XML structure.

More information can be found here: http://msdn.microsoft.com/en-us/library/microsoft.biztalk.streaming.xmltranslatorstream.aspx

Using the XmlTranslatorStream avoids having to load the XML message into an XmlDocument object, which would impact performance for large messages.

The list below shows all the available methods that may be overridden within this class:

protected override int ProcessXmlNodes(int count);
protected virtual void TranslateAttribute();
protected virtual void TranslateAttributes();
protected virtual void TranslateAttributeValue(string prefix, string localName, string nsURI, string val);
protected virtual void TranslateCData(string data);
protected virtual void TranslateComment(string comment);
protected virtual void TranslateDocType(string name, string pubAttr, string systemAttr, string subset);
protected virtual void TranslateElement();
protected virtual void TranslateEndElement(bool full);
protected virtual void TranslateEntityRef(string name);
protected virtual void TranslateProcessingInstruction(string target, string val);
protected virtual void TranslateStartAttribute(string prefix, string localName, string nsURI);
protected virtual void TranslateStartElement(string prefix, string localName, string nsURI);
protected virtual void TranslateText(string s);
protected virtual void TranslateWhitespace(string space);
protected virtual void TranslateXmlDeclaration(string target, string val);
protected virtual bool TranslateXmlNode();

To use the XmlTranslatorStream inside a custom pipeline, add a subclass that inherits from XmlTranslatorStream. An example is shown below where I have overridden some of the methods.

Extra information may be passed to the subclass by adding parameters to the constructor.

#region Subclass XmlModifierStream
    public class XmlModifierStream : XmlTranslatorStream
    {
        public XmlModifierStream(Stream input): base(new XmlTextReader(input), Encoding.Default)
        {
            Debug.WriteLine("[BTS.Utilities.CustomPipelines.XmlNamespaceModifierStream]Entered method");
        }

        protected override void TranslateXmlDeclaration(string target, string val)
        {
            base.TranslateXmlDeclaration(target, val);            
        }

        protected override void TranslateStartElement(string prefix, string localName, string nsURI)
        {            
            base.TranslateStartElement(prefix, localName, nsURI);
        }

        protected override void TranslateStartAttribute(string prefix, string localName, string nsURI)
        {            
            base.TranslateStartAttribute(prefix, localName, nsURI);    
        }         
    }
#endregion

Then, inside the Execute method of the pipeline, call the constructor of the subclass, passing the data stream as shown below.

public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(
          Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
      {
          Debug.WriteLine("[BTS.Utilities.CustomPipelines.Execute]Entered method");

          if (inmsg == null || inmsg.BodyPart == null || inmsg.BodyPart.Data == null)
          {
              throw new ArgumentNullException("inmsg");
          }

          //call the xml translator subclass
          inmsg.BodyPart.Data = new XmlModifierStream(inmsg.BodyPart.GetOriginalDataStream());

          Debug.WriteLine("[BTS.Utilities.CustomPipelines.Execute]Exit method");
          return inmsg;
      }

Below are some examples where I have used the XmlTranslatorStream class.

In this scenario the incoming message contained elements that specified the document version and document type. The values TypeVersion and Type were read from the message below and used to identify the message type, and ultimately to rename the root element, remove all element prefixes and update the namespace.

image

Inside the Execute function of the pipeline, I first open a stream to obtain the required values from the message. These values are then passed as parameters to the subclass XmlNamespaceModifierStream, together with a reference to the original stream.

The subclass overrides the TranslateStartElement method and checks whether the current element is the root element of the document. If it is, the element is renamed from StandardBusinessDocument to either CatalogueItemNotification or PriceSynchronisationDocument, and the namespace is set to a new value.

The overridden TranslateAttribute method removes any attributes that are prefixed with “xmlns”.

//xpath to the document version element

private const string XPATHQUERY_VERSION = "/*[local-name()='StandardBusinessDocument']/*[local-name()='StandardBusinessDocumentHeader']/*[local-name()='DocumentIdentification']/*[local-name()='TypeVersion']";
private const string DOCUMENTTYPE_ELEMENT = "Type";




#region IComponent members
public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
{
    Debug.WriteLine("[BTS.Utilities.CustomPipelines.NamespaceModifier.Execute]Entered method");

    if (inmsg == null || inmsg.BodyPart == null || inmsg.BodyPart.Data == null)
    {
        throw new ArgumentNullException("inmsg");
    }

    string propertyVersionValue = string.Empty;
    string propertyTypeValue = string.Empty;

    if (!string.IsNullOrEmpty(XPATHQUERY_VERSION))
    {
        Debug.WriteLine("[BTS.Utilities.CustomPipelines.NamespaceModifier.Execute]Obtain the xpath value from within the document");
        IBaseMessagePart bodyPart = inmsg.BodyPart;
        Stream inboundStream = bodyPart.GetOriginalDataStream();

        //note that the path would be "C:\Documents and Settings\<BTSHostInstanceName>\Local Settings\Temp" for the virtual stream
        VirtualStream virtualStream = new VirtualStream(VirtualStream.MemoryFlag.AutoOverFlowToDisk);
        ReadOnlySeekableStream readOnlySeekableStream = new ReadOnlySeekableStream(inboundStream, virtualStream);
        XmlTextReader xmlTextReader = new XmlTextReader(readOnlySeekableStream);
        XPathCollection xPathCollection = new XPathCollection();
        XPathReader xPathReader = new XPathReader(xmlTextReader, xPathCollection);
        xPathCollection.Add(XPATHQUERY_VERSION);
        bool isFirstMatch = false;
        while (xPathReader.ReadUntilMatch())
        {
            //only interested in the first match
            if (xPathReader.Match(0) && !isFirstMatch)
            {
                propertyVersionValue = xPathReader.ReadString();
                isFirstMatch = true;

                //get the type next, which is the 2nd element down
                while (xPathReader.Read())
                {
                    if (xPathReader.LocalName.Equals(DOCUMENTTYPE_ELEMENT))
                    {
                        propertyTypeValue = xPathReader.ReadString();
                        break;
                    }
                }
            }
        }

        if (isFirstMatch)
        {
            Debug.WriteLine(string.Format("[BTS.Utilities.CustomPipelines.NetNamespaceModifier.Execute]Match found for xpath query. Value equals:{0}", propertyVersionValue));
        }
        else
        {
            Trace.WriteLine(string.Format("[BTS.Utilities.CustomPipelines.NetNamespaceModifier.Execute]No match found for xpath query '{0}'", XPATHQUERY_VERSION));
        }

        //rewind back to start
        readOnlySeekableStream.Position = 0;
        bodyPart.Data = readOnlySeekableStream;
    }

    inmsg.BodyPart.Data = new XmlNamespaceModifierStream(inmsg.BodyPart.GetOriginalDataStream(), propertyVersionValue, propertyTypeValue);
    Debug.WriteLine("[BTS.Utilities.CustomPipelines.NamespaceModifier.Execute]Exit method");
    return inmsg;
}
#endregion

#region Subclass XmlNamespaceModifierStream
public class XmlNamespaceModifierStream : XmlTranslatorStream
{
    private const string CIN_DOCTYPE = "catalogueItemNotification";
    private const string CPN_DOCTYPE = "priceSynchronisationDocument";
    private const string ROOT_GS1_ELEMENT = "StandardBusinessDocument";
    private const string NS_PREFIX = "urn:ean.ucc:";

    private string _newNamespaceVersion;
    private string _documentType;

    protected override void TranslateStartElement(string prefix, string localName, string nsURI)
    {
        string newNSUri = string.Empty;
        bool isElementFoundWithNamespace = false;
        bool isFirstElement = false;

        if (!string.IsNullOrEmpty(prefix) && !isFirstElement)
        {
            //element found with prefix. Modify namespace with new value and append passed namespace version
            newNSUri = NS_PREFIX + _newNamespaceVersion;
            isElementFoundWithNamespace = true;

            if (localName.Equals(ROOT_GS1_ELEMENT))
                isFirstElement = true;
        }

        if (isElementFoundWithNamespace && isFirstElement)
        {
            //replace with new namespace
            Debug.WriteLine(string.Format("[BTS.Utilities.CustomPipelines.NamespaceModifier.XmlNamespaceModifierStream]Replace namespace with {0}", nsURI + newNSUri));

            if (_documentType.Equals(CIN_DOCTYPE))
                localName = localName + "Catalogue";
            if (_documentType.Equals(CPN_DOCTYPE))
                localName = localName + "Price";

            base.TranslateStartElement(null, localName, newNSUri);
        }
        else
        {
            base.TranslateStartElement(null, localName, null);
        }
    }

    protected override void TranslateAttribute()
    {
        if (this.m_reader.Prefix != "xmlns" && this.m_reader.Name != "xmlns")
            base.TranslateAttribute();
    }

    public XmlNamespaceModifierStream(Stream input, string namespaceVersion, string documentType)
        : base(new XmlTextReader(input), Encoding.Default)
    {
        Debug.WriteLine("[BTS.Utilities.CustomPipelines.NamespaceModifier.XmlNamespaceModifierStream]Entered method");
        _newNamespaceVersion = namespaceVersion.Trim();
        _documentType = documentType.Trim();
        Debug.WriteLine("[BTS.Utilities.CustomPipelines.NamespaceModifier.XmlNamespaceModifierStream]Exit method");
    }
}

#endregion

In this custom send pipeline I used the XmlTranslatorStream to add the XML declaration version="1.0" encoding="utf-8" and to modify the namespaces and prefixes of some elements and attributes.

The Execute function in the custom pipeline component simply calls the subclass XmlExtensionModifierStream, passing only the original message stream as a parameter.

The subclass overrides the TranslateXmlDeclaration, TranslateStartElement and TranslateStartAttribute methods to modify the values.

#region IComponent members
    public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, 
             Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
    {
        Debug.WriteLine("[BTS.Utilities.CustomPipelines.ExtensionModifier.Execute]Entered method");

        if (inmsg == null || inmsg.BodyPart == null || inmsg.BodyPart.Data == null)
        {
            throw new ArgumentNullException("inmsg");
        }

        inmsg.BodyPart.Data = new XmlExtensionModifierStream(inmsg.BodyPart.GetOriginalDataStream());        
        Debug.WriteLine("[BTS.Utilities.CustomPipelines.ExtensionModifier.Execute]Exit method");
        return inmsg;
    }
    
#endregion

#region Subclass XmlExtensionModifierStream
    public class XmlExtensionModifierStream : XmlTranslatorStream
    {
        public XmlExtensionModifierStream(Stream input)
            : base(new XmlTextReader(input), Encoding.Default)
        {
            Debug.WriteLine("[BTS.Utilities.CustomPipelines.ExtensionModifier.XmlNamespaceModifierStream]Entered method");
        }

        protected override void TranslateXmlDeclaration(string target, string val)
        {
            base.TranslateXmlDeclaration(target, val);
            this.m_writer.WriteProcessingInstruction("xml", "version=\"1.0\" encoding=\"utf-8\"");
        }

        protected override void TranslateStartElement(string prefix, string localName, string nsURI)
        {            
            switch (localName)
            {

                case "fMCGTradeItemExtension":
                    base.TranslateStartElement("fmcg", localName, "urn:ean.ucc:align:fmcg:2");
                    break;

                case "attributeValuePairExtension":
                    base.TranslateStartElement("gdsn", localName, "urn:ean.ucc:gdsn:2");
                    break;
                    
                default:
                    base.TranslateStartElement(prefix, localName, nsURI);
                    break;
            }           
        }

        protected override void TranslateStartAttribute(string prefix, string localName, string nsURI)
        {            
            switch (localName)
            {

                case "schemaLocation":
                    base.TranslateStartAttribute("xsi", localName, "http://www.w3.org/2001/XMLSchema-instance");
                    break;                

                default:
                    base.TranslateStartAttribute(prefix, localName, nsURI);
                    break;
            }
        }
         
    }
#endregion

Enjoy.

How to call a Web Service using username and password as credentials

Normally, when connecting to a web service requiring username/password credentials, the channel should be encrypted. BizTalk requires the channel to use HTTPS as the protocol when using BasicHttp bindings. As this was an internal web service, the channel did not require any encryption.

These are the steps required to use username authentication without requiring an SSL transport.

1. Create a Custom WCF solicit-response send port as shown.

image

2. Click the Configure button and ensure the Binding Type is set to customBinding. Also check that the Binding section only contains textMessageEncoding and httpTransport as shown.

image

3. Click on textMessageEncoding and ensure you have selected the correct messageVersion. As my endpoint is an asmx web service I have chosen Soap12 as the message version.

image

4. Set the authentication scheme to match your web service by clicking on the httpTransport binding. The web service I was calling required Ntlm. If you choose the incorrect authentication scheme, an error will be written to the event log when calling the web service.

Below is a screenshot of the settings for my service. Note that the maxBufferSize, maxBufferPoolSize and maxReceivedMessageSize properties will also need to match the web service settings.

image

5. The username and password are then entered into the Credentials tab as shown below.

image
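For reference, the binding built through the steps above corresponds roughly to the following WCF customBinding configuration (the sizes and authentication scheme are examples; match them to your own service):

```xml
<bindings>
  <customBinding>
    <binding name="usernameOverHttp">
      <textMessageEncoding messageVersion="Soap12" />
      <httpTransport authenticationScheme="Ntlm"
                     maxBufferSize="65536"
                     maxBufferPoolSize="524288"
                     maxReceivedMessageSize="65536" />
    </binding>
  </customBinding>
</bindings>
```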

 

Enjoy…

EDI Mapping Dramas

NADLoop1 Segment

I had a situation where I needed to get the supplier and buyer party IDs from the EDI NADLoop1 nodes, as shown:

 

image

I did not want to use the looping functoid; instead I resorted to inline XSLT with selectors, as shown:

<xsl:value-of select="/s0:EFACT_D96A_ORDRSP/s0:NADLoop1/s0:NAD/NAD01[text() = 'SU']/following-sibling::s0:C082/C08201/text()" />

<xsl:value-of select="/s0:EFACT_D96A_ORDRSP/s0:NADLoop1/s0:NAD/NAD01[text() = 'BY']/following-sibling::s0:C082/C08201/text()" />

 

The selectors are represented by NAD01[text() = 'SU'] and NAD01[text() = 'BY'], with the “following-sibling” axis then used to get the actual party ID.
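Based on the XPath above, the relevant part of the source message looks roughly like this (the s0 prefix is bound to the EDIFACT D96A schema namespace in the map; the GLN values are illustrative):

```xml
<!-- one NADLoop1 occurrence per party qualifier -->
<s0:NADLoop1>
  <s0:NAD>
    <NAD01>SU</NAD01>
    <s0:C082>
      <C08201>9377779012345</C08201>
      <C08203>9</C08203>
    </s0:C082>
  </s0:NAD>
</s0:NADLoop1>
<s0:NADLoop1>
  <s0:NAD>
    <NAD01>BY</NAD01>
    <s0:C082>
      <C08201>9300001234567</C08201>
      <C08203>9</C08203>
    </s0:C082>
  </s0:NAD>
</s0:NADLoop1>
```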

Now I wanted to access these values in some C# code inside a Scripting functoid. Here are the steps I used.

1. Add a Scripting functoid for Inline C# with no connections to any of the elements, as shown below:

image

In the Inline script buffer text box I added the following code to declare the variables and the functions that set the variable values.

private string _senderGLNNumber;
private string _customerGLNNumber;
private string _senderOrgType;
private string _customerOrgType;

public string SenderGLNNumberSet(string value) { _senderGLNNumber = value; return value; }
public string SenderOrgTypeSet(string value) { _senderOrgType = value; return value; }

public string CustomerGLNNumberSet(string value) { _customerGLNNumber = value; return value; }
public string CustomerOrgTypeSet(string value) { _customerOrgType = value; return value; }

 

2. I then set the values using another Scripting functoid for Inline XSLT, as shown below. The output of the functoid is linked to an element in the destination schema to force the functoid to execute at the start of the transformation.

image

The Inline script buffer has the following XSLT. I use the XSLT variable element to call the C# functions to set the values.

<xsl:variable name="var:SenderGLNNumber" select="userCSharp:SenderGLNNumberSet(string(s0:NADLoop1/s0:NAD/NAD01[text() = 'SU']/following-sibling::s0:C082/C08201/text()))" />
  <xsl:variable name="var:CustomerGLNNumber" select="userCSharp:CustomerGLNNumberSet(string(s0:NADLoop1/s0:NAD/NAD01[text() = 'BY']/following-sibling::s0:C082/C08201/text()))" />

  <xsl:variable name="var:SenderOrgType" select="userCSharp:SenderOrgTypeSet(string(s0:NADLoop1/s0:NAD/NAD01[text() = 'SU']/following-sibling::s0:C082/C08203/text()))" />
  <xsl:variable name="var:CustomerOrgType" select="userCSharp:CustomerOrgTypeSet(string(s0:NADLoop1/s0:NAD/NAD01[text() = 'BY']/following-sibling::s0:C082/C08203/text()))" />

 

3. To set a destination element, add an Inline XSLT functoid as shown:

image

The Inline script buffer has the following XSLT to create the element and value.

<xsl:element name="RECIPIENT">
  <xsl:value-of select="userCSharp:CustomerGLNNumberGet()" />
</xsl:element>

 

Or, to get a value via C# code, you can use a Scripting functoid with Inline C#, adding a retrieval function in the Inline script buffer as shown below.

public string CustomerGLNNumberGet() { return _customerGLNNumber; }

 

How to hide referenced elements in a schema

Complex schemas can be difficult to maintain due to their size. It would be nice to view only the elements that are not referenced.

For example, you may have a schema similar to the one below (though typically far more complex):

image

The elements Address, Person and Phone are all referenced elements and are also shown in the schema. Wouldn’t it be nice to hide those referenced elements?

Here is how to do it. Right-click the schema in the Solution Explorer window and open it with the XML (Text) editor, as shown below:

image

Add the following annotation under the schema element:

<xs:schema xmlns="http://Reference_Schema.Contact" xmlns:b="http://schemas.microsoft.com/BizTalk/2003" targetNamespace="http://Reference_Schema.Contact" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:annotation xmlns:b="http://schemas.microsoft.com/BizTalk/2003">
    <xs:appinfo>
      <b:schemaInfo root_reference="Clients" displayroot_reference="Clients">
      </b:schemaInfo>
    </xs:appinfo>
  </xs:annotation>

The two key attributes are root_reference and displayroot_reference. These should be set to the root element of the schema; in this case our sample schema’s root element is called Clients.

The final schema will now look like this after inserting the above annotation. Notice the referenced elements are no longer shown.

image

Here is the full XSD document.

<?xml version="1.0" encoding="utf-16"?>
<xs:schema xmlns="http://Reference_Schema.Contact" xmlns:b="http://schemas.microsoft.com/BizTalk/2003" targetNamespace="http://Reference_Schema.Contact" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:annotation xmlns:b="http://schemas.microsoft.com/BizTalk/2003">
    <xs:appinfo>
      <b:schemaInfo root_reference="Clients" displayroot_reference="Clients">
      </b:schemaInfo>
    </xs:appinfo>
  </xs:annotation>
  <xs:element name="Clients">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Contacts">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="Contact">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element ref="Person" />
                    <xs:element ref="Address" />
                    <xs:element ref="Phone" />
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="Address">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Address1" type="xs:string" />
        <xs:element name="Address2" type="xs:string" />
        <xs:element name="Address3" type="xs:string" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="Person">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="FirstName" type="xs:string" />
        <xs:element name="Lastname" type="xs:string" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="Phone">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Home" type="xs:string" />
        <xs:element name="Mobile" type="xs:string" />
        <xs:element name="Business" type="xs:string" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

 

Enjoy.

Developing for the Microsoft Azure platform

These are my own findings whilst developing my first application to be hosted in the cloud. The Azure features I used were Azure SQL, Federated Authentication, Web roles, Worker roles and Queues.

Handling transients from SQL Server

Beware of transient connection issues when connecting to your Azure database. These conditions occur due to normal network-related errors, throttling of excessive requests, and controlled load balancing by the AppFabric framework.

Fortunately these types of errors raise specific error codes which can be trapped in your code. Below are some error codes to expect.

Error Number | Error Message
40197 | The service has encountered an error processing your request. Please try again.
40501 | The service is currently busy. Retry the request after 10 seconds.
10053 | A transport-level error has occurred when receiving results from the server. An established connection was aborted by the software in your host machine.
10054 | A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 – An existing connection was forcibly closed by the remote host.)
10060 | A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 – A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)
40613 | Database XXXX on server YYYY is not currently available. Please retry the connection later. If the problem persists, contact customer support, and provide them the session tracing ID of ZZZZZ.
40143 | The service has encountered an error processing your request. Please try again.
233 | The client was unable to establish a connection because of an error during connection initialization process before login. Possible causes include the following: the client tried to connect to an unsupported version of SQL Server; the server was too busy to accept new connections; or there was a resource limitation (insufficient memory or maximum allowed connections) on the server. (provider: TCP Provider, error: 0 – An existing connection was forcibly closed by the remote host.)
64 | A connection was successfully established with the server, but then an error occurred during the login process. (provider: TCP Provider, error: 0 – The specified network name is no longer available.)
20 | The instance of SQL Server you attempted to connect to does not support encryption.

When developing code against SQL Azure you should code for transient connection issues. To make this easy, the Microsoft patterns & practices team has developed the Microsoft Enterprise Library 5.0 Integration Pack for Windows Azure.
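The core of a transient-fault handler is straightforward: catch the error, check whether its number is one of the transient codes above, and retry with a back-off delay. A minimal sketch of the idea follows (illustrative Python rather than the Enterprise Library's C#; `SqlError` and `execute_with_retry` are hypothetical names, and the error numbers come from the table above):

```python
import time

# Transient SQL Azure error numbers taken from the table above.
TRANSIENT_ERRORS = {40197, 40501, 10053, 10054, 10060, 40613, 40143, 233, 64, 20}

class SqlError(Exception):
    """Stand-in for a database exception carrying a SQL Server error number."""
    def __init__(self, number):
        super().__init__("SQL error %d" % number)
        self.number = number

def execute_with_retry(operation, max_attempts=5, base_delay=1.0):
    """Run operation(), retrying with exponential back-off on transient errors."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except SqlError as e:
            # Re-raise permanent errors, or give up once retries are exhausted.
            if e.number not in TRANSIENT_ERRORS or attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```

For example, a query that is throttled once with error 40501 would fail on the first attempt and succeed on the second. The Integration Pack provides production-grade retry policies along these same lines, so prefer it over hand-rolled code.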

Logging your application

It’s important to add logging to any application deployed to Azure. I use System.Diagnostics Debug and Trace statements extensively throughout my application to emit messages to Azure storage for debugging the application at runtime.

To enable some basic logging you need to wire up the Azure diagnostics listener in the app.config file as shown below.

<system.diagnostics>
  <trace>
    <listeners>
      <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener,
                 Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           name="AzureDiagnostics">
        <filter type="" />
      </add>
    </listeners>
  </trace>
</system.diagnostics>

Add a new setting called Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString to the role file under the Azure project and set its value to your Azure storage account connection string.

image

Then add the following code to the OnStart method to persist the logs to a table called WADLogsTable in Azure storage. The data is written to storage every minute, an interval which is configurable as shown in the code below.

public override bool OnStart()
{
    // Set the maximum number of concurrent connections
    ServicePointManager.DefaultConnectionLimit = 12;

    TimeSpan tsOneMinute = TimeSpan.FromMinutes(1);
    DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();

    // Transfer logs to storage every minute
    dmc.Logs.ScheduledTransferPeriod = tsOneMinute;

    // Transfer verbose, critical, etc. logs
    dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

    // Start up the diagnostic manager with the given configuration
    DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);

    // For information on handling configuration changes
    // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
    RoleEnvironment.Changing += RoleEnvironmentChanging;

    return base.OnStart();
}

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    // If a configuration setting is changing
    if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
    {
        // Set e.Cancel to true to restart this role instance
        e.Cancel = true;
    }
}

Now use Trace or Debug statements in your code to log any information you require, as shown below.

 Trace.WriteLine("Entry point called");

 

Compile Messages

Always review the warnings emitted after compiling your solution. In one of my projects I was referencing an assembly that was not part of the default core installation of the .NET Framework. Luckily I noticed the warning message in the output window.

Ensure you set Copy Local to True on any referenced DLL that needs to be packaged with the solution before deploying to Azure.

image

 

Viewing blob and table storage within VS2010

You can view both Blob and Table storage contents within Visual Studio 2010 after installing the Windows Azure Tools for VS2010.

From the menu bar choose View, then Server Explorer. This opens the Server Explorer window, where you can add a new storage account as shown below.

image

Once you have added the account details, you will be able to view the contents of your stores.

image

 

Message Queues

One important thing to remember about Azure queues: after you get a message, you must explicitly delete it from the queue once your processing finishes. If you don’t, the message will reappear on the queue after a configurable period.

The reason the message reappears is that your instance may be shut down by the Azure AppFabric halfway through processing your data, which would otherwise put you at risk of losing data before the process completes. Hence you should delete the message from the queue at the end of your processing, not immediately after getting it off the queue.

Also remember the order of delivery is not guaranteed.

You must always build logic into your code to account for the possibility that your process may be shut down at any time.
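The get/process/delete contract above can be modelled with a toy queue (illustrative Python only; `VisibilityQueue` is a made-up class, not the Azure storage SDK): a retrieved message becomes invisible for the visibility timeout, and reappears if it is never deleted.

```python
import time

class VisibilityQueue:
    """Toy model of an Azure-style queue with a visibility timeout."""
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self._messages = []  # each entry: [body, invisible_until]

    def put(self, body):
        self._messages.append([body, 0.0])

    def get(self):
        now = time.time()
        for entry in self._messages:
            if entry[1] <= now:                           # currently visible?
                entry[1] = now + self.visibility_timeout  # hide while processing
                return entry
        return None                                       # nothing visible

    def delete(self, entry):
        self._messages.remove(entry)

q = VisibilityQueue(visibility_timeout=30.0)
q.put("order-123")

msg = q.get()           # message is now invisible to other consumers
# ... process the message here; if the instance dies mid-way, the
# message simply reappears after the timeout instead of being lost ...
q.delete(msg)           # delete only AFTER processing succeeds
assert q.get() is None  # the message is gone for good
```

Deleting only after the work completes is what makes a crash safe: an undeleted message reappears once the timeout lapses and another instance can pick it up, which is also why your processing should be idempotent.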

 

Windows Live Federated Credentials

Windows Live authentication provides only a “token string” as one of the SAML claims. Note that this token is unique to your application. If you need access to the user’s email address, you will have to capture that information within your application and persist it yourself.
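Since the token is the only stable identifier issued, one approach is to use it as the key into your own profile store, where you persist the email address the user supplies. A minimal sketch follows (illustrative Python with hypothetical names; a real app would persist `profiles` to durable storage rather than a dictionary, and would read the claim from its identity framework):

```python
# Hypothetical claim type for the per-application token claim.
NAME_ID_CLAIM = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"

profiles = {}  # token -> profile dict; stands in for a persistent store

def on_sign_in(claims):
    """Return the stored profile for this user, creating one on first sign-in."""
    token = claims[NAME_ID_CLAIM]
    return profiles.setdefault(token, {"email": None})

def save_email(claims, email):
    """Capture the email ourselves, since no email claim is issued."""
    on_sign_in(claims)["email"] = email

# First sign-in: no email yet, so the app prompts for it and saves it.
incoming = {NAME_ID_CLAIM: "uX9a-example-application-token"}
save_email(incoming, "user@example.com")
print(on_sign_in(incoming)["email"])
```

Because the token is application-scoped, profiles captured this way cannot be shared with another application even for the same Windows Live user.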