Developing for the Microsoft Azure platform

These are my own findings whilst developing my first application to be hosted in the cloud. The Azure features I used were Azure SQL, Federated Authentication, Web roles, Worker roles and Queues.

Handling transient errors from SQL Azure

Beware of transient connection issues when connecting to your Azure database. These conditions occur due to normal network-related errors, throttling caused by excessive requests, and controlled load balancing by the AppFabric framework.

Fortunately these types of errors raise specific error codes which can be trapped in your code. Below are some error codes to expect.

Error Number Error Message
40197 The service has encountered an error processing your request. Please try again.
40501 The service is currently busy. Retry the request after 10 seconds.
10053 A transport-level error has occurred when receiving results from the server. An established connection was aborted by the software in your host machine.
10054 A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 – An existing connection was forcibly closed by the remote host.)
10060 A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 – A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)
40613 Database XXXX on server YYYY is not currently available. Please retry the connection later. If the problem persists, contact customer support, and provide them the session tracing ID of ZZZZZ.
40143 The service has encountered an error processing your request. Please try again.
233 The client was unable to establish a connection because of an error during connection initialization process before login. Possible causes include the following: the client tried to connect to an unsupported version of SQL Server; the server was too busy to accept new connections; or there was a resource limitation (insufficient memory or maximum allowed connections) on the server. (provider: TCP Provider, error: 0 – An existing connection was forcibly closed by the remote host.)
64 A connection was successfully established with the server, but then an error occurred during the login process. (provider: TCP Provider, error: 0 – The specified network name is no longer available.)
20 The instance of SQL Server you attempted to connect to does not support encryption.

When developing code to use SQL Azure you should code for transient connection issues. To make things easy, the Microsoft patterns & practices team has developed the Microsoft Enterprise Library 5.0 Integration Pack for Windows Azure, which includes transient fault handling with ready-made retry policies.
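If you don't want to take the library dependency, a minimal hand-rolled retry helper along the same lines is sketched below. It simply catches SqlException, checks the error number against the transient codes listed above, and retries a few times with a short back-off; the connection string and the work delegate are placeholders.

using System;
using System.Data.SqlClient;
using System.Linq;
using System.Threading;

public static class SqlRetryHelper
{
    // Error numbers from the table above that are worth retrying.
    private static readonly int[] TransientErrors =
        { 40197, 40501, 40613, 40143, 10053, 10054, 10060, 233, 64, 20 };

    public static void ExecuteWithRetry(string connectionString, Action<SqlConnection> work)
    {
        int retriesLeft = 3;
        while (true)
        {
            try
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    work(conn);
                    return; // success, no retry needed
                }
            }
            catch (SqlException ex)
            {
                // Rethrow anything that is not a known transient error,
                // or when the retry budget is exhausted.
                if (!TransientErrors.Contains(ex.Number) || retriesLeft-- <= 0)
                    throw;
                Thread.Sleep(TimeSpan.FromSeconds(10)); // back off, per the advice in error 40501
            }
        }
    }
}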

Logging your application

It's important to add logging to any application deployed to Azure. I use System.Diagnostics Debug and Trace statements extensively throughout my application to emit messages to Azure storage for debugging the application at runtime.

To enable some basic logging you need to wire up the Azure diagnostics listener in the app.config file as shown below.

<system.diagnostics>
        <trace>
            <listeners>
                <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, 
                     Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                    name="AzureDiagnostics">
                    <filter type="" />
                </add>
            </listeners>
        </trace>
    </system.diagnostics>

Add a new setting called Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString to the role settings under the Azure project and set the value to your Azure storage account connection string.

[Image: role settings showing the Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString setting]
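For reference, the setting ends up in the service configuration (ServiceConfiguration.cscfg) looking something like the snippet below; the role name, account name and key are placeholders, and UseDevelopmentStorage=true can be used against the local emulator.

<Role name="MyWorkerRole">
  <ConfigurationSettings>
    <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey" />
  </ConfigurationSettings>
</Role>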

Then add the following code to the OnStart method to persist the logs to a table called WADLogsTable in Azure storage. The data is written to storage every minute, which is configurable as shown in the code below.

public override bool OnStart()
{
    // Set the maximum number of concurrent connections
    ServicePointManager.DefaultConnectionLimit = 12;

    TimeSpan tsOneMinute = TimeSpan.FromMinutes(1);
    DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();

    // Transfer logs to storage every minute
    dmc.Logs.ScheduledTransferPeriod = tsOneMinute;

    // Transfer verbose, critical, etc. logs
    dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

    // Start up the diagnostic manager with the given configuration
    DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);

    // For information on handling configuration changes
    // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
    RoleEnvironment.Changing += RoleEnvironmentChanging;

    return base.OnStart();
}

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    // If a configuration setting is changing
    if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
    {
        // Set e.Cancel to true to restart this role instance
        e.Cancel = true;
    }
}

Now use Trace or Debug statements in your code to log any information you require, as shown below.

 Trace.WriteLine("Entry point called");


Compiler Warnings

Always review the warnings emitted after compiling your solution. In one of my projects I was referencing an assembly that was not part of the default core installation of the .NET Framework. Luckily I noticed the warning message in the output window.

Ensure you set Copy Local to True on the reference if the DLL needs to be packaged with the solution before deploying to Azure.

[Image: reference properties window with Copy Local set to True]
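Under the covers, Copy Local corresponds to the <Private> element on the reference in the project file. A sketch, assuming a hypothetical MyCustomLib assembly:

<Reference Include="MyCustomLib">
  <HintPath>..\libs\MyCustomLib.dll</HintPath>
  <!-- Private maps to the Copy Local property in Visual Studio -->
  <Private>True</Private>
</Reference>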


Viewing blob and table storage within VS2010

You can view both Blob and Table storage contents within Visual Studio 2010 after installing the Windows Azure Tools for VS2010.

From the menu bar choose View and then Server Explorer. This opens the Server Explorer window, where you can add a new storage account as shown below.

[Image: Server Explorer — adding a new storage account]

Once you have added the account details, you will be able to view the contents of your stores.

[Image: Server Explorer showing the development storage contents]


Message Queues

One important thing to remember about Azure queues: after you get a message, you must explicitly delete it from the queue once your processing finishes. If you don't, the message will reappear on the queue after a configurable visibility timeout.

The reason the message reappears is that the Azure fabric may shut your instance down halfway through processing, putting you at risk of losing data before the work completes. Hence you should delete the message from the queue at the end of your process, not as soon as you take it off the queue, as sketched below.

Also remember the order of delivery is not guaranteed.

You must always build logic into your code to account for your process being shut down at any time.
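Below is a minimal sketch of the get/process/delete pattern using the storage client library; the queue name, connection string and processOrder step are placeholders.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static void DrainOneMessage(string connectionString, Action<string> processOrder)
{
    var account = CloudStorageAccount.Parse(connectionString);
    var queue = account.CreateCloudQueueClient().GetQueueReference("orders");

    // Hide the message from other consumers while we work on it.
    CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(2));
    if (msg != null)
    {
        processOrder(msg.AsString); // do the real work first
        queue.DeleteMessage(msg);   // only delete once processing has succeeded
    }
}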


Windows Live Federated Credentials

The Windows Live authentication only provides a “token string” as one of the SAML claims. Note that this token is unique to your application. If you need access to the user's email address, you will have to capture that information within your application and persist it yourself.
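If you are using Windows Identity Foundation, that token surfaces as the nameidentifier claim on the current principal; a rough sketch of pulling it out (assuming a reference to Microsoft.IdentityModel):

using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

// The Windows Live "token string" arrives as the nameidentifier claim.
var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
string liveToken = null;
if (identity != null)
{
    var claim = identity.Claims
        .FirstOrDefault(c => c.ClaimType == ClaimTypes.NameIdentifier);
    if (claim != null)
        liveToken = claim.Value; // unique to this application; key your user record off it
}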

Referencing a custom component inside an SSIS Script Task

If you need to access a custom component from a Script Task, the component must first be signed and gac'ed. After it has been gac'ed, the assembly must also be copied to the .NET 2.0 Framework folder under the Windows directory.

The component I created was compiled under .NET 3.5, so I copied the assembly to the following folder: C:\Windows\Microsoft.NET\Framework\v2.0.50727

Now click the Edit Script button on the Script Task Editor.

[Image: Script Task Editor with the Edit Script button]

This will open the editor. From here you can add the reference to your DLL by right-clicking on the References folder in the Project Explorer and clicking Add Reference.

[Image: Add Reference dialog in the script editor, with the custom assembly highlighted]

You should now be able to view your assembly in the list as highlighted above.

If you get the following error whilst adding a reference, click No. The script editor targets .NET 2.0 by default, so if you have compiled your custom component under .NET 3.5 you must set the target framework of the script project to 3.5.

[Image: error dialog shown when adding the reference]

This is achieved by right-clicking on the project and selecting Properties as shown below.

[Image: project context menu — Properties]

Now change the target framework to your compiled version. In my case it was 3.5.

[Image: project properties with the target framework set to .NET Framework 3.5]

Enjoy.

Using SQL2008 Data Table Type & BizTalk to insert parent-child rows

I wanted a way to insert a parent row with multiple child rows into a database in a single transaction using BizTalk 2010.

The solution I came up with uses the table data type in SQL 2008 to insert the parent-child relationship records, together with the BizTalk WCF-SQL adapter. The data I will be inserting consists of a sales order header record with several sales item lines.

Below are the steps I used to accomplish this.

1. Create the tables for the Sales header record and the Sales Item records.

CREATE TABLE [dbo].[SalesHeader]
(
    [SalesHeader] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
    [OrderNumb] [nvarchar](50) NOT NULL,
    [OrderDate] [datetime] NOT NULL
)

CREATE TABLE [dbo].[SalesItem]
(
    [SaleItem] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
    [SalesHeaderID] [int] NOT NULL,
    [ItemCode] [nvarchar](255) NOT NULL,
    [Qty] [int] NOT NULL,
    [Price] [money] NOT NULL
)

go
--add the FK constraint (the referenced column must be a primary key or unique)
ALTER TABLE [dbo].[SalesItem] WITH CHECK ADD CONSTRAINT [FK_SalesItem_SalesHeader] FOREIGN KEY([SalesHeaderID])
REFERENCES [dbo].[SalesHeader] ([SalesHeader])
ALTER TABLE [dbo].[SalesItem] CHECK CONSTRAINT [FK_SalesItem_SalesHeader]
go

2. Create the table data types in SQL, one for the header record and another for the order lines. These new table types in SQL 2008 allow you to pass multiple rows in as a single parameter. In the old days you would have had to pass the data in as an XML document or use updategrams.

CREATE TYPE [dbo].[SalesHeaderTableType] AS TABLE(
    [OrderNumb] [nvarchar](255) NULL,
    [OrderDate] [datetime] NULL
)
go
CREATE TYPE [dbo].[SalesItemTableType] AS TABLE(
    [ItemCode] [nvarchar](255) NULL,
    [Qty] [int] NULL,
    [Price] [money] NULL
)
go

3. Create the stored procedure to insert the data into the tables using the table data types as the input parameters. I am using the scope_identity function to get the parent header ID. Note that table-valued parameters can only be passed as READONLY.

CREATE procedure [dbo].[InsertSalesOrder]
(
    @SalesHeader as dbo.SalesHeaderTableType READONLY,
    @SalesItem as dbo.SalesItemTableType READONLY
)
as

declare @SalesHeaderID int

--add the sales header record
insert into dbo.SalesHeader(OrderNumb,OrderDate)
select OrderNumb, OrderDate from @SalesHeader
select @SalesHeaderID = scope_identity()

--add the line items
insert into dbo.SalesItem (SalesHeaderID,ItemCode,Qty,Price)
select @SalesHeaderID,ItemCode,Qty,Price from @SalesItem

return @SalesHeaderID
go
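To sanity-check the procedure from SQL Server Management Studio before wiring up BizTalk, you can populate the table types by hand; the values below are made up.

DECLARE @Header dbo.SalesHeaderTableType
DECLARE @Items  dbo.SalesItemTableType

INSERT INTO @Header (OrderNumb, OrderDate) VALUES ('SO-1001', GETDATE())
INSERT INTO @Items  (ItemCode, Qty, Price) VALUES ('WIDGET-A', 2, 9.95)
INSERT INTO @Items  (ItemCode, Qty, Price) VALUES ('WIDGET-B', 1, 4.50)

DECLARE @NewId int
EXEC @NewId = dbo.InsertSalesOrder @SalesHeader = @Header, @SalesItem = @Items
SELECT @NewId AS SalesHeaderID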

4. The next step is to generate the schema for the stored procedure created above using the “Consume Adapter Service” wizard.

image

This will create the following three files: Sales.DataSetSchema.xsd, Sales.Procedure.dbo.xsd and Sales.TableType.dbo.xsd.

5. Create the schema for the input data, which is just a simple Header–Detail data structure as shown below. Note the Detail node's Max Occurs is set to unbounded, as there can be multiple detail line items.

image

6. Create a map from the input data schema to the generated schema for the stored procedure. The functoid in the map connecting the Detail node to the SalesItemTableType is just a Looping functoid.

image

7. Create a simple orchestration to write the sales data to the database. The orchestration simply picks up the XML sales data from a pickup folder and inserts the data using a static request/response port. The response is the identity ID returned from the stored procedure, which is then written to a drop folder as XML.

image

8. Compile and deploy the application to BizTalk. The request/response port is configured to use the WCF-SQL adapter.

image

The SOAP Action header of the WCF-SQL transport properties is configured as below.

[Image: SOAP Action header configuration on the WCF-SQL transport properties]
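For reference, the action mapping for this port would look roughly like the snippet below; the operation name is illustrative and must match the operation on the orchestration port, while Procedure/<schema_name>/<procedure_name> is the action format for plain stored procedures.

<BtsActionMapping xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Operation Name="InsertSalesOrder_OP" Action="Procedure/dbo/InsertSalesOrder" />
</BtsActionMapping>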

That's pretty much it. Just drop a test message and check that the tables have been populated.

Below are the results after dropping the test message into the pickup folder a few times.

[Image: SalesHeader and SalesItem tables populated with the test data]


Enjoy.

Workflow 4.0 Re-Hosting

I had the privilege of working with Microsoft on extending their AppFabric Workflow Manager.

The AppFabric Starter Kit is designed to be a reference implementation; a secondary objective is to provide sample code that developers could use to manage the life-cycle of the .NET 4 Workflow Services on Windows Server AppFabric. It does this by providing a complete client-side application that enables design of .NET Windows Workflow Services outside of the Visual Studio IDE; deployment of these Services from the Client to the AppFabric server; versioning of the deployed service definitions; and finally management of Service Instances in a custom monitoring console.

You can download it from here: http://code.msdn.microsoft.com/Windows-Server-AppFabric-332d1ebc

RunAs command line for launching IE

I have used this on many occasions to run IE under different user credentials.

Below is a snippet using the “RunAs.exe” utility.

C:\Windows\System32\runas.exe /user:mylogname@abc.com /savecred "C:\Program Files\Internet Explorer\iexplore.exe"

I normally create a batch file on my desktop to execute this command line. The /savecred switch prompts for the password the first time and then saves it for later use.

SQL2008 MARS & Parallel Framework in .Net4

I had a scenario where I wanted to query several database tables at once and process the data as fast as possible using a single database connection.

After some thought I decided to look at MARS (Multiple Active Result Sets), which has been available since SQL 2005. This gives the option to open multiple data readers on the one SQL database connection and allows the data to be read in an interleaved fashion. Although not true parallel reading, it does have the advantage of keeping only one database connection open.

I had also been reading about the new parallel framework available in .NET 4.0. Because I have several data readers open, I can process each reader in parallel (using multi-core processors) and take advantage of the yield points: statements that return rows to the client are allowed to interleave execution while rows are being sent to the client.

Below is the sample prototype code to test my theory.

public void GetUserDetailsSharedConn()
{
    List<mdAddress> addresses;
    List<mdUser> users;

    string dbCon = "Server=.;Database=AdventureWorks;Trusted_Connection=yes;MultipleActiveResultSets=true;";
    using (SqlConnection conn = new SqlConnection(dbCon))
    {
        string sql1 = "SELECT * FROM [Person].[Address]";
        string sql2 = "SELECT * FROM [Person].[Contact]";

        SqlCommand cmd1 = new SqlCommand(sql1, conn);
        SqlCommand cmd2 = new SqlCommand(sql2, conn);
        cmd1.CommandTimeout = 500;
        cmd2.CommandTimeout = 500;
        conn.Open();
        SqlDataReader dr1 = cmd1.ExecuteReader();
        SqlDataReader dr2 = cmd2.ExecuteReader();

        Parallel.Invoke
            (
                () => PopulateAddressSharedConn(dr1, out addresses),
                () => PopulateUsersSharedConn(dr2, out users)
            );
    }
}

private void PopulateUsersSharedConn(SqlDataReader rd, out List<mdUser> users)
{
    users = new List<mdUser>();
    // Call Read before accessing data.
    while (rd.Read())
    {
        mdUser user = new mdUser();
        user.Title = rd["Title"] == DBNull.Value ? string.Empty : (string)rd["Title"];
        user.FirstName = (string)rd["FirstName"];
        user.LastName = (string)rd["LastName"];
        users.Add(user);
    }
}

private void PopulateAddressSharedConn(SqlDataReader rd, out List<mdAddress> addresses)
{
    addresses = new List<mdAddress>();
    // Call Read before accessing data.
    while (rd.Read())
    {
        mdAddress address = new mdAddress();
        address.Address1 = (string)rd["AddressLine1"];
        address.Address2 = rd["AddressLine2"] == DBNull.Value ? string.Empty : (string)rd["AddressLine2"];
        address.City = (string)rd["City"];
        addresses.Add(address);
    }
}

I used the Parallel.Invoke method because I need to wait until I have received all the data back from all the threads.

Below is the SQL Profiler trace after executing the code in quick succession. On the first query set, the ‘Address’ batch yielded until the ‘Contact’ batch completed. However, on the highlighted second query the ‘Contact’ batch waited until the ‘Address’ batch completed.

[Image: SQL Profiler trace showing the interleaved batches]

How to gac a component at compile time

Rather than manually having to GAC DLLs after compiling, you can add build events to the project to automatically unregister and register the compiled DLLs in the GAC.

The following pre-build event will unregister the DLL:

  • "%programfiles%\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\x64\gacutil.exe" /u "$(TargetName)"

This post-build event will register the DLL after it successfully compiles:

  • "%programfiles%\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\x64\gacutil.exe" /i "$(TargetPath)"

Note the gacutil.exe file may be at a different path depending on the OS installed. The path above is for a Windows 2008 x64 operating system.

Add the scripts above to the Build Events of the project as shown below.

[Image: project Build Events with the gacutil commands]


Enjoy

Writing to the Windows Event Log from a BizTalk Orchestration.

The following code writes a message to the Windows Application event log under the existing event source “BizTalk Server”.

System.Diagnostics.EventLog.WriteEntry("BizTalk Server","This is the error message",System.Diagnostics.EventLogEntryType.Error,0);


The code above is normally placed inside an Expression shape in the orchestration, shown below as “Write eventlog”.

[Image: orchestration with the “Write eventlog” Expression shape]

However, if you need to write to your own event source, you must first create the key in the registry under the path “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application”. The BizTalk host account must have read access to the Security key and Read/Write access to the new event key.

I tend to create a new event source for each application I deploy, so I decided to use a PowerShell script to create the key and set the permissions for each of the environments to save time.

Use the script below, replacing the two variable values $EventName and $AccountName with the name of your event source and the account of the host instance running the orchestration.

#Define variables
[string]$EventName = "BizTalkTest2"
[string]$AccountName = "vm-tpdev01\zx_BTHost1Svc"

#Create the new key
$keyName = "HKLM:\SYSTEM\CurrentControlSet\services\eventlog\Application\" + $EventName
md $keyName

#Set the permissions
$acl = Get-Acl $keyName

# Grant the service account full control to this key
$person = [System.Security.Principal.NTAccount]$AccountName
$access = [System.Security.AccessControl.RegistryRights]"FullControl"
$inheritance = [System.Security.AccessControl.InheritanceFlags]"None"
$propagation = [System.Security.AccessControl.PropagationFlags]"None"
$type = [System.Security.AccessControl.AccessControlType]"Allow"
$rule = New-Object System.Security.AccessControl.RegistryAccessRule($person,$access,$inheritance,$propagation,$type)
$acl.AddAccessRule($rule)
Set-Acl $keyName $acl

#Grant read-only access to the Security key for the BTS host account
$acl = Get-Acl HKLM:\SYSTEM\CurrentControlSet\services\eventlog\Security
$person = [System.Security.Principal.NTAccount]$AccountName
$access = [System.Security.AccessControl.RegistryRights]"ReadKey"
$inheritance = [System.Security.AccessControl.InheritanceFlags]"None"
$propagation = [System.Security.AccessControl.PropagationFlags]"None"
$type = [System.Security.AccessControl.AccessControlType]"Allow"
$rule = New-Object System.Security.AccessControl.RegistryAccessRule($person,$access,$inheritance,$propagation,$type)
$acl.AddAccessRule($rule)
Set-Acl HKLM:\SYSTEM\CurrentControlSet\services\eventlog\Security $acl

#Check permissions
Get-Acl $keyName | Format-Table -Wrap
Get-Acl HKLM:\SYSTEM\CurrentControlSet\services\eventlog\Security | Format-Table -Wrap




Next, open a PowerShell window using “Run as Administrator” and execute the script to create the key and permissions.

Enjoy.

Content based routing with WCF 4.0

This example shows how easy it is to route messages based on message headers using WCF 4.

For this example I built two similar request-response WCF services with different service contract namespaces. One simply adds two numbers together, while the other subtracts them; each returns the result.

A Windows Forms application was used as the client; it accepts two numbers and the type of mathematical operation to perform, either addition or subtraction. Once the operation is entered, the message header is updated with the operation and the message is submitted to the WCF routing service. The routing service reads the header information and reroutes the message to the correct WCF service, and the result is displayed on the form.
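As a rough sketch, the client can stamp the header with an OperationContextScope before the call; the proxy class and message type names below are illustrative, not the actual generated names.

using System.ServiceModel;
using System.ServiceModel.Channels;

// client is the generated proxy pointing at the routing service endpoint
using (new OperationContextScope(client.InnerChannel))
{
    // Write the Operation header the routing filters look for
    var header = MessageHeader.CreateHeader(
        "Operation",                             // header element name
        "http://www.examples.com/Adder/2010/10", // must match the filter's namespace
        "Add");                                  // "Add" or "Sub"
    OperationContext.Current.OutgoingMessageHeaders.Add(header);

    var result = client.Equate(new EquateRequestMessage { Value1 = 2, Value2 = 3 });
}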

The main focus of this post is the WCF routing service, which I have hosted under IIS. It consists of only two files: the .svc file and the web.config file. Routing is based on an element called “Operation” in the header section of the received SOAP message.

[Image: SOAP message header showing the Operation element]

The .svc file only contains the following entry, which is very similar to a typical WCF web service except that the service type is the framework-provided RoutingService.

<%@ ServiceHost Language="C#" Debug="true" Service="System.ServiceModel.Routing.RoutingService,System.ServiceModel.Routing, version=4.0.0.0, Culture=neutral,PublicKeyToken=31bf3856ad364e35"  %>


The real business is in the web.config file below, where the routing table and filters are defined. The key parts are the routing and client endpoint sections, which I describe in more detail below.

<?xml version="1.0"?>
<configuration>

  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
  <system.serviceModel>

    <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
    <services>
      <service behaviorConfiguration="RoutingBehavior" name="System.ServiceModel.Routing.RoutingService">
        <endpoint address="" binding="basicHttpBinding" bindingConfiguration=""
           name="MathRouter" contract="System.ServiceModel.Routing.IRequestReplyRouter" />
      </service>
    </services>

    <!-- Behaviour section -->
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
          <serviceMetadata httpGetEnabled="true"/>
          <!-- To receive exception details in faults for debugging purposes, set the value below to true.  Set to false before deployment to avoid disclosing exception information -->
          <serviceDebug includeExceptionDetailInFaults="false"/>
        </behavior>


        <!-- Routing behaviour-->
        <behavior name="RoutingBehavior">
          <routing routeOnHeadersOnly="true" filterTableName="filterTable1" />
          <serviceDebug includeExceptionDetailInFaults="true"/>
          <serviceMetadata httpGetEnabled="true" />
        </behavior>

      </serviceBehaviors>
    </behaviors>


    <!-- Routing section-->
    <routing>
      <namespaceTable>
        <add prefix="ha" namespace="http://www.examples.com/Adder/2010/10"/>
        <add prefix="hs" namespace="http://www.examples.com/Subtractor/2010/10"/>
        <add prefix="s" namespace="http://schemas.xmlsoap.org/soap/envelope/"/>
      </namespaceTable>

      <filters>
        <filter name="AddFilter" filterType ="XPath" filterData="/s:Envelope/s:Header/ha:Operation/text() = 'Add'"/>
        <filter name="SubFilter" filterType="XPath" filterData="/s:Envelope/s:Header/hs:Operation/text() = 'Sub'"/>
      </filters>

      <filterTables>
        <filterTable name="filterTable1" >
          <add filterName="AddFilter" endpointName="Adder" priority="0"/>
          <add filterName="SubFilter" endpointName="Subtractor" priority="0"/>
        </filterTable>
      </filterTables>
    </routing>


    <!-- Client endpoints for the services-->
    <client>
      <endpoint address="http://adderservice/adder.svc" binding="basicHttpBinding" bindingConfiguration="" contract="*" name="Adder"/>
      <endpoint address="http://subtractorservice/subtractor.svc" binding="basicHttpBinding" bindingConfiguration="" contract="*" name="Subtractor"/>
    </client>
  </system.serviceModel>


  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true"/>
  </system.webServer>

</configuration>


Below is what the SOAP message looks like coming over the wire, captured with the WCF Test Client against the Adder web service.

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Header>
    <Action s:mustUnderstand="1" xmlns="http://schemas.microsoft.com/ws/2005/05/addressing/none">http://EquateRequestMessage/Action</Action>
    <h:Operation i:nil="true" xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns:h="http://www.examples.com/Adder/2010/10" />
  </s:Header>
  <s:Body>
    <EquateRequestMessage xmlns="http://www.examples.com/Adder/2010/10">
      <Value1>0</Value1>
      <Value2>0</Value2>
    </EquateRequestMessage>
  </s:Body>
</s:Envelope>


In the routing section below I have used a namespace table to alias the namespaces of the service contracts. Lines 2 and 3 are aliases for the WCF services that I built. Line 4 is the standard SOAP envelope namespace.

   1: <namespaceTable>
   2:   <add prefix="ha" namespace="http://www.examples.com/Adder/2010/10"/>
   3:   <add prefix="hs" namespace="http://www.examples.com/Subtractor/2010/10"/>
   4:   <add prefix="s" namespace="http://schemas.xmlsoap.org/soap/envelope/"/>
   5: </namespaceTable>

The other section of interest is the filters. Here I am using XPath queries to read the type of mathematical operation; depending on which filter matches, the message is routed to the endpoint defined in the filter table. Note that the XPath expressions use the namespace aliases.

<filters>
  <filter name="AddFilter" filterType="XPath" filterData="/s:Envelope/s:Header/ha:Operation/text() = 'Add'"/>
  <filter name="SubFilter" filterType="XPath" filterData="/s:Envelope/s:Header/hs:Operation/text() = 'Sub'"/>
</filters>


The filter tables determine which endpoint to reroute the message to.

<filterTables>
  <filterTable name="filterTable1">
    <add filterName="AddFilter" endpointName="Adder" priority="0"/>
    <add filterName="SubFilter" endpointName="Subtractor" priority="0"/>
  </filterTable>
</filterTables>


These are the endpoint addresses for the WCF services that I had built.

<!-- Client endpoints for the services-->
<client>
  <endpoint address="http://adderservice/adder.svc" binding="basicHttpBinding" bindingConfiguration="" contract="*" name="Adder"/>
  <endpoint address="http://subtractorservice/subtractor.svc" binding="basicHttpBinding" bindingConfiguration="" contract="*" name="Subtractor"/>
</client>


That's it: routing using just a configuration file. This is one example of using the new routing features in WCF 4.0.

Enjoy.

Creating a composite response schema for the BizTalk SQL WCF Adapter

I had a scenario where a BizTalk application requested several different complex XML documents from a SQL database. Returning XML from the stored procedures was chosen because of the relational structure of the data and to avoid having to use multiple maps to get to the final XML structure.

Instead of having to create a new send port for every query because the response schemas were different for each request, I decided to use a composite schema that allowed me to use a single send port.

Below are the steps I used to develop the solution for BTS2009 and SQL2008. I will use the sample AdventureWorks database and a simple XML structure for each entity involved.

1. Create the required stored procedures

Create procedure [BTS].[Sel_Contacts_XML]
as
WITH XMLNAMESPACES ('http://BT.Composite.Schemas.WcfSQL/ContactResponse' as nsql1)
select Title, FirstName, LastName, EmailAddress
from Person.Contact
for XML PATH('nsql1:Contact')
go

Create procedure [BTS].[Sel_Products_XML]
as
WITH XMLNAMESPACES ('http://BT.Composite.Schemas.WcfSQL/ProductResponse' as nsql2)
select ProductNumber, Name, ListPrice
from Production.Product
for XML PATH('nsql2:Product')
go


2. Create the schemas for each stored procedure in the BizTalk project. Ensure that the target namespace matches the namespace defined in the stored procedure. For this example I called them ContactResponse and ProductResponse.

3. Create a new schema that will import the above schemas. This schema will be used as the common composite response schema for each of the WCF SQL requests.

[Image: new schema importing the two response schemas]

Below is the final composite schema with the imported ContactResponse and ProductResponse schemas. Note that the target namespace and the root element of this composite schema will be specified in the WCF adapter properties later on.

[Image: final composite schema with the imported ContactResponse and ProductResponse schemas]
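In XSD terms the composite schema is essentially two imports plus a new root element. A rough sketch is below; the root name and schemaLocation values are illustrative, and it assumes the response schemas expose global Contact and Product elements.

<xs:schema targetNamespace="http://BT.Composite.Schemas.WcfSQL/CompositeResponse"
           xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:nsql1="http://BT.Composite.Schemas.WcfSQL/ContactResponse"
           xmlns:nsql2="http://BT.Composite.Schemas.WcfSQL/ProductResponse"
           elementFormDefault="qualified">
  <xs:import namespace="http://BT.Composite.Schemas.WcfSQL/ContactResponse" schemaLocation="ContactResponse.xsd"/>
  <xs:import namespace="http://BT.Composite.Schemas.WcfSQL/ProductResponse" schemaLocation="ProductResponse.xsd"/>
  <xs:element name="CompositeResponse">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="nsql1:Contact" minOccurs="0" maxOccurs="unbounded"/>
        <xs:element ref="nsql2:Product" minOccurs="0" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>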

4. Create the orchestration to call the stored procedures. Note there is only one send port, but there is a separate operation for each stored procedure. Each operation was created manually, and the name of each operation is important as it is used in the SOAP action header configuration later.

For this demonstration the orchestration requested all the contacts from the database first and wrote the response to a file; a second request was then made for the products, and that response was also written to a file.

[Image: orchestration calling both stored procedures through one send port]

5. Once the application is deployed to BizTalk, the WCF-Custom adapter must be configured.

When using stored procedures that output XML, the action property must be defined as follows: XmlProcedure/<schema_name>/<procedure_name>

The operation name must match the name of the operation inside the orchestration.

[Image: WCF-Custom adapter action configuration]


Below is the action mapping. Note the two operations that match the port operations in the orchestration.

<BtsActionMapping xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Operation Name="SelContacts_OP" Action="XmlProcedure/BTS/Sel_Contacts_XML" />
  <Operation Name="SelProducts_OP" Action="XmlProcedure/BTS/Sel_Products_XML" />
</BtsActionMapping>

Next, the binding properties for stored procedures outputting XML must be set to the target namespace and the root node name of the composite schema that was created.

[Image: binding properties showing the XML stored procedure root node name and namespace]


This is a sample output extract from the first request, for the contacts.

image

And this is the output from the second request for the products.

image