Thursday, November 17, 2011

TFS Build 2010: BuildNumber and DropLocation

Automated builds for application releases are now standard practice in every major development shop.

Using Team Foundation Server Build 2010 to accomplish this offers many opportunities to improve quality of your releases.

The following approach allows us to generate build drop folders that include the BuildNumber and the Changeset or Label provided. Using this procedure, we can quickly match the binaries on the drop server to the corresponding version.

  1. Branch the DefaultTemplate.xaml and rename it to CustomDefaultTemplate.xaml.

image

  2. Open it for edit (check out).
  3. Go to the Set Drop Location activity and edit the DropLocation property.

image

  4. Write the following expression:

BuildDetail.DropLocationRoot + "\" + BuildDetail.BuildDefinition.Name + "\" + If(String.IsNullOrWhiteSpace(GetVersion), BuildDetail.SourceGetVersion, GetVersion) + "_" + BuildDetail.BuildNumber

  5. Check in the branched template.
  6. Now create a build definition named TestBuildForDev using the new template.

The previous expression sets the DropLocation with the following format: (ChangesetNumber|LabelName)_BuildName_BuildNumber

The first part of the folder name will be the changeset number or the label name (if triggered using labels). Folder names will be generated as follows:

  1. C1850_TestBuildForDev_20111117.1 (changeset numbers are prefixed with the letter C)
  2. LLabelname_TestBuildForDev_20111117.1 (label names are prefixed with the letter L)
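As an illustration (using a hypothetical drop root of \\buildserver\drops), the expression above would produce a full drop path such as:

```
\\buildserver\drops\TestBuildForDev\C1850_TestBuildForDev_20111117.1
```

The middle segment repeats the definition name because BuildDetail.BuildNumber already contains it by default.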

Try launching a build from a Changeset and from a Label. You can specify a Label in the GetVersion parameter in the Queue New Build dialog, on the Parameters tab (for labels, add the “L” prefix):

image

Evento Gratuito de Arquitectura / Architecture Event

Next November 29th I will be participating as a speaker at this free event on development tools and architecture patterns, held at the Microsoft México offices. We will cover several interesting topics about architecture and the construction of development frameworks.

If you are interested in attending, you can contact me through a comment on this blog.

image

Sunday, May 22, 2011

Load Test in Windows Azure with Visual Studio Test Agents

Tomorrow I will be presenting “Load Test in Windows Azure with Visual Studio Test Agents” on MSDN. Hope to see you there.

Register here.

Tuesday, April 26, 2011

Using Moles with DLR (dynamic)

As several people have found, Moles does not work well with DLR. Check Cameron’s post for this issue.

In order to make it work do the following:

  1. Go to folder C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies\
  2. Back up the files Microsoft.Moles.VsHost.exe.Config / Microsoft.Moles.VsHost.x86.exe.Config (depending on your application’s platform)
  3. Modify the startup element and set useLegacyV2RuntimeActivationPolicy to false in the .config file.
  4. Modify the legacyCasPolicy element and set enabled to false in the .config file.
Microsoft.Moles.VsHost.x86.exe.Config
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="false">
    <supportedRuntime version="v4.0" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
  <runtime>
     <legacyCasPolicy enabled="false" />
  </runtime>
</configuration>

I haven’t done further testing on changing the use of legacy CAS Policy when using Moles & VS Test Framework.

Let me know how it works for you.

Moles and Linq to SQL: Mocking Linq to SQL Behavior

In recent years there have been major investments in developing tools that provide good quality assurance features and incorporate industry practices into them.

TDD, DDD, Unit Testing, Continuous Integration, etc.

Microsoft has recently (not so recently actually :)) shipped some very nice features to help us improve our code quality by allowing us to create better tests over units of code.

Unit testing is all about verifying individual units of code (positive and negative testing). Since a great deal of software is not designed with this “unit independence” in mind, it can be a challenge to isolate dependencies for every piece of source code (especially legacy code).

Moles is an Isolation Framework that mocks/emulates the behavior of our components and external libraries, even for private members. You could even mock how a particular System.* class object in .NET behaves (check Pex & Moles site in Microsoft Research).

The following is a repository object that uses a Linq to Sql DataContext internally and doesn't provide a simple way of decoupling database access from its behavior. This means we cannot unit test this class without a SQL database.

Customer Repository
public class CustomerRepository
{
    public IList<Customer> GetCustomers()
    {
        using (var db = new AdventureWorksDataContext())
        {
            return db.Customers.ToList();
        }
    }

    public Customer GetCustomer(int id)
    {
        using (var db = new AdventureWorksDataContext())
        {
            return db.Customers.FirstOrDefault(c => c.CustomerID == id);
        }
    }
}

By using Moles, we could replace inner behavior of this class.

Note: This is White Box Testing, meaning you need to have access to internal implementation in order to know how to mock it.

Mocking with Moles

Download and install Moles.

  1. Add Mole Assemblies for the project hosting the previous class (or the code you wish to test)
  2. Add Mole Assembly for System.Data.Linq
  3. Add Mole Assembly for System.Core

Check this tutorial about Pex & Moles for more information about the previous steps.

The following code fragment unit tests our GetCustomers method:

Overriding Enumeration
[TestMethod()]
[HostType("Moles")]
public void GetCustomersTest()
{
    System.Data.Linq.Moles.MTable<Customer>.AllInstances.GetEnumerator =
        tc =>
        {
            var mockCustomers = new List<Customer>()
            {
                new Customer() { FirstName = "Javier" },
                new Customer() { FirstName = "Jorge" },
                new Customer() { FirstName = "Dario" }
            };

            return mockCustomers.GetEnumerator();
        };

    CustomerRepository target = new CustomerRepository();
    var customers = target.GetCustomers();
    Assert.IsNotNull(customers);
    Assert.IsTrue(customers.Count == 3);
    Assert.AreEqual("Javier", customers[0].FirstName);
}

Check out the use of Moles to specify the enumeration behavior of the Table&lt;Customer&gt; (Customers) that the Linq to Sql provider uses to execute the query against the database. In the previous code, we avoid querying the database and return an in-memory list of customers instead.

When executing a lambda expression using Linq to Sql, the source Linq provider must interpret the query and then issue a command to the database.

In order to intercept the query and return mock results, we must override default behavior for the expression evaluation logic of the Linq Provider.

Overriding Expression Eval
[TestMethod()]
[HostType("Moles")]
public void GetCustomerTest()
{
    var expectedCustomer = new Customer() { FirstName = "Javier", CustomerID = 1 };

    System.Data.Linq.Moles.MTable<Customer>.AllInstances.ProviderSystemLinqIQueryableget =
        tc =>
        {
            var qp = new System.Linq.Moles.SIQueryProvider();
            qp.ExecuteExpression01(exp => expectedCustomer);
            return qp;
        };

    CustomerRepository target = new CustomerRepository();

    var actualCustomer = target.GetCustomer(expectedCustomer.CustomerID);
    Assert.IsNotNull(actualCustomer);
    Assert.AreEqual(expectedCustomer.CustomerID, actualCustomer.CustomerID);
    Assert.AreEqual(expectedCustomer.FirstName, actualCustomer.FirstName);
}

The previous mocking code returns a mock QueryProvider that does not evaluate the expression; it simply returns the in-memory customer reference.

I encourage you to read more about Moles here.

PS: This cool framework intercepts any managed operation by creating a CLR Host for the application.

Monday, April 25, 2011

Visual Studio 2010 crashes with “Exception has been thrown by the target of an invocation”

If you haven’t installed Visual Studio 2010 SP1 yet and you’ve enabled automatic Windows Update, chances are you will end up getting the following error when starting VS2010: Exception has been thrown by the target of an invocation.

A .NET Framework 4 update recently distributed through Windows Update generates this conflict.

Solution: install VS2010 SP1.

Webcast MSDN: Parallel LINQ

Next Wednesday (April 27th) I will be presenting Parallel LINQ and showing some non-trivial techniques to optimize performance in parallel algorithms. Register here.

.NET Community Mexico City

I will be participating as an invited speaker in “Comunidad.NET” in Mexico City next Tuesday (April 26th). My presentation will focus on code quality using Visual Studio tools.

Agenda:

  1. Code Metrics
  2. Code Analysis
  3. Profiling
  4. IntelliTrace
  5. Pex & Moles

You can get more info from Raul’s blog.

Tuesday, April 12, 2011

Async C# 5: Using Async Pattern with WCF and Silverlight

Some time ago I published a way to use a WCF service with the new async pattern in C#. The problem is that this approach won’t work with Silverlight. More specifically, TaskFactory is not available in Silverlight.

So, here is one solution (it does not provide the full functionality of TaskFactory, but it covers 90% of use cases):

MyTaskFactory
public static class MyTaskFactory
{
    public static Task<TResult> FromAsync<TResult>(
        IAsyncResult asyncResult, Func<IAsyncResult, TResult> endMethod)
    {
        var completionSource = new TaskCompletionSource<TResult>();
        if (asyncResult.IsCompleted)
        {
            completionSource.TrySetResult(asyncResult, endMethod);
        }
        else
        {
            System.Threading.ThreadPool.RegisterWaitForSingleObject(
                asyncResult.AsyncWaitHandle,
                (state, timeOut) =>
                {
                    completionSource.TrySetResult(asyncResult, endMethod);
                }, null, -1, true);
        }

        return completionSource.Task;
    }

    static void TrySetResult<TResult>(
        this TaskCompletionSource<TResult> completionSource,
        IAsyncResult asyncResult, Func<IAsyncResult, TResult> endMethod)
    {
        try
        {
            var result = endMethod(asyncResult);
            completionSource.TrySetResult(result);
        }
        catch (OperationCanceledException)
        {
            completionSource.TrySetCanceled();
        }
        catch (Exception genericException)
        {
            completionSource.TrySetException(genericException);
        }
    }
}

The previous code fragment defines a helper method that behaves similar to TaskFactory.
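As a quick sanity check outside Silverlight, note that Task&lt;T&gt; itself implements IAsyncResult, so a task can stand in for the return value of a hypothetical BeginXxx call. The following console sketch repeats the helper compactly so it compiles on its own (the demo task and names are mine, not from any real service):

```csharp
using System;
using System.Threading.Tasks;

// Compact copy of the helper above so this sketch is self-contained.
public static class MyTaskFactory
{
    public static Task<TResult> FromAsync<TResult>(
        IAsyncResult asyncResult, Func<IAsyncResult, TResult> endMethod)
    {
        var completionSource = new TaskCompletionSource<TResult>();
        if (asyncResult.IsCompleted)
        {
            completionSource.TrySetResult(asyncResult, endMethod);
        }
        else
        {
            // Complete the task when the underlying wait handle signals.
            System.Threading.ThreadPool.RegisterWaitForSingleObject(
                asyncResult.AsyncWaitHandle,
                (state, timedOut) => completionSource.TrySetResult(asyncResult, endMethod),
                null, -1, true);
        }
        return completionSource.Task;
    }

    static void TrySetResult<TResult>(
        this TaskCompletionSource<TResult> completionSource,
        IAsyncResult asyncResult, Func<IAsyncResult, TResult> endMethod)
    {
        try { completionSource.TrySetResult(endMethod(asyncResult)); }
        catch (OperationCanceledException) { completionSource.TrySetCanceled(); }
        catch (Exception ex) { completionSource.TrySetException(ex); }
    }
}

public static class MyTaskFactoryDemo
{
    // A running Task<int> plays the role of a Begin method's IAsyncResult;
    // the "end method" simply unwraps its result.
    public static int RunSample()
    {
        Task<int> inner = Task.Factory.StartNew(() => 42);
        Task<int> wrapped = MyTaskFactory.FromAsync<int>(
            inner, ar => ((Task<int>)ar).Result);
        return wrapped.Result;
    }
}
```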

Usage example:

Usage:
var client = new MyServiceClient();
// EchoTaskAsync is an extension method
var result = await client.EchoTaskAsync("zzzz");

And here is the service extension for MyServiceClient (note that I use the service interface contract instead of ClientBase&lt;&gt; as the parameter, since we need access to the Begin/End methods):

Service Async Extensions
public static class ServiceExtensions
{
    public static Task<string> EchoTaskAsync(
        this IMyService client, string text)
    {
        return MyTaskFactory.FromAsync<string>(
            client.BeginEcho(text, null, null),
            ar => client.EndEcho(ar));
    }
}

Try it and let me know if it works for you.

Sunday, April 10, 2011

We are recruiting..

We are looking for smart young .NET software developers who are passionate about technology for our Mexico City office. If you are interested, send us an email with your CV to rrhh@lagash.com specifying that you are interested in the Mexico City positions.

Thanks!

Tuesday, March 29, 2011

Team Foundation Server: Server-side Validation & Interception

Some time ago I presented a simple way to version TFS Web Services in order to intercept and perform server side validations.

In this post I will introduce another way of doing so.

Check out the (poorly documented) ITeamFoundationRequestFilter interface in MSDN. This interface is part of an extensibility mechanism called TFS Filters.

Using this interface as a regular TFS plug-in (meaning you deploy it in the plugins directory, as you do with subscribers), you can inspect method executions, requests, etc.

It is very useful when you need to create an audit log, measure performance (recording execution times or call counts), diagnose connection problems, etc. In fact, when you connect to TFS from a version of Visual Studio lower than 2010, an implementation of this filter (check Microsoft.TeamFoundation.ApplicationTier.PlugIns.Core.UserAgentCheckingFilter) throws an exception indicating that you need a patch to do so.

Validation is the functionality I am personally most interested in.

The interface defines the following methods in order to handle a request life cycle (in chronological order):

  1. BeginRequest: called when a new TFS request (ASP.NET) is about to be executed. At this phase, you do not know which operation is about to be executed (at least not directly).
  2. RequestReady: called when security and message validations have already happened (only at the Web Service level). At this phase, you do not know which operation is about to be executed (at least not directly).
  3. EnterMethod: called when a logical TFS method is about to be called. At this phase, you now know which operation TFS is about to execute (and you even have its parameters).
  4. LeaveMethod: called when a logical TFS method has already been executed.
  5. EndRequest: called when ASP.NET request is about to end.

Sample debugging session (method information available when EnterMethod is executed):

image

Turns out that you can only abort the execution of the current request in the BeginRequest and RequestReady methods:

  1. The only way of doing so is throwing an exception that inherits from Microsoft.TeamFoundation.Framework.Server.RequestFilterException.
  2. At this phase (BeginRequest or RequestReady), you do not yet know which method is being called by the client (at least not directly).
  3. Even if you throw exceptions from the EnterMethod method, you won’t abort the current execution (there is some internal try-catch code preventing the error from aborting the execution). The exception will only be logged.

Note: As in my previous post, the next code sample is only meant to be an experiment in any case :).

By doing some ASP.NET HttpRequest manipulation though, you can implement a validation filter in the RequestReady or BeginRequest methods.

  1. Get the current ASP.NET HttpContext.Request
  2. Read the entire InputStream and then restore its original position (assuming it is not a forward-only stream).
  3. Deserialize the SOAP envelope and read the parameters from there.

So… as I already made it clear, consider other alternatives before using this as production code.

TFS Filter Sample
public class TeamFoundationRequestFilterSample : ITeamFoundationRequestFilter
{
    public void BeginRequest(TeamFoundationRequestContext requestContext)
    {
        // Can abort here
        TfsPackageValidator.ValidatePackage();
    }

    public void RequestReady(TeamFoundationRequestContext requestContext)
    {
        // Can abort here
    }

    public void EndRequest(TeamFoundationRequestContext requestContext)
    {
    }

    public void EnterMethod(TeamFoundationRequestContext requestContext)
    {
        // Cannot abort here
    }

    public void LeaveMethod(TeamFoundationRequestContext requestContext)
    {
        // Cannot abort here
    }
}

The above sample code validates inputs in the BeginRequest method (you can always move it to RequestReady). The TfsPackageValidator class inspects the current ASP.NET request and deserializes the input parameters. If it finds an Update operation over a Work Item (by looking at the SOAP message body), it continues with the validation process. What we are looking for here is the Package element with the update information about the work item.

And here is my TfsPackageValidator class:

TfsPackageValidator class
static class TfsPackageValidator
{
    public static void ValidatePackage()
    {
        var soapXml = ReadHttpContextInputStream();

        var packageElement = ReadUpdatePackageFromSoapEnvelope(soapXml);

        if (packageElement != null)
            ValidatePackage(packageElement);
    }

    static void ValidatePackage(System.Xml.Linq.XElement package)
    {
        var updateWorkItemElement =
            package.Descendants("UpdateWorkItem").FirstOrDefault();

        if (updateWorkItemElement != null)
        {
            var priorityColumnElement =
                updateWorkItemElement.Descendants("Column").Where(
                    c => (string)c.Attribute("Column") == "Microsoft.VSTS.Common.Priority").FirstOrDefault();

            if (priorityColumnElement != null)
            {
                int priority = 0;
                var priorityText = priorityColumnElement.Descendants("Value").FirstOrDefault().Value;
                if (!string.IsNullOrEmpty(priorityText) && int.TryParse(priorityText, out priority))
                {
                    if (priority > 2)
                        throw new TeamFoundationRequestFilterException("Priorities greater than 2 are not allowed.");
                }
            }
        }
    }

    static XElement ReadUpdatePackageFromSoapEnvelope(string soapXml)
    {
        var soapDocument = XDocument.Parse(soapXml);

        var updateName =
            XName.Get(
                "Update",
                "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03");

        var updateElement =
            soapDocument.Descendants(updateName).FirstOrDefault();

        return updateElement;
    }

    static string ReadHttpContextInputStream()
    {
        var httpContext = System.Web.HttpContext.Current;

        string soapXml = null;
        using (var memoryStream = new MemoryStream())
        {
            byte[] buffer = new byte[1024 * 4];
            int count = 0;
            while ((count = httpContext.Request.InputStream.Read(buffer, 0, buffer.Length)) > 0)
                memoryStream.Write(buffer, 0, count);
            memoryStream.Seek(0, SeekOrigin.Begin);
            httpContext.Request.InputStream.Seek(0, SeekOrigin.Begin);

            // Use ToArray() rather than GetBuffer(): GetBuffer() returns the whole
            // internal buffer, including unused trailing bytes.
            soapXml = Encoding.UTF8.GetString(memoryStream.ToArray());
        }

        return soapXml;
    }
}
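The XML-inspection core of the validator can be exercised outside TFS. The envelope below is a hand-written, simplified approximation of the Update message (the real payload TFS sends is larger and may differ in structure); the parsing mirrors the class above:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

public static class PackageValidationDemo
{
    // Hypothetical, simplified approximation of the WorkItemTracking Update body.
    public const string SampleSoap = @"<?xml version='1.0'?>
<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>
  <soap:Body>
    <Update xmlns='http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03'>
      <package xmlns=''>
        <UpdateWorkItem WorkItemID='1'>
          <Columns>
            <Column Column='Microsoft.VSTS.Common.Priority'>
              <Value>3</Value>
            </Column>
          </Columns>
        </UpdateWorkItem>
      </package>
    </Update>
  </soap:Body>
</soap:Envelope>";

    // Returns true when the package would be rejected (priority greater than 2),
    // using the same lookups as TfsPackageValidator.
    public static bool WouldReject(string soapXml)
    {
        var updateName = XName.Get(
            "Update",
            "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03");
        var updateElement = XDocument.Parse(soapXml).Descendants(updateName).FirstOrDefault();
        if (updateElement == null) return false;

        var priorityColumn = updateElement
            .Descendants("Column")
            .FirstOrDefault(c => (string)c.Attribute("Column") == "Microsoft.VSTS.Common.Priority");
        if (priorityColumn == null) return false;

        int priority;
        var priorityText = (string)priorityColumn.Element("Value");
        return int.TryParse(priorityText, out priority) && priority > 2;
    }
}
```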

Try running this code when saving a Work Item with a priority field (by editing a single work item; if you edit the work item in a query view, the message sent to the server will be slightly different). You should see something similar to the following message in the client application:

image

The exception message is displayed in the dialog box (as technical information for the administrator).

Download the sample code.

Friday, March 25, 2011

Webcast MSDN: Migración y construcción de aplicaciones para Windows Azure con VM Role y el modo de administración

I will be participating in a new screencast tomorrow about new Azure features requested by the community: admin mode and VM Role.

Click here to join.

Tuesday, March 15, 2011

Asynchrony in C# 5: Dataflow Async Logger Sample

Check out these (very simple) code examples for TPL Dataflow.

Suppose you are developing an Async Logger to register application events to different sinks or log writers.

The logger architecture would be as follows:

image

Note how blocks can be composed to achieve the desired behavior. The BufferBlock&lt;T&gt; is the pool of log entries to be processed, whereas the linked ActionBlock&lt;TInput&gt; instances represent the log writers or sinks.

The previous composition allows only one ActionBlock to consume entries at a time.

The implementation would look something like the following (add a reference to System.Threading.Tasks.Dataflow.dll, found in %User Documents%\Microsoft Visual Studio Async CTP\Documentation):

TPL Dataflow Logger
var bufferBlock = new BufferBlock<Tuple<LogLevel, string>>();

ActionBlock<Tuple<LogLevel, string>> infoLogger =
    new ActionBlock<Tuple<LogLevel, string>>(
        e => Console.WriteLine("Info: {0}", e.Item2));

ActionBlock<Tuple<LogLevel, string>> errorLogger =
    new ActionBlock<Tuple<LogLevel, string>>(
        e => Console.WriteLine("Error: {0}", e.Item2));

bufferBlock.LinkTo(infoLogger, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
bufferBlock.LinkTo(errorLogger, e => (e.Item1 & LogLevel.Error) != LogLevel.None);

bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Info, "info message"));
bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Error, "error message"));

Note the filter applied to each link (in this case, the Logging Level selects the writer used). We can specify message filters using Predicate functions on each link.

Now, the previous sample is not very useful for a logger, since logging levels are not exclusive (several writers could be needed to process a single message).

Let’s use a BroadcastBlock&lt;T&gt; instead of a BufferBlock&lt;T&gt;.

Broadcast Logger
var bufferBlock = new BroadcastBlock<Tuple<LogLevel, string>>(
    e => new Tuple<LogLevel, string>(e.Item1, e.Item2));

ActionBlock<Tuple<LogLevel, string>> infoLogger =
    new ActionBlock<Tuple<LogLevel, string>>(
        e => Console.WriteLine("Info: {0}", e.Item2));

ActionBlock<Tuple<LogLevel, string>> errorLogger =
    new ActionBlock<Tuple<LogLevel, string>>(
        e => Console.WriteLine("Error: {0}", e.Item2));

ActionBlock<Tuple<LogLevel, string>> allLogger =
    new ActionBlock<Tuple<LogLevel, string>>(
    e => Console.WriteLine("All: {0}", e.Item2));

bufferBlock.LinkTo(infoLogger, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
bufferBlock.LinkTo(errorLogger, e => (e.Item1 & LogLevel.Error) != LogLevel.None);
bufferBlock.LinkTo(allLogger, e => (e.Item1 & LogLevel.All) != LogLevel.None);

bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Info, "info message"));
bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Error, "error message"));

As this block copies the message to all its outputs, we need to define the copy function in the block constructor. In this case we create a new Tuple, but you can always use the identity function if passing the same reference to every output is acceptable.

Try both scenarios and compare the results.
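Both samples assume a LogLevel type that is never shown in the post; the bitwise filters ((e.Item1 &amp; LogLevel.Info) != LogLevel.None) imply a [Flags] enum. A hypothetical definition consistent with the predicates might look like this:

```csharp
using System;

// Hypothetical definition: not part of the original post.
[Flags]
public enum LogLevel
{
    None = 0,
    Info = 1,
    Error = 2,
    Warning = 4,
    All = Info | Error | Warning
}

public static class LogLevelDemo
{
    // The same predicate shape used when linking the blocks above:
    // a message matches a sink when its level overlaps the sink's filter.
    public static bool Matches(LogLevel message, LogLevel filter)
    {
        return (message & filter) != LogLevel.None;
    }
}
```

With this definition, an Info message reaches both the infoLogger and the allLogger, which is exactly why the BroadcastBlock version is needed.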

Asynchrony in C# 5 (Part II)

This article is a continuation of the series on the asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information.

So, let’s continue with TPL Dataflow:

  1. Asynchronous functions
  2. TPL Dataflow
  3. Task based asynchronous Pattern

Part II: TPL Dataflow

Definition (quoting the Async CTP documentation): “TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#.”

This means: data manipulation processed asynchronously.

“TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications.”

Data manipulation is another hot area when designing asynchronous and parallel applications: how do you synchronize data access in a parallel environment? How do you avoid concurrency issues? How do you get notified when data is available? How do you control how much data is waiting to be consumed?

Dataflow Blocks

TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock&lt;T&gt;, which provides storage for some kind of data (instances of &lt;T&gt;).

So, let’s review the data processing blocks available. Blocks are categorized into three groups:

  1. Buffering Blocks
  2. Executor Blocks
  3. Joining Blocks

Think of them as electronic circuitry components :)..

1. BufferBlock&lt;T&gt;: a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption so that only one receiver handles each message (you can have many receivers, but only one will actually process it).

image

2. BroadcastBlock&lt;T&gt;: the same FIFO queue for messages (instances of &lt;T&gt;), but it delivers each message to all consumers (it makes the data available for consumption to N consumers). The developer can provide a function to make a copy of the data if necessary.

image

3. WriteOnceBlock<T>: it stores only one value and once it’s been set, it can never be replaced or overwritten again (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value.

image

4. ActionBlock&lt;TInput&gt;: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data results in an execution of the delegate for each item in the queue.

You could also specify how many parallel executions to allow (degree of parallelism).

image

5. TransformBlock&lt;TInput, TOutput&gt;: this executor block is designed to transform each input; that is why it defines an output parameter. It ensures messages are processed and delivered in order.

image

6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input.

image

7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs).

image

8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs could be of any type you want (T1, T2, etc.).

image

9. BatchJoinBlock<T1, T2, …>: aggregates tuples of collections. It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>).

image

Next time I will show some examples of usage for each TDF block.

* Images taken from Microsoft’s Async CTP documentation.

Monday, March 14, 2011

Microsoft Team Foundation Server 2010 Service Pack 1

Last week Microsoft released the first Service Pack for Team Foundation Server. Several issues have been fixed and included in this patch.

Check out the list of fixes here.

Cool stuff has shipped with this new release, such as the expected Project Server Integration.

PS: note that these annoying bugs have been fixed:

  1. Team Explorer: When you use a Visual Studio 2005 or a Visual Studio 2008 client, you encounter a red "X" on the reporting node of the team explorer.
  2. Source Control: You receive the error "System.IO.IOException: Unable to read data from the transport connection: The connection was closed." when you try to download a source.

Async CTP (C# 5): How to make WCF work with Async CTP

If you have recently downloaded the new Async CTP, you will notice that WCF uses the classic Async Pattern and the event-based async pattern to expose asynchronous operations.

In order to make your service compatible with the new Async/Await Pattern try using an extension method similar to the following:

WCF Async/Await Method
public static class ServiceExtensions
{
    public static Task<DateTime> GetDateTimeTaskAsync(this Service1Client client)
    {
        return Task.Factory.FromAsync<DateTime>(
            client.BeginGetDateTime(null, null),
            ar => client.EndGetDateTime(ar));
    }
}

The previous code snippet adds an extension method to the GetDateTime method of the Service1Client WCF proxy.

Then use it like this (remember to bring the extension method’s namespace into scope):

Code Snippet
var client = new Service1Client();
var dt = await client.GetDateTimeTaskAsync();

Replace the proxy type and operation name with the ones you want to await.
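The same Task.Factory.FromAsync bridge works for any classic Begin/End pair, not only WCF proxies. As a standalone illustration (my own example, using the stream APIs instead of a service proxy):

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;

public static class FromAsyncDemo
{
    // Reads a small file by bridging FileStream.BeginRead/EndRead into a Task,
    // exactly the same shape as the WCF extension method above.
    public static string ReadAllText(string path)
    {
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read,
                                           FileShare.Read, 4096, useAsync: true))
        {
            // Simplification: assumes the file fits in one buffer/read.
            var buffer = new byte[stream.Length];
            Task<int> readTask = Task.Factory.FromAsync<byte[], int, int, int>(
                stream.BeginRead, stream.EndRead, buffer, 0, buffer.Length, null);
            int bytesRead = readTask.Result;
            return Encoding.UTF8.GetString(buffer, 0, bytesRead);
        }
    }
}
```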

Sunday, March 13, 2011

Asynchrony in C# 5 (Part I)

I’ve been playing around with the new Async CTP preview available for download from Microsoft. It’s amazing how language trends are influencing the evolution of Microsoft’s development platform. More effort is being put in at the language level today than in previous versions of .NET.

In this post series I’ll review some major features contained in this release:

  1. Asynchronous functions
  2. TPL Dataflow
  3. Task based asynchronous Pattern

Part I: Asynchronous Functions

This is a means of expressing asynchronous operations. These functions must return void or Task/Task&lt;T&gt; (functions returning void let us implement fire-and-forget asynchronous operations). The two new keywords introduced are async and await.

  • async: marks a function as asynchronous, indicating that some part of its execution may take place some time later (after the method call has returned). Thus, all async functions must include some kind of asynchronous operation. This keyword on its own does not make a function asynchronous though; its nature depends on its implementation.
  • await: allows us to define operations inside a function that will be awaited for continuation (more on this later).

Async function sample:

Async/Await Sample
async void ShowDateTimeAsync()
{
    while (true)
    {
        var client = new ServiceReference1.Service1Client();
        var dt = await client.GetDateTimeTaskAsync();
        Console.WriteLine("Current DateTime is: {0}", dt);
        await TaskEx.Delay(1000);
    }
}

The previous sample is a typical usage scenario for these new features. Suppose we query some external web service to get data (in this case, the current DateTime) and do so at regular intervals to refresh the user’s UI. Note the async and await keywords working together.

The ShowDateTimeAsync method indicates its asynchronous nature to the caller using the async keyword (meaning it may complete after returning control to its caller). The await keyword indicates that the flow of the method will continue executing asynchronously after client.GetDateTimeTaskAsync returns. The latter is the most important thing to understand about how this method actually works.

The flow control of the method will be reconstructed after any asynchronous operation completes (specified with the keyword await). This reconstruction of flow control is the real magic behind the scene and it is done by C#/VB compilers. Note how we didn’t use any of the regular existing async patterns and we’ve defined the method very much like a synchronous one.

Now, compare the following code snippet, in contrast to the previous async/await version:

Traditional UI Async
void ComplicatedShowDateTime()
{
    var client = new ServiceReference1.Service1Client();
    client.GetDateTimeCompleted += (s, e) =>
    {
        Console.WriteLine("Current DateTime is: {0}", e.Result);
        client.GetDateTimeAsync();
    };
    client.GetDateTimeAsync();
}

The previous implementation is somewhat similar to the first one shown, but more complicated. Note how the while loop is implemented as a chained callback to the same method (client.GetDateTimeAsync) inside the event handler (please do not do this in your own application; this is just an example).

How does it work? Using a state machine (a jump table, actually), the compiler expands our code and creates the necessary steps to execute it, resuming pending operations after each asynchronous one completes.

The intention of the new async/await pattern is to let us think and code as we normally do when designing an algorithm. It also allows us to preserve the logical flow control of the program (without using any tricky coding patterns to accomplish this). The compiler will then create the necessary workflow to execute operations as they happen in time.

Thursday, March 10, 2011

Webcast MSDN: El Futuro de C# y Visual Basic

Tomorrow I will be presenting a webcast (Spanish only) about the next versions of C# and VB.NET. We will review some exciting new features.

Register here.

Saturday, February 26, 2011

Visual Studio CodedUI Tests Search Criteria

CodedUI Tests allow developers/testers to perform automated UI tests over Windows desktop applications.

These features are well documented on the web and in several blogs.

In this post I’ll show how to modify generated CodedUI tests to override the UI element search criteria.

This kind of test modification is very frequent in some types of applications.

Some background..

The CodedUI technology works over the MSAA API (Microsoft Active Accessibility). In order to automate user interaction with windows, buttons, tabs, etc. (any interactive UI element), it first needs to find the element and then perform one or more actions on it (this is the Windows implementation for Assistive Technologies).

The initial finding or search process is a necessary step. Just as we locate an element by looking at the screen, MSAA allows automated applications to find elements in different ways.

Searching for a UI element involves matching specific attributes against expected values. When all attributes or properties meet this “search criteria”, a control is found and referenced from the CodedUI test for later use.

Sometimes, finding a UI element with an automated algorithm can be a non-trivial task, since its properties or attributes may change dynamically during the application lifecycle or between executions (name, position, text, etc.).

A CodedUI test is created through a Visual Studio wizard, which lets us record our actions and then generates the corresponding source code (in VB.NET, C#, etc.). The generated source code uses partial classes, allowing us to add custom behavior without modifying the auto-generated file.
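As a quick sketch of that partial-class split (the file names follow the default wizard output; the method names here are hypothetical, just to show that both halves merge into one class):

```csharp
// UIMap.Designer.cs — auto-generated; re-recording may overwrite this file.
public partial class UIMap
{
    public string RecordedAction() => "recorded";
}

// UIMap.cs — hand-written; survives re-recording.
public partial class UIMap
{
    public string CustomSearchLogic() => "custom";
}
```

Both declarations compile into a single UIMap class, which is why custom search code placed in UIMap.cs is safe from the generator.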

For example, finding a WPF button:

this.mUIInnerButton1Button.SearchProperties[WpfButton.PropertyNames.Name] = "Inner Button 1";

The SearchProperties collection contains a standard set of attributes used to search for a certain kind of control (in this case, a WPF button).

We can always change the values we pass to this property bag or collection (overriding the generated default behavior). However, finding a control sometimes requires more elaborate procedures.
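For instance, the CodedUI search API supports operator-based conditions, so a control can be matched by a partial value instead of an exact one. A sketch, reusing the button field from the snippet above:

```csharp
// Match the button by a partial name instead of the exact recorded value.
// PropertyExpressionOperator.Contains is part of the CodedUI search API.
this.mUIInnerButton1Button.SearchProperties.Add(
    new PropertyExpression(
        WpfButton.PropertyNames.Name,
        "Inner Button",
        PropertyExpressionOperator.Contains));
```

This is often enough for mildly dynamic names; the rest of the post covers cases where even that is not sufficient.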

Overwriting Control Definition and Search Criteria..

This means modifying the generated code to change the default search criteria logic.

We do not modify the .Designer.cs file. Instead, we add our new search criteria in the .cs file associated with the UIMap Visual Studio element:

image

Let’s work through the following sample. Suppose we have defined a WPF button with dynamic text (the current DateTime, for demo purposes only). The following XAML is the static content of our main window application.

Code Snippet
<Grid>
    <TabControl>
        <TabItem Header="Tab 1">
            <Button>
                <ContentControl>
                    <StackPanel x:Name="ButtonStackPanel">
                        <TextBox>Text 1</TextBox>
                        <TextBox>Text 2</TextBox>
                        <Button Click="Button1_Click">Inner Button 1</Button>
                        <Button Click="Button2_Click">Inner Button 2</Button>
                        <ComboBox>
                            <ComboBoxItem Content="Item 1" IsSelected="True" />
                            <ComboBoxItem Content="Item 2" />
                            <ComboBoxItem Content="Item 3" />
                        </ComboBox>
                    </StackPanel>
                </ContentControl>
            </Button>
        </TabItem>
        <TabItem Header="Tab 2" />
        <TabItem Header="Tab 3" />
    </TabControl>
</Grid>

Then, in code, we create a third button with dynamic text on it:

Code Snippet
public MainWindow()
{
    InitializeComponent();
    this.Loaded += (s, e) =>
    {
        var dynamicButton =
            new Button()
                {
                    Content = "Dynamic Button: " + DateTime.Now.ToString(),
                };
        dynamicButton.Click += (sender, args) => MessageBox.Show("Dynamic Button Click");
        this.ButtonStackPanel.Children.Add(dynamicButton);
    };
}

If you record actions using the CodedUI wizard, you will get something similar to the following search criteria:

this.mUIDynamicButton2602201Button = new WpfButton(this);
#region Search Criteria
this.mUIDynamicButton2602201Button.SearchProperties[WpfButton.PropertyNames.Name] = "Dynamic Button: 26/02/2011 06:22:29 p.m.";
this.mUIDynamicButton2602201Button.WindowTitles.Add("MainWindow");
#endregion

Try running the test and you’ll end up with an error on the button click (note that the Name attribute contains the current DateTime, so every run generates a different button name). This was expected, of course.

So, let’s modify the generated UI test code to find the control. In UIMap.cs (the non-designer file), add the following code:

Code Snippet
public partial class UIMap
{
    public UIMap()
    {
        this.UIMainWindowWindow.UIItemTabList.UITab1TabPage.Find();
        var tabItemStackPanel = this.UIMainWindowWindow.UIItemTabList.UITab1TabPage.GetChildren()[0];
        var dynamicButton =
            tabItemStackPanel.GetChildren().FirstOrDefault(c => c.Name.Contains("Dynamic"));
        this.UIMainWindowWindow.UIItemTabList.UITab1TabPage.UIDynamicButton2602201Button.CopyFrom(dynamicButton);
    }
}
  1. The custom code is added in the constructor of the UIMap, but you can easily create an Init() method (recommended) and call it before running the test.
  2. We first find the control by searching the children of the parent StackPanel (note in the XAML that the StackPanel is the first and only child of the TabItem).
  3. Once we’ve found the StackPanel, we enumerate its children, filtering the results by control name (we are looking for something that contains the word “Dynamic”).
  4. Finally, we clear the recorded search property values and add the correct name filter. We do this by copying the definition of the button we just found (with the CopyFrom() method), which copies all of its properties to the control used by the test.

Final notes..

  • This is a simple example showing how you can manually search and iterate through the entire UI structure of an application to find elements by position, name, display text, control type, etc.
  • You can implement the same behavior to find any control in the hierarchy.
  • This API can be used to automate applications for any purpose (not only tests).

Tuesday, February 22, 2011

Webcast MSDN: Mejores Prácticas en la Gestión de Requisitos con Visual Studio 2010

I am going to participate in this MSDN webcast (Spanish only) with Maximiliano Déboli from Lagash today.

See you there..

Saturday, February 19, 2011

Overwriting TFS Web Services

In this post I will share a technique I used to intercept TFS web service calls.

This technique is very invasive and requires you to overwrite the default TFS web services behavior. I only recommend taking such an approach when other means of TFS extensibility fail to provide the same functionality (this is not a supported TFS extensibility point).

For instance, intercepting and aborting a Work Item change operation could be implemented using this approach (consider TFS Subscribers functionality before taking this approach, check Martin’s post about subscribers).

So let’s get started.

The technique consists of versioning the TFS web services’ .asmx service classes. If you look into TFS’s .asmx services, you will notice that versioning is supported by creating a class hierarchy between different product versions.

For instance, let’s take the Work Item management service. Check the following .asmx file, located at:

%Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\_tfs_resources\WorkItemTracking\v3.0\ClientService.asmx

The .asmx references the class Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3:

Code Snippet
<%-- Copyright (c) Microsoft Corporation.  All rights reserved. --%>
<%@ webservice language="C#" Class="Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3" %>

The inheritance hierarchy for this service class follows:

image

Note the naming convention used for service versioning (ClientService3, ClientService2, ClientService).

We will need to overwrite the latest service version provided by the product (in this case ClientService3 for TFS 2010).

The following example intercepts and analyzes WorkItem fields. Suppose we need to validate state changes with more advanced logic than the validations/constraints provided by the process template.

Important: Back up the original .asmx file and create one of your own.

  1. Create a Visual Studio Web App Project and include a new ASMX Web Service in the project
  2. Add the following references to the project (check the folder %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\):
    • Microsoft.TeamFoundation.Framework.Server.dll
    • Microsoft.TeamFoundation.Server.dll
    • Microsoft.TeamFoundation.WorkItemTracking.Client.QueryLanguage.dll
    • Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayer.dll
    • Microsoft.TeamFoundation.WorkItemTracking.Server.DataServices.dll
  3. Replace the default service implementation with something similar to the following code:
Code Snippet
/// <summary>
/// Inherit from ClientService3 to override the default implementation.
/// </summary>
[WebService(Namespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03", Description = "Custom Team Foundation WorkItemTracking ClientService Web Service")]
public class CustomTfsClientService : ClientService3
{
    [WebMethod, SoapHeader("requestHeader", Direction = SoapHeaderDirection.In)]
    public override bool BulkUpdate(
        XmlElement package,
        out XmlElement result,
        MetadataTableHaveEntry[] metadataHave,
        out string dbStamp,
        out Payload metadata)
    {
        var xe = XElement.Parse(package.OuterXml);

        // We only intercept WorkItem updates (this sample can easily be extended to capture any operation).
        var wit = xe.Element("UpdateWorkItem");
        if (wit != null)
        {
            if (wit.Attribute("WorkItemID") != null)
            {
                int witId = (int)wit.Attribute("WorkItemID");
                // With this id we can query TFS for more detailed information,
                // using the TFS Client API (assuming the WorkItem already exists).
                var stateChanged =
                    wit.Element("Columns").Elements("Column").FirstOrDefault(c => (string)c.Attribute("Column") == "System.State");

                if (stateChanged != null)
                {
                    var newStateName = stateChanged.Element("Value").Value;
                    if (newStateName == "Resolved")
                    {
                        throw new Exception("Cannot change state to Resolved!");
                    }
                }
            }
        }

        // Finally, we call the base method implementation.
        return base.BulkUpdate(package, out result, metadataHave, out dbStamp, out metadata);
    }
}

4. Build your solution and overwrite the original .asmx with the new implementation referencing our new service version (don’t forget to back it up first).

5. Copy your project’s .dll into the following path:

%Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin

6. Try saving a WorkItem in the Resolved state; the custom service will now reject the change.

Enjoy!