Monday, December 23, 2013

Building a cross platform solution for Windows Phone and Windows 8. Part V: Binding events to commands

In the previous part of this series, we did quite a lot: we mocked out the platform differences from our ViewModels, we introduced dependency injection and we created base classes for our platform specific pages.

This part will be less complicated, but it again covers something you will definitely need if you want to end up with a reusable framework for your Windows Phone and Windows 8 applications. Up until now we have created Views that we can databind to our ViewModels. For this, our ViewModels contain either data properties or command properties. It is with this last kind of property that you might run into databinding issues. And this doesn't only apply to cross platform applications: you will run into it every time you take the MVVM approach.

The thing with command properties is that, out of the box, you can only databind them to controls that expose a Command property. A Button, for instance, poses no problem, since it has one. But what if you wish to trigger a command based on, for instance, a Tap event? That is not possible by default, but with some extra plumbing you can make it work nonetheless.

What we'll do to get around this is write our own TriggerAction. A TriggerAction describes an action to perform when a trigger fires. The good thing is that you can attach a TriggerAction to an EventTrigger, and the EventTrigger will then execute your TriggerAction whenever a certain event occurs.

To get all this working, the first thing you will have to do is add a reference to the System.Windows.Interactivity assembly. On Windows Phone you can find this reference under Assemblies - Extensions (I had to look for it too). For Windows 8 you will not find this assembly in your references, but you can use a NuGet package instead.

Once you've done this, you will find you can add an EventTrigger for any event you would like. In the following example I have added one for the Tap event on a Border control.

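The XAML for this looks roughly as follows (a sketch: the framework prefix maps to the namespace holding my own EventToCommand class, and CellTappedCommand is just an illustrative ViewModel command):

<!-- xmlns:i maps to System.Windows.Interactivity -->
<Border Background="Transparent">
    <i:Interaction.Triggers>
        <i:EventTrigger EventName="Tap">
            <framework:EventToCommand Command="{Binding CellTappedCommand}" />
        </i:EventTrigger>
    </i:Interaction.Triggers>
</Border>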


The next step, now that we have our EventTrigger, is to write a TriggerAction that translates the event into a command. This is actually not that hard. You'll notice that in the code sample above I use a framework:EventToCommand type, which I wrote myself. This is my TriggerAction, and it has one property: Command. It looks like this.


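In outline it comes down to something like this (a sketch, but it contains everything the class needs):

public class EventToCommand : TriggerAction<FrameworkElement>
{
    // Register Command as a DependencyProperty so it can be assigned from XAML.
    public static readonly DependencyProperty CommandProperty =
        DependencyProperty.Register("Command", typeof(ICommand), typeof(EventToCommand), null);

    public ICommand Command
    {
        get { return (ICommand)GetValue(CommandProperty); }
        set { SetValue(CommandProperty, value); }
    }

    protected override void Invoke(object parameter)
    {
        // Only execute the command when it can actually be executed.
        if (Command != null && Command.CanExecute(parameter))
        {
            Command.Execute(parameter);
        }
    }
}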

This class inherits from TriggerAction&lt;FrameworkElement&gt; (FrameworkElement simply being the base class for all possible controls). The first line registers my Command property as a DependencyProperty. This is needed so you can assign values to this property from within your XAML. That is also why, in the actual property, you see the use of GetValue and SetValue: that is simply how a DependencyProperty works.

The Invoke method in my EventToCommand is more important. It gets executed each time the associated event fires. It checks whether the Command can be executed, and when it can, it executes the Command.

With this, you can databind more or less anything in your views. More or less, because if you want to databind the application bar in a Windows Phone application, you will have to go through some extra effort, which involves a lot more code. You can get a pointer to what needs to be done here.

Next part, I will show you another little trick to easily react to orientation changes.

Other parts in this series:

Part 0: Intro
Part I: Quick sharing of code
Part II: The class library approach
Part III: We need a pattern
Part IV: Mocking out the differences
Part V: Event to command
Part VI: Behaviors for coping with orientation changes
Part VII: Tombstoning

Sunday, December 15, 2013

Building a cross platform solution for Windows Phone and Windows 8. Part IV: Mocking out the differences

Other parts in this series:

Part 0: Intro
Part I: Quick sharing of code
Part II: The class library approach
Part III: We need a pattern
Part IV: Mocking out the differences
Part V: Event to command
Part VI: Behaviors for coping with orientation changes
Part VII: Tombstoning

In the previous part I talked about the MVVM pattern as a great way to share logic in your application. Your ViewModel (and Model) code can be placed in a portable class library, thus making it reusable. But, since it is not possible to put any platform specific code in your PCL, you cannot directly use platform specific services from within a ViewModel (this would make your PCL unsharable between platforms). This poses some challenges, but these can quite easily be overcome. This part in our series will be the most extensive thus far, and will contain the most advanced concepts, like inversion of control, dependency injection, and reflection. But bear with me, it will all be worth it in the end!

Let's first look at one of these platform specific behaviors: Windows Phone and Windows 8 both allow you to navigate from one page to another. The way they do this, however, is different between the two platforms. They both have a Navigate method for this, but the method exists on two different types, and the actual parameter list of this method also differs per platform. You will need to find a solution that gives you a comparable way of navigating between pages on both platforms. You can do this by wrapping the platform specific functionality with an interface. Your ViewModels can then talk to this interface instead of talking directly to the actual navigation service of each platform.

I have, for instance, a MainViewModel which acts as the start page of my application. From this MainViewModel you can navigate to other pages in the application. For this, the MainViewModel gets a reference to an INavigate interface through its constructor (this is my own INavigate interface, not the one you can find in the Windows Store or Windows Phone API). Since I will be navigating in my ViewModels and not in my Views, I decided that navigating to another page is actually the same as navigating to another ViewModel. Also, when I navigate, I want to be able to send along an additional parameter, which contains extra data for the ViewModel I navigate to.


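A sketch of what this looks like in the MainViewModel (the command names and the GameLevel type are illustrative; the INavigate calls are the interesting part):

public class MainViewModel : ViewModelBase
{
    readonly INavigate _navigator;

    public MainViewModel(INavigate navigator)
    {
        _navigator = navigator;
    }

    public ICommand StartGameCommand
    {
        get
        {
            // Navigate to another ViewModel and send extra data along (the chosen level).
            return new RelayCommand(() => _navigator.NavigateTo<SudokuBoardViewModel, GameLevel>(GameLevel.Expert));
        }
    }

    public ICommand ShowRulesCommand
    {
        get
        {
            // Navigate without any extra data.
            return new RelayCommand(() => _navigator.NavigateTo<SudokuRulesViewModel>());
        }
    }
}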

In the above code you can see it is quite easy to navigate to another ViewModel, and send along extra data (navigate to the SudokuBoardViewModel with a specific game level). But you can also choose not to use this extra data (navigate to the SudokuRulesViewModel).

The interface itself has one extra method for navigating back.


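The interface then looks more or less like this (the exact generic constraints are a matter of taste):

public interface INavigate
{
    void NavigateTo<TViewModel>() where TViewModel : class;
    void NavigateTo<TViewModel, TRequest>(TRequest request) where TViewModel : class;
    void NavigateBack();
}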

What you will still have to do now is write a platform specific implementation of the INavigate service for each platform you want to support. The good thing is, you will need to write these implementations only once. After your first app, you can reuse this effort in your next application (and I will also make sure my own little API becomes available on GitHub in the next couple of weeks).


1. Navigating in a Windows Store application


Let's first look at the WindowsStoreNavigator, since this is the easiest to understand. To navigate from one page to the next, you need to make use of the Window.Current.Content property (the root frame). In a clean Windows Store project you get a reference to this in your OnLaunched handler (I set up the entire framework from the OnLaunched handler in the app.xaml.cs file). Next, we also need some means of associating the ViewModel we wish to navigate to with a certain View, since navigating in Windows Store applications is done to views and not to viewmodels. For this I use a viewlocator (IKnowWhereYourViewIs). This viewlocator is nothing more than a dictionary that associates views with viewmodels. This way I don't need to code the binding between view and viewmodel directly in the view or in the viewmodel.


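A sketch of the WindowsStoreNavigator (GetViewTypeFor is an illustrative name for the lookup on the viewlocator):

public class WindowsStoreNavigator : INavigate
{
    readonly Frame _rootFrame;
    readonly IKnowWhereYourViewIs _viewLocator;
    readonly TinyIoCContainer _container;

    public WindowsStoreNavigator(Frame rootFrame, IKnowWhereYourViewIs viewLocator, TinyIoCContainer container)
    {
        _rootFrame = rootFrame;
        _viewLocator = viewLocator;
        _container = container;
    }

    public void NavigateTo<TViewModel>() where TViewModel : class
    {
        var viewType = _viewLocator.GetViewTypeFor<TViewModel>();
        _rootFrame.Navigate(viewType, _container.Resolve<TViewModel>());
    }

    public void NavigateBack()
    {
        if (_rootFrame.CanGoBack) _rootFrame.GoBack();
    }

    // The overload that takes a request is shown a bit further down.
}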

As you can see, I ask the viewlocator for the view that is associated with the viewmodel I wish to navigate to. Next, I tell the rootframe to actually navigate to this view. The second parameter in the second line of the NavigateTo method instantiates my viewmodel. The inversion of control container has been set up with a list of all my viewmodels. For this I used TinyIoC, but you can use whichever IoC container you like. The main advantage of using an IoC container is that any dependencies your viewmodel uses will get injected by the IoC, as long as you registered them at startup.

Navigating with the extra data is a bit more complex. I created an IHandle interface which indicates whether a ViewModel can handle a certain message: if it can, it implements the IHandle interface for that message type. Before navigating to the view, I first let the ViewModel handle the actual request.


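The IHandle interface and the second NavigateTo overload on the same WindowsStoreNavigator then look something like this (again a sketch):

public interface IHandle<TRequest>
{
    void Handle(TRequest request);
}

public void NavigateTo<TViewModel, TRequest>(TRequest request) where TViewModel : class
{
    var viewModel = _container.Resolve<TViewModel>();

    // Let the ViewModel handle the request before we navigate to its view.
    var handler = viewModel as IHandle<TRequest>;
    if (handler != null)
        handler.Handle(request);

    _rootFrame.Navigate(_viewLocator.GetViewTypeFor<TViewModel>(), viewModel);
}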

One more thing is missing after navigating to a certain view: the actual databinding between View and ViewModel. For this I created an extra base class for each View. In its OnNavigatedTo handler, we bind the navigation parameter, which in our case is our ViewModel, to the View's DataContext.


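The base class itself is tiny (a sketch):

public class MvvmPage : Page
{
    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        base.OnNavigatedTo(e);

        // The navigator passed the ViewModel along as the navigation parameter.
        DataContext = e.Parameter;
    }
}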

And that's it. This allows us to navigate from one ViewModel to another while sending along data, and we are now also able to actually databind our ViewModel to the actual View.


2. Navigating in a Windows Phone application


Now that we can navigate in a Windows Store application, let's do the same thing for Windows Phone. As you will see, the API for navigating in Windows Phone is different from the one Windows Store applications use: in Windows Phone you navigate based on a Uri. Also, we will not be able to send along our ViewModel and request as easily. In this case, we will serialize everything before we navigate, i.e. we will serialize our ViewModel type, the actual request and the request type.


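A sketch of the WindowsPhoneNavigator; GetViewUriFor is an illustrative name for the viewlocator lookup, and DataContractJsonSerializer is just one possible choice of serializer:

public class WindowsPhoneNavigator : INavigate
{
    readonly PhoneApplicationFrame _rootFrame;
    readonly IKnowWhereYourViewIs _viewLocator;

    public WindowsPhoneNavigator(PhoneApplicationFrame rootFrame, IKnowWhereYourViewIs viewLocator)
    {
        _rootFrame = rootFrame;
        _viewLocator = viewLocator;
    }

    public void NavigateTo<TViewModel>() where TViewModel : class
    {
        var uri = string.Format("{0}?viewModelType={1}",
            _viewLocator.GetViewUriFor<TViewModel>(),
            Uri.EscapeDataString(typeof(TViewModel).AssemblyQualifiedName));

        _rootFrame.Navigate(new Uri(uri, UriKind.Relative));
    }

    public void NavigateTo<TViewModel, TRequest>(TRequest request) where TViewModel : class
    {
        // Everything travels through the query string: the ViewModel type, the request type and the serialized request.
        var uri = string.Format("{0}?viewModelType={1}&requestType={2}&request={3}",
            _viewLocator.GetViewUriFor<TViewModel>(),
            Uri.EscapeDataString(typeof(TViewModel).AssemblyQualifiedName),
            Uri.EscapeDataString(typeof(TRequest).AssemblyQualifiedName),
            Uri.EscapeDataString(Serialize(request)));

        _rootFrame.Navigate(new Uri(uri, UriKind.Relative));
    }

    public void NavigateBack()
    {
        if (_rootFrame.CanGoBack) _rootFrame.GoBack();
    }

    static string Serialize(object request)
    {
        using (var stream = new MemoryStream())
        {
            new DataContractJsonSerializer(request.GetType()).WriteObject(stream, request);
            return Encoding.UTF8.GetString(stream.ToArray(), 0, (int)stream.Length);
        }
    }
}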

Again, we will use an MvvmPage base class, which all our views can inherit from. In its OnNavigatedTo handler we now have to (re)create our ViewModel and request. I pulled this code out into another class which implements ICreateViewModels. Since views cannot use dependency injection, I wrapped the IoC container in a singleton, which can give me my viewmodelcreator.


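The Windows Phone MvvmPage then looks something like this (Ioc.Current is an illustrative name for the singleton wrapper around the container):

public class MvvmPage : PhoneApplicationPage
{
    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        base.OnNavigatedTo(e);

        // Views can't get constructor injection, so ask the IoC singleton for the viewmodelcreator.
        var creator = Ioc.Current.Resolve<ICreateViewModels>();
        DataContext = creator.Create(NavigationContext.QueryString);
    }
}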

The ViewModelCreator then uses a deserializer and some reflection to recreate the viewmodel and to make it handle the actual request.


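In outline the ViewModelCreator looks like this (a sketch; I use DataContractJsonSerializer here, but any serializer will do):

public class ViewModelCreator : ICreateViewModels
{
    readonly TinyIoCContainer _container;

    public ViewModelCreator(TinyIoCContainer container)
    {
        _container = container;
    }

    public object Create(IDictionary<string, string> queryString)
    {
        // Recreate the ViewModel from the type name that was serialized into the query string.
        var viewModelType = Type.GetType(queryString["viewModelType"]);
        var viewModel = _container.Resolve(viewModelType);

        if (queryString.ContainsKey("request"))
        {
            // Deserialize the request and invoke IHandle<TRequest>.Handle through reflection.
            var requestType = Type.GetType(queryString["requestType"]);
            var request = Deserialize(queryString["request"], requestType);

            var handleMethod = typeof(IHandle<>).MakeGenericType(requestType).GetMethod("Handle");
            handleMethod.Invoke(viewModel, new[] { request });
        }

        return viewModel;
    }

    static object Deserialize(string data, Type type)
    {
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(data)))
        {
            return new DataContractJsonSerializer(type).ReadObject(stream);
        }
    }
}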

The above code is actually the most complex part of the entire framework. And, as said, code like this you only have to write once; after that you can reuse it for your next applications.

Once you've got this plumbing in place, you can navigate between ViewModels in a way that is reusable across Windows Store and Windows Phone applications. It will make it easier to put your logic in a shared library and to make changes to it that get reflected on each platform.

Hang on for the next part, where I will talk about some extra helper classes to make MVVM work better for you.


Other parts in this series:

Part 0: Intro
Part I: Quick sharing of code
Part II: The class library approach
Part III: We need a pattern
Part IV: Mocking out the differences
Part V: Event to command
Part VI: Behaviors for coping with orientation changes
Part VII: Tombstoning

Sunday, December 8, 2013

Building a cross platform solution for Windows Phone and Windows 8. Part III: We need a pattern.

Other parts in this series:

Part 0: Intro
Part I: Quick sharing of code
Part II: The class library approach
Part III: We need a pattern
Part IV: Mocking out the differences
Part V: Event to command
Part VI: Behaviors for coping with orientation changes
Part VII: Tombstoning

In the previous part I talked about portable class libraries and mentioned the need for a pattern in our code. This pattern will be the Model - View - ViewModel pattern, or MVVM for short. It divides your application into more sensible parts and allows you to better define the responsibilities of each individual part. The View, for instance, will be responsible for nothing more than UI related code. This will be your XAML file. When using MVVM, I try to define my entire user interface in the XAML of my application. I hardly ever put anything in the code behind (except maybe for some UI specific code); all application logic goes someplace else.

The Model in this pattern can be either the data you need or the services you will be using. These will contain quite a bit of logic, and we can write them in such a way that they are unit testable.




The ViewModel will sit between the View and the Model and its responsibility will be to shape the data of the Model in a way that is specific for a certain View. As with the Model, it will also be unit testable.

The ViewModel will contain all the properties for the View to databind to. For these properties you can choose to fire the PropertyChanged event - and thus your ViewModel will implement INotifyPropertyChanged.


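Something along these lines (the Cells and Level properties are illustrative examples from a Sudoku viewmodel):

private ObservableCollection<SudokuCell> _cells;
public ObservableCollection<SudokuCell> Cells
{
    get { return _cells; }
    set
    {
        _cells = value;
        OnPropertyChanged(() => Cells);   // a lambda instead of the "Cells" magic string
    }
}

private GameLevel _level;
public GameLevel Level
{
    get { return _level; }
    set
    {
        _level = value;
        OnPropertyChanged(() => Level);
    }
}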

The above piece of code shows two properties of one of my viewmodels. The first one is a collection of items, the second one a simple property. Both properties fire the PropertyChanged event, and as you can see this is done through a lambda expression. I prefer putting this kind of PropertyChanged wrapping in a base class for my viewmodels and exposing the easier lambda syntax there. This gives me compile time checking (and fewer runtime errors when I make typos). The way to do this is shown in the following code example.


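A minimal version of such a base class could look like this:

public abstract class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // Takes a lambda like () => Cells and raises PropertyChanged with that property's name,
    // so a typo becomes a compile error instead of a silent runtime failure.
    protected void OnPropertyChanged<T>(Expression<Func<T>> property)
    {
        var handler = PropertyChanged;
        if (handler == null) return;

        var memberExpression = (MemberExpression)property.Body;
        handler(this, new PropertyChangedEventArgs(memberExpression.Member.Name));
    }
}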


Next to simple data properties, your ViewModel can also contain command properties. The View can databind to these as well (for instance from the Command property of a Button). A command property makes sure something gets executed in the ViewModel or in the Model.


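For example (NewGameCommand and the _gameService call are illustrative):

public ICommand NewGameCommand
{
    get
    {
        // The View databinds to this property; the actual work is delegated to the Model.
        return new RelayCommand(() => _gameService.StartNewGame(Level));
    }
}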

As you can see, I use a RelayCommand class, which again makes the creation of command properties easier. The RelayCommand class implements the ICommand interface and provides an API by which you can create ICommands based on lambdas or delegates. The RelayCommand class looks like this:


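A straightforward implementation looks something like this:

public class RelayCommand : ICommand
{
    readonly Action _execute;
    readonly Func<bool> _canExecute;

    public RelayCommand(Action execute, Func<bool> canExecute = null)
    {
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        // When no CanExecute delegate was given, the command can always run.
        return _canExecute == null || _canExecute();
    }

    public void Execute(object parameter)
    {
        _execute();
    }

    public void RaiseCanExecuteChanged()
    {
        var handler = CanExecuteChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}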

When creating commands for your view to databind to, you can also provide a CanExecute delegate, to indicate when a command can be fired, and thus when the button you databind to should be enabled. This is quite a neat way to automatically enable or disable buttons on your view.


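For example (again with illustrative names):

public ICommand SolveCommand
{
    get
    {
        return new RelayCommand(
            () => _gameService.Solve(Cells),
            () => Cells != null && Cells.Count > 0);   // the bound button stays disabled until there is a board
    }
}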

Databinding to all this is now quite easy from the view.


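For example, with the properties and commands from above:

<StackPanel>
    <TextBlock Text="{Binding Level}" />
    <Button Content="New game" Command="{Binding NewGameCommand}" />
    <Button Content="Solve" Command="{Binding SolveCommand}" />
</StackPanel>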

Using this pattern we get a better separation of concerns in our application. The only thing to watch out for is the danger of your ViewModels getting too big. Keep in mind that the ViewModels' purpose is to shape the data of the Model specifically for a certain View. This means that most of the logic will be the responsibility of the Model, and not necessarily of the ViewModel.

The advantage will be that you can put both Model and ViewModel in your portable class library, giving you the ability to reuse all of the logic between different platforms. And since both Model and ViewModel can be unit tested, you gain a great advantage.



But we will still have to add some extra techniques if we want the ViewModel to be able to use platform specific services. For this we will introduce dependency inversion in the next part of this series.

Other parts in this series:

Part 0: Intro
Part I: Quick sharing of code
Part II: The class library approach
Part III: We need a pattern
Part IV: Mocking out the differences
Part V: Event to command
Part VI: Behaviors for coping with orientation changes
Part VII: Tombstoning

Tuesday, December 3, 2013

Building a cross platform solution for Windows Phone and Windows 8. Part II: The class library approach

Previous parts in this series:

Part 0: Intro

Part I: Quick sharing of code

In the previous part of this series, you learned about linked files. While these are a good technique for quickly sharing pieces of code of your application, I prefer using a class library for shared functionality. The problem is, however, that you cannot use a regular class library in Windows Phone or Windows 8. What you can use is a portable class library: a special kind of class library that makes it possible for you to target multiple platforms.


What you get when using a portable class library is the common subset of functionality that all the targeted platforms support.

For creating one, you can choose 'Portable Class Library' as the project template.



When creating the PCL you will get an additional pop-up asking you which platforms you wish to target with your portable class library.



In our case, choosing Windows Phone and Windows 8 will suffice. For Windows Phone you can also choose whether to target at least version 7.5 or 8. The higher the version you choose, the more possibilities you will have in your portable class library. This is due to the fact that Windows Phone 7 doesn't support the entire .NET framework, so you will bump into some coding restrictions when targeting Windows Phone 7. I myself usually choose at least version 7.5, just for convenience, and because I assume most Windows Phone 7 users have by now upgraded to this free version.

So, now we have a place to put all of our shared code. But what will this shared code be? It's not like we can put the code behind of the XAML pages in this class library. So we will have to take the less conventional path and make use of an additional pattern which makes it possible to put the business logic of our application in the shared class library. This pattern will be the MVVM pattern, since it can very easily be used in databinding scenarios and gives us a reactive user interface through INotifyPropertyChanged. This will be explained in the next part of this series.

You can find additional information on Portable Class Libraries here.


Other parts in this series:

Part 0: Intro
Part I: Quick sharing of code
Part II: The class library approach
Part III: We need a pattern
Part IV: Mocking out the differences
Part V: Event to command
Part VI: Behaviors for coping with orientation changes
Part VII: Tombstoning

Monday, December 2, 2013

Building a cross platform solution for Windows Phone and Windows 8. Part I: Quick sharing of code

The intro to this series you can find here.

If you want to quickly share things between a Windows Phone and a Windows 8 application, you can always use the technique of linked files. Linked files are existing files you add to a project, for instance a file that already exists in your Windows 8 project and which you want to add to your Windows Phone project. But instead of adding this existing file in the normal way, you add it as a link. This is an additional option you can choose in the 'Add Existing Item' dialog box.


This gives you the same file in your other project, but not as a copy: if you make a change to the file in one of the two projects, you will see this change reflected in the other project as well.

You can use this technique for reusing code files or for reusing XAML files. But, as stated in the intro of this series, with XAML files you will have to:
  • remove any platform specific namespaces.
  • only reuse small pieces of XAML (you will for instance see that for pages the start tag in your XAML is different, for Windows Phone this is a PhoneApplicationPage, for Windows 8 this is a Page tag).
If you use the technique of linked files for sharing code files, you can additionally use partial classes to cope with any platform differences. The shared code goes into a partial class that you add as a linked file to the other platform's project. You then add another partial class file (with the same class name, of course) to each project with the platform specific stuff. If you don't like partial classes that much, you can also use inheritance for this: you add your base class as a link and create child classes with the platform specific code in them. A sketch of the partial class approach is shown below.
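
As a sketch of the partial class approach (SettingsStore is an illustrative example; the first part is the shared, linked file, the second part lives only in the Windows Phone project):

// Shared part, written once and added to the other project via 'Add as Link'.
public partial class SettingsStore
{
    public void Save(string key, string value)
    {
        SaveForPlatform(key, value);   // implemented differently per platform
    }
}

// Windows Phone project: the platform specific part of the same class.
public partial class SettingsStore
{
    void SaveForPlatform(string key, string value)
    {
        IsolatedStorageSettings.ApplicationSettings[key] = value;
        IsolatedStorageSettings.ApplicationSettings.Save();
    }
}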

Another technique for coping with platform differences when using linked files is the use of preprocessor directives. They look like if-statements you put in between your code, and they surround the lines that are specific to a certain platform.

  public static void Initialize() 
  { 
#if WINDOWS_PHONE
      if (UIDispatcher != null) 
#else 
#if NETFX_CORE 
      if (UIDispatcher != null) 
#else 
      if (UIDispatcher != null && UIDispatcher.Thread.IsAlive) 
#endif 
#endif 
      { 
          return; 
      } 
#if NETFX_CORE 
      UIDispatcher = Window.Current.Dispatcher; 
#else 
#if WINDOWS_PHONE
      UIDispatcher = Deployment.Current.Dispatcher; 
#else 
      UIDispatcher = Dispatcher.CurrentDispatcher; 
#endif 
#endif 
  }

(The above code is courtesy of MVVMLight, a very good starter framework for doing MVVM).

As you can see in the above code snippet, there are some specific if-statements, called preprocessor directives, added. The ones for NETFX_CORE indicate to the compiler the lines of code that need to be compiled for Windows 8. The WINDOWS_PHONE ones will be compiled for Windows Phone. This can help you add platform specific code as well.

With the technique of linked files you can get quite some reuse of code between Windows Phone and Windows 8 applications. You can also use partial classes, inheritance or preprocessor directives to add platform specific code. But I do want to point out the following flaws in this technique (and this is why I prefer not to use it myself):
  • Each technique (partial classes, inheritance, preprocessor directives) makes it hard to unit test your code, since it directly contains platform specific code.
  • Preprocessor directives, especially when used a lot, make it harder to read and understand your code (I've seen projects go from a couple of preprocessor directives in one file to directives scattered over many files, and thus even less readable code).
In the following parts of this series, I will show you another technique for adding platform specific code, that will, at the same time, keep your code cleaner, more readable, more reusable and more maintainable (now, wouldn't that be nice).


Other parts in this series:

Part 0: Intro
Part I: Quick sharing of code
Part II: The class library approach
Part III: We need a pattern
Part IV: Mocking out the differences
Part V: Event to command
Part VI: Behaviors for coping with orientation changes
Part VII: Tombstoning

Sunday, December 1, 2013

Building a cross platform solution for Windows Phone and Windows 8

This is Part 0 of a multi-part blog series. In this series I will give you a walkthrough of how you can build a cross platform solution that targets both Windows Phone and Windows 8. I will try and give you pointers on how to keep your code as reusable and clean as possible. We will especially look at a couple of patterns and good practices for this. And in the meantime you will get to know some more advanced MVVM.

But before we start, what are actually the problems you face when trying to target two platforms like Windows Phone and Windows 8? They are both XAML based, right, so this shouldn't be too big of a problem. Well, although both platforms are on the path of convergence, meaning moving towards each other, there are still quite some differences you need to take into account.

One of those differences is the actual XAML you can use on each platform. Although the controls on each platform might be the same, they often live in different namespaces. There are also controls that are specific to either Windows Phone - like the PivotControl - or to Windows 8 - like the FlipView control.




This makes sharing your XAML less obvious.

I wouldn't recommend sharing too much of your XAML between Windows Phone and Windows 8 applications anyway. You will see that you will mostly try to build a user experience that is specific to the platform at hand. Keep in mind as well that both platforms support quite different form factors. The things that will be easiest to share are small pieces of XAML - like things you put into user controls or small data templates.

Apart from differences in your XAML you will also see differences in the actual API's you can use from Windows Phone and Windows 8. Both for instance provide you with an API for navigating between pages in your application, but if you look at the actual Navigate call for each platform, you will find that the call takes different parameters.



This makes it less obvious to share code between the two platforms. However, later in this series, I will show you a technique that will give you quite some advantages for reusing, for instance, your navigation code.

We just have to use a couple of techniques to get around these differences. I will try and give you some pointers in this series. Mainly I want you to get to know some techniques that will allow you to reuse as much of your business logic (the heart of your application) as possible while targeting multiple platforms.

In this series you will find the following parts (and hopefully, if time permits, more to come):

Part 0: Intro
Part I: Quick sharing of code
Part II: The class library approach
Part III: We need a pattern
Part IV: Mocking out the differences
Part V: Event to command
Part VI: Behaviors for coping with orientation changes
Part VII: Tombstoning

I also recently did a talk on this topic at the 2013 MCT Summit, which got filmed. I hope I can soon add the video of it to this series, so you have some extra reference material there. Other extra reference material to come:

  • A list of useful links on cross platform development
  • The slides of my 2013 MCT Summit talk
  • The start of a little framework you can use to build applications that target Windows Phone and Windows 8 (work in progress for now)
So, hope to see you soon for the first part in this series! I'll keep you posted!



Friday, August 16, 2013

Automated JavaScript tasks with Grunt

I am currently working on a JavaScript-only project and was looking for a way to automate some of the JavaScript processes. I was primarily looking for a way to get my tests automated, much like how, when you check in code, your tests get run automatically.

For this I chose Grunt, which can easily execute different tasks: not only can it automate your test runs, it can also automate other JavaScript tasks, like minifying your files, running them through JSLint, and so on. You can find a complete list on the Grunt site.

The project I am working on is a Windows 8 application (not XAML this time, but JavaScript, HTML5, ...). So I am working in a .Net environment with a solution and a couple of projects.

Grunt Basics


To get started with Grunt, the first thing you will need to do is install Node. All tasks you will be running in Grunt need to be installed using Node's package manager npm (you can compare npm to the gem install process in Ruby or NuGet in .NET). After installing Node, you can install Grunt using npm. Once you've done this, you can start creating a grunt file in your web projects. A grunt file contains the different tasks that need to be run (like uglify, jslint, minify, ...). You can find all info on getting started here.

Running jasmine tests with grunt


Now, the thing I wanted to be able to do was run my Jasmine tests through Grunt. For this I have some simple Jasmine tests set up. I could already run these through the Jasmine browser runner. I also tried out a setup that ran my Jasmine tests browserless through PhantomJS with the help of the built-in ReSharper runner, which works great.

For running these same tests with the help of Grunt I needed to install a couple of extra Grunt tasks through npm: grunt-contrib-jasmine and grunt-contrib-connect. The first one is obviously needed to run your Jasmine tests. The second one, connect, can be used to set up a browserless environment for Grunt, so it won't start up a browser session with every run of your grunt file. Connect can also be replaced by Node itself.

The grunt file itself contains tasks for connect and jasmine:

module.exports = function (grunt) {

    // Project configuration.
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        jasmine: {
            src: '<%= pkg.name %>.Web/*.js',
            options: {
                vendor: ['<%= pkg.name %>.Web/Scripts/*.js', '<%= pkg.name %>.Web/lib/*.js'],
                host: 'http://127.0.0.1:<%= connect.test.port %>/',
                specs: '<%= pkg.name %>.Web/specs/*.js'
            }
        },
        connect: {
            test: {
                port: 8000,
                base: '.'
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-jasmine');   
    grunt.loadNpmTasks('grunt-contrib-connect');   

    // Default task(s).
    grunt.registerTask('default', ['connect', 'jasmine']);
       
};

You can see that in the jasmine task, I use the port number of my connect task. In my default task, at the bottom of the file, I first fire up the connect port and once that one is running, I let grunt run my jasmine tests.

Once I now issue the grunt command at the command line, I can see it running my tests.

One step further: automated build


Running my Jasmine tests locally this way is cool by itself, but it would be even nicer if I could integrate this in some sort of automated build process. I.e. check in my code, trigger the Grunt task runner, which will run my tests, run jslint, run uglify, ... Basically, get a finished product at the end of the pipeline.

The project I am working on right now is built on tfsservice, since I wanted to find out what its pros and cons are. This means that, for automating my build, I had to rely on MSBuild to do the trick for me. Now, tfsservice has got iisnode installed on it, so it should be possible to have it run Grunt tasks as well.

To get this working, I altered some things in my Grunt setup. First of all, I reinstalled Grunt and all the packages it uses, so that they get saved locally in my project. This means reissuing the npm install command for grunt, grunt-contrib-jasmine, ... but WITH the --save-dev option. This installs everything into a node_modules folder inside your project, with all the necessary files for each package saved locally. Once you commit your project to your source control system, all packages will also be present locally on the build server and don't need to be installed globally there (something you just cannot do on tfsservice anyway, since you're never sure which build server you will be running on next; that would mean reinstalling all packages with every run, which is just too time consuming - you just don't want to do that).

I additionally installed the grunt-cli package locally (so, again, with the --save-dev option). Grunt-cli is the grunt command line interface. It gives you the grunt command locally (read: on your build server). 

Once you have done this, you can alter your csproj file of the project you want to use grunt in (remember: I am working in a .Net context here). For this, I added an additional target at the end of my csproj file: 

  <Target Name="RunGrunt">
    <Exec ContinueOnError="false" WorkingDirectory="$(MSBuildThisFileDirectory)\.." Command="./node_modules/.bin/grunt --no-color" />
  </Target>

This target uses my locally installed version of Grunt. I added the --no-color option to get the grunt output nicely formatted; if you don't add this option, your output will look pretty messy.

You will also need to tell your build process to run grunt after your build, so also add:

<Project ToolsVersion="4.0" DefaultTargets="Build;RunGrunt" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

After these alterations, you will see your grunt output in the output window of Visual Studio after building your project. This should also make it possible to automatically run grunt on my tfsservice.

For this I made a build definition for my tfsservice. After checking in my code, the grunt task gets run as well; only, for now, it exits with errors (TypeError: Object #<Object> has no method 'existsSync', thrown from return fs.existsSync(filepath);). I looked into these errors and apparently they are due to the fact that tfsservice doesn't use the latest version of Node. I asked the TFS team if they could fix this and they promised me they'd get it done by the end of the month. So, for now, I'm still hoping to get this up and running in a couple of weeks. Normally, once Node gets upgraded, there shouldn't be a problem.





Friday, May 3, 2013

Testing an Aspnet MVC app with Ruby

Why testing through the UI is hard(ly done)


Writing UI tests for a web application can be quite cumbersome and is a painstakingly repetitive task. Also, the UI of an application tends to change often during development, making UI tests a brittle resource. To top this, when running UI tests, you need to count on a browser to act as a host for them, making them a lot slower than regular unit tests.

It is for these and other reasons that people often don't go through the trouble of writing UI tests. And they are, in my opinion, right to do so.

But still, I have seen a lot of applications break in the UI after minor changes. If these errors don't pop up in your staging environment, you are left with a red face once you deploy to production (and yup, I have seen this happen too often not to take steps against it). And no, people don't always click through the entire application while it's on the staging environment. The times I've seen a full test plan executed for an application before it was allowed to go through to production can be counted on (less than) one finger. The only fallback we have here are our unit tests and, as the name says, they are for testing (small) units, not entire applications (read: end-to-end testing).

There are of course testing frameworks you can use to run your UI tests, Selenium being one of them. I have been using Selenium on a couple of projects, but often give up quickly because of the slowness of the tests, because tests are hard to maintain, ... and so on.

Now, I recently followed the Ruby and Rails for n00bs session of my good friend Michel (@michelgrootjans) and got introduced to a couple of Ruby test gems that actually open up a couple of opportunities you can use in your .NET applications as well.

In this blog post I want to give you an overview of how you can set these Ruby test gems up, so they can run UI tests for an ASP .NET MVC application (or any .NET web application that is). The technologies (and gems) I will be describing here are:
  • Ruby (of course)
  • RSpec
  • Capybara
  • Selenium
  • PhantomJS
  • Poltergeist 

Setting up the test environment


First of all, let's get our environment setup. For this, you will need Ruby, which you can find here. Once installed, you can test if all went well by running the following command from the command line:

irb

This should open up a Ruby REPL loop where you can test out statements (for instance 1+1, should give you the answer of 2).

Once this is setup. Exit the REPL loop and start installing all the necessary gems:

gem install rspec

gem install capybara
gem install selenium-webdriver
gem install poltergeist

One other thing we need to install now is PhantomJS, which you can find here, and Firefox (which is used by the default Selenium web driver; I will give you other options further down this blog post). And that's all we need to get started.

Writing a first test with rspec


We can now start writing tests. Our tests will be placed in separate _spec.rb files in a spec folder. For instance, if you want to write specification tests for customers, you will have a customers_spec.rb file; for products, a products_spec.rb file. Of course you are free to use whatever naming and grouping for your specification tests, just make sure you have a spec folder containing at least one _spec file.

A spec looks like this:

describe 'adding a new customer' do
  it 'should be able to create a new customer' do

  end
end

As you can see, you start off with a describe, which tells what kind of behavior you want to test. The it part contains the expected outcome (we will add this later). Don't worry about the Ruby syntax for now. For people using a BDD-style testing framework, the describe and it syntax should look familiar.

The things we will want to test are the behavior of a website. For this we will use the capybara and selenium gems. And since we want this first spec and all following specs to use the same setup (e.g. the url of the site we will be testing), we will use a spec_helper.rb file. In this file we require all the necessary gems and do all of the setup. Each individual spec file will then require this one spec_helper file:

require 'capybara/rspec'
require 'selenium/webdriver'

Capybara.run_server = false
Capybara.default_driver = :selenium
Capybara.app_host = 'http://localhost/TheApplication'

After the require statements, we configure capybara. First of all, we tell it not to run its own server. Since capybara is essentially a Rails testing framework, it runs under a Rack application by default. We won't have this, since our app will be running in IIS (Express) or something similar. Next we tell it which driver to use, and lastly where our application is located (you could test www.google.com if you'd like).

Now that we have this setup, we can start writing a first test. Capybara actually allows you to click buttons and fill out fields in a web application, and that's what we will be doing:

require 'spec_helper'

describe 'adding a new customer', :type => :feature do
  it 'should be able to create a new customer' do
    visit '/Customer'

    click_link 'Add'

    fill_in('first_name', :with => 'Gitte')
    fill_in('last_name', :with => 'Vermeiren')
    fill_in('email', :with => 'mie@hier.be')

    click_button 'Create'

    page.should have_text 'New customer created'
  end
end

As you can see, capybara has methods for clicking links, filling out fields, choosing options, ... and in the end for testing whether you get an expected outcome in your application. You can find the capybara documentation with additional capabilities here.

Also notice I added the :type => :feature hash to the describe, so the test will be picked up as an rspec test.

Once you have this spec file, you can actually run your test. For this fire up a command prompt, cd to the directory just above your spec directory and run the rspec command:

rspec

You will notice this command takes a while to start up, but once it's running, it will start up a Firefox window and show you an error in the command prompt, since I assume you don't have anything implemented yet to make the above test succeed. I leave it up to you, the reader, to write a small web app that gets this test passing. Once you have this and you run the rspec command, you will see the different fields in your browser filled out with the values you specified in your test.

Improving our test

Now, I promised you the return on investment with rspec, capybara, Ruby, ... would be worth the effort. First of all, I can tell you from experience that the above capybara test is a lot cleaner than comparable tests written in .NET. But beyond that, we can do more.

We can start off by having our tests run headless, meaning that we won't be needing a browser window anymore. For this we will use poltergeist and PhantomJS. You will need to alter the spec_helper file for this:

require 'capybara/rspec'
require 'selenium/webdriver'
require 'capybara/poltergeist'

Capybara.run_server = false
Capybara.default_driver = :selenium
Capybara.javascript_driver = :poltergeist
Capybara.app_host = 'http://localhost/TheApplication'

Capybara.register_driver :poltergeist do |app|
  Capybara::Poltergeist::Driver.new(app,
                                    :phantomjs => 'C:\\some_path\\phantomjs.exe')
end

This adds the extra requirement for poltergeist and adds the javascript_driver configuration setting. Additionally we tell capybara where it can find the phantomjs process.

You will also need to add the :js => true hash to your describe statement for it to run in phantomjs:

describe 'adding a new customer', :type => :feature, :js => true  do


If you run your tests now with rspec, you will notice that Firefox does not start; you just get a summary in the command prompt of which tests succeeded or failed.

I ran some tests with our own web application and noticed that the headless tests ran twice as fast as the tests using the browser.

Further improvements

I am actually already quite happy with this first test setup. It's still not the fastest way to run tests, but it does allow me to get in some easy end-to-end testing, or even to start doing some BDD style testing. You can also use cucumber instead of rspec if you'd want to.

In my own setup I extended the above example with some extra configuration settings, to be able to easily switch my testing environment from local to staging and from headless to browser testing.

I also did some tests with Chrome (doable) and IE (very, very slow test runs!), but for now I prefer the Firefox and headless setup.

I would also like to add some extra stuff to set up and clear my database before and after each test. That is something I haven't figured out yet, but it should be easy to add.


Monday, February 11, 2013

A Better Dispatcher with the Factory Facility

In the applications we write, we often use the same principles. First off, there is dependency injection, preferably through the constructor of a class. Second, we often use CQS to get a clear separation between the commands and queries of our application.

For implementing CQS we often utilize some kind of dispatcher. This is a simple class which has Dispatch methods for commands and queries. We can use it like so:

public class DoubleAddressController : Controller
{
    readonly IDispatcher _dispatcher;

    public DoubleAddressController(IDispatcher dispatcher)
    {
        _dispatcher = dispatcher;
    }

    public ActionResult GetExistingFilesForAddress(FindDoubleAddressRequest request)
    {
        var result = _dispatcher.DispatchQuery<FindDoubleAddressRequest, FindDoubleAddressResponse>(request);
        return PartialView(result);
    }
}


We ask the dispatcher to dispatch a query for a certain request and response. I omitted error handling and additional mapping from the above example.

So, the dispatcher gets a certain command or query as an argument and, based on this, needs to ask a certain query or command handler to handle it. Up until now I used the IoC container to get hold of this query handler:

public class Dispatcher : IDispatcher
{
    readonly IWindsorContainer _windsorContainer;

    public Dispatcher(IWindsorContainer windsorContainer)
    {
        _windsorContainer = windsorContainer;
    }

    public TResponse DispatchQuery<TRequest, TResponse>(TRequest request)
    {
        var handler = _windsorContainer.Resolve<IQueryHandler<TRequest, TResponse>>();

        if (handler == null)
            throw new ArgumentException(string.Format("No handler found for handling {0} and {1}", typeof(TRequest).Name, typeof(TResponse).Name));

        try
        {
            return handler.Handle(request);
        }
        finally
        {
            _windsorContainer.Release(handler);
        }
    }
}

The dispatcher has a dependency on our IoC container, in the example above a WindsorContainer, and asks this container to get hold of the specific handler that can handle the request we just got in.

This solution, however, has a couple of problems. First of all, we need a dependency on our IoC container (in the IoC configuration we configure the container with a reference to itself). Second, we resolve a dependency from the container ourselves, which is just not done (service location is known as an anti-pattern). And we need to think about releasing the handler dependency we just resolved - something the developer has to remember, and which I've seen forgotten.

So, there should be a better solution for this, and there is! I recently started refactoring another piece of code which used a factory to get hold of instances and which did not make use of our Castle Windsor container. I started looking at the Castle Windsor documentation, and you can actually configure interfaces that act as factories through its typed factory facility. After this refactoring I thought this factory facility might as well be usable for our not so ideal dispatcher.

First we got rid of the WindsorContainer dependency in the dispatcher:

public class Dispatcher : IDispatcher
{
    private readonly IFindHandlersFactory _handlerFactory;

    public Dispatcher(IFindHandlersFactory handlerFactory)
    {
        _handlerFactory = handlerFactory;
    }

    public TResult Dispatch<TRequest, TResult>(TRequest request)
    {
        var handler = _handlerFactory.CreateFor<TRequest, TResult>(request);
        if (handler == null)
            throw new HandlerNotFoundException(typeof(IQueryHandler<TRequest, TResult>));
        
        return handler.Handle(request);
    }
}


We now use an IFindHandlersFactory: a factory which just finds handlers for us. This interface has only one method defined, CreateFor, with generic type parameters. The thing is that we will not write an implementation for this interface ourselves; instead, the interface will be configured in our Castle Windsor container as a typed factory.

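For reference, the factory interface itself is nothing more than this (its signature sketched to match the dispatcher above):

public interface IFindHandlersFactory
{
    IQueryHandler<TRequest, TResult> CreateFor<TRequest, TResult>(TRequest request);
}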

container.AddFacility<TypedFactoryFacility>();

container
    .Register(
        Component
            .For<IDispatcher>()
            .ImplementedBy<Dispatcher>()
            .LifestyleTransient())
    .Register(
        Component
            .For<IFindHandlersFactory>()
            .AsFactory()
            .LifestyleTransient())
    ;

Once you have done this, Castle Windsor will automatically resolve your handlers, without you having to actually call Resolve for these dependencies.