Friday, August 16, 2013

Automated JavaScript tasks with Grunt

I am currently working on a JavaScript-only project and was looking for a way to automate some of my JavaScript processes. Primarily, I wanted my tests automated, so that they run automatically whenever I check in code.

For this I chose grunt, which can easily execute different tasks. Not only can it automate your test runs, it can also automate other JavaScript tasks, like minifying your files, running them through jslint, ... You can find a complete list on the grunt site.

The project I am working on is a Windows 8 application (not XAML this time, but JavaScript, HTML5, ...). So I am working in a .Net environment with a solution and a couple of projects.

Grunt Basics


To get started with grunt, the first thing you will need to do is install node. All tasks you run in grunt need to be installed using node's package manager, npm (you can compare npm to the gem install process in Ruby, or to NuGet in .Net). After installing node, you can install grunt using npm. Once you've done this, you can create a grunt file in your web projects. A grunt file contains the different tasks that need to be run (like uglify, jslint, minify, ...). You can find all info on getting started here.

Running jasmine tests with grunt


Now, the thing I wanted to be able to do was run my jasmine tests through grunt. For this I have some simple jasmine tests set up. I could already run these through the jasmine browser runner. I also tried out a setup that ran my jasmine tests browserless through phantomjs with the help of the built-in ReSharper runner, which works great.
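For reference, such a jasmine test is just a describe block with expectations in it. The sketch below shows the shape of one of these specs; the tiny describe/it/expect stand-ins are only there to make the snippet self-contained outside a jasmine runner (jasmine provides the real ones), and the add function is a made-up example:

```javascript
// Stand-in harness so this file runs outside jasmine; in a real
// spec file, describe/it/expect are provided by jasmine itself.
function describe(name, fn) { fn(); }
function it(name, fn) { fn(); }
function expect(actual) {
    return {
        toEqual: function (expected) {
            if (actual !== expected) {
                throw new Error('expected ' + expected + ', got ' + actual);
            }
        }
    };
}

// A made-up function under test.
function add(a, b) { return a + b; }

describe('add', function () {
    it('sums two numbers', function () {
        expect(add(2, 3)).toEqual(5);
    });
});
```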

For running these same tests with the help of grunt I needed to install a couple of extra grunt tasks through npm: grunt-contrib-jasmine and grunt-contrib-connect. The first one is obviously needed to run your jasmine tests. The second one, connect, can be used to set up a browserless environment for grunt, so it won't start up a browser session with every run of your grunt file. Connect can also be replaced by node itself.

The grunt file itself contains tasks for connect and jasmine:

module.exports = function (grunt) {

    // Project configuration.
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        jasmine: {
            src: '<%= pkg.name %>.Web/*.js',
            options: {
                vendor: ['<%= pkg.name %>.Web/Scripts/*.js', '<%= pkg.name %>.Web/lib/*.js'],
                host: 'http://127.0.0.1:<%= connect.test.port %>/',
                specs: '<%= pkg.name %>.Web/specs/*.js'
            }
        },
        connect: {
            test: {
                port: 8000,
                base: '.'
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-jasmine');   
    grunt.loadNpmTasks('grunt-contrib-connect');   

    // Default task(s).
    grunt.registerTask('default', ['connect', 'jasmine']);
       
};

You can see that in the jasmine task, I use the port number of my connect task. In my default task, at the bottom of the file, I first fire up the connect server and, once that is running, let grunt run my jasmine tests.

Once I now issue the grunt command at the command line, I can see it running my tests.

One step further: automated build


Running my jasmine tests locally this way is cool by itself, but it would be even nicer if I could integrate it in some sort of automated build process. I.e. check in my code, trigger the grunt task runner, which will run my tests, run jslint, run uglify, ... Basically, get a finished product at the end of the pipeline.

The project I am working on right now uses tfsservice, since I wanted to find out what its pros and cons are. This means that, for automating my build, I had to rely on msbuild to do the trick for me. Now, tfsservice has iisnode installed on it, so it should be possible to have it run grunt tasks as well.

To get this working, I altered some things in my grunt setup. First of all, I reinstalled grunt and all the packages it uses, so that they got saved locally into my project. This means reissuing the npm install command for grunt, grunt-contrib-jasmine, ... but WITH the --save-dev option. This records each package as a devDependency in your package.json and installs all the necessary files for each plugin into a node_modules folder inside your project. Once you commit your project to your source control system, all packages will also be present locally on the build server and don't need to be installed globally there. Installing them globally is something you just cannot do on tfsservice anyway: you're never sure which build server you will be running on next, so you would have to reinstall all packages with every run, which is just too time consuming.
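After these installs, the devDependencies section of your package.json records everything the build needs; it ends up looking something like the following (the project name and version numbers are illustrative):

```json
{
  "name": "MyProject",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "~0.4.1",
    "grunt-contrib-jasmine": "~0.5.1",
    "grunt-contrib-connect": "~0.3.0"
  }
}
```

Each npm install with --save-dev both installs the package into node_modules and adds an entry like these.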

I additionally installed the grunt-cli package locally (so, again, with the --save-dev option). grunt-cli is the grunt command line interface; it gives you the grunt command locally (read: on your build server).

Once you have done this, you can alter your csproj file of the project you want to use grunt in (remember: I am working in a .Net context here). For this, I added an additional target at the end of my csproj file: 

  <Target Name="RunGrunt">
    <Exec ContinueOnError="false" WorkingDirectory="$(MSBuildThisFileDirectory)\.." Command="./node_modules/.bin/grunt --no-color" />
  </Target>

This target uses my locally installed version of grunt. I added the --no-color option to get the grunt output nicely formatted; without it, your output will look pretty messy.

You will also need to tell your build process to run grunt after your build, so add RunGrunt to the DefaultTargets attribute of the Project element:

<Project ToolsVersion="4.0" DefaultTargets="Build;RunGrunt" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

After these alterations, you will see your grunt output in the output window of Visual Studio after building your project. This should also make it possible to automatically run grunt on my tfsservice.

For this I made a build definition for my tfsservice. After checking in my code, the grunt task gets run as well; only, for now, it exits with errors:

  return fs.existsSync(filepath);
            ^
  TypeError: Object #<Object> has no method 'existsSync'

I looked into these errors and apparently they are due to the fact that tfsservice doesn't use the latest version of node. I asked the tfs team if they could fix this and they promised me they'd get it done by the end of the month. So, for now, I'm still hoping to get this up and running in a couple of weeks. Normally, once node gets upgraded, there shouldn't be a problem.





Friday, May 3, 2013

Testing an ASP.NET MVC app with Ruby

Why testing through the UI is hard(ly done)


Writing UI tests for a web application can be quite cumbersome and is a painstakingly repetitive task. Also, the UI of an application tends to change often during development, making UI tests a brittle resource. To top this, when running UI tests, you need to count on a browser to act as a host for them, making them a lot slower than regular unit tests.

It is for these and other reasons that people often don't go through the trouble of writing UI tests. And in my opinion, they are right to skip them.

But still, I have seen a lot of applications break in the UI after making minor changes. If these errors don't pop up in your staging environment, you are left with a red face once you deploy to production (and yup, I have seen this happening too often, to not take steps to fix this). And no, people don't always go through the entire application when it's on the staging environment. The times where I've seen full test plan execution for an application, before it is allowed to go through to production, can be counted on (less than) one finger. The only fallback we have here are our unit tests and as the name says, they are for testing (small) units, not entire applications (read: end-to-end testing).

There are of course testing frameworks you can use to run your UI tests, Selenium being one of them. I have been using Selenium on a couple of projects, but often give up quickly because of the slowness of the tests, because tests are hard to maintain, ... and so on.

Now, I recently followed the Ruby and Rails for n00bs session of my good friend Michel (@michelgrootjans) and got introduced to a couple of Ruby test gems that actually open up a couple of opportunities you can use in your .NET applications as well.

In this blog post I want to give you an overview of how you can set these Ruby test gems up, so they can run UI tests for an ASP .NET MVC application (or any .NET web application that is). The technologies (and gems) I will be describing here are:
  • Ruby (of course)
  • RSpec
  • Capybara
  • Selenium
  • PhantomJS
  • Poltergeist 

Setting up the test environment


First of all, let's get our environment set up. For this, you will need Ruby, which you can find here. Once installed, you can test whether all went well by running the following command from the command line:

irb

This should open up a Ruby REPL loop where you can test out statements (for instance 1+1, should give you the answer of 2).

Once this is set up, exit the REPL and start installing all the necessary gems:

gem install rspec

gem install capybara
gem install selenium-webdriver
gem install poltergeist

One other thing we now need to install is PhantomJS, which you can find here, as well as Firefox (which is used by the default selenium web driver; I will give you other options further down this blog post). And that's all we need to get started.

Writing a first test with rspec


We can now start writing tests. Our tests will be placed in separate _spec.rb files in a spec folder. For instance, if you want to write specification tests for customers, you will have a customers_spec.rb file; for products, a products_spec.rb file. Of course you are free to use whatever naming and grouping you like for your specification tests; just make sure you have a spec folder containing at least one _spec file.

A spec looks like this:

describe 'adding a new customer' do
  it 'should be able to create a new customer' do

  end
end

As you can see, you start off with a describe, which tells what kind of behavior you want to test. The it part contains the expected outcome (we will add this later). Don't worry about the Ruby syntax for now. For people using a BDD-style testing framework, the describe and it syntax should look familiar.

The things we will want to test are the behavior of a website. For this we will use the capybara and selenium gems. And since we want this first spec and all following specs to use the same setup for our tests (e.g. the url of the site we will be testing), we will use a spec_helper.rb file. In this file we will require all the necessary gems and do all of the setup. Each individual spec file will then require this one spec_helper file:

require 'capybara/rspec'
require 'selenium/webdriver'

Capybara.run_server = false
Capybara.default_driver = :selenium
Capybara.app_host = 'http://localhost/TheApplication'

After the require statements, we configure capybara. First of all, we tell it not to run its own server: since capybara is essentially a Rails testing framework, it runs under a Rack application by default. We don't want this, since our app will be running in IIS (Express) or something similar. Next we tell it which driver to use, and last, where our application is located (you could test www.google.com if you'd like).

Now that we have this setup, we can start writing a first test. Capybara actually allows you to click buttons and fill out fields in a web application, and that's what we will be doing:

require 'spec_helper'

describe 'adding a new customer', :type => :feature do
  it 'should be able to create a new customer' do
    visit '/Customer'

    click_link 'Add'

    fill_in('first_name', :with => 'Gitte')
    fill_in('last_name', :with => 'Vermeiren')
    fill_in('email', :with => 'mie@hier.be')

    click_button 'Create'

    page.should have_text 'New customer created'
  end
end

As you can see, capybara has methods for clicking links, filling out fields, choosing options, ... and in the end for testing whether you get an expected outcome in your application. You can find the capybara documentation with additional capabilities here.

Also notice I added the :type => :feature hash to the describe, so the test will be picked up as an rspec test.

Once you have this spec file, you can actually run your test. For this fire up a command prompt, cd to the directory just above your spec directory and run the rspec command:

rspec

You will notice this command takes a bit to start up, but once it's running, it will open a Firefox window and show you an error in the command prompt, since I assume you don't have anything implemented yet to make the above test succeed. I leave it up to you, the reader, to write a small web app to get this test passing. Once you have this and you run the rspec command, you will see the different fields in your browser filled out with the values you specified in your test.

Improving our test

Now, I promised you the return on investment with rspec, capybara, ruby, ... would be worth the effort. First of all, I can tell you from experience that the above capybara test is a lot cleaner than comparable tests written in .NET. But beyond that, we can do more.

We can start off by having our tests run headless, meaning that we won't need a browser window anymore. For this we will use poltergeist and phantomjs. You will need to alter the spec_helper file for this:

require 'capybara/rspec'
require 'selenium/webdriver'
require 'capybara/poltergeist'

Capybara.run_server = false
Capybara.default_driver = :selenium
Capybara.javascript_driver = :poltergeist
Capybara.app_host = 'http://localhost/TheApplication'

Capybara.register_driver :poltergeist do |app|
  Capybara::Poltergeist::Driver.new(app,
                                    :phantomjs => 'C:\\some_path\\phantomjs.exe')
end

This adds the extra requirement for poltergeist and adds the javascript_driver configuration setting. Additionally we tell capybara where it can find the phantomjs process.

You will also need to add the :js => true hash to your describe statement for it to run in phantomjs:

describe 'adding a new customer', :type => :feature, :js => true  do


If you run your tests now with rspec, you will notice that Firefox does not start; you just get a summary in the command prompt of which tests succeeded or failed.

I ran some tests with our own web application and noticed that the headless tests ran twice as fast as the tests using the browser.

Further improvements

I am actually already quite happy with this first test setup. It's still not the fastest way to run tests, but it does allow me to get in some easy end-to-end testing, or even to start doing some BDD-style testing. You can also use cucumber instead of rspec if you want.

In my own setup I extended the above example with some extra configuration settings to be able to easily switch my testing environment from my local to the staging environment and from headless to browser testing. 

I also did some tests with chrome (doable) and IE (very, very slow test runs!), but for now prefer the firefox and headless setup. 

I would also like to add some extra stuff to setup and clear my database before and after each test. That is something I haven't figured out yet, but should be easily added. 


Monday, February 11, 2013

A Better Dispatcher with the Factory Facility

In the applications we write, we often use the same principles. First off, there is dependency injection, preferably through the constructor of a class. Second we often use CQS, to get a clear separation between the commands and queries of our application.

For implementing CQS we often utilize some kind of a dispatcher. This is a simple class which has Dispatch methods for commands and queries. We can use it like so:

public class DoubleAddressController : Controller
{
    readonly IDispatcher _dispatcher;

    public DoubleAddressController(IDispatcher dispatcher)
    {
        _dispatcher = dispatcher;
    }

    public ActionResult GetExistingFilesForAddress(FindDoubleAddressRequest request)
    {
        var result = _dispatcher.DispatchQuery<FindDoubleAddressRequest, FindDoubleAddressResponse>(request);
        return PartialView(result);
    }
}


We ask the dispatcher to dispatch a query for a certain request and response. I omitted error handling and additional mapping from the above example.

So, the dispatcher gets a certain command or query as an argument and, based on this, needs to ask a certain query or command handler to handle it. Up until now I used the IoC container to get hold of this query handler:

public class Dispatcher : IDispatcher
{
    readonly IWindsorContainer _windsorContainer;

    public Dispatcher(IWindsorContainer windsorContainer)
    {
        _windsorContainer = windsorContainer;
    }

    public TResponse DispatchQuery<TRequest, TResponse>(TRequest request)
    {
        var handler = _windsorContainer.Resolve<IQueryHandler<TRequest, TResponse>>();

        if (handler == null)
            throw new ArgumentException(string.Format("No handler found for handling {0} and {1}", typeof(TRequest).Name, typeof(TResponse).Name));

        try
        {
            return handler.Handle(request);
        }
        finally
        {
            _windsorContainer.Release(handler);
        }
    }
}

The dispatcher has a dependency on our IoC container, in the example above a WindsorContainer, and asks this container to get hold of the specific handler that can handle the request we just got in.

With this solution, however, there are a couple of problems. First of all, we need a dependency on our IoC container (in the IoC configuration we configure the container with a reference to itself). Second, we resolve a dependency from the container, which is just not done (service location is a known anti-pattern). And we need to think about releasing the handler dependency we just resolved; something the developer needs to remember and which I've seen forgotten.

So, there should be a better solution for this, and there is! I recently started a refactoring on another piece of code which used a factory to get hold of instances and which did not make use of our Castle Windsor container. I started looking at the Castle Windsor documentation: you can actually configure it with classes that act as factories, or with a typed factory facility. After this refactoring, I thought this factory facility might as well be usable for our not so ideal dispatcher.

First we got rid of the WindsorContainer dependency in the dispatcher:

public class Dispatcher : IDispatcher
{
    private readonly IFindHandlersFactory _handlerFactory;

    public Dispatcher(IFindHandlersFactory handlerFactory)
    {
        _handlerFactory = handlerFactory;
    }

    public TResult Dispatch<TRequest, TResult>(TRequest request)
    {
        var handler = _handlerFactory.CreateFor<TRequest, TResult>(request);
        if (handler == null)
            throw new HandlerNotFoundException(typeof(IQueryHandler<TRequest, TResult>));
        
        return handler.Handle(request);
    }
}


We now use an IFindHandlersFactory: a factory which just finds handlers for us. This interface has just one method, CreateFor, with generic type parameters. The thing is that we will not write an implementation for this interface; it will be configured in our Castle Windsor container as a typed factory.


container.AddFacility<TypedFactoryFacility>();

container
    .Register(
        Component
            .For<IDispatcher>()
            .ImplementedBy<Dispatcher>()
            .LifestyleTransient())
    .Register(
        Component
            .For<IFindHandlersFactory>()
            .AsFactory()
            .LifestyleTransient())
    ;

Once you have done this, Castle Windsor will automatically resolve your handlers, without you having to actually call Resolve for these dependencies.

Thursday, November 1, 2012

Unit Testing a Windows 8 Application

I am a big fan of test driving an application. That's why I write almost no code without writing a test for it.

I recently began writing a Windows Store application for Windows 8 and again wanted to test drive this app. If you look on MSDN at how to add a test project to your Windows 8 app, you will see you can add a Windows Store test project to your solution. Now, handy as this is, the downside is that this test project is itself a Windows Store project type, meaning you won't be able to add a whole bunch of references to it. I wasn't able to add RhinoMock or Moq to it, nor was I able to add MSpec. The problem being that these frameworks were built against the wrong version of the .Net framework.

Now, I could have made my own builds of the frameworks I like working with, but I started looking for another way, like using a simple class library for unit testing a Windows Store app. The problem when you try this is that you can't add a reference to your Windows Store app to your class library.

To get this fixed, I tried out a couple of things. First I tried adding the same references to my class library project as were in the Unit Test Library for Windows Store apps. This didn't work, however; I couldn't even find the '.NET for Windows Store apps' reference dll. I also tried playing around with the output type of my project, changing it to something different than class library, but then adding the necessary unit testing references became a problem. I also played around with the conditional compilation symbols (hey, I was trying to find where the differences lay between my own class library and a Unit Test Library for Windows Store apps).

After trying all that, I unloaded my project from solution explorer and started editing it, comparing it to the Unit Test Library for Windows Store apps and trying out different combinations. After a couple of tries, it turns out the setting you need in your project file is the following:

<Import Project="$(MSBuildExtensionsPath)\Microsoft\WindowsXaml\v$(VisualStudioVersion)\Microsoft.Windows.UI.Xaml.CSharp.targets" />

You can put this line in place of the other CSharp.targets import at the bottom of your project file. Once you have added it, you can add a reference to your Windows Store app from your class library. You will see the '.NET for Windows Store apps' reference show up in your references, and you will be able to add all the additional references you want, like NUnit, Moq or RhinoMock, ...

One additional problem I had was with the System.Data dll, which was built against a different framework. Since I didn't need it in my test project, I happily removed it.


Monday, September 3, 2012

WCF on IIS and Windows 8

Just spent some time getting a simple WCF service up and running under IIS on my Windows 8 machine. Apparently, if you install IIS after installing the latest .NET framework (i.e. after installing Visual Studio 2012), not all the necessary handlers, adapters and protocols for WCF are installed in your IIS.

The first time I tried to reach the WCF service, it gave me an error that said 'The page you are requesting cannot be served because of the extension configuration.' This means you need to run ServiceModelReg.exe to reregister WCF (the same kind of registration you needed from time to time for ASP .NET on previous versions of IIS; remember aspnet_regiis). You can find this tool at C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation for version 3.0, or at C:\Windows\Microsoft.NET\Framework\v4.0.30319 for version 4.0.

Now, the MSDN site will tell you to run the 3.0 version. After doing so, however, you won't have the necessary handlers installed for a WCF service created with Visual Studio 2012, since they are all version 4.0. So that won't work.

Running the 4.0 version, however, won't work either. If you try ServiceModelReg.exe -i, it will tell you to register with the -c option (register a component). If you try the -i -c option, you will get the error (and the solution!) that this tool cannot be used on this version of Windows. What you can use, though, is the 'Turn Windows Features On/Off' dialog.

So simply open this up from your control panel (I pinned my control panel to my start page on day one of installing Windows 8, otherwise I wouldn't be able to find it again) and check the necessary WCF features you want to have available. In most cases, just flagging HTTP Activation will do. You can find the WCF features under .NET Framework 4.5 Advanced Services.


And that's what you need to get your WCF service running under IIS on Windows 8. Hope this was helpful.

Addendum: This fix recently also solved the issue of a 'HTTP Error 400. The request hostname is invalid' error of a colleague when trying to run a WCF service on Windows 8.

Saturday, July 14, 2012

Knockout an MVC Ajax call

I recently needed to build an Ajax data search, showing the result of the search in an MVC web page. Great opportunity, I thought, to give Knockout.js a try. Knockout.js lets you apply databindings to a web page and for this it uses an MVVM (Model View ViewModel) approach.

On the knockout site, you can find some great examples to get started, even give it a try in jsFiddle. But since my solution is a bit different from their tutorials, I wanted to share it with you.

The actual problem at hand was a page on which I needed to link a scanned-in document to a dossier. Most of the time the number of the dossier can be picked up from the scanned-in document, but this is not always the case. In cases where the number of the dossier can't be determined from the scan, we want our users to go look for the dossier and link it manually to the scanned-in document.


The page to do this more or less looks like this (I rebuilt the solution without most of the formatting we have in the initial application).



The initial setup for this contains a ScanController that gives you this Scan Detail page.

public class ScanController : Controller
{
        public ActionResult Detail(int id)
        {
            var scan = new Scan
                        {
                            Id = id,
                            File = "This is the file",
                            DossierId = 0,
                            DossierNumber = string.Empty
                        };
            return View(new ScanViewModel(scan, new Dossier()));
        }
}

This uses the Detail view, which consists of two divs for the accordion. The top div looks like this:

@using (Html.BeginForm("Link", "Scan"))
{
  <div>
    <fieldset>
      <dl>
        <h2>Scan info</h2>
        @Html.Label("Receipt date")
        @Html.TextBox("receiptdate", string.Empty, new { @class = "date" })
      </dl>
      <h2>Dossier info</h2>
      <div>
        @Html.Hidden("file", Model.Scan.File)

        <div id="linkDiv" @(Model.Scan.DossierId == 0 ? "class=hidden" : "")>
          <div>Dossier: <span id="dossierNumberText">@Model.Scan.DossierNumber</span></div>
          @Html.Hidden("dossierNumber", Model.Scan.DossierNumber)
          <input type="submit" value="Link Scan to this dossier"/>
        </div>
        <div id="message" @(Model.Scan.DossierId == 0 ? "" : "class=hidden")>
          There is no dossier to link to. You should search for one.
        </div>
      </div>
    </fieldset>
  </div>
}

The bottom div looks like this:


&lt;div id="search">
    @using(Html.BeginForm("Search", "Dossier", FormMethod.Post, new { @id = "dossierForm" }))
    {
        &lt;fieldset>
            &lt;dl>
                @Html.LabelFor(m => m.Dossier.DossierNumber) 
                @Html.TextBoxFor(m => m.Dossier.DossierNumber)
            &lt;/dl>
            &lt;dl>
                @Html.LabelFor(m => m.Dossier.OwnerLastName) 
                @Html.TextBoxFor(m => m.Dossier.OwnerLastName)
            &lt;/dl>
            &lt;dl>
                @Html.LabelFor(m => m.Dossier.OwnerFirstName) 
                @Html.TextBoxFor(m => m.Dossier.OwnerFirstName)
            &lt;/dl>
            &lt;dl>
                &lt;input type="submit" value="Search"/>
            &lt;/dl>
        &lt;/fieldset>
    }
&lt;/div>
&lt;div id="searchResult" class="hidden">
    &lt;table id="resultTable">
        &lt;thead>
            &lt;th>DossierNumber&lt;/th>
            &lt;th>OwnerLastName&lt;/th>
            &lt;th>OwnerFirstName&lt;/th>
            &lt;th>Detail&lt;/th>
            &lt;th>Link&lt;/th>
        &lt;/thead>
        &lt;tbody>
                    
        &lt;/tbody>
    &lt;/table>
&lt;/div>

It's this bottom div that we are most interested in. The top form (Search Dossier) will be used to perform an Ajax search. The result of this search will have to be shown in the table of the bottom searchResult div.

For this search I already added a Dossier controller with a Search action:

public class DossierController : Controller
{
  public ActionResult Search(DossierSearchViewModel searchVm)
  {
    return Json(new
                  {
                    Success = true,
                    Message = "All's ok",
                    Data = new List<Dossier>
                            {
                              new Dossier
                                  {
                                    DossierNumber = "123abc",
                                    OwnerFirstName = "John",
                                    OwnerLastName = "Doe"
                                   },
                              new Dossier
                                  {
                                    DossierNumber = "456def",
                                    OwnerFirstName = "Jeff",
                                    OwnerLastName = "Smith"
                                  },
                              new Dossier
                                  {
                                    DossierNumber = "789ghi",
                                    OwnerFirstName = "Peter",
                                    OwnerLastName = "Jackson"
                                  },
                              new Dossier
                                  {
                                    DossierNumber = "321jkl",
                                    OwnerFirstName = "Carl",
                                    OwnerLastName = "Turner"
                                  },
                             }
                   });
  }
}

If you click the Search button, you get to see the Json result in the Dossier/Search page. This is not what we want; this form should perform an asynchronous Ajax post, and that's not yet the case. For this I used the jQuery Form plugin, which has a handy ajaxForm method you can apply to your form. (You could also use the MVC Ajax extensions for this; they should already be in your initial MVC setup.)

&lt;script type="text/javascript">
    $(function () {
        $("#dossierForm").ajaxForm({
            success: render_dossier_grid
        });
    });

    function render_dossier_grid(ctx) {
    }
&lt;/script>

All of the knockout magic can now be added in the render_dossier_grid function. Before we do this, make sure to add the knockout.js files to your solution; this can easily be done using NuGet. Reference them in your _Layout file, so you can use them.

First, let's create a viewmodel. This will be nothing more than a list of dossiers we get back from our dossier search. Since we want to be able to add and remove items from this list and at the same time have our table automatically show the new items, we will use a knockout observable array. To have the databindings applied to your view, you should call applyBindings for your viewmodel.

$(function () {
    $("#dossierForm").ajaxForm({
        success: render_dossier_grid
    });

    ko.applyBindings(viewModel);
});

function render_dossier_grid(ctx) {
}

var viewModel = {
    dossiers: ko.observableArray([])
};

This viewModel can now be filled when we get the result of our Ajax call.

function render_dossier_grid(ctx) {
    $("#searchResult").removeClass("hidden");
    // reset the observable array and fill it with the new search results
    viewModel.dossiers.removeAll();
    viewModel.dossiers(ctx.Data);
    // resize the accordion to get rid of any scroll bars
    myAccordion.accordion("resize");
}

That's pretty easy. I just reset the dossiers observable array of the viewmodel and fill it with the dossiers that come back from the Ajax call. The last bit has the jQuery UI accordion perform a resize, just to get rid of any scroll bars.

Next step is actually binding this viewmodel to something. So, we need to specify in our view what data needs to go where. For this we will extend the table we have on our page.

&lt;table id="resultTable">
    &lt;thead>
        &lt;th>DossierNumber&lt;/th>
        &lt;th>OwnerLastName&lt;/th>
        &lt;th>OwnerFirstName&lt;/th>
        &lt;th>Detail&lt;/th>
        &lt;th>Link&lt;/th>
    &lt;/thead>
    &lt;tbody data-bind="foreach:dossiers">
        &lt;tr>
            &lt;td data-bind="text:DossierNumber">&lt;/td>
            &lt;td data-bind="text:OwnerLastName">&lt;/td>
            &lt;td data-bind="text:OwnerFirstName">&lt;/td>
            &lt;td>
                &lt;a data-bind="attr: {href: DetailLink}">Detail&lt;/a>
            &lt;/td>
            &lt;td>
                &lt;a href="#" data-bind="click: $parent.useDossier">Use this file for linking&lt;/a>
            &lt;/td>
        &lt;/tr>
    &lt;/tbody>
&lt;/table>

Adding the data bindings is not that hard. I added a foreach binding, which creates a new table row for each dossier in the dossiers observable array. Each table row has its own binding per td element. The first three are quite obvious: you can bind directly to the property names of your domain (or MVC viewmodel) object.

For the second-to-last binding I bound the href attribute of an a tag. DetailLink is a property on the Dossier domain object that returns a link to Dossier/Detail/id.
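If your server-side object did not expose such a property, you could also build the link on the client. A minimal sketch — the helper name and the default MVC route pattern /Dossier/Detail/{id} are my own assumptions, not part of the original project:

```javascript
// Hypothetical helper: builds the detail URL for a dossier id,
// assuming the default MVC route pattern /Dossier/Detail/{id}.
function detailLink(id) {
    return "/Dossier/Detail/" + encodeURIComponent(id);
}
```

You could then bind it with something like data-bind="attr: { href: detailLink(DossierNumber) }" instead of relying on a server-computed property.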

The last one is a special binding on the click event of an a tag. It is bound to the useDossier function of the binding's parent, which in this case is the viewmodel itself. We still need to add this useDossier function:

var viewModel = {
    dossiers: ko.observableArray([]),

    useDossier: function (dossierVM) {
        var dossierNumber = dossierVM.DossierNumber;
        myAccordion.accordion({ active: 0 });
        $("#dossierNumber").val(dossierNumber);
        $("#dossierNumberText").text(dossierNumber);

        $("#linkDiv").removeClass('hidden');
        $("#message").addClass('hidden');
    }
};

In this function the argument is the viewmodel of the clicked row, in this case the actual dossier. I take the DossierNumber of that dossier and use it to set the value of the dossierNumber and dossierNumberText fields in the top panel of the accordion. I also do some extra bits to update the UI accordingly.

That's it, not much to it. I really like the Knockout framework. I had some trouble getting started at first, but once you know your way around the viewmodel, it's pretty easy to use.

The full code can be found on github.

Tuesday, May 22, 2012

Async VII: WinRT class library

In the previous post I talked about async from JavaScript. For that I had to create a WinRT class library for my asynchronous RestCaller class, so its functionality could be used from an HTML5/JavaScript WinRT application. Once you do this, there are some things you need to know, and there are restrictions on the types you can use for your publicly visible methods.

First of all you need to create a class library. Pretty simple, but once it's created, you need to change its output type to WinMD file, which stands for Windows Metadata file.


Also, in the first version, my class library was called BlitzHiker.Lib. This name gave me runtime errors when called from JavaScript. At compile time everything was fine, but at runtime it would simply crash without a clear reason. Removing the dot from its name fixed this.

Once I had the class library ready, I copied my RestCaller class into it and started compiling. This produced quite a few errors. For starters, you are not allowed to use a return type of Task or Task&lt;T> in a WinMD file. This forced me to make all of my existing methods private, which made everything compile just fine. But private methods alone get you nowhere; you need some public ones as well.

private async Task PublishHikeRequestTaskAsync(CancellationToken token)
{
    var hikeRequest = GetHikeRequest();

    try
    {
        var client = new HttpClient();

        var url = string.Format("{0}/HikeRequest", BASE_URL);
        var response = await client.PostAsync(url, await BuildJsonContent(hikeRequest), token);
        response.EnsureSuccessStatusCode();
    }
    catch (HttpRequestException exc)
    {
        var dialog = new MessageDialog(exc.Message);
        // fire and forget: awaiting inside a catch block isn't allowed in C# 5
        dialog.ShowAsync();
    }
}

For each of these private methods I needed to define a public wrapper. The return types you are allowed to use on them are IAsyncAction (as a replacement for Task) and IAsyncOperation&lt;T> (as a replacement for Task&lt;T>).

The original method gets called by using AsyncInfo.Run. (This changed in the recent CTP; the type originally used was AsyncInfoFactory, but that one's gone. If you watch the Build sessions on async, you'll see them use AsyncInfoFactory, but that just won't work anymore.)

public IAsyncAction PublishHikeRequestAsync()
{
    return (IAsyncAction)AsyncInfo.Run((ct) => PublishHikeRequestTaskAsync(ct));
}

You can also see me sending along a CancellationToken. Since the public PublishHikeRequestAsync method doesn't accept one, you will have to cancel in a different way than the one I showed in one of the previous posts.

protected async override void OnNavigatedTo(NavigationEventArgs e)
{
    cts = new CancellationTokenSource();

    try
    {
        await _restCaller.PublishHikeRequestAsync().AsTask(cts.Token);

        var driverIds = await _restCaller.GetDriversAsync().AsTask(cts.Token);

        if (driverIds != null)
            await ShowDriversOnMap(driverIds, cts.Token,
                new Progress&lt;int>(p => statusText.Text = string.Format("{0} of {1}", p, driverIds.Count)));
    }
    catch (OperationCanceledException exc)
    {
        statusText.Text = exc.Message;
    }
}

Since you get an IAsyncAction and not a Task, you can call AsTask on it and pass along your CancellationToken. So before you start using WinMD class libraries, keep in mind that you are working with different types and that their usage is different (more verbose) as well.

As for IAsyncOperation&lt;T>, it works pretty much the same as IAsyncAction.

public IAsyncOperation&lt;UserModel> GetUserAsync(Guid userId)
{
    return (IAsyncOperation&lt;UserModel>)AsyncInfo.Run((ct) => GetUserTaskAsync(userId, ct));
}

And that's it. I hope you found these posts on async useful. Just keep in mind that all info given here is still based on a preview version of this library and things can still change a bit in the eventual release.

There are some drawbacks to using async. As with a lot of the .NET Framework, it makes things really simple for you. The danger is that it's easy to forget what you are actually doing: defining callbacks. Just keep that in mind and you'll be fine.
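To make that last point concrete, here is a plain-JavaScript sketch (not WinRT code; the function names are my own) of what chaining two async calls looks like when you wire up the callbacks yourself. This nesting is roughly what await hides from you:

```javascript
// Continuation-passing style: each step receives the next step as a callback.
// An awaited chain like "await publish; var ids = await getDrivers;"
// is essentially sugar over this shape.
function publishThenFetch(publishAsync, getDriversAsync, done) {
    publishAsync(function () {
        getDriversAsync(function (driverIds) {
            done(driverIds);
        });
    });
}
```

Two levels of nesting is still readable; with cancellation, error handling, and a few more steps, the callback version grows quickly, which is exactly the bookkeeping async takes off your hands.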