Thursday, December 22, 2011

Project Roslyn

I just had a first peek at Project Roslyn, Microsoft's Compiler As A Service project, which will be included in the second-to-next installment of the .NET Framework. The next installment is actually going to be all about the new async capabilities, on which I will blog in the next couple of weeks, since they are quite exciting to work with as well. But Project Roslyn had me startled. I knew Microsoft was working on a CAAS project, but hadn't really seen many demos or sneak previews pop up. Now I have, and I must say it looks very promising.

What will Project Roslyn bring you? First of all, the ability to compile code on the fly, much like taking a script file and running it through an interpreter. You can now do this in your statically compiled C# programs as well. It actually looks a lot like what you can already do with IronRuby or IronPython.

Another thing is that Project Roslyn, besides compiling code on the fly, also gives you the ability to rewrite code. Now, wouldn't that be nice. You can dive right into the syntax tree and do your own thing with it. It gives you the ability to write your own code refactorings (the ones that ReSharper left out, that is) and make them available in, let's say, a context menu in Visual Studio. I hope it will also be possible to hook into the actual compile process and add extra stuff to your code there as well. It would give you an alternative to frameworks like PostSharp. I still have to play around with this to see what will be possible.
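
To give an idea of what working with a syntax tree looks like, here is a minimal sketch that parses a piece of C# and lists the methods it declares. Note that the namespaces and calls below (Microsoft.CodeAnalysis.CSharp) are the ones Roslyn eventually shipped with, not necessarily those of the CTP, so treat this as an illustration of the idea rather than of the CTP's exact API.

using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class SyntaxTreeDemo
{
    static void Main()
    {
        // Parse a piece of C# source into a syntax tree.
        var tree = CSharpSyntaxTree.ParseText(@"
            class Customer
            {
                public string GetName() { return ""John""; }
                public void Save() { }
            }");

        // Walk the tree and list every method declaration it contains.
        var methods = tree.GetRoot()
                          .DescendantNodes()
                          .OfType<MethodDeclarationSyntax>();

        foreach (var method in methods)
            Console.WriteLine(method.Identifier.Text);
    }
}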

They also included a C# interactive window with a Read Eval Print Loop, a lot like the F# interactive window. You even get autocompletion and syntax highlighting in it, and it flags your mistakes with squiggles under your code.

Project Roslyn is still a work in progress, and not everything is possible ... yet. I am really looking forward to the metaprogramming capabilities they will be providing. Next up is actually playing with the CTP, and hopefully I will be able to post some concrete examples. There will also be some new posts about the async possibilities in C# 5.

Saturday, October 15, 2011

MVVM Step By Step, Part III


In the previous posts, which you can find here and here, we created a first, simple MVVM app. We already set up some basic databinding between our view, MainView, and viewmodel, MainViewModel, to show something in the ContentPresenter of the view. What we still need to do, however, is react to the clicks of the buttons present in the MainView. In a normal, non-MVVM application, you would use event handlers for the button clicks. This is, however, something we will not do in an MVVM application, mostly because we want our viewmodels to be testable. Instead of an event handler with a sender object and event args, in MVVM we create properties of type ICommand.

ICommand properties are properties that return a type implementing the ICommand interface. They can be bound to commands in a view, like the Command property of a Button, and they consist of an Execute and a CanExecute method. The combination of these two gives a nice way of indicating which code needs to be executed and whether or not it can be executed. The downside of ICommand properties is that they can only be bound to commands in a view and not to events. This is why a lot of MVVM frameworks add an EventToCommand kind of type, which makes it possible to bind ICommand properties to events. More on that in a later post.
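
For reference, this is what the ICommand interface (in System.Windows.Input) looks like; the two methods and the event are all there is to it.

public interface ICommand
{
    // Tells the binding (for instance a Button) whether the command may run right now.
    bool CanExecute(object parameter);

    // The code that runs when the bound control is activated.
    void Execute(object parameter);

    // Raised when the result of CanExecute may have changed.
    event EventHandler CanExecuteChanged;
}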

For now, let's add a first ICommand property to the MainViewModel and bind it to the Command property of a button. Let's start with the ICommand for a new customer, the NewCustomerCommand.

public class MainViewModel
{
    private ICommand _newCustomerCommand;

    public ICommand NewCustomerCommand
    {
        get
        {
            if (_newCustomerCommand == null)
                _newCustomerCommand = new NewCustomerCommand();
            return _newCustomerCommand;
        }
    }
}

The NewCustomerCommand is a class we still need to write, so let's add it to our solution. I prefer adding an extra Commands folder for these kinds of classes (not that we will be creating many of them, you will soon see why). Since the NewCustomerCommand needs to implement the ICommand interface, it will initially look like this.

public class NewCustomerCommand : ICommand
{
    public bool CanExecute(object parameter)
    {
        throw new NotImplementedException();
    }

    public void Execute(object parameter)
    {
        throw new NotImplementedException();
    }

    public event EventHandler CanExecuteChanged;
}

Since there are no restrictions on executing this command, we will have the CanExecute method always return true. The Execute method needs to open the NewCustomerView within the ContentPresenter of the MainView. We will again use the same technique of initializing a view and viewmodel and then binding the two together using the DataContext property of the view (didn't we do this three times already? As good (lazy) programmers, shouldn't we think about making this more generic? Yes, we will, but not yet).

public class NewCustomerCommand : ICommand
{
    public bool CanExecute(object parameter)
    {
        return true;
    }

    public void Execute(object parameter)
    {
        var view = new NewCustomerView();
        var viewmodel = new NewCustomerViewModel();
        view.DataContext = viewmodel;
    }

    public event EventHandler CanExecuteChanged;
}

Now that we have this view, we need to assign it to the ViewToShow property of our MainViewModel. This is kind of hard, since the NewCustomerCommand and the MainViewModel don't know each other. We need to add a constructor to the NewCustomerCommand that takes the MainViewModel as a parameter and stores it in a private field. Now we can assign the ViewToShow property.


public class NewCustomerCommand : ICommand
{
    private MainViewModel _mainViewModel;

    public NewCustomerCommand(MainViewModel mainViewModel)
    {
        _mainViewModel = mainViewModel;
    }

    public bool CanExecute(object parameter)
    {
        return true;
    }

    public void Execute(object parameter)
    {
        var view = new NewCustomerView();
        var viewmodel = new NewCustomerViewModel();
        view.DataContext = viewmodel;
        _mainViewModel.ViewToShow = view;
    }

    public event EventHandler CanExecuteChanged;
}

The constructor call in the MainViewModel now needs to pass the viewmodel itself (this) to the NewCustomerCommand.


public ICommand NewCustomerCommand
{
    get
    {
        if (_newCustomerCommand == null)
            _newCustomerCommand = new NewCustomerCommand(this);
        return _newCustomerCommand;
    }
}

The only thing left now is to add a binding in the MainView, so the Command property of the NewCustomer button is bound to the NewCustomerCommand property.


<Button Content="New Customer" Margin="10" Command="{Binding NewCustomerCommand}" />

Again, add a TextBlock with some dummy text to your NewCustomerView to test this out. If you run the application, however, you will find that the NewCustomer button does not seem to work. Clicking it does not result in the other view being shown. If you place breakpoints in your code, though, you will see that the Execute method of your NewCustomerCommand does get called.

What is actually wrong is that the ViewToShow property gets a new value, but it never announces this change. For that you need the INotifyPropertyChanged interface. Implement it in your MainViewModel, change the ViewToShow property from an auto-property to a property with a backing field, and raise the PropertyChanged event in its setter.

public class MainViewModel : INotifyPropertyChanged
{
    private ICommand _newCustomerCommand;
    private FrameworkElement _viewToShow;

    public MainViewModel()
    {
        var view = new AllCustomersView();
        var viewmodel = new AllCustomersViewModel();
        view.DataContext = viewmodel;
        ViewToShow = view;
    }

    public FrameworkElement ViewToShow
    {
        get { return _viewToShow; }
        set
        {
            _viewToShow = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("ViewToShow"));
        }
    }

    public ICommand NewCustomerCommand
    {
        get
        {
            if (_newCustomerCommand == null)
                _newCustomerCommand = new NewCustomerCommand(this);
            return _newCustomerCommand;
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}


If you now run the application, you can use the NewCustomer button to show the other view.

In the next post I will show you how to add some more framework-like capabilities. As in, how can you provide certain parts so you don't need to rewrite masses of code every time you need to add something new (like a new command).

Monday, October 10, 2011

MVVM Step By Step, part II

In the previous post, we created the initial setup of an MVVM application. We already created a first view, MainView, and a first viewmodel, MainViewModel. Both were hooked up using the DataContext property of the view. What we didn't do yet, was show something in the ContentPresenter on the MainView.

Initially, we want to show all of the customers. In our MVVM app, this will be another view, AllCustomersView, and another viewmodel, AllCustomersViewModel. It will be the responsibility of the MainViewModel to initialize these two. The basics of hooking the view and viewmodel together are similar to what we did for the MainView and MainViewModel. Let's initialize all this in the constructor of the MainViewModel.

public MainViewModel()
{
    var view = new AllCustomersView();
    var viewmodel = new AllCustomersViewModel();
    view.DataContext = viewmodel;
}

For the AllCustomersView, just create a new UserControl in your Views folder. For the AllCustomersViewModel, create a new class in your ViewModels folder.

Now that we have this view and viewmodel, we need to be able to show them in the ContentPresenter of the MainView. For this, you will need a property in your MainViewModel to which the ContentPresenter of the MainView can bind. We will add this property to the MainViewModel class.

public FrameworkElement ViewToShow { get; set; }

Make sure you assign the view you created in the constructor to this property.

public MainViewModel()
{
    var view = new AllCustomersView();
    var viewmodel = new AllCustomersViewModel();
    view.DataContext = viewmodel;
    ViewToShow = view;
}

The last thing you need to add is the binding in your view.

<ContentPresenter Grid.Column="1" Content="{Binding ViewToShow}" />

Just for testing purposes, you can add a TextBlock with some dummy text in your AllCustomersView and run the application. You should see the content you placed in your AllCustomersView in your application.

That's it, as simple as that. A viewmodel provides properties for your view to bind to. In the next post we will see how we can react to the clicks of the buttons in the MainView.

Sunday, September 25, 2011

MVVM Step By Step, part I

Since we are doing quite a lot of Silverlight, WPF and Windows Phone 7 development at QFrame, we also train our new people in patterns like MVVM. For future reference, I thought it would be nice to write down the basics of our MVVM session. It is actually a step-by-step approach to implementing MVVM in a Silverlight application (which is usable in WPF and Windows Phone development as well). I also don't use any predefined framework in the initial explanation of the MVVM pattern. This lets you see what can be done without using frameworks, and for which parts some library functionality might be nice.

As a start, I will create a simple Silverlight application hosted in an ASP.NET web application. In the ASP.NET web application I will also place a small WCF Data Service, which will provide the model data I will use in the Silverlight MVVM app. The database (a simple mdf file) contains two tables, Customers and Companies. I created an Entity Data Model for these in the ASP.NET web application and hosted them in a WCF Data Service. The data service looks like this:


public class CustomerService : DataService<CustomersEntities>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Customers", EntitySetRights.All);
        config.SetEntitySetAccessRule("Companies", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

This is the initial project setup:


The service reference to the CustomerService is already present in the Silverlight project.

The first thing I do in a new MVVM application is create separate folders for my models, views and viewmodels. In this demo, my model will consist of simple data classes, but my service is actually also part of my model. My models, viewmodels and views will all be placed in the same project, but this is not a necessity.



For the application I am building, I want a kind of menu on the left of the screen, with links to, for instance, an 'all customers' page, a 'new customer' page, an 'all companies' page and a 'new company' page. On the right of the page the corresponding page will be shown. To achieve this, I will create a new view, called MainView.xaml in which I place a couple of buttons and a ContentPresenter.

<UserControl x:Class="Demo1.Views.MainView"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    d:DesignHeight="300" d:DesignWidth="400">
    
    <Grid x:Name="LayoutRoot" Background="White">
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="30*" />
            <ColumnDefinition Width="70*" />
        </Grid.ColumnDefinitions>
        <StackPanel Grid.Column="0" Background="AliceBlue">
            <Button Content="All Customers" Margin="10" />
            <Button Content="New Customer" Margin="10" />
            <Button Content="All Companies" Margin="10" />
            <Button Content="New Company" Margin="10" />
        </StackPanel>
        <ContentPresenter Grid.Column="1" />
    </Grid>
</UserControl>


Now, since we are using MVVM, I also need to create a viewmodel to go with this view. So, in the ViewModels directory I will create a new class and call it MainViewModel.

namespace Demo1.ViewModels
{
    public class MainViewModel
    {
        
    }
}

We now have our view and our viewmodel. If we ran this application now, it wouldn't show much on our screen yet. In fact, you would see a blank page. This is because the application is still wired up to show the default xaml page. Open up your app.xaml.cs file and you'll see the following piece of code.

private void Application_Startup(object sender, StartupEventArgs e)
{
    this.RootVisual = new MainPage();
}


This will have to be changed so your MainView is set as the RootVisual of the application. The view (MainView) and viewmodel (MainViewModel) also have to be linked to each other. Now, for this, there are two options. Either your view knows who its viewmodel is (view-first) or it doesn't (viewmodel-first). I will use the viewmodel-first approach.

To link the two up, change the default code in app.xaml.cs as follows:


private void Application_Startup(object sender, StartupEventArgs e)
{
    var view = new MainView();
    var viewmodel = new MainViewModel();
    view.DataContext = viewmodel;
    RootVisual = view;
}


If you were to do this linking view-first, you would set the DataContext property in the xaml of your view, or equivalently in the view's code-behind. In this demo I've chosen to use the viewmodel-first approach.
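
Just to illustrate the alternative: in a view-first setup the view creates its own viewmodel. A minimal code-behind sketch (using the class names of this demo) could look like this; again, this is not the approach used in the rest of these posts.

using System.Windows.Controls;
using Demo1.ViewModels;

namespace Demo1.Views
{
    public partial class MainView : UserControl
    {
        public MainView()
        {
            InitializeComponent();

            // View-first: the view decides which viewmodel it gets.
            DataContext = new MainViewModel();
        }
    }
}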

After doing this and running the application, you will get to see your view. Pressing the buttons, however, will not have much effect yet. We will make them send messages to your viewmodel in the next part.

Thursday, September 22, 2011

Highcard Game: First App in the Marketplace

At QFrame, we are very proud to announce that as of today, we've got our first app up and running in the Windows Phone Marketplace.


It is a very simple game, based on drawing cards, which can be played by 2 to 4 players. After a draw, the highest card wins. It is excellent to use if you need to decide who does the dishes, who goes out for bread in the morning, or who pays the next round of drinks, ...

This little experiment taught us a lot about the basics of Windows Phone development and about the approval process. Our first submission was rejected from the marketplace, since we had only taken a dark background into account. This was an easily fixed issue, though.

So, if you're interested, go and download it to your phone, and let us know what you think about it.

Mobile Cross Platform MV Solution

After the initial hours lost on installing everything, I have been playing some more with MonoDroid lately, trying to port a basic Windows Phone 7 application to an Android device. Part of this was quite easy (thanks to MonoDroid, that is) and part of it took some refactoring of the existing code. All in all, the experience was very insightful.

The existing WP7 application uses the MVVM pattern, which makes it very easy to get a separation of concerns between your model, view and viewmodel (I sometimes like to rant about this separation of concerns not being as clear as it should be in some applications). It also works very well in combination with the databinding that is provided in Silverlight. If you want to know more about the MVVM pattern, check out the article by Josh Smith in MSDN Magazine.

While MVVM works very well with Silverlight, WPF and Windows Phone apps, I did have some issues using this pattern in combination with MonoDroid. Let me walk you through this.

I started off by thinking about which parts of the existing code I would be able to reuse for the Android application. Reusing the views was out of the question, since Android views are written differently than Windows Phone views. But my models should be easily reusable, as should my viewmodels.

The existing application already had the model and viewmodel classes in a separate Windows Phone class library. The views were contained in a Windows Phone 7 application (MemoChallenge.Wp7). So reusing the models and viewmodels would just be a matter of using this same class library for my Android application. However, you can't just reference a Windows Phone 7 class library in an Android project. For this to work you need a second class library, specific to the Android/MonoDroid project, and some clever project linking.

For the Android application I actually created two more projects, one for the Android views (MemoChallenge.droid), and one class library (MemoChallenge.droidlogic), which would be a linked project to the Windows Phone 7 class library (MemoChallenge.logic). You can use the Project Linker tool for this. It will link the two class library projects and make sure that any change you make in one project gets reflected in the other.


After doing this, I immediately got my first compile errors, and they were abundant. Why did I get all these errors?

The thing is, viewmodel classes use commands all over the place. Or actually, they use the ICommand interface. This interface turned out to be quite a big problem. It is contained in a Windows-specific dll, so the Android SDK does not know what to do with it.

To solve this problem, I could have rewritten the ICommand interface for Android. This would also mean I would have to add conditional compilation directives all over the viewmodel code (as sketched below). I was not willing to do this, since it would be a lot of work, and I am not that fond of conditional compilation.
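
Just to show what I wanted to avoid, here is a sketch of how the shared viewmodel code would have looked with conditional compilation. The MONODROID symbol and the MemoChallenge.DroidLogic namespace with a hand-rolled ICommand are hypothetical; they only illustrate the idea.

#if MONODROID
// Hypothetical hand-rolled ICommand, since the real one lives in a Windows-specific dll.
using MemoChallenge.DroidLogic;
#else
using System.Windows.Input;
#endif

public class GameViewModel
{
    private ICommand _startCommand;

    public ICommand StartCommand
    {
        get { return _startCommand; }
    }
}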

Instead, I used a combination of the MVVM and MVP patterns. I did not want to throw away the existing viewmodels, since they were working just fine. I did move them, though, to the views project (MemoChallenge.Wp7), so they were no longer present in the shared class library. In their place in the shared library (used by both the Android and the WP7 application, remember these projects are linked), I added presenter classes. My viewmodels became very slim; I practically removed all of their logic and moved it to the corresponding presenters. Now the only thing my viewmodels have to do is forward the call to a presenter and we're done.

   public class GameViewModel : ViewModelBase, IGameViewModel 
   { 
     private ICommand _startCommand; 
     private ICommand _sameCommand; 
     private ICommand _notSameCommand; 
     private GamePresenter _presenter; 
     private IGameData _gameData; 

     public GameViewModel() 
     { 
       _presenter = new GamePresenter(this); 
     } 

     public ICommand StartCommand 
     { 
       get 
       { 
         if (_startCommand == null) 
           _startCommand = new RelayCommand(() => _presenter.Start(), 
             () => _presenter.CanStart()); 
         return _startCommand; 
       } 
     } 

     //rest of the code omitted 
 }  
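
The presenter then holds the actual game logic in the shared class library. The following is a hypothetical sketch of what such a GamePresenter could look like; the member names are assumptions based on the viewmodel above, not the original code.

public class GamePresenter
{
    private readonly IGameViewModel _viewModel;

    public GamePresenter(IGameViewModel viewModel)
    {
        _viewModel = viewModel;
    }

    public bool CanStart()
    {
        // Hypothetical rule: a new game can always be started.
        return true;
    }

    public void Start()
    {
        // The actual game logic lives here, in the shared class library,
        // so both the WP7 and the Android projects reuse it.
        // The viewmodel is only told about the state it needs to display.
    }
}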

I hope this gives you an easy way to get MVVM to work on Android/MonoDroid.

Tuesday, September 20, 2011

dotnetAcademy

At QFrame we are very committed to getting our new, young people quickly up to speed with .NET development. To achieve this, we have started the dotnetAcademy project, in which we give our youngsters all the necessary info to become good developers. This also allows us to get them up to speed with new technologies once they are on the job.

For the dotnetAcademy we work together with bitconsult, who have already been giving this kind of training for some time now. The training itself consists of workshops on topics they will need in real application development (WCF, patterns, ALM, ... to name a few), training on presentation and communication skills, and an internal project they will be developing during the next two months. This project will be closely monitored and coached by myself, and I will also give some of the workshop trainings.

Starting this week, our junior programmers have also started blogging. You can read about their experiences at the dotnetAcademy site. Check it out, I'd say.


Saturday, September 17, 2011

Some MVVM Badness

I must say, I really like the MVVM pattern for doing both Silverlight and WPF development. I have been using it for quite some time now, with and without a standard framework. And although I really like the pattern, I have also seen some bad code pop up in applications, code that is more like anti-MVVM. The reason for this is that the responsibilities of each part of the MVVM pattern get mixed up. People seem to think that once their class has the 'ViewModel' suffix, it actually behaves like a viewmodel, which is not always the case.

For starters, I have seen people use MVVM without the first M. That is, people tend to do MVVM without a model and just put everything in their viewmodel, even their model data. It pops up as private data members in the viewmodel, where they should actually use a model. The temptation to do this can be quite big, since often you have to put property changed notifications in your viewmodel; it just seems that much easier to skip the model. The temptation seems to be biggest in applications where there is no real back-end, like a service that needs to be called and that hands you something model-ish.

Another sin in MVVM applications is the overuse of the commands in a viewmodel. They are there as a hook for your view to tell the viewmodel to 'do something' or that 'something needs to be processed'. This doesn't mean that all of the logic for the something that needs to be done has to be put in the viewmodel. It is ok for the viewmodel to ask some other component to handle the details. For this, keep the single responsibility principle in mind. The moment your viewmodel starts to do more than what it's supposed to do, you probably need to refactor out some of the logic (maybe hand it off to the model or to some other component).

Your viewmodel should only have the logic for bridging between your view and your model. It is there to present the data of the model in such a way that it makes sense to the view, and it contains commands so that the view can interact with the model. That is all it should do. This also implies that the model can be (even has to be) more than just a POCO or DTO; it can contain logic if it needs to. This is often a misconception in MVVM, where the model is seen as just a class that contains data. Often, the model is a back-end, which contains business logic and which hands you POCO or DTO classes. Often it is more than just one model class; it can be a whole bunch of classes, all interacting to provide your viewmodel with services.
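
To make the intended division of responsibilities concrete, here is a minimal sketch of a thin viewmodel that only bridges between view and model; all type names (ICustomerService, CustomerDto, RelayCommand) are hypothetical stand-ins.

using System.Collections.ObjectModel;
using System.Windows.Input;

public class CustomerOverviewViewModel
{
    // The model: a back-end service that contains the business logic.
    private readonly ICustomerService _customers;
    private ICommand _refreshCommand;

    public CustomerOverviewViewModel(ICustomerService customers)
    {
        _customers = customers;
        Customers = new ObservableCollection<CustomerDto>();
    }

    // Presents the model's data in a view-friendly shape.
    public ObservableCollection<CustomerDto> Customers { get; private set; }

    public ICommand RefreshCommand
    {
        get
        {
            if (_refreshCommand == null)
                _refreshCommand = new RelayCommand(Refresh);
            return _refreshCommand;
        }
    }

    private void Refresh()
    {
        // Bridge only: ask the model, present the result to the view.
        Customers.Clear();
        foreach (var customer in _customers.GetAll())
            Customers.Add(customer);
    }
}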

A last offense can be found in the view. It is still ok to put logic in the xaml.cs file, but make sure you limit this to view-specific logic. That is, for instance, the logic that is needed for a certain animation to display properly. And that's all. All of the other logic should be in your viewmodel or should be delegated from your viewmodel to other components. So don't start putting state-specific logic in the code-behind of the view. This means that, if you have animation-specific code in your code-behind, the actual 'IsAnimating' check should be in your viewmodel.

That's the end of my MVVM pattern rant. I really, really like this pattern, but keep in mind that using a certain pattern does not automatically mean your code keeps to it. Keep the different responsibilities of each part of the pattern in mind, so you don't accidentally mix them up.

Friday, September 16, 2011

Resharper Shadow Copy

When running your unit tests with ReSharper, your assemblies apparently all get copied to a temporary directory, where your project gets built and in which the actual tests are run. This allows you to start a second build of a project while your tests are still running. Handy as this might be, it does cause problems sometimes. If, for instance, some of your tests rely on additional files being present in your output directory (like resource files), they will not be found, since they are not copied to the temp directory in which the ReSharper tests run.

The only option is to turn off the shadow-copy capability in the ReSharper Options window. Your tests will then be run in your actual output directory.



I ran into this issue with classes that use Assembly.GetExecutingAssembly to get the full file path to a resource. While my test cases ran fine using the MSpec runner (which uses a rake script that copies all necessary files), they failed using the NUnit runner utilized by ReSharper. Turning off the shadow-copy option resolved the problem.
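
For reference, this is roughly the kind of code that breaks under shadow copy (a minimal sketch; the helper name is made up): it resolves a file relative to the assembly location, which under shadow copy points to the temporary directory instead of your real output directory.

using System.IO;
using System.Reflection;

public static class ResourceLocator
{
    public static string GetResourcePath(string fileName)
    {
        // Under shadow copy, Location points to the copied assembly in the temp
        // directory, so files that only exist in the build output are not found.
        var assemblyDirectory = Path.GetDirectoryName(
            Assembly.GetExecutingAssembly().Location);
        return Path.Combine(assemblyDirectory, fileName);
    }
}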

Friday, August 5, 2011

RTFM: First MonoDroid Steps

This will be a short post on my first experiences using MonoDroid. It will primarily be a small list of the mistakes I made while getting a simple app to run on a configured device. This actually didn't work as easily as I thought it would.

I started by following the download and installation instructions for Visual Studio 2010 on the Xamarin site, clicking through all the SDKs and packages you need to download and installing each of them.

The first mistake I made: their site indicates you need to install the Android SDK on a path with no spaces in it. This recommendation is one you should actually follow. If you install the Android SDK in the regular Program Files location, the Visual Studio plugin won't be able to find the SDK. That's the first mistake that made me reinstall the Android SDK.

After reinstalling the Android SDK, just check whether the new path is filled out correctly under Tools - Options - Mono for Android in Visual Studio. If it is not, fill it out and restart Visual Studio (and recheck that it is filled out correctly).


Another handy option here is the Adb logging option, which puts a log file on your desktop, so you can see a bit of what's going on if things go terribly wrong.

That done, I tried to run the default project that's created when you start a new MonoDroid project. This failed miserably. I kept getting the error message that my activity could not be found: "Activity class {\} could not be found". I struggled quite some time with this error, figuring the mandroid process must have created an incorrect package name or maybe an incorrect folder structure under the obj folder of my project. Whatever I tried, nothing worked:

  • renaming folders,
  • renaming the namespace, 
  • renaming the project, 
  • creating my own AndroidManifest.xml file
  • ...

After a lot of hair pulling and rereading the installation instructions and the first tutorial for the 100th time, my eye fell on the Java SDK version they recommend you download. You should install version 6, but if you click the download link, you get the download site for version 7. After uninstalling version 7, googling for version 6 and installing that one, I could finally get the tutorial app deployed on an Android device.

Another mistake I thought I made along the way is one born out of laziness. While downloading the Android SDK I got tired of the size of the download and skipped everything except the latest version (3.3 at the time of writing). Apparently MonoDroid doesn't support this latest version (yet). So installing revision 8 and configuring an Android device for it made everything work. On the other hand, deploying to the 3.3 version after I got revision 8 working went just fine. The only problem is that your UI gets stretched, because the 3.3 device is a tablet device. I still need to figure out how you can set up MonoDroid for bigger screens.

Monday, August 1, 2011

Playing GOLF, hole 3

So in holes 1 and 2 we started with the most simple tests and already implemented a standard way of iterating over a game board, which is a list of lists. Time to add some more tests and expand our example. Before I do, though, just one little side note on the F# development environment. I regularly use functions before they exist. The F# environment constantly checks my program for these loose, non-existent functions, which slows down my Visual Studio, kind of like the VB compiler does. This is quite annoying. I haven't yet found out how I can stop the F# compiler from constantly checking for coding errors.

That said, on to the next test. Let's make it a little more complex: we'll skip the two live cells case and go directly to three live cells.

 [<TestFixture>] 
 type ``Given a gameboard with three live neighbour cells`` ()= 
   let gameboard =  
     [[0; 0; 0]; 
     [1; 1; 1]; 
     [0; 0; 0]] 
  
   let gameboard_after_gen_1 =  
     [[0; 0; 0]; 
     [0; 1; 0]; 
     [0; 0; 0]] 
    
   [<Test>] member test. 
    ``When I ask for generation 1, the middle cell should still live`` ()= 
     conway gameboard 1 |> should equal gameboard_after_gen_1   

This test will be red, because all cells after a first generation will be dead. We will need to add extra code to keep certain cells (those with two living neighbours) alive. Remember the code we had up until now:

 module GOL 
  
 let kill gameboard = 
   let rec kill_row new_row =  
     let length = List.length (List.head gameboard) 
     if List.length new_row = length then 
       new_row 
     else 
       kill_row (0 :: new_row)  
          
   [1 .. List.length gameboard] |> 
      List.map(fun x -> kill_row List.Empty)  
  
 let conway gameboard n = 
   match n with 
     | 0 -> gameboard 
     | 1 -> kill gameboard  

The line that needs to be altered is the one where we prepend (cons in F# terms) the number 0 to the new_row parameter. Instead of always consing 0, we will need to calculate which number needs to be consed. To do this calculation we will also need some notion of the row and cell index for which we need to calculate this value, so this needs to be added somehow as well. The row index can be sent along via the List.map function at the bottom. The cell index can be calculated from the length of the new_row parameter so far.

If we add the notion of a row_index, the code will look like this (the change is the extra row_index parameter being passed around):

 let kill gameboard = 
   let rec kill_row row_index new_row =  
     let length = List.length (List.head gameboard) 
     if List.length new_row = length then 
       new_row 
     else 
       kill_row row_index (0 :: new_row)  
          
   [1 .. List.length gameboard] |> 
     List.map(fun x -> kill_row x List.Empty)  
   
If we add a calculation of a new_cell_value for a certain cell index, it will look like this:
  
 let kill gameboard = 
   let calc_next_cell_value x y =  
     0 
   let rec kill_row row_index new_row =  
     let length = List.length (List.head gameboard) 
     if List.length new_row = length then 
       new_row 
     else 
       let new_cell_value = 
            calc_next_cell_value 
            (length - (List.length new_row)) row_index 
       kill_row row_index (new_cell_value :: new_row)  
          
   [1 .. List.length gameboard] |> 
      List.map(fun x -> kill_row x List.Empty)   

As for the calc_next_cell_value function, let's do this as naively as we did before: implement just enough to make the test pass. I first tried it this way, using the built-in Item method to get an item from a list:

 let calc_next_cell_value x y =  
   let current_cell_state = gameboard.Item(y).Item(x) 
   let number_of_alive_neighbours =  
     gameboard.Item(y).Item(x - 1) + 
     gameboard.Item(y).Item(x + 1) 
   if current_cell_state = 0 then 
     0 
   else if current_cell_state = 1 && 
     number_of_alive_neighbours = 2 then 
     1  
   else 0 

This might work, but the compiler seems to need type information about the list you get once you execute Item(y) on it (it is needed to resolve the Item(x) call). Adding this type information, however, was not something I wanted to do. So instead of adding a type to the gameboard parameter, I wrote my own functions to get at a certain cell.

 let cell x y = 
   nth x (nth y gameboard) 
 let calc_next_cell_value x y =  
   let current_cell_state = cell x y 
   let number_of_alive_neighbours =  
     cell (x - 1) (y) + 
     cell (x + 1) (y) 
   if current_cell_state = 0 then 
     0 
   else if current_cell_state = 1 && 
     number_of_alive_neighbours = 2 then 
     1 
   else 0  

The nth function is written as a recursive function on the list, and it actually adds some boundary checking. Once we go out of bounds (our n is bigger than our list length or smaller than 1), we always return 0. If we stay in bounds, we check if n is 1; if so, we can return the head of the list. In all other cases, we decrement n and execute the nth function on the tail of the list.

 let rec nth n list =  
   if n > List.length list then  
     0 
   else if n < 1 then  
     0 
   else if n = 1 then  
     List.head list 
   else 
     nth (n - 1) (List.tail list)  

To make this implementation of the nth function clearer, suppose our list is [1; 2; 3; 4; 5] and we ask for the third element. The bottom else case gets executed. The nth function is called again with an n of 2 and a list of [2; 3; 4; 5] (the tail of the original list). Again the bottom else statement is executed, which calls the nth function again with an n of 1 and a list of [3; 4; 5]. This time the n parameter equals 1 and the head of the list is taken, which has the value of 3. This again is a very commonly used pattern when working with lists in functional languages.

But this implementation of nth still has the problem of the compiler complaining about the type the function returns (it is more or less the same problem we had with the Item(y) call). F# is quite strict about types, and this was one of the primary problems I had while implementing the Game Of Life. Whereas a language like Scheme (from which I ripped the solution) doesn't bother with types, F# does. If you execute the nth function as (nth y gameboard), you actually want a list at a certain index in a list of lists. And if you execute the nth function again for the x parameter, you actually want the cell at position x in a list of integers. So one time you are operating on a list of lists and the next on a list of integers. F# doesn't like that. So, to solve this problem I added another function, to get the nth row in a list of lists.

 let rec nth_row n list = 
   if n > List.length list then 
     [] 
   else if n < 1 then 
     [] 
   else if n = 1 then 
     List.head list 
   else nth_row (n - 1)(List.tail list)  

This function looks very similar to the nth function, with the difference that it explicitly tells the compiler it will return something of the list type.

On the calling side, I changed the code to this:

 let cell x y = 
   nth x (nth_row y gameboard)  

The entire implementation now looks like this:

 let rec nth_row n list = 
   if n > List.length list then 
     [] 
   else if n < 1 then 
     [] 
   else if n = 1 then 
     List.head list 
   else nth_row (n - 1)(List.tail list) 
  
 let rec nth n list =  
   if n > List.length list then  
     0 
   else if n < 1 then  
     0 
   else if n = 1 then  
     List.head list 
   else 
     nth (n - 1) (List.tail list) 
  
 let do_generation gameboard = 
   let cell x y = 
     nth x (nth_row y gameboard) 
   let calc_next_cell_value x y =  
     let current_cell_state = cell x y 
     let number_of_alive_neighbours =  
       cell (x - 1) (y) + 
       cell (x + 1) (y) 
     if current_cell_state = 0 then 
       0 
     else if current_cell_state = 1 && 
       number_of_alive_neighbours = 2 then 
       1 
     else 0 
   let rec kill_row row_index new_row =  
     let length = List.length (List.head gameboard) 
     if List.length new_row = length then 
       new_row 
     else 
       let new_cell_value = 
         calc_next_cell_value 
         (length - (List.length new_row)) 
         row_index 
       kill_row row_index (new_cell_value :: new_row)  
          
   [1 .. List.length gameboard] |> 
     List.map(fun x -> kill_row x List.Empty)  
  
 let conway gameboard n = 
   match n with 
     | 0 -> gameboard 
     | 1 -> do_generation gameboard  

You might have noticed I also changed the name of the kill function to do_generation. Refactoring this was needed!

And this is actually already the main part of our Game Of Life. The only things that still need to be added are some more calculations for the alive count of a cell, some extra rule checks for keeping cells alive or killing them off, and of course the ability to calculate more than 1 generation.

I will add all of those at the next hole.

Playing GOLF, hole 2

The first hole of playing GOLF involved the first, most simple tests one could write for Conway's Game Of Life. In this next post we will add tests for a gameboard with live cells (actually just one live cell for now). For this I created a new TestFixture, so all tests are nicely separated in the ReSharper runner.

 [<TestFixture>] 
 type ``Given a gameboard with 1 live cell`` ()= 
   let gameboard =  
     [[0; 0; 0]; 
     [0; 1; 0]; 
     [0; 0; 0]] 
   let dead_gameboard =  
     [[0; 0; 0]; 
     [0; 0; 0]; 
     [0; 0; 0]]
 
   [<Test>] member test. 
    ``When I ask for generation 0, should not change gameboard`` ()= 
     conway gameboard 0 |> should equal gameboard 
   [<Test>] member test. 
    ``When I ask for generation 1, the life cell should be dead`` ()= 
     conway gameboard 1 |> should equal dead_gameboard  

The example above immediately adds two new tests. The first one will be green immediately with the current implementation, which is why I didn't bother to split this example up. The second test, however, will not pass. The current implementation simply returns the gameboard as is and doesn't alter it yet. We need to implement our conway function further to make this test turn green. This is the code; I will explain it part by part, so it's not too overwhelming if you've done little to no F# coding.

 module GOL 
  
 let kill gameboard = 
   let rec kill_row new_row =  
     let length = List.length (List.head gameboard) 
     if List.length new_row = length then 
       new_row 
     else 
       kill_row (0 :: new_row)  
          
   [1 .. List.length gameboard] |> List.map(fun x -> kill_row List.Empty)  
  
 let conway gameboard n = 
   match n with 
     | 0 -> gameboard 
     | 1 -> kill gameboard 
   

This is actually quite a naive way of implementing the red test case: for the first generation, we simply kill off the entire gameboard. I did this on purpose, because this way you can see the basic approach for working with lists, or better, lists of lists in our case. The gameboard is a list of lists, so if we want to do some operation on it, we do this list per list (or row per row). This naive implementation is also the simplest implementation you can give to make the test pass.

As for the starting point of our conway function, which you can find at the bottom of the example code, I used F#'s pattern matching feature. This is something very powerful and looks a lot like a switch case in regular programming languages, except that pattern matching in F# is more like a switch case on speed, since it gives you more opportunities. In this example the speed is still a bit missing; we just have two cases up until now, a 0 and a 1 case. For the 0 case we just return the gameboard as is. For generation 1 we kill off the gameboard.

As for the kill function, the actual starting point is the bottom line with the List.map in it (the other lines represent an inner function, but more about that in a while). This tells the compiler to execute the function kill_row for the input we give it. This input is a list of the integers 1 up to the length of our game board (which is the number of rows in it).

The kill_row function is an internal function of the kill function. This has the advantage that the gameboard input parameter is accessible and we can use it to calculate the length of one row. For this we take the first row, using List.head and of this we take the length, with List.length. This calculation of the length is something that will be done only once, not for each recursive call.

The input parameter for kill_row is an empty list. Item per item in a row, we prepend the number 0 (killing off one cell) to this list. This process continues until we get a row of all zeros that has the same length as one row of our game board (the length variable). And since the input for the List.map function is a list (one of integers), each resulting row will be put in a list. We actually transform a list of integers into a list of rows (where each row is again a list of integers).

This might seem an unusual way of working, but it is actually quite a common pattern for iterating over lists and building new lists in a functional language. It takes some getting used to, and from time to time it just looks weird, since most of us are used to doing this with loops.

This implementation also made our tests green, so on to hole number 3.

Playing GOLF, hole 1

A while back I started with the idea of writing the Game Of Life test first using F#. I already blogged some about this, but this week I finally found the time to actually implement all of it from start to finish. I did start all over again, though, since my last post didn't really contain a functional starting point. It took me a few tries to get the code right, and I also took a peek at the Scheme implementation that can be found at this link (hey, it's been five years since I've done functional programming, I am allowed to make mistakes and to see what others have done). So it is quite possible that there are similarities between my implementation and the Scheme version. There is also an F# version that can be found at that site, but its implementation uses a bit too many nested for loops for my liking.

First of all, I rewrote my first test. The idea of having cells that know how many living neighbours they have was nice at first, but with this implementation I wanted to start off with a gameboard of all dead cells. So for my first, most simple test, I started with: if I have all dead cells, then after zero generations all cells should still be dead.

 namespace GolF.Tests
 
 open NUnit.Framework 
 open FsUnit 
 open GOL 
 [<TestFixture>] 
 type ``Given a dead gameboard`` ()= 
   let gameboard =  
     [[0; 0; 0]; 
     [0; 0; 0]; 
     [0; 0; 0]] 
   [<Test>] member test. 
    ``When I ask for generation 0, should not change gameboard`` ()=  
     conway gameboard 0 |> should equal gameboard  

To make the test compile I added the conway function, which just returned a string (I didn't want to make the test green, yet).

 module GOL 
 let conway gameboard n = 
   "test"  

This made the test compile, but of course it was still red.


To make the test pass, I altered the conway function a bit.

 let conway gameboard n = 
   gameboard  

Plain and simple: implement the simplest thing to make the test turn green.

So, time to add a second test: If I ask for generation 1 of a dead gameboard, the gameboard should still be dead.

   [<Test>] member test. 
    ``When I ask for generation 1, gameboard should still be dead`` ()= 
     conway gameboard 1 |> should equal gameboard  

This test actually passes immediately with the current implementation, but it is a good test to have around once we start adding the actual code for calculating generations.

That's all for the first hole of playing GolF, on to the second one.

Tuesday, July 26, 2011

Book review: Professional F# 2.0

In one of my last posts I wrote about an F# application written test first. While working on this application I read the book 'Professional F# 2.0' from Wrox. Personally, I believe the Wrox books are most of the time the best books you can find on a subject, especially their 'Professional' series. They go very deep into each subject and give you all the details you need to know. So here is what I think about the Professional F# book, by Ted Neward, Aaron Erickson, Talbott Crowell and Rick Minerich.



The book begins with an extensive explanation of the different F# language constructs you can use: the same explanation you would get in a typical C# book, in fact. While this section gives you all the necessary language details, what types look like in F# and how certain constructs, like ifs and whiles, need to be written, it lacks a basic explanation of how a language like F# should be used. As a reader you don't get much of a clue why these kinds of constructs might be useful. In the few places where you do get some extra info on the style in which to write F# code, a reader without any knowledge of functional programming will be nonplussed about why certain things are done in a certain way. That really is a shame, because I think a lot of readers will drop off once they see the extensive differences in language constructs they need to overcome when coming from a more traditional language.

I was hopeful that in 'Part III: Functional Programming' there would actually be some good information about how to program in a functional style. But alas, again the author goes on and on about language constructs, what functions look like and how you write them, not how you use them properly. Some more advanced topics also come into play, like closures. Not that these are so extremely advanced, but a closure is a concept you don't hear about every day; you use it quite often, though, even in C#, without always thinking of it as being a closure. The explanation the author gives for the closure concept, however, is quite difficult to understand, up to the point where I had to use Google to get the right explanation. The same goes for the principle of currying, which, in my opinion, is explained the wrong way.
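
For what it's worth, this is roughly what I mean by closures showing up in everyday C# without you thinking about them; a small sketch of my own, not an example from the book.

using System;

class ClosureDemo
{
    static void Main()
    {
        int threshold = 10;

        // The lambda captures the local variable 'threshold';
        // that captured environment is what makes it a closure.
        Func<int, bool> exceedsThreshold = value => value > threshold;

        Console.WriteLine(exceedsThreshold(15)); // True

        threshold = 20;
        Console.WriteLine(exceedsThreshold(15)); // False: the variable itself is captured, not a copy
    }
}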

As for the F# language itself, it adds quite a lot of special constructs you need to get used to. Especially when writing your first F# program, be sure to have a language reference nearby. For such a reference, this book is ideal; it explains all the little tidbits you need to know. But still, in my opinion, F# adds a lot of unnecessary language constructs. When you are, for instance, defining a recursive function, you are obliged to use the rec keyword, otherwise the compiler won't allow the function to call itself. Coming from a Lisp kind of language, this adds noise to your code (although I have to admit Scheme has its own weird language constructs).

Since this book was quite a disappointment for me, I would gladly like to guide you to some other useful resources:

  • The MSDN library will not be of much help. It will also give you a lot of info about the language, as this book does, but it doesn't say much about the programming style to use. What you can find on the MSDN site, however, is how to get started setting up your first F# project in Visual Studio. This was something I struggled with in the beginning of programming F#: how a project is structured.
  • Tryfsharp: This site by the Microsoft research team lets you go through an interactive tutorial in which you can immediately try out different examples in an F# interactive console. In their tutorial they explain all the important F# constructs that are different from the other programming languages you come across (without too much other explanatory crap that you do not need to know). The examples given are quite easy to follow (with some small exceptions) and quite close to the examples you will find in introductory books on functional programming style. This site is based on the book 'Programming F#' by Chris Smith (O'Reilly). I actually got much better information from this site than from the Wrox book.
  • fsharp community samples: These can be found at CodePlex. You can find some samples here, but also quite some links to other resources and blogs. A good starting place, if you ask me.
  • F# wikibook: Explains not only the basic concepts, but also some advanced topics such as advanced data structures in F#, which, after you've gone through the introductory stuff, are quite useful.

With all this material I now hope to get a better Game Of Life written. I will probably throw away what I already have and not try it with a class, but with only functions. We'll see where this ends.

Tuesday, July 19, 2011

Cross Platform Mobile Development

At this year's NDC conference, I followed a couple of talks about mobile development. It was announced there that the people behind MonoTouch and MonoDroid were moving to a new company named Xamarin. Well, as of yesterday, they have actually made the move, and all the sources for doing cross-platform development can be found at their site.

They are actually providing a really cool way of doing cross-platform development. You don't need to learn a whole lot of Objective-C or Java to develop applications for iPhone or Android; you can develop in C# .NET, the language you use for Windows Phone 7 development. The only thing you still need to provide that is specific to the iPhone and Android devices is the user interface for each device. If you develop your applications wisely, though, that is, using one of the MV* patterns, this is a walk in the park.

For iPhone development you will need a Mac, because the iPhone SDK only runs on Mac OS X. You can then install Mono and MonoDevelop (instead of Visual Studio) for OS X to start developing. People like me, who are used to tools like ReSharper, will, however, need to learn how to type again.

For Android development, the Java SDK is needed, and again MonoDevelop or Visual Studio 2010. With some handy project linking in Visual Studio, you can easily provide a shared code base for all three platforms. On this topic, watch the NDC talk by Jonas Follesoe about cross-platform development!

The good news also is that Xamarin offers both MonoTouch and MonoDroid as a free trial version that doesn't expire. The only drawback to the trial is that you can't deploy to actual iPhone or Android devices, only to their respective emulators. But this should be enough to get you started. Once you want to try out specific features the emulators don't offer (like GPS, camera, ...), you will need to move to a paid license (or once you want to deploy to the marketplace, that is).

You will, however, need a little extra effort if your initial WP7 projects are hosted on TFS. MonoDevelop for now does not offer a built-in way of synchronising projects with TFS. You can use Microsoft Visual Studio Team Explorer Everywhere 2010, which offers a command line utility for connecting to TFS. SvnBridge also gives you an SVN kind of way of connecting to TFS. Or you could move your code base to SVN or GitHub (I love that place, not only for its name) altogether.

This weekend there is also the MonoSpace conference going on in Boston, specifically for Mono development. They have a couple of sessions on mobile development as well. I hope they will put videos of their talks online.

Since I don't own a Mac (yet), I will try out the MonoDroid SDK in the next couple of weeks. Be sure to check back to watch the progress (and the walls I will probably hit).

Tuesday, July 12, 2011

Going to the Opera

I recently installed the Opera browser on my laptop. The main reason for this was that I thought the Opera guys gave some really good HTML 5 talks during this year's NDC conference. And they just looked like really nice guys (I'm allowed to judge people on niceness). The Opera browser is also pretty far along in implementing HTML 5 features, so it might be interesting to try it out.

So, after installing this browser, I only used it for a day or so. Good points were its speed: faster than IE, faster than Firefox and faster than Chrome, the last of which had until now been the fastest browser I had used. Bad points, however, are the way it treats shortcut keys. Ctrl click opens a new tab, like Chrome, but also immediately places focus on this tab, unlike Chrome. To open the tab in the background in Opera, you need Ctrl Shift click, something I am not used to. This, sadly enough, was actually the main reason I used Opera for only a day. On a side note, one thing to always keep in mind when designing user interfaces: don't change the way things look and feel for a user too much (did you hear me, Office guys?), they get scared and run.

But then recently I gave it another shot. I primarily tried it out again because my laptop, while running an instance of Visual Studio, a VM and Chrome with a couple of tabs open, was getting really, really, really slow. Looking at my CPU usage I saw that Chrome was actually using more resources than my Visual Studio while compiling. Some googling showed me that Chrome uses a sandbox for each tab, which also means it uses the same amount of resources for each tab, since resources are not shared. This has the advantage that when one tab crashes, it doesn't crash your other tabs, a phenomenon I haven't really experienced yet (in my case my other tabs actually did crash as well; not good!). It has the disadvantage of eating CPU cycles like a madman if you like to have many tabs open all the time.

So I reconfigured Opera as my default browser and immediately got annoyed again by the missing Ctrl click. There are some workarounds for this (google them, you'll find them), but they didn't really work for me. Altering your shortcuts in the advanced preferences didn't do the trick, and the JavaScript solution that you can add didn't work either (my clicks stopped working altogether). I do find the fact that you can add your own JavaScript code to this browser pretty cool, though. Anyhow, while trying this out, I restarted my browser a couple of times, and during my last restart I got the message that there was an update ready to install. This, I must say, I didn't really like; I like the fact that Chrome installs updates in the background and that I'm not bothered by this. My initial annoyance with update windows probably stems from the iTunes app that starts nagging about updates every friggin' time you start it.

So I installed the update, reopened the Opera browser and, as if by miracle, Ctrl+click now opens tabs in the background. I went from version 11.11 to 11.50, so it seems I skipped a few versions as well. I suppose this is a new feature, I have no idea, but I'm happy with it. And the most important thing: my laptop doesn't become overly slow while I leave some browser tabs open all day. Apart from this, Opera offers pretty much the same features as Chrome. So for now, I'm sticking with it!

---

On a side note: I have been using Opera for a week or two now. I am still very happy with its performance (read this as: the performance of the rest of my system). There are some drawbacks, though. Not all CSS is rendered as it should be. My own blog, for instance, doesn't render correctly in Opera, and my blog's statistics are simply not shown in it. Also, yesterday I was editing a new blog post, only to find out today that the autosave and Ctrl+S, which work just fine in Chrome, didn't function in Opera. I lost my changes, which I don't like! I'm going to keep using it for a couple of weeks (not for blog editing) and re-evaluate then.

Tuesday, July 5, 2011

Playing GOLF

Qframe recently held a bootcamp where all employees were invited to follow sessions on all kinds of things. One of those was a code retreat in which we tried to implement as much of the Game Of Life as possible in 45 minutes' time, while giving ourselves a couple of restrictions (implement this without using if statements, implement this test first, ...). We redid this exercise a couple of times, and during the last go some colleagues and I gave ourselves the restriction of using a completely different language from the one we are used to. So we tried implementing it in F# (hence the name GOLF: Game Of Life in F#). I must say, we didn't get very far, but ever since I have been playing with the idea of finishing this exercise. In this post you will find my experiences in trying to do so.

First of all, I must admit, I do have some experience in functional programming. This doesn't mean, however, that I am a bright and shining light in functional programming (far from it). But I did use Scheme for a year at university, and I still know what car and cdr are, what a REPL is, and so on. On the other hand, I am pretty sure that in the GOLF exercise I made some very big errors in functional programming, and I hope you, as a reader, will be a bit forgiving about those. One thing that is new to me is the fact that I can mix a functional style with objects. So, feel free to comment on the code, since I do want to learn from my mistakes!

To get started I needed an F# project. I chose an F# Library project, since I wanted to start with the logic of the game first; a fancy user interface can be added later. This type of project gives you two files: a Module1.fs file, in which to place your code, and a Script.fsx file, in which you can write code to informally test your F# program. Since I am going to use a unit testing framework to test the F# code, I removed the .fsx file.

Another thing you need to be aware of when writing F# code is the order in which you place your files. They are processed in order by the F# compiler, so make sure a type (or function, or whatever) is known to the compiler before you use it. This somewhat resembles the behaviour of a C++ compiler, where you need the correct include statements to make types visible to each other. A C# programmer doesn't need to worry about this, since all types within the same project can see each other.

After the basic project setup, I wanted to write this application test first. So after some Google research I found a couple of unit test libraries you can use with F#. You can in fact use the NUnit framework if you want to, but I chose FsUnit, which adds a more natural style of writing tests in F# (and uses NUnit to annotate its tests). Since the FsUnit project is hosted on NuGet, installing it was quite easy (install-package fsunit and you're done). The NuGet package also adds a .fs file that gives you some example test cases to get started. I didn't go to the effort of placing my test code in a second F# library project just yet; I can still do that later on. I did place the test code in a second .fs file (after the Module1.fs file).

OK, so on to writing my first test. I started simply, by saying that when I create a new Cell in the Game Of Life, it shouldn't have any neighbours. F# has the notion of namespaces, which you can declare at the top of your file. If you want to use functionality from other libraries (like NUnit, FsUnit and the Module1 file I am writing tests for), you need to add an open statement for each. This is similar to writing using statements in C# for the namespaces you want to use. After this you can place your first test class; in my case this class is called ``Given a new cell``. This naming style is handy when looking at your tests in a test runner. F# permits spaces in the class name, as long as you surround it with double backticks. Beware though, these are backtick (grave accent) characters that slant from the top left down to the bottom right (they do NOT go straight down like ordinary single quotes, look for them on your keyboard!). Since this is a test class, you need to decorate it with the TestFixture attribute.

 namespace GolF.Tests 
  
 open NUnit.Framework 
 open FsUnit 
 open Module1 
  
 [<TestFixture>]  
 type ``Given a new cell`` ()= 
   let cell = new Cell() 
  
   [<Test>] member test. 
    ``when I ask how many neighbours it has, should answer 0.`` ()= 
       cell.numberOfNeighbours |> should equal 0  

The test class creates a new Cell, a type which I still need to implement, since I am working test first. After that you can find my first test. Again, I use the fact that you can place an entire sentence between those funky backticks. The test is annotated with the Test attribute, and I also indicate that this test method is a member of the test class. Beware that F# does a lot of its parsing based on indentation, so indenting in the correct way is really important. After I wrote my first test, the F# compiler was actually complaining about the ``when I ask how many neighbours it has, should answer 0.`` sentence, which I found strange. The FsUnit example file, with the same sort of syntax, compiled just fine. After staring really hard at the two files, I found out that before the sentence there was not only a tab, but also an extra white space, which is very easy to miss.

With this test written, it was time to get it compiling, meaning I had to provide a Cell type with a numberOfNeighbours member.

 module Module1 
  
 type Cell =  
   val numberOfNeighbours : int 
   new() = { numberOfNeighbours = 0 } 

This immediately made my test green, which actually isn't good practice (you want to see a test fail first), but I'll ignore that for now.

Time to add some more tests. I will finish this little application over the next few weeks and will post some more about my findings along the way, until we have a finished Game Of Life example. I hope this short post is enough to get you started with F# projects for now.
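To give an idea of where this is heading, here is a rough sketch of what a next test could look like, together with the extra Cell members it would need. Both the Cell constructor that takes a neighbour count and the aliveInNextGeneration member are purely hypothetical at this point (they are not part of the code above), and the rule used here, "exactly three neighbours means a live cell", ignores the cell's own current state for now.

 [<TestFixture>] 
 type ``Given a cell with three neighbours`` ()= 
   // hypothetical constructor overload taking the number of neighbours 
   let cell = new Cell(3) 
 
   [<Test>] member test. 
    ``when I ask if it is alive in the next generation, should answer true.`` ()= 
       cell.aliveInNextGeneration |> should equal true 
 
 // And the (equally hypothetical) additions to the Cell type in Module1.fs 
 // that would make this compile and pass: 
 type Cell = 
   val numberOfNeighbours : int 
   new() = { numberOfNeighbours = 0 } 
   new(neighbours : int) = { numberOfNeighbours = neighbours } 
   // simplified: only the "exactly three neighbours" rule, for now 
   member this.aliveInNextGeneration = this.numberOfNeighbours = 3 

Whether Cell keeps a plain neighbour count or ends up holding references to its actual neighbouring cells is exactly the kind of design decision that is still open at this point.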

Sunday, June 12, 2011

NDC 2011

Last week some colleagues and I went to the NDC conference in Norway.



Apart from the weather, which was crappy, it was an awesome conference. I saw some great talks and am looking forward to watching some of the talks I missed (because I was in another session) online. I also went home with some great ideas for new books I want to read:

  • Introducing HTML 5, by Bruce Lawson and Remy Sharp. They gave the HTML 5 talks during the second day of the conference in a small and very crowded room. They convinced me even more of the amazing things you can do for web pages with the upcoming standard. It was also a relief to hear someone from the Opera browser team talking about HTML 5, instead of the standard Microsoft talks I had heard thus far. 
  • Test-Driven JavaScript Development, by Christian Johansen. Too bad his talk was given simultaneously with Rob Ashton's (Document databases with ASP.NET MVC), Kevlin Henney and Anders Norås's (Introducing The FLUID Principles) and Hadi Hariri's (Dynamic in a Static World). I went to this last session, which was very good. It gave me some ideas and examples of more things I can start doing with dynamic. I would really like to try to get a DSL written with it (I would probably start off by copying a Ruby example, since it is not easy stuff). The talk about the FLUID principles was very good as well; my colleagues went to that one and it is one of the talks to catch on rerun, once the videos are put up. I did follow the talk about the SOLID principles, which was a nice refresher. I went to the RavenDB by Example talk on day 3 of the conference. It was a good thing the speaker also mentioned some of the problems he had with a document database, such as having to rethink the design of your data as opposed to relational databases.
  • The Joy of Clojure, by Michael Fogus and Chris Houser. I didn't get to catch any of the Clojure and F# talks, and it would be nice to get up to speed with those. Also something to watch on rerun, to see what we can do with it.
  • 97 Things Every Programmer Should Know, by Kevlin Henney, and 97 Things Every Software Architect Should Know, by Richard Monson-Haefel. I only went to Kevlin's talk about the 101 things he learned in architecture school, which was light, but enlightening. His other two talks apparently were very good as well, as my colleague went to those.
  • Specification By Example, by Gojko Adzic. His talk wasn't that good, but I think a lot can be learned from the book. One thing I will really remember from the conference is the many question marks that speakers put next to BDD, DDD and agile. While they are all good techniques, they have their flaws, and the software community really needs to figure out how we can do these things even better. Gojko's post on his blog about one of these talks explains the problem a bit as well. He also mentioned our Cronos colleagues from iLean in his talk, which I think was pretty cool. And he is also one of the creators of cuke4ninja, a port of Cucumber for .NET.


Apart from the books and talks I already mentioned, I also followed some of the talks on mobile development. The ones about multi-platform development were really informative. The MonoTouch and MonoDroid projects have moved from Novell to Xamarin, and a new release is planned in the coming months. The biggest take-away there: use the latest MonoTouch and MonoDroid builds for now and switch to the Xamarin builds once they are published. Jonas Follesoe's talk on this topic was great; if you're doing mobile, catch it on rerun! 

I also really liked the AOP talks given by the PostSharp people. They have a great framework for doing AOP, which is really powerful and gives you a lot of cool features for keeping your code nice and, well, sharp. They also mentioned some other AOP frameworks, which I think is a nice gesture, since they are not the only ones out there.

The CQRS talk by Fredrik Kalseth was inspiring as well, although he only covered one part of CQRS. It was explained really well and can be used as a basis for future projects. I also learned in his talk that JetBrains has a Ruby IDE, which I didn't know about. Looking at their site now, I see they're also working on an Objective-C IDE. 



So, all in all, a very good conference, which I hope to catch again next year, hopefully without the rain. I learned a lot and now have a whole lot of stuff to read and learn even more about. Too bad there are only 24 hours in a day (of which I really need 9 to sleep, since I'm a sleepy head).

Thanks as well to my colleague, Guy, for providing some very nice pictures. You can find the entire collection here.

Monday, June 6, 2011

More on DynamicObject: TryInvokeMember

I just came across another old example of DynamicObject that I would like to share with the world.

What I didn't mention in my last post is that DynamicObject has more methods you can override besides TryGetMember and TrySetMember. Those two are very useful for properties you want to add on the fly. For methods, TryInvokeMember is the better choice to override. This method gets called on your DynamicObject whenever the runtime can't resolve a method call on it.

Let's, for instance, write a very simple dynamic tracer class. First of all, I will again inherit from DynamicObject. This time the class I am creating, DynamicTracer, will not hold an XML tree, but will wrap another object.

Every time a method is called on this object, we will print out (hence the name DynamicTracer) the operation being invoked, and then forward the call to the wrapped object.

 using System; 
 using System.Dynamic; 
 using System.Reflection; 
 
 class DynamicTracer : DynamicObject 
 { 
   object theObject; 
 
   public DynamicTracer(object theObject) 
   { 
     this.theObject = theObject; 
   } 
 
   // Called by the runtime whenever a method invoked on the dynamic 
   // object cannot be resolved statically. 
   public override bool TryInvokeMember(InvokeMemberBinder binder, 
      object[] args, out object result) 
   { 
     try 
     { 
       // Trace the call, then forward it to the wrapped object via reflection. 
       Console.WriteLine("Invoking {0} on {1}", binder.Name, 
            theObject.ToString()); 
       Type objectType = theObject.GetType(); 
       result = objectType.InvokeMember(binder.Name, 
            BindingFlags.InvokeMethod, null, 
            theObject, args); 
       return true; 
     } 
     catch 
     { 
       // The wrapped object has no matching method: report it and 
       // tell the binder the call could not be handled. 
       Console.WriteLine("Oops, cannot resolve {0} to {1}", 
            binder.Name, theObject.ToString()); 
       result = null; 
       return false; 
     } 
   } 
 } 

The usage is quite similar to TryGetMember and TrySetMember, only now we get an InvokeMemberBinder as a parameter, together with the array of arguments that were passed and an out parameter for returning the result. I use reflection to invoke the method on the actual, wrapped object.

To use this tracer, the only thing you need to do is wrap an object with it:

 // FileInfo lives in System.IO; wrap it in the tracer and call it dynamically 
 FileInfo fi = new FileInfo("c:\\temp\\test.txt"); 
 dynamic tracedFI = new DynamicTracer(fi); 
 tracedFI.Create();  

When Create is called, no matching method is found on the DynamicTracer class itself, which instead triggers the TryInvokeMember method. Although this is pretty cool and useful, beware that you lose all auto completion on any object you wrap this way, which is, as with all things dynamic, a downside.