Category Archives: Development

Debugging C# on OS X with Visual Studio Code

Thanks to a helpful tweet from @weinand I was able to use Visual Studio Code to debug C# code on Mac OS X.

Here’s how you can get going from start to finish:

  1. Install DNX
  2. Fire up your Terminal emulator
  3. Execute yo aspnet & choose Console Application (this assumes Yeoman and the ASP.NET generator are installed: npm install -g yo generator-aspnet)
  4. Execute dnu restore
  5. Launch Visual Studio Code & open the ConsoleApplication folder generated by Yeoman
  6. Click Debug button followed by the Gear button
  7. Replace or add this entry to launch.json:
    {
        "name": "Launch ConsoleApplication",
        "type": "mono",
        "program": "Program.exe",
        "stopOnEntry": true       
    }
  8. Invoke the Command Palette (⇧⌘P) & choose Configure Task Runner
  9. Replace the existing entry for tsc in tasks.json with the following entry:
    {
        "version": "0.1.0",
        "command": "mcs",
        "args": [
            "-debug",
            "Program.cs"
        ],  
        "showOutput": "silent",
        "taskSelector": "/t:",
        "tasks": [
            {
                "taskName": "exe",
                "isBuildCommand": true,
                "problemMatcher": "$msCompile"
            }
        ]
    }
  10. Invoke the Command Palette and choose Run Build Task (⇧⌘B)
  11. Click the Debug button and, finally, click the Play button (F5)
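
For reference, the build task and launch configuration above boil down to roughly these two Terminal commands (VS Code's mono debugger handles the second one for you):

mcs -debug /t:exe Program.cs
mono --debug Program.exe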

Debug C# on OS X with VS Code

\o/

Thanks @weinand!

Downgrading the SendGrid Add-On for Windows Azure

One of the (many) awesome aspects of Windows Azure and the Azure Portal is the support for Add-Ons. This feature allows third parties to integrate their services closely with Azure and offer their products via the Azure Portal Store. The Add-Ons feature also lets you manage much of those services right within the Azure Management Portal.

Azure and SendGrid

I’ve been using the SendGrid service off-and-on for the past year, having set it up initially via the Windows Azure Add-Ons Store. SendGrid is an awesome service for sending emails from a variety of platforms. It offers great prices and rich analytics, and can be used from the Node.js JavaScript environment found throughout the Windows Azure services.

And, thanks to the integration I mentioned above, it’s very easy to scale your SendGrid service from the free option up to the plans with higher allowances and richer analytics.

The Big But

Pee-Wee Big But

Now that’s all great, but

While it’s as simple as a few clicks in the Azure Management Portal to upgrade your SendGrid plan…

If you have created your SendGrid plan via the Windows Azure Add-On Store, there is literally no way to downgrade the plan.

After searching both the Add-Ons area in the Azure Portal as well as the SendGrid management portal, I got in touch with both SendGrid and Azure support. They were very helpful but, after several days, they confirmed that this capability is simply not available for accounts created via the Azure Add-On Store.

So what can you do? The solution is both simple and obvious, if tedious: you can add multiple SendGrid plans to your Azure account via the Azure Add-On Store. Simply add a new SendGrid plan at the lower rate, then migrate over the SendGrid credentials you are using in your applications and services.
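
The migration is easiest if your code reads the credentials from configuration rather than hard-coding them. A minimal Node.js sketch (the variable and environment variable names here are hypothetical):

// Hypothetical: read the SendGrid credentials from the environment so that
// pointing at the new plan is a configuration change, not a code change.
var sendGridUser = process.env.SENDGRID_USER;
var sendGridKey = process.env.SENDGRID_KEY;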

Tedious? Yes. But it works if you need to downgrade before this ability is officially supported.

Delphi Shelf Life Reaches a New Low

I am going to try to keep this post short and to the point. I don’t want to rant (too much) or say anything I regret (too much), but something has to be said.

When it comes to iOS development, Delphi XE4, a major product released by Embarcadero five months ago, is now obsolete. If you want support for iOS 7 you must buy their new product, Delphi XE5.

Let’s take a step back and look at the facts when it comes to Delphi and Embarcadero chasing the mobile landscape:

  • Delphi XE2 – released September 2011. Claims support for iOS development but, by all reports, fails to deliver. iOS development makes use of the FreePascal compiler and cannot use traditional Delphi code.
  • Delphi XE3 – released September 2012. Support for iOS development completely removed. Anyone who built their iOS app on the foundation of XE2 was left out in the cold.
  • Delphi XE4 – released April 2013. Claims support for iOS development (again). Anyone who wants the iOS development support promised in XE2 must now buy XE4, released as a new product only seven months after XE3.

And now Delphi XE5 has been released only five months after Delphi XE4. It’s another new version and another paid upgrade.

Here’s the real rub though. iOS 7 was just released by Apple. It features a new UI and new navigation elements for apps. Anyone using Xcode (the traditional tool for iOS development, which is free) could simply recompile their apps and resubmit them to support iOS 7.

What about Delphi XE4 customers? The ones who just put down their hard earned money for iOS support, again, five months ago? They are left out in the cold. Again. If a Delphi XE4 customer wants to support iOS 7 they must now purchase Delphi XE5. I confirmed this myself in a back-and-forth on Twitter with Jim McKeeth from Embarcadero.

Jim goes on to point out that, if you forgo using the bundled UI kit in your iOS app, you can still leverage the new iOS 7 elements using Delphi XE4.

However, this basically amounts to suggesting the customer not use the very UI technology that has been at the heart of Embarcadero’s marketing strategy for several releases now: FireMonkey.

To be clear, I am not upset by a six month release cycle. A lot of companies do that and it’s a smart move for some of them. However, Embarcadero is releasing Delphi like a subscription while selling it like a packaged product. While they offer “Software Assurance”, this is a far cry from a subscription: it’s an added fee on top of your purchase that lets you get any major upgrades that happen to be released within a year. It’s insurance. It’s the type of thing most of us pass up when checking out at Best Buy.

All-in-all this has just left a horrible taste in my mouth and the mouths of many other developers. My advice? If Delphi has turned into a subscription then charge a subscription. Stop packaging it as something that will be obsolete in five months without another purchase.

Resources for the Self Employed Software Developer

After a year of working for myself as a software consultant, this Monday I begin a new position at IDMWORKS. And, while I’ve had a blast being self-employed, I’m very excited to start this new chapter in my career with a lot of really cool ladies and gentlemen.

As I wind down my consulting I thought I’d do a blog post describing some of the resources I’ve used for the past few years in order to work with my customers as a software consultant and freelance developer. One of the fun parts of venturing into this was learning about all of the really awesome services there are out there – and at amazing prices – to help the solo consultant really hit the ground running.

Time Tracking & Invoicing

Harvest

For tracking time and invoicing customers, I really dig Harvest. It does everything I need and then some, and it’s priced right. Harvest is very easy to use and lets you manage:

  • Clients
  • Projects
  • Time Sheets
  • Invoices
  • Retainers
  • Payments

It also lets you accept payments via PayPal, Stripe, or Authorize.Net, sends out automatic invoice reminders, and more. When it comes to time tracking, they have a very nice HTML5 page for that, or you can use mobile apps and desktop widgets.

And, you can use it for free until you need more than two projects or more than four clients. After that, if you are the only user you are looking at a whopping $12/month.

Contracts

Contractually

In order to get projects under way you’ll eventually need to draw up some contracts and get them signed. I’m a fan of Contractually for getting this done. They have a library of contract templates available to customize, and you can save your customized templates for re-use later. From there you can invite folks to review and, optionally, edit the contract online with full version control. Once both parties accept the contract, both can sign the contract digitally. With the latest changes from the team at Contractually, the party you invite to review and sign no longer has to create a Contractually account.

Like Harvest and the rest of the resources on this list, Contractually is priced right. The price has gone up since they launched, but you can still get a solo account for $49/year, which is a bargain for getting this level of ease when it comes to the contract process.

Project/Task Management

Asana

So everyone’s all “Trello”! Honestly, I really like Asana for project/task management. It’s a very straightforward, “traditional” task management system that lets you break things down into workspaces, then projects, and finally tasks. Tasks can have sub-task lists, and it’s very easy to invite customers to participate in individual workspaces.

Asana is completely free for teams up to 15 people. After that their pricing model scales up nicely.

Hosted Source Control

Bitbucket

As with project management, there’s already another strong contender in this category: GitHub. And I love GitHub, especially for working on collaborative, open source projects. The workflow is just superb. But it costs money to host private repositories, and you must pay more as you add repositories. To me this discourages version controlling projects and keeping them offsite.

Bitbucket is a really wonderful product. The only real weakness is that it’s not GitHub. And everyone uses GitHub for collaborative projects. But if you need somewhere to store your private projects with great features and the ability to easily invite your customers, Bitbucket gives you that and is free – including unlimited private repositories – for up to 5 users.

And while we’re on the topic, Atlassian also provides a wonderful Git client for OS X (and Windows) called SourceTree.

Hosted Servers/Services

Windows Azure

If you follow my blog or my Twitter account you’ll know I’m a fan (if sometimes critic) of the Windows Azure services. To me, there is no single stronger tool a self-employed software consultant can have under her belt. Eventually you are going to need to host things somewhere that isn’t your machine. In my experience, the hosting options out there come in two flavors: cheap and horrible, or expensive and great.

Windows Azure gives you hosted environments for many different things, from websites to full virtual machines (Windows and Linux) to SQL data, off-site storage and APIs for mobile applications. And the pricing is very attractive. All of the services let you start off for free and the portal and services are structured in such a way that you will be warned before you are ever billed. From there, the pricing scales very nicely.

Most importantly, Windows Azure is absolutely a high priority for Microsoft. This is obvious from their recent developer conferences and product releases. For now it looks like Windows Azure is more of an Xbox than a Silverlight.

Training/Education

Pluralsight

The independent software consultant must constantly stay up-to-date on the available technologies in the field and how (and when) to exploit them. And Pluralsight is just a fantastic resource for training and education on the top technologies in development today. They go far beyond just how-to and include great details on the whys of what you are watching.

And to stick with our established pattern, the pricing doesn’t suck. Starting at $29/month you get access to their entire catalog of courses. This one is a no-brainer, folks.

Conclusion

I’ll still be blogging here, plus I’ll be contributing to the IDMWORKS blog going forward. Feel free to share any resources you’ve found useful in the comments and good luck!

Painless File Backups to Azure Storage

Windows Azure

In my previous post I discussed steps and utilities for backing up Azure SQL Databases in order to guard against data loss due to user- or program-error. Since then I’ve started investigating options for backing up files – specifically those in Azure Virtual Machines – to the same Azure Storage service used previously.

Just like before I was delighted to find an existing app that makes this super easy. The AzCopy utility makes it possible to copy files to and from Azure Storage and local storage, or from Azure Storage to Azure Storage, with a nice set of arguments.

For instance, the following command will copy all of the files in a local folder, recursively, over to Azure Storage. Files that already exist in the destination are skipped unless the source file is newer, in which case they are overwritten (/S recurses into subfolders, /V gives verbose output, /Y suppresses confirmation prompts, and /XO excludes source files older than their destination copies). Perfect.

AzCopy.exe C:\HereBeImportantThings http://yourstoragename.blob.core.windows.net/yourstoragecontainer /destKey:YourSuperLongAzureStorageKey /S /V /Y /XO

The Azure storage account name and access key can be accessed in the Storage section of the portal, by clicking the Manage Access Keys button at the bottom of the Windows Azure Portal.

Azure Storage Information

This command took under a minute to back up 3,000 files to Azure Storage from an Azure Virtual Machine. From there I can keep running the command and it will only copy files that are new or have changed since the last run.

As in my previous post, my little utility AzureStorageCleanup is a nice companion to this process. I have updated the source on GitHub to include a -recursive argument, which will remove files within the virtual hierarchy found in blob storage (created by the recursive option in AzCopy).

AzureStorageCleanup.exe -storagename yourstoragename -storagekey YourSuperLongAzureStorageKey -container yourstoragecontainer -mindaysold 60 -recursive

By scheduling AzureStorageCleanup to run with the -recursive option, you can remove old files to keep storage use in-check.
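
For example, you could use the built-in Windows Task Scheduler to run the cleanup nightly. A hypothetical task (the path, task name, and start time are all yours to choose):

schtasks /Create /TN "AzureStorageCleanup" /TR "C:\Tools\AzureStorageCleanup.exe -storagename yourstoragename -storagekey YourSuperLongAzureStorageKey -container yourstoragecontainer -mindaysold 60 -recursive" /SC DAILY /ST 02:00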

Painless Azure SQL Database Backups

Windows Azure

While the SQL Database service from Windows Azure provides resiliency and redundancy, there is no built in backup feature to guard against data loss due to user- or program-error. The advised way to handle this is to take a three-step approach:

  1. Make a copy of the SQL Database
  2. Backup the database copy to Azure Storage
  3. Maintain & remove any outdated backups on blob storage

The process in Windows Azure that backs up a SQL Database to blob storage is not transactionally consistent, which is why the initial database copy is required.
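
For context, an Azure SQL Database copy is created with the platform’s T-SQL database-copy feature, along these lines (run against the master database of your server; I assume the tool below issues something equivalent on your behalf):

CREATE DATABASE MyDatabase_Copy AS COPY OF MyDatabase;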

Richard Astbury has provided an excellent tool, SQLDatabaseBackup, that takes care of the first two steps with little fuss:

SQLDatabaseBackup.exe -datacenter eastus -server hghtd75jf9 -database MyDatabase -user DbUser -pwd DbPassword -storagename mybackups -storagekey YourSuperLongAzureStorageKey -cleanup

The data center and server name can be obtained from the SQL Databases section of the Windows Azure Portal.

SQL Database Information

The Azure storage account name and access key can be accessed in the Storage section of the portal, by clicking the Manage Access Keys button at the bottom of the portal.

Azure Storage Information

Finally, by specifying the -cleanup argument, the utility will delete the SQL Database copy it creates after the backup is successfully created.

And while the pricing for Azure blob storage is very affordable, you may want to automate the process of deleting old backups. I’ve created a very simple utility that does just that. AzureStorageCleanup uses command line arguments that mirror the SQLDatabaseBackup project (as it is meant to complement its use):

AzureStorageCleanup.exe -storagename mybackups -storagekey YourSuperLongAzureStoragekey -container sqlbackup -mindaysold 60

The above command will remove any files sixty or more days old from the container “sqlbackup” – the default container used by SQLDatabaseBackup. The details of each file deleted are printed to the console.

By scheduling these two utilities on an available machine you’ll have painless, affordable backups for any of your Windows Azure SQL Databases.

Integration Testing ASP.NET MVC Projects with SpecsFor.Mvc

SpecsFor.Mvc

I’ve recently spent some time looking into different frameworks for doing integration testing for ASP.NET MVC projects. One that caught my eye almost immediately was SpecsFor.Mvc. Unlike other solutions I found for writing integration tests, SpecsFor.Mvc lets you write tests in a fashion that is very similar to writing unit tests, using strongly-typed access to your application’s data without having to script or dig into the DOM.

Some nice things that SpecsFor.Mvc provides out-of-the-box:

  • Hosts your ASP.NET MVC project, building a specified configuration of your project and then hosting it automatically under an instance of IIS Express
  • Provides strongly-typed methods for navigating to controllers and actions, checking route results, and filling out and submitting forms
  • Provides access to validation data, including access to the validation summary as well as the validity of each property in your view’s model

SpecsFor.Mvc uses Selenium WebDriver internally in order to drive the browser. You can still access the Selenium IWebDriver interface any time you need to dig further into your page.
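
For example, once you have the MvcWebApp instance you’ll see created below, you can drop down to Selenium directly. A minimal sketch, assuming the driver is exposed through the app’s Browser property (as it was in the version I used) and a using statement for OpenQA.Selenium:

// drop down to Selenium when the strongly-typed helpers don't cover a case
IWebElement heading = app.Browser.FindElement(By.TagName("h1"));
Assert.IsFalse(string.IsNullOrEmpty(heading.Text));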

Let’s take a look at these things in practice by writing a few hypothetical tests against the stock ASP.NET MVC Internet Application template.

Starting the Project

To get started, create a new ASP.NET MVC 4 project in Visual Studio.

New ASP.NET MVC 4 Web Application

Select the Internet Application project template, check the option to create a unit test project and click OK.

Internet Application Project Template

Once both the ASP.NET MVC project and the unit test project have been created, right-click the References folder under your Tests project and click Manage NuGet Packages.

Manage Nuget packages

Under Online, search for and install the official SpecsFor.Mvc NuGet package.

SpecsFor.Mvc Nuget Package

Initializing the Hosting Environment

The next thing that we need to add to the Tests project is some code that will initialize the IIS Express hosting environment using the classes provided by SpecsFor.Mvc. To do this, create a new class called MvcAppConfig with the following contents (adjust the namespace as needed):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using SpecsFor.Mvc;

namespace SpecsForMvcDemo2.IntegrationTests
{
    [TestClass]
    public class MvcAppConfig
    {
        private static SpecsForIntegrationHost integrationHost;

        [AssemblyInitialize()]
        public static void MyAssemblyInitialize(TestContext testContext)
        {
            var config = new SpecsForMvcConfig();

            config.UseIISExpress()
                .With(Project.Named("SpecsForMvcDemo"))
                .ApplyWebConfigTransformForConfig("Debug");

            config.BuildRoutesUsing(r => RouteConfig.RegisterRoutes(r));

            config.UseBrowser(BrowserDriver.Chrome);

            integrationHost = new SpecsForIntegrationHost(config);
            integrationHost.Start();
        }

        [AssemblyCleanup()]
        public static void MyAssemblyCleanup()
        {
            integrationHost.Shutdown();
        }
    }
}

The class is marked as a TestClass even though it contains no test methods. This is required for the MyAssemblyInitialize() and MyAssemblyCleanup() methods to run: the AssemblyInitialize and AssemblyCleanup attributes only work in a class marked with the TestClass attribute. With this code in place, MyAssemblyInitialize() will run once before all of the test methods in the project and MyAssemblyCleanup() will run after they have all completed.

The code found in MyAssemblyInitialize() is fairly straightforward given the clarity of the SpecsFor.Mvc API. A new SpecsForMvcConfig instance is created and set to use IIS Express with a given project name and configuration name. Next, a call to BuildRoutesUsing is made in order to register the application’s routes – and thus its controllers and actions – with SpecsFor.Mvc. Finally, the browser is specified and the configuration is used to start a new instance of the SpecsForIntegrationHost.

The MyAssemblyCleanup() method, paired with the AssemblyCleanup attribute, is used to shut down the integration host after all the tests have completed.

Initializing the Browser

Now that we have code in place to host the ASP.NET MVC site before any tests are run, we need some code in place to create an instance of our MVC application in a browser. Right-click the Tests project and add a new Unit Test.

Add Unit Test

Add the following code to the top of your new UnitTest1 class, before the TestMethod1 declaration:

private static MvcWebApp app;

[ClassInitialize]
public static void MyClassInitialize(TestContext testContext)
{
    //arrange
    app = new MvcWebApp();
}

This will require adding a using statement for SpecsFor.Mvc.

Using SpecsFor.Mvc

This new method, MyClassInitialize(), will run before all of the tests in the new UnitTest1 class. It will create a new instance of the MvcWebApp class, which will launch the browser with your application loaded.

If you go ahead and run the tests for UnitTest1 now you’ll see that two console windows are opened, one for IIS Express hosting the ASP.NET application and one for the Selenium WebDriver that is driving your application. In addition, after the Selenium WebDriver console window is opened, the browser specified in the MvcAppConfig class will be launched.

Note that you may get a prompt from Windows Firewall that you’ll need to allow.

Firewall Alert

Because we haven’t actually written any tests yet, all these windows will close after they are opened, but this demonstrates that these few lines of code used to bootstrap the environment are working.

Authentication Tests

Now that all the setup work is done, let’s see what some actual integration tests look like using SpecsFor.Mvc. The first test will ensure that, if a user tries to navigate to the /account/manage route of the ASP.NET MVC application without logging in, they will be redirected to the login screen.

[TestMethod]
public void AccountManage_WithoutSession_RedirectsToLogin()
{
    //act
    AccountController.ManageMessageId? messageId = null;
    app.NavigateTo<AccountController>(c => c.Manage(messageId));

    //assert
    const string returnUrl = "%2fAccount%2fManage";
    app.Route.ShouldMapTo<AccountController>(c => c.Login(returnUrl));
}

This test will require adding two new items to the using statements: YourProjectName.Controllers and MvcContrib.TestHelper (MvcContrib.TestHelper is needed for the call to ShouldMapTo).

And that’s it for the first integration test. I love it. It’s clear, concise, and (aside from the return URL path) it’s strongly typed. The call to NavigateTo will navigate to the URL corresponding to the AccountController and the Manage action, specified in the lambda expression. The call to ShouldMapTo will ensure that the resulting route corresponds to the AccountController and Login action (with the proper ReturnUrl parameter).

Let’s add two more tests to illustrate a few more examples using SpecsFor.Mvc:

[TestMethod]
public void Login_InvalidInput_TriggersValidation()
{
    //act
    app.NavigateTo<AccountController>(c => c.Login(string.Empty));
    app.FindFormFor<LoginModel>()
        .Field(f => f.UserName).SetValueTo(string.Empty)
        .Field(f => f.Password).SetValueTo(string.Empty)
        .Submit();

    //assert
    app.FindFormFor<LoginModel>()
        .Field(f => f.UserName).ShouldBeInvalid();
    app.FindFormFor<LoginModel>()
        .Field(f => f.Password).ShouldBeInvalid();
}

[TestMethod]
public void Login_InvalidCredentials_TriggersValidation()
{
    //act
    app.NavigateTo<AccountController>(c => c.Login(string.Empty));
    app.FindFormFor<LoginModel>()
        .Field(f => f.UserName).SetValueTo(Guid.NewGuid().ToString())
        .Field(f => f.Password).SetValueTo(Guid.NewGuid().ToString())
        .Submit();

    //assert
    app.ValidationSummary.Text.AssertStringContains("incorrect");
}

These tests will require adding a using statement for YourProjectName.Models so that the LoginModel class can be accessed.

Again, looking at the code, I love the simplicity and clarity in the SpecsFor.Mvc tests. I can use NavigateTo to navigate to my controller and action, and then use FindFormFor to access my view’s model. Finally I can submit the form with easy access to the resulting validation data.

Unfortunately, if you try to run these new tests right now they will fail. The reason is that the SpecsFor.Mvc initialization code compiles and deploys a fresh copy of the ASP.NET MVC project to a TestSite folder within the Debug folder. The App_Data folder contents are not included in the ASP.NET MVC Visual Studio project. So, the database files are not deployed to the TestSite folder and the site itself will YSOD if you try to do anything requiring the database.

No DB YSOD

To fix this, right-click the App_Data folder in your main MVC project and click Add>Existing Item.

Add Existing Item

Then, add the two files found in your physical App_Data folder to the project (if the folder is empty, run the MVC site and access the database once so the files are created).
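
Under the hood this simply adds Content entries to the project file. The result should look something like this sketch (the actual file names will vary):

<ItemGroup>
  <Content Include="App_Data\aspnet-SpecsForMvcDemo.mdf" />
  <Content Include="App_Data\aspnet-SpecsForMvcDemo_log.ldf" />
</ItemGroup>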

After adding the MDF and LDF files to the project you should be able to run all of the authentication integration tests without error.

The Big But

Pee-Wee Big But

Now this all sounds great, but

At the time I’m writing this, SpecsFor.Mvc tests run great under third-party test runners such as TestDriven.Net and CodeRush. However, the tests don’t run under Visual Studio’s MSTest runner. Trying to run the tests using Visual Studio’s built in test runner will result in a “Build failed” error. The author of SpecsFor.Mvc has reproduced the issue and is hoping to have it fixed within a couple of days.

UPDATE: This issue has since been resolved by Matt and is no longer a problem in version 2.4.0. No more buts!

Using Bootstrap with the DevExpress ASP.NET Data Grid

DX + Bootstrap
I’ve been having a lot of fun lately (and been quite productive) using Bootstrap as a way to lay my sites out before giving them a final visual style. The past three websites I’ve done have used Bootstrap and I love the CSS classes it provides and the speed with which I can develop a nice, consistent, responsive site with it.

In my most recent project I’ve been working on integrating some of the MVC Extensions from DevExpress with good success. However, one quirk had me scratching my head. My customer was generally very happy with the ASP.NET Data Grid but wanted a few additional features, one being the ability for the user to specify the page size for the grid. Easy enough – I thought – it’s just a setting after all.

However, this is what I saw after enabling the setting:

Page Size Item Before

After some poking around with the Developer Tools in Chrome, I was able to identify the CSS in Bootstrap that was interfering with the rendering of the ASP.NET Data Grid: its responsive-image rule (which caps every img at max-width: 100%) and the default margin and padding it gives form inputs. Here is the CSS I used to fix the issue:

/* for playing happy with DX */

/* undo Bootstrap's responsive img { max-width: 100% }, which was
   shrinking the pager's button images */
td.dxpDropDownButton img {
    max-width: none;
}

/* remove the margin and padding Bootstrap gives inputs, which were
   misaligning the page-size combo box */
td.dxpComboBox input {
    margin-bottom: 0px;
    padding: 0px 0px;
}

With that bit of CSS in place the control now renders properly:

Page Size Item After

Note that there’s also a post available from DevExpress here on fixes for common CSS issues with Bootstrap. However, using that method requires overwriting your bootstrap.css file.

Running KnockoutJS Unit Tests with Chutzpah

In my previous blog post I discussed some of the specifics involved with unit testing JavaScript code that uses KnockoutJS and Web API. This blog post builds on the example discussed in the previous post. If you missed that post you can read more about it here.

While unit testing our ViewModel was all working great, the real icing on the cake is getting JavaScript unit testing working without a browser, integrated into Visual Studio. And it would be even better if the test runs and results could be integrated into the Visual Studio Test Explorer. That’s where two separate Visual Studio extensions come into play: the Chutzpah JavaScript Test Runner and the Chutzpah Test Adapter for Visual Studio. Both can be installed directly from the Extensions and Updates window in Visual Studio.

Chutzpah Extensions

The first extension allows you to right-click on your tests.js file and click a new “Run JS Tests” menu item to run your QUnit tests without launching a browser.

Run JS Tests

In order for this to work, though, you must tell Chutzpah where to find the other JS files that your tests require (as it will not be launching your tests.html in a browser). To do this, add the following lines to the top of the tests.js file:

/// <reference path="../Scripts/jquery-1.9.1.js" />
///
/// <reference path="../Scripts/knockout-2.2.1.debug.js" />
///
/// <reference path="../Scripts/app/namespace.js" />
/// <reference path="webapiclient.stub.js" />
///
/// <reference path="../Scripts/app/model.js" />
/// <reference path="../Scripts/app/viewmodel.js" />

This is the same format used by the _references.js file that Visual Studio uses for JavaScript IntelliSense. With these lines in place, you can now right-click the tests.js file and click Run JS Tests, seeing the results right within Visual Studio:

Test Results in Visual Studio

Even cooler, with the Test Adapter installed, you can press CTRL+R, A to run all of the unit tests in your solution, and your QUnit tests will be run too, with their results displayed within the Visual Studio testing UI:

Test Results in Test Explorer

There is one catch I’ve found when using Chutzpah as a JavaScript test runner for KnockoutJS projects: if you right-click your tests.js file and click “Run JS Tests in browser”, Chutzpah will automatically generate an HTML file for the JS file and display that in a browser.

Run JS Tests in browser

However, the default template for the HTML used by Chutzpah puts the JavaScript references in the HTML head instead of at the bottom of the body. Without any changes, using the “Run JS Tests in browser” feature from Chutzpah, along with KnockoutJS, will result in an error running tests:

Test Results Wrong Order

To fix this you need to find and edit the HTML template used by Chutzpah. Search your C: drive for the text “Chutzpah” – under Windows Vista and up this should be located in a subfolder of C:\Users\UserName\AppData\Local\Microsoft\VisualStudio. For instance, the path on my system is:

C:\Users\Nathanial\AppData\Local\Microsoft\VisualStudio\11.0\Extensions\30spjmvi.u3x

Once you have found the folder, open the TestFiles\QUnit\qunit.html file:

<!DOCTYPE html>
<html>
<head>
    @@TestFrameworkDependencies@@
    @@ReferencedCSSFiles@@
    @@ReferencedJSFiles@@
    @@TestJSFile@@
</head>

<body>
    <h1 id="qunit-header">Unit Tests</h1>
    <h2 id="qunit-banner"></h2>
    <h2 id="qunit-userAgent"></h2>
    <ol id="qunit-tests"></ol>
    <div id="qunit-fixture"></div>
</body>
</html>

Move the lines referencing the JS files to the bottom of the body, leaving the CSS reference in the head tag:

<!DOCTYPE html>
<html>
<head>
    @@ReferencedCSSFiles@@
</head>

<body>
    <h1 id="qunit-header">Unit Tests</h1>
    <h2 id="qunit-banner"></h2>
    <h2 id="qunit-userAgent"></h2>
    <ol id="qunit-tests"></ol>
    <div id="qunit-fixture"></div>
    @@TestFrameworkDependencies@@
    @@ReferencedJSFiles@@
    @@TestJSFile@@
</body>
</html>

Save your changes and that’s it! You can now delete the tests.html file from the project if you’d like and use Chutzpah to run tests both within the Visual Studio IDE and within the browser.

Run JS Tests in browser - Fixed

Unit Testing KnockoutJS and Web API

After a couple of years of looking on from the sidelines at the advancements being made in the world of web development, I decided recently it was time to dive in head-first and bring my web knowledge up to speed. A lot of my initial work with C# and .NET was with ASP.NET WebForms, but in the past few years the majority of my work has been either mobile, desktop, or server-based.

So, for the past couple of months I’ve been investigating a variety of topics from top-to-bottom, including HTML5 and CSS3, Bootstrap, ASP.NET MVC, Entity Framework (including Repository, Unit of Work, and Service patterns), Dependency Injection, JavaScript (including the Module and Revealing Module patterns) and jQuery, KnockoutJS, and finally QUnit.

One topic I thought I’d blog about is how to unit test client-side JavaScript code, specifically ViewModels used by KnockoutJS that communicate with a Web API endpoint. To help illustrate these techniques I’ve created an ultra-simple ASP.NET MVC application that uses both Web API and KnockoutJS. Shockingly, it’s a to-do application. You can check out the source code for the application here. You can also download a snapshot of the project, before adding unit testing, here.

Here is the main view for the MVC application:

<input data-bind="value: addingItemText, valueUpdate: 'afterkeydown'" type="text" />
<button data-bind="enable: canAddItem, click: addNewItem">Add</button>

<ol data-bind="foreach: items">
    <li>
        <strong data-bind="visible: Completed">Completed </strong>
        <span data-bind="text: Text"></span>
        <button data-bind="click: $root.deleteSelectedItem">Delete</button>
        <button data-bind="click: $root.completeSelectedItem, visible: !Completed()">Complete</button>
        <button data-bind="click: $root.undoSelectedItem, visible: Completed()">Undo</button>
    </li>
</ol>

@section scripts {
    @Scripts.Render("~/bundles/app")
}

I define a text input and button for adding a new to-do item. The Add button should only be enabled when there is text entered. Following that there is a list of to-do items. If an item has been completed it shows appropriate text in bold. There’s a button to delete an item. Finally, if the item is uncompleted there is a button to complete it, and if the item is completed there is a button to undo it.

Here is a look at the ViewModel:

$((function (ns, webApiClient) {
    "use strict";

    ns.todoViewModel = (function () {

        //utilities
        function cloneJSModel(sourceModel, destinationModel) {
            destinationModel.Id(sourceModel.Id)
                .Text(sourceModel.Text)
                .Completed(sourceModel.Completed);
        }

        function cloneKOModel(sourceModel, destinationModel) {
            var jsModel = ko.toJS(sourceModel);
            cloneJSModel(jsModel, destinationModel);
        }

        //UI binding
        var items = ko.observableArray();

        //web api calls
        function populate() {

            webApiClient.ajaxGet("TodoItem", "", function (json) {
                items.removeAll();

                $.each(json, function (index, value) { //ignore jslint
                    var item = new ns.todoItemModel();
                    cloneJSModel(value, item);
                    items.push(item);
                });
            });
        }

        function addItem(todoItem) {

            webApiClient.ajaxPost("TodoItem", ko.toJS(todoItem), function (result) {
                var newItem = new ns.todoItemModel();
                cloneJSModel(result, newItem);
                items.push(newItem);
            });
        }

        function deleteItem(id) {

            webApiClient.ajaxDelete("TodoItem", id, function (result) {
                items.remove(function (item) {
                    return item.Id() === result.Id;
                });
            });
        }

        function updateItem(todoItem) {

            webApiClient.ajaxPut("TodoItem", todoItem.Id(), ko.toJS(todoItem), function () {
                var existingItem = ko.utils.arrayFirst(items(), function (item) {
                    return item.Id() === todoItem.Id();
                });
                cloneKOModel(todoItem, existingItem);
            });
        }

        //UI actions
        var addingItemText = ko.observable('');

        var canAddItem = ko.computed(function () {
            return addingItemText() !== "";
        });

        var addNewItem = function () {
            var newItem = new ns.todoItemModel();
            newItem.Text(addingItemText());
            addItem(newItem);
            addingItemText("");
        };

        var deleteSelectedItem = function () {
            deleteItem(this.Id());
        };

        var completeSelectedItem = function () {
            this.Completed(true);
            updateItem(this);
        };

        var undoSelectedItem = function () {
            this.Completed(false);
            updateItem(this);
        };

        //return a new object with the above items
        //bound as defaults for its properties
        return {
            items: items,
            populate: populate,
            addingItemText: addingItemText,
            canAddItem: canAddItem,
            addNewItem: addNewItem,
            deleteSelectedItem: deleteSelectedItem,
            completeSelectedItem: completeSelectedItem,
            undoSelectedItem: undoSelectedItem
        };

    }());

    ns.todoViewModel.populate();

    ko.applyBindings(ns.todoViewModel);

    //pass in namespace prefix (from namespace.js)
}(todo, todo.webApiClient)));

Again this is all pretty standard. It follows the Revealing Module pattern for the ViewModel. One thing to note is that a webApiClient is passed in and used for the AJAX calls. John Papa shows something very similar in his Pluralsight training courses. This nicely abstracts out the Web API specifics and, as you’ll see, it makes it easier to unit test our ViewModel.

Here’s the Web API client source:

(function (ns) {
    "use strict";

    ns.webApiClient = (function () {

        var ajaxGet = function (method, input, callback, query) {

            var url = "/api/" + method;
            if (query) {
                url = url + "?" + query;
            }

            $.ajax({
                url: url,
                type: "GET",
                data: input,

                success: function (result) {
                    callback(result);
                }
            });
        };

        var ajaxPost = function (method, input, callback) {

            $.ajax({
                url: "/api/" + method + "/",
                type: "POST",
                data: input,

                success: function (result) {
                    callback(result);
                }
            });
        };

        var ajaxPut = function (method, id, input, callback) {

            $.ajax({
                url: "/api/" + method + "/" + id,
                type: "PUT",
                data: input,

                success: function (result) {
                    callback(result);
                }
            });
        };

        var ajaxDelete = function (method, id, callback) {

            $.ajax({
                url: "/api/" + method + "/" + id,
                type: "DELETE",

                success: function (result) {
                    callback(result);
                }
            });
        };

        return {
            ajaxGet: ajaxGet,
            ajaxPut: ajaxPut,
            ajaxPost: ajaxPost,
            ajaxDelete: ajaxDelete
        };
    }());

    //pass in namespace prefix (from namespace.js)
}(todo));

As you can see we’re simply wrapping access to the jQuery ajax function and calling our callback function. Again you can download the source code above or from Bitbucket and run the app to try all this out. It all works as expected: you can add, delete, complete, and undo items.

In this example I’ll be using QUnit to unit test the JavaScript. JavaScript unit tests, unlike standard unit tests, generally exist in the same project as your site. You’ll see that projects like KnockoutJS and Sugarjs have web pages where you can run their tests. The tests need access to your JavaScript source files, and there is no easy way to make these available to other projects like you can with .NET assemblies.

So we’ll start by creating a new Tests folder in the project.

Add New Folder

Inside that folder create both a tests.html and a tests.js file. The next step is to put the QUnit specific markup in the tests.html file:

<!DOCTYPE html>
<html>
    <head>
        <title></title>
        <meta charset="utf-8">
        <!-- QUnit stylesheet from the jQuery CDN -->
        <link rel="stylesheet" href="http://code.jquery.com/qunit/qunit-1.11.0.css">
    </head>
    <body>
        <!-- elements that QUnit will inject test results into -->
        <div id="qunit"></div>
        <div id="qunit-fixture"></div>

        <!-- required JS libraries, such as jQuery and KnockoutJS -->
        <script src="/Scripts/jquery-1.9.1.js"></script>
        <script src="/Scripts/knockout-2.2.1.debug.js"></script>

        <!-- QUnit itself from the jQuery CDN -->
        <script src="http://code.jquery.com/qunit/qunit-1.11.0.js"></script>

        <!-- include tests themselves -->
        <script src="tests.js"></script>
    </body>
</html>

Note that I’ve also included references to the jQuery and Knockout js files. Next, add the following code to the tests.js file in order to ensure things are working:

test("hello test", function () {
    ok(1 == "1", "Passed!");
});

Now, if you right-click on the tests.html file and click View in Browser, you should see passing results for the single test.

Hello Test Run

Now let’s implement actual unit tests for our ViewModel. One of the cardinal rules of unit tests is that they should not interact with the “outside world”, specifically things like databases, file systems, and web services. Tests that do this, even when using a unit testing framework, are known as integration tests. So, one thing we need to resolve is how to test our ViewModel without making the Web API calls.

We’ll do this by creating a stub for our Web API client object. With typical C# applications you can do stubbing, mocking, and faking in a variety of ways. I personally like using FakeItEasy. However, the dynamic nature of JavaScript means that, at least for something simple like stubbing our Web API client, there’s really no additional framework needed.

Let’s start the stubbing by creating a new JavaScript file called webapiclient.stub.js in the Tests folder. Now add the following code to define the stub:

(function (ns) {
    //better exceptions, less tomfoolery allowed
    "use strict";

    ns.webApiClient = (function () {

        var testResult = [];

        var ajaxGet = function (method, input, callback, query) { //ignore jslint
            callback(this.testResult);
        };

        var ajaxPost = function (method, input, callback) { //ignore jslint
            callback(this.testResult);
        };

        var ajaxPut = function (method, id, input, callback) { //ignore jslint
            callback(this.testResult);
        };

        var ajaxDelete = function (method, id, callback) { //ignore jslint
            callback(this.testResult);
        };

        //return a new object with the above items
        //bound as defaults for its properties
        return {
            ajaxGet: ajaxGet,
            ajaxPut: ajaxPut,
            ajaxPost: ajaxPost,
            ajaxDelete: ajaxDelete,
            testResult: testResult
        };
    }());

    //pass in namespace prefix (from namespace.js)
}(todo));

This code is pretty straightforward. It uses the same signatures as the real Web API client object. However, instead of making real AJAX calls, it calls the callback immediately, passing back the value stored in testResult.

One important thing to note is that the function calls to the callbacks pass this.testResult rather than just testResult. This is because the return block is returning a new object with the specified properties and default values for those properties. Those properties are not getters or setters for the privately scoped variables above, although it may look that way if you are used to OOP languages like C# or Delphi.
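
A quick way to see the difference in plain JavaScript:

var inner = [];
var obj = { value: inner };  // the property captures the current array reference

inner = [1, 2, 3];           // rebinding the outer variable...
console.log(obj.value);      // ...logs [] - the property still holds the old array

obj.value = [4, 5, 6];       // assigning through the object does work, which is
console.log(obj.value);      // why tests set todo.webApiClient.testResult and
                             // the stub reads this.testResult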

Next let’s look at how to make use of this stub to write tests against the ViewModel. We’ll add the following script references to the tests.html file:

<!-- include the application's namespace.js -->
<script src="/Scripts/app/namespace.js"></script>

<!-- include our sub Web API client -->
<script src="webapiclient.stub.js"></script>

<!-- include the items to test -->
<script src="/Scripts/app/model.js"></script>
<script src="/Scripts/app/viewmodel.js"></script>

The first reference is to the application’s namespace.js, the second is to our Web API client stub, and the last two are to our application’s Model and ViewModel respectively. Now let’s add our first real test to the tests.js file:

module("todo.viewmodel.populate");

test("todo.viewmodel.populate (0 length)", function () {
    "use strict";

    //arrange
    todo.webApiClient.testResult = [];

    //act
    todo.todoViewModel.populate();

    //assert
    equal(todo.todoViewModel.items().length, 0, "Passed!");
});

The first line defines a module. This is not required at all and is merely a way to group tests visually into sections when results are displayed. Next we set up the todo.webApiClient (our stub) to return an empty array for any calls to it. Then, we call the populate() function on our ViewModel. Finally, we assert that the items() array in our ViewModel is zero-length. You can save the tests.js and tests.html files and refresh your browser to view the results:

First Real Test Run

Here are some more example tests for the to-do ViewModel:


test("todo.viewmodel.populate (1 length)", function () {
    "use strict";

    //arrange
    todo.webApiClient.testResult = [
        {
            Id: 1,
            Text: "To-do",
            Completed: false
        }
    ];

    //act
    todo.todoViewModel.populate();

    //assert
    equal(todo.todoViewModel.items().length, 1, "Passed!");
});

module("todo.viewmodel.canAddNewItem");

test("todo.viewmodel.canAddNewItem (without text)", function () {
    "use strict";

    //arrange

    //act
    todo.todoViewModel.addingItemText('');

    //assert
    equal(todo.todoViewModel.canAddItem(), false, "Passed!");
});

test("todo.viewmodel.canAddNewItem (with text)", function () {
    "use strict";

    //arrange

    //act
    todo.todoViewModel.addingItemText('To-do');

    //assert
    equal(todo.todoViewModel.canAddItem(), true, "Passed!");
});

module("todo.viewmodel.addNewItem");

test("todo.viewmodel.addNewItem", function () {
    "use strict";

    //arrange
    todo.webApiClient.testResult = [];
    todo.todoViewModel.populate();

    var expectedItem = {
        Id: 1,
        Text: "To-do",
        Completed: false
    };

    todo.webApiClient.testResult = expectedItem;

    //act
    todo.todoViewModel.addingItemText(expectedItem.Text);
    todo.todoViewModel.addNewItem();

    //assert
    var firstItem = todo.todoViewModel.items()[0];
    equal(firstItem.Id(), expectedItem.Id, "Passed!");
});

Along with results for the full suite of tests:

All Tests Run

Hopefully this proves helpful to other developers taking a look at writing more client-side JavaScript and looking for ways to test it. Keep an eye out for more posts in the coming months on other topics and techniques I’ve discovered relating to these evolving web technologies.