Visual Studio 2010–New and little-known features–Part 5–Better Unit Tests

Table of Contents for this series.

I occasionally give a talk about “Driving Quality through the Development Process”. One of the things I try to drive home with developers is that you can’t test quality into software; you have to build it in from the start.  One of the ways you can do this is to write a good set of Unit Tests. 

For those not familiar with the term, here’s a thorough definition that I like:

“Unit Testing is a level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed.

A unit is the smallest testable part of software. It usually has one or a few inputs and usually a single output. In procedural programming a unit may be an individual program, function, procedure, etc. In object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class or derived/child class. (Some treat a module of an application as a unit. This is to be discouraged as there will probably be many individual units within that module.)”    
                                    – Software Testing Fundamentals, Unit Testing

My personal definition is a bit simpler:

“Unit tests are small pieces of code testing small pieces of code”

The really great thing about unit tests is that they pay for themselves over and over again.  There is an up-front cost to writing the tests, but they save you significant time, and therefore money, over their lifetime. The Extreme Programming community describes it this way:

“The biggest resistance to dedicating this amount of time to unit tests is a fast approaching deadline. But during the life of a project an automated test can save you a hundred times the cost to create it by finding and guarding against bugs. The harder the test is to write the more you need it because the greater your savings will be. Automated unit tests offer a pay back far greater than the cost of creation.”
                                    – Extreme Programming – Unit Tests

Another great thing is that Unit Tests aren’t just for folks doing Agile development processes. They are equally applicable to teams doing Traditional software development practices.

For the rest of this article I’m going to assume that you are writing Object Oriented code.  I’m also going to assume that your definition of a Unit is the same as mine, namely that it is a Method. 
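To make that concrete before we go further, here is a minimal sketch of what I mean. The Calculator class and the test are hypothetical examples of my own; the test uses the MSTest framework that ships with Visual Studio.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// A small piece of code...
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

// ...and a small piece of code that tests it.
[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        var calculator = new Calculator();

        int result = calculator.Add(2, 3);

        Assert.AreEqual(5, result);
    }
}
```

That’s all a unit test is: one method exercising one method, with an assertion about the expected result.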

Ok, great! We write code that tests code. So that brings up a few questions…

  1. How many tests do I need to write for each “unit” (method)?
  2. How can I tell which logic paths my tests aren’t covering?
  3. How do I know what test to write if I don’t have enough?
  4. I have lots of legacy code and no tests, how do I start?

Let’s look at each of these questions in turn.

Q1: How many tests do I need to write for each “unit” (method)?

The answer here is “as many as are required to fully cover that method”. Not a really profound answer, huh?  What I mean is that there is a certain “minimum” number of tests needed to cover my method’s behavior.  Basically, I want to write enough tests to cover the “happy path” through my code. How many tests is that?

To find that number all we have to do is run the Code Metrics tools in Visual Studio and look at the Cyclomatic Complexity metric for our method.

Cyclomatic Complexity can be defined as the number of decisions that are made within our source code. Another way to put it is to say that it is the number of distinct paths through a given piece of code.

So we have a tool that gives us a count of the “distinct paths” through our code. Guess what? Those “distinct paths” are exactly the paths our minimum set of tests must cover. Look at the following example.



The ShoppingCart class has three overloads of the AddItem() method.  The first two versions call the third, passing in default values for the missing parameters. Since the first two only make a single call out, they have a Cyclomatic Complexity of 1.  They have only a simple “happy path”.

The third AddItem() method does all of the real work.  As the diagram shows, the Code Metrics for this method show that it has a Cyclomatic Complexity of 5.  There are 5 basic paths through this code.

Important!: Cyclomatic Complexity does not show the total number of tests needed, just the minimum.  It does not account for boundary conditions and the like.  It is just a basic metric of complexity that we can use to get a handle on our testing.
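Since the original screenshot is not reproduced here, the following is a hypothetical sketch (my own reconstruction, not the actual code from the example) of what an AddItem() method with a Cyclomatic Complexity of 5 could look like: one straight-through path plus four decision points.

```csharp
using System;
using System.Collections.Generic;

public class CartItem
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
    public decimal Price { get; set; }
}

public class ShoppingCart
{
    private readonly Dictionary<string, CartItem> _items =
        new Dictionary<string, CartItem>();

    public int Count
    {
        get { return _items.Count; }
    }

    // Cyclomatic Complexity = 5: the single straight-through path
    // plus the four decision points marked below.
    public void AddItem(string sku, int quantity, decimal price)
    {
        if (sku == null)                      // decision 1
            throw new ArgumentNullException("sku");

        if (quantity <= 0)                    // decision 2
            throw new ArgumentOutOfRangeException("quantity");

        if (price < 0m)                       // decision 3
            throw new ArgumentOutOfRangeException("price");

        if (_items.ContainsKey(sku))          // decision 4
        {
            _items[sku].Quantity += quantity; // existing item: merge quantities
        }
        else
        {
            _items.Add(sku, new CartItem { Sku = sku, Quantity = quantity, Price = price });
        }
    }
}
```

Covering this method would mean (at minimum) five tests, one per path: a null sku, a non-positive quantity, a negative price, adding a new item, and adding an item that already exists in the cart.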

This leads us nicely into the next question…

Q2: How can I tell which logic paths my tests aren’t covering?

So let’s say that you have the AddItem() method with a cyclomatic complexity of 5 and you want to start writing tests for it.  How do you know you’ve written the correct tests to cover the happy path?  I know I need a minimum of 5 unit tests, but how can I tell which paths have been tested and which haven’t? 

To answer this question we want to bring in the Code Coverage tool in Visual Studio.  Code Coverage determines how much of your code is being exercised by your unit tests.

To enable Code Coverage you need to do a few things:

  • First, turn on Code Coverage in your .testsettings file.
    • Open the local.testsettings file in the Solution Items folder in your Solution.
    • Navigate to the Data and Diagnostics tab and check the box next to Code Coverage.


    • While the Code Coverage entry is selected, the Configure button will be enabled.  Click on it to open the Code Coverage Detail dialog.


    • In the Code Coverage Detail dialog, select the assemblies that you wish to collect against.  This is usually all of your custom assemblies. We generally don’t collect coverage data on the unit test assemblies, but you could if you had a need.
    • You can click Ok then Apply and Close to close the dialogs. 


  • Second, tell Visual Studio to use our .testsettings file.
    • In the Visual Studio menus, select Test –> Select Active Test Settings –> Local (Local.testsettings)


  • Third, run our unit tests in Visual Studio
    • Open the Test View window to show the tests in our Solution. From the menu select Test –> Windows –> Test View


    • In the Test View pane, select one or more tests and click the Run button.  This will launch the Test Results window.


    • When the tests are finished running, click on the Show Code Coverage Results button on the Test Results toolbar.


    • This will bring up the Code Coverage Results window.  From here you can drill down to the methods that interest you. Once there you can see how much of the code is covered.


    • In the example above, the AdjustQuantity() method (yellow highlight) had zero blocks touched by the test run.  This shows up as a red highlight over the lines of code that were not exercised.  The ClearItems() method, on the other hand, had every block touched by the tests that were run.  This does not mean that every possible test was run for this block of logic, only that all of the lines of code were hit. 

Q3: How do I know what test to write if I don’t have enough?

The Code Coverage tools have the ability to color your code to visually indicate which paths weren’t hit by your tests in that run. 

  • Turn on code coloring by clicking on the Show Code Coverage Coloring button on the Code Coverage Results toolbar.


  • It is possible to have a partially covered method. The EntityBase() constructor below was tested for a case where the passed parameter was an instantiated object, but no test was run that checked for a null object being passed.
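A test for that missing branch might look like the sketch below. The EntityBase class here is a hypothetical stand-in for the one in the screenshot; I am assuming its constructor throws ArgumentNullException for a null argument.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical stand-in for the EntityBase class in the example:
// its constructor rejects a null argument.
public class EntityBase
{
    private readonly object _entity;

    public EntityBase(object entity)
    {
        if (entity == null)
            throw new ArgumentNullException("entity");
        _entity = entity;
    }
}

[TestClass]
public class EntityBaseTests
{
    // This test drives execution down the branch the coverage
    // coloring flagged as untouched: the null-argument path.
    [TestMethod]
    [ExpectedException(typeof(ArgumentNullException))]
    public void Constructor_NullEntity_ThrowsArgumentNullException()
    {
        new EntityBase(null);
    }
}
```

Re-run the tests after adding it and the constructor should go from partially covered to fully covered.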


Q4: I have lots of legacy code and no tests, how do I start?

This is an easy one.  Just write your first test.

To help with this process here are some things that I’ve used with my own code to help me get unit testing started.

Book: Working Effectively with Legacy Code by Michael Feathers

“Get more out of your legacy systems: more performance, functionality, reliability, and manageability

Is your code easy to change? Can you get nearly instantaneous feedback when you do change it? Do you understand it? If the answer to any of these questions is no, you have legacy code, and it is draining time and money away from your development efforts.

In this book, Michael Feathers offers start-to-finish strategies for working more effectively with large, untested legacy code bases. This book draws on material Michael created for his renowned Object Mentor seminars: techniques Michael has used in mentoring to help hundreds of developers, technical managers, and testers bring their legacy systems under control.

The topics covered include

  • Understanding the mechanics of software change: adding features, fixing bugs, improving design, optimizing performance
  • Getting legacy code into a test harness
  • Writing tests that protect you against introducing new problems
  • Techniques that can be used with any language or platform—with examples in Java, C++, C, and C#
  • Accurately identifying where code changes need to be made
  • Coping with legacy systems that aren’t object-oriented
  • Handling applications that don’t seem to have any structure

This book also includes a catalog of twenty-four dependency-breaking techniques that help you work with program elements in isolation and make safer changes.”
                                                      – Back cover of the book

This is the one book that I recommend to my customers that are starting out with Unit Testing.  It has practical advice and techniques on how to make untestable-code testable. 

Tool: Pex and Moles from Microsoft Research

“Pex and Moles are Visual Studio 2010 Power Tools that help Unit Testing .NET applications.

  • Pex automatically generates test suites with high code coverage. Right from the Visual Studio code editor, Pex finds interesting input-output values of your methods, which you can save as a small test suite with high code coverage. Microsoft Pex is a Visual Studio add-in for testing .NET Framework applications.

The great thing about Pex is that it will generate test cases for all of your boundary conditions.  It is very thorough.  In fact, if your code can pass a full set of Pex-generated tests, it is robust indeed.
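To sketch how Pex is wired up: instead of hard-coding inputs, you write a parameterized unit test and let Pex choose the values. The attribute names below come from the Pex framework; the ShoppingCart type is a hypothetical class under test, so treat this as an illustration rather than a ready-to-run sample.

```csharp
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[PexClass(typeof(ShoppingCart))]
public partial class ShoppingCartPexTests
{
    // A parameterized unit test: Pex explores the code under test
    // and generates concrete (sku, quantity, price) values that
    // drive execution down each branch, saving the interesting
    // ones as ordinary unit tests.
    [PexMethod]
    public void AddItemDoesNotLoseItems(string sku, int quantity, decimal price)
    {
        var cart = new ShoppingCart();
        cart.AddItem(sku, quantity, price);
        Assert.IsTrue(cart.Count > 0);
    }
}
```

When you run Pex exploration from the editor, it reports the inputs it found, including the boundary cases you probably didn’t think of.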


So to wrap it all up…

  • You want to write unit tests to save yourself from injecting bugs into existing code while making changes or refactoring code. 
  • To determine the minimum number of unit tests for a given method, run the Code Metrics on your assembly or Solution and review the Cyclomatic Complexity metric for your method.
    • Cyclomatic Complexity does not give the total number of tests needed.  It does not take into account boundary conditions for a given algorithm or formula, only the decision points in the code.
  • To determine how much of your code is being exercised by your unit tests, turn on Code Coverage and review the metrics for your methods.
  • To visually determine which code paths weren’t hit in a method, turn on Code Coverage Coloring.
  • If you don’t currently have any unit tests, start by writing one, then one more, then one more.



Table of Contents for this series.


Visual Studio 2010–New and little-known features–Part 4–Code Snippets

Table of Contents for this series.

Code Snippets

Code Snippets have been around since Visual Studio 2005 so I’m amazed when I show the functionality to developers that have been using Visual Studio for years and they tell me they have never seen it.

Code Snippets are a feature of Visual Studio that allows you to add small pieces of code to your project using shortcuts.  This lets you add common code elements like Property declarations or Class definitions: things that you have to type all the time to make your programs run, but that are pure drudgery.

So let’s say you want to create a new property on your class with a backing private variable.  This code is exactly like the thousands of other properties you’ve created over the years.  You could copy one of the variable declaration/property declaration pairs that already exists in your project, paste it in, and then change the names, types, etc. to make it unique.  Of course, at some point you have probably done this and forgotten to go back and make the changes, causing your program to fail compilation, if you are lucky.  If you aren’t lucky, you changed the name but left the old Type, or failed to change the Getter or Setter of the Property, and your app behaves incorrectly at runtime (which is harder to find).

To use the Code Snippets, all you have to do is begin typing the shorthand code and IntelliSense will show the code snippets available along with any other appropriate tokens.


I can then use the arrow keys to navigate the list until I find the one that gives me a property and backing field.  That snippet’s shorthand code is propfull.  I select that one from the list and then press the TAB key twice to trigger snippet expansion.


Once expansion completes for this snippet, you can see that the variable’s type field (the int) has a blue background and has the focus.  You can immediately begin typing in the new Type for this variable.  When you do, the snippet feature will change the type of the Property declaration to match.  Once you change the Variable type you can press the TAB key to move to the Variable Name field.  When you change the name, the statements in the body of the Property that refer to the Variable’s name will also change.  Neat, huh?
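For reference, the propfull snippet expands to the template below; the type (int), myVar, and MyProperty are the highlighted fields you tab between and rename.

```csharp
private int myVar;

public int MyProperty
{
    get { return myVar; }
    set { myVar = value; }
}
```

Two keystrokes of shorthand plus a couple of TABs replace six lines of hand typing.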


There are all kinds of snippets that ship in the box with Visual Studio.  To see the list all you have to do is go to the Tools | Code Snippets Manager… menu item.


This will bring up the Code Snippets Manager dialog where you can peruse snippets until you collapse from the sheer joy. The code snippets are first categorized by Language.  There are snippets for HTML, T-SQL, JScript, Visual Basic, C#, SQL and XML.  Each language has sub-groupings.


You can also create your own snippets for commonly used pieces of code and add them to the My Code Snippets group.


To wrap up, Code Snippets are great for taking code that is painfully common and mind-numbingly error-prone and turning it into an activity that saves not only keystrokes, but brain cells as well.  I highly encourage you to look at the snippets that are shipped with Visual Studio. If you are interested in creating your own for yourself or your entire team, look at the structure of the .snippet files that back the existing snippets.
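As a starting point, here is a minimal sketch of a .snippet file. The shortcut, title, and contents are hypothetical examples of my own; the element names follow the Visual Studio code-snippet XML schema.

```xml
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>To-do comment</Title>
      <Shortcut>todo</Shortcut>
      <Description>Inserts a TODO comment with an owner name.</Description>
    </Header>
    <Snippet>
      <Declarations>
        <!-- Each Literal becomes a highlighted, tab-stop field -->
        <Literal>
          <ID>owner</ID>
          <ToolTip>Who owns this TODO</ToolTip>
          <Default>me</Default>
        </Literal>
      </Declarations>
      <Code Language="CSharp">
        <![CDATA[// TODO ($owner$): $end$]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
```

Save a file like this and import it through the Code Snippets Manager to have it appear under My Code Snippets.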

Here’s a link to the root of the MSDN Library documentation on Code Snippets.

Happy Snippeting!


Table of Contents for this series.

Visual Studio 2010–New and little-known features–Part 2–Code Metrics


Table of Contents for this series.

Code Metrics in Visual Studio

What is it?

One of the really nice features that shipped with Visual Studio 2010 Premium and Ultimate is the Code Metrics tooling. This tooling has been in the Visual Studio product line since Visual Studio Team System 2008 – Developer Edition.  For those not familiar with the term:

“Code metrics is a set of software measures that provide developers better insight into the code they are developing. By taking advantage of code metrics, developers can understand which types and/or methods should be reworked or more thoroughly tested. Development teams can identify potential risks, understand the current state of a project, and track progress during software development.”     
                                                       – Visual Studio 2010 Code Metrics page on MSDN

From a Visual Studio perspective, the code metrics provided are Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class Coupling and Lines of Code.

Supported Visual Studio Versions:

  • Visual Studio Team System 2008 – Developer Edition
  • Visual Studio 2010 Premium
  • Visual Studio 2010 Ultimate
  • Visual Studio 2012 [aka VS 11] Premium
  • Visual Studio 2012 [aka VS 11] Ultimate

How do I get these metrics?

These metrics can be generated by opening a project or solution in a supported version of Visual Studio and using the Analyze | Calculate Code Metrics for [current project name | Solution] menu item. Also see How to: Generate Code Metrics Data on MSDN.


          Figure 1 – Trigger Code Metrics by using the Analyze | Calculate Code Metrics menu item

I’m not going to present an in-depth treatise on each of the metrics, but I will describe how each is useful by itself and also within the whole.  I’ll also provide deeper links if you’d like some stimulating bed-time reading.

Lines of Code:

What is it?

The Lines of Code metric describes the approximate lines of active code in a method or class.  The calculation is based on the IL code and not the text version of the code. Since it is based on IL, comments and whitespace are not counted.  The compiler may also reduce the number of lines in the IL due to compiler optimizations.

What do I do with it?

A high value may indicate a method or class that is doing too many things and violating the Single Responsibility Principle. This code may be a candidate for refactoring into multiple, smaller units of code.  This metric by itself is not the best indicator of code quality.  You must use this along with other metrics, like Cyclomatic Complexity and Class Coupling to make a final determination.

More information:

Class Coupling

What is it?

One can state that any “…two objects are coupled if and only if at least one of them acts upon the other”1.  Class Coupling measures the number of distinct classes a class or method relies upon.  This reliance could come through parameters, local variables, return types, method calls, generics and a host of other mechanisms.

What do I do with it?

The higher the value the more a class or method can be affected by code changes within other parts of the system.  The more classes you rely upon, the more opportunities for your behavior to change due to factors outside of your control. I call this the “blast radius of change”.  The higher the class coupling value, the more likely this method will be affected by other edits and the more testing you have to do to ensure correct functionality after each change.

Classes with a high value are also harder to reuse in other areas or applications since you have to bring along all of the classes that you depend upon.
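As a hypothetical illustration (all of these types are invented for the example), a single method can rack up coupling through its signature alone:

```csharp
// Each type OrderProcessor touches widens its "blast radius
// of change": a change to any of them can break it.
public class Customer { }
public class Order { }
public class Receipt { }
public interface IPaymentGateway { }

public class OrderProcessor
{
    // Class Coupling for this method is already 4 (Customer,
    // Order, IPaymentGateway, Receipt) before the body does
    // anything at all.
    public Receipt Process(Customer customer, Order order, IPaymentGateway gateway)
    {
        // A real implementation would charge the gateway and
        // build a receipt from the order details.
        return new Receipt();
    }
}
```

Reusing OrderProcessor elsewhere means dragging all four of those types along with it, which is exactly the reuse cost described above.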

More information:

Depth of Inheritance:

What is it?

In OO design, we use classes to model the system.  These classes are defined in a hierarchy.  In the .NET Framework, all classes ultimately derive from a single parent called Object. The Object class is the root of the inheritance tree. Depth of Inheritance measures the distance the measured class is from that root.  So if I create a new class Vehicle that doesn’t inherit from anything but Object, its Depth of Inheritance is 1, since it sits directly on top of Object.  If I then create a class Bicycle that inherits from Vehicle, Bicycle’s Depth of Inheritance metric is 2, since it is two levels away from Object.
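That hierarchy, as code (note that Visual Studio’s metric counts the step up to Object, so a class deriving directly from Object reports a depth of 1):

```csharp
// Vehicle derives only from Object; Visual Studio reports its
// Depth of Inheritance as 1 (the step up to Object is counted).
public class Vehicle { }

// Bicycle is one level further from Object, so its Depth of
// Inheritance is 2.
public class Bicycle : Vehicle { }
```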

What do I do with it?

When looking at this metric you have to think about how it affects your code.  In the example above, my Bicycle class could be affected by changes directly within its code.  It could also be affected by changes to Vehicle’s code.  So every time you inherit from another class, you increase the risk that your behavior will change due to changes in the inherited class or one of the classes it inherits from.

The higher the value in this metric the more difficult it is to understand exactly what the code does, since any method’s behavior can be defined/redefined in any of the class’ ancestors.  It is also more difficult to find exactly where a behavior is defined/redefined in the inheritance tree.

More information:

Cyclomatic Complexity:

What is it?

This metric is one of the least understood for most people.  It’s got a funny name that isn’t really simple to understand, unlike Lines of Code or Depth of Inheritance.

This metric is basically the number of decisions that are made within our source code.  Another way to put it is to say that it is the number of distinct paths through a given piece of code.

What do I do with it?

Given these definitions, we can now understand that a lower number is better than a higher one, since it is easier for us to keep a smaller set of logic in our heads than a larger set. That, in turn, leads to fewer errors caused by not completely understanding what the code was doing before making our changes.

You should look at the average value of the methods in a given class and compare the higher values to those of the rest of the class.  If any given method is significantly higher than its neighbors then it is probably somewhere you should investigate for refactoring.

You should also set an upper limit on what is acceptable complexity in your codebase.  I have seen references to a value of 10 being a good “max” value and I can understand why.  A method that has more than 10 decisions within it becomes difficult to keep straight while coding and therefore would be more likely to have errors introduced during maintenance.

You may also want to take the Lines of Code metric into account as well. 

  • Methods that have high line counts and high Cyclomatic Complexity tend to have the lowest reliability in the system. 
  • Methods that have low line counts but high Cyclomatic Complexity are generally less reliable and harder to maintain because the code is usually terse.

More information:

Maintainability Index:

What is it?

This metric describes the overall maintainability of the system reviewed.  It is based on a formula that returns a value between 0 and 100 describing the overall maintainability of the code base; its inputs include some of the other metrics described here, notably Cyclomatic Complexity and Lines of Code.

A higher value indicates a system that is easier to maintain, which should help to reduce the introduction of bugs during the maintenance cycle. Along with a numeric value, this column also presents a “stoplight” indicator to give the developer a quick indication of trouble spots in the code base. The breakdown of the stoplight values is:

  • 0-9 = Red
  • 10-19 = Yellow
  • 20-100 = Green
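The formula itself is not spelled out in the Visual Studio UI, but as published by the Visual Studio Code Analysis team (quoted from memory, so verify before relying on it), it is:

```
Maintainability Index =
    MAX(0, (171 - 5.2 * ln(Halstead Volume)
              - 0.23 * Cyclomatic Complexity
              - 16.2 * ln(Lines of Code)) * 100 / 171)
```

Halstead Volume is a separate measure derived from the counts of operators and operands in the code; the larger and more decision-heavy the method, the lower the index falls.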


What do I do with it?

I tend to use this metric in a couple of different ways depending on my current needs.

  1. If I have to make modifications to a code base that I am unfamiliar with, I will use this metric to find the methods that I should be wary of.  Low maintainability means that it may be an overly complex or large method that is prone to logic errors.
  2. If I have an opportunity to refactor my code I will use this metric to target the code most in need of refactoring. This is an area that can use unit tests to help mitigate the risk inherent in refactoring complex code.

More information:


Table of Contents for this series.


Visual Studio 2010–New and little-known features–Part 1–Column selection

Table of Contents for this series.


One of the really cool, productivity-boosting but little-known features in the Visual Studio IDEs (since VS 2005) is the Column selection (aka Box selection) feature. 

Everyone is familiar with the standard Line selection (aka Stream selection) that is achieved by dragging the mouse cursor horizontally to select contiguous characters on a line.  Dragging vertically would select the remainder of the current line prior to selecting the next line down, like so. (For the mouse-impaired, aka keyboard junkies, this is done by using SHIFT + Arrow Keys)

                       Figure 1: Standard line (stream) selection

What many developers don’t realize is that Visual Studio also has a variant of this behavior called Column or Box Selection.

With Column Selection I’m able to select a vertical portion of a set of lines without taking the remainder of each line. My selection can start anywhere and extend anywhere within the file.  This is achieved by holding down the ALT key while performing a vertical drag around the area of text to select. (For our keyboard junkies, this would be SHIFT + ALT + Arrow keys)


Once I’ve made a selection, the editor will allow me to change all of the selected text on each line to the same value just by typing in the new text. 

If I want to change all of those variables from public to private, all I now have to do is begin typing p-r-i-v-a-t-e and the selected part of each line will change to the typed entry.


Notice that I don’t get IntelliSense while making this change, but I’m OK with that since it saves me so much time.

Visual Studio 2010–New and little-known features–Table of Contents

PowerTools This post is a set of links to all of the posts in the Visual Studio 2010 – New and little-known features series.
I’ve been teaching Visual Studio 2010 Ultimate for Developers classes on and off for about a year now. While showing some major new feature I might use a code snippet or the new IntelliSense features and someone in the class will ask me to swing back and show more of those little productivity boosters.  Because of this I decided to start documenting them out here.  I know that at the time of writing that the next version of Visual Studio is in Beta, but I think that there is still great value for those folks that are coming into VS 2010 or VS “11”.

Table of Contents

Part 1 – Column Selection in the Code Editor
Part 2 – Code Metrics
Part 3 – IntelliSense Improvements
Part 4 – Code Snippets
Part 5 – Better Unit Tests