Var is Better

June 18th, 2013 | Ken Payson

In C# 3.0 Microsoft introduced the var keyword. The primary reason for introducing it was to support anonymous types, but there are numerous benefits to using the var keyword. I find that developers often still default to declaring variable types on both the left and right sides. Most of us have been writing code since long before C# 3.0, and old habits are hard to break. But there are reasons beyond habit that developers do not use the var keyword. Developers want their code to be readable, and sometimes they believe that having the variable type visible at the start of the line helps the reader understand how the code works. I say that this shouldn’t be the case. There are situations in which you cannot use the var keyword: you cannot use it to declare an out parameter, you cannot use it when the variable must be typed as an interface or base class rather than the concrete type on the right-hand side, and you cannot use it for class-level fields. Other than these situations -

I say that using the var keyword is just better (a short sketch follows this list):
• Declaring variables with the type on the left-hand side (e.g., Widget myWidget = new Widget()) is repetitive. The compiler already knows the type of the variable from the right-hand side (and if you use the wrong type on the left, you get a compilation error).
• With IntelliSense, you can always hover over a variable to see its type.
• Using var is necessary if you want to store an anonymous type in a variable.
• The var keyword makes your code look neat, with all your variable names lining up.
• The variable name and the name of the function returning the value should tell you what you need to know about the variable. You can use the space saved by eliminating the type to give the variable a more descriptive name.
• Using var makes the code easier to refactor and maintain. Fewer things break when using var: if the right-hand side changes, you don’t have to fix the left-hand side as well.
• You cannot write var foo = null, because null is not a type. Using var therefore makes it impossible to do an unnecessary initialization of a variable to null. You have to assign a variable only when you have the correct value for it, which is what you should do anyway.
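To make these points concrete, here is a minimal sketch (the Widget class and GetPreferredWidget method are hypothetical stand-ins):

using System;
using System.Collections.Generic;

public class Widget
{
    public string Name { get; set; }
}

public static class VarDemo
{
    static Widget GetPreferredWidget()
    {
        return new Widget { Name = "Preferred" };
    }

    public static void Main()
    {
        // Repetitive: the type appears on both sides.
        Dictionary<string, List<Widget>> lookupExplicit = new Dictionary<string, List<Widget>>();

        // With var, the compiler infers the type from the right-hand side.
        var lookup = new Dictionary<string, List<Widget>>();

        // The method name already tells the reader what this is.
        var preferredWidget = GetPreferredWidget();

        // var is required for anonymous types.
        var summary = new { preferredWidget.Name, Count = lookup.Count };
        Console.WriteLine(summary.Name + ": " + summary.Count);
    }
}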

Ken Payson is a Software Engineer at LogicBoost, an Agile consulting firm based in Washington DC.

Eliminating the Null Checks

June 18th, 2013 | Ken Payson

Sometimes it feels like half the code in an application is concerned with checking whether variables are null, and yet bugs still come up as a result of null reference exceptions. What can we do about this? Is there a way to spend fewer lines of code worrying about null and more worrying about the core logic of your functions? Yes there is, and it’s simple – stop coding with null!
Stop assigning null to variables

Never “initialize” a variable to null. Only assign a value to a variable after computing what its value should be. Properties on objects can be null; declared local variables need never be null. If there is no value, there is nothing to be done with the variable.
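As a small sketch of the difference (the Customer class and the lookup methods are hypothetical):

using System;

public class Customer
{
    public string Name { get; set; }
}

public static class AssignmentDemo
{
    static Customer GetPreferredCustomer() { return new Customer { Name = "Preferred" }; }
    static Customer GetStandardCustomer() { return new Customer { Name = "Standard" }; }

    public static void Main()
    {
        bool isPreferred = DateTime.Now.Hour < 12; // arbitrary condition for illustration

        // Avoid: the variable sits at null until someone remembers to set it.
        Customer customer = null;
        if (isPreferred) { customer = GetPreferredCustomer(); }
        else { customer = GetStandardCustomer(); }

        // Better: assign once, when the value is known.
        var customer2 = isPreferred ? GetPreferredCustomer() : GetStandardCustomer();
        Console.WriteLine(customer.Name + " / " + customer2.Name);
    }
}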

Stop writing functions that accept and work with null values.

Null is not a value of any type; it is a null reference. C# classes are reference types. Value types can be made into nullable types, but unfortunately it is not possible to declare that a reference type cannot be null. This is a weakness of the type system. If you have a function that takes a string and you pass in an int, you get a compile-time error because there is a type mismatch. If you take the same function and pass in null, it will compile. Why? It should be a type violation, because null is not a string.

It is the responsibility of the caller to make sure that it is giving a function data. If the caller does not have a value needed by the function, it should not be calling the function at all.

The called function needs to know it has values to work with before proceeding with the core of its logic. If the developer cannot guarantee that the function will be called with good data (perhaps because it is a public API method), then the function should validate the input at the top and throw an exception if the input values are null. If you are using .NET 4, code contracts with contract preconditions work well for this. If you are on an earlier version of .NET, you’ll have to settle for writing if (myObject == null) { throw new ArgumentNullException("myObject"); }. If, on the other hand, you can guarantee that your function will be called correctly, you may skip the null checks altogether.
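As a rough sketch, the two styles of validation might look like this (OrderService and ProcessOrder are hypothetical; Contract.Requires comes from System.Diagnostics.Contracts in .NET 4):

using System;
using System.Diagnostics.Contracts;

public class Order { }

public class OrderService
{
    // .NET 4 style: a code contract precondition.
    public void ProcessOrder(Order order)
    {
        Contract.Requires(order != null);
        // ...core logic, free of null checks...
    }

    // Pre-.NET 4 style: an explicit guard clause at the top.
    public void ProcessOrderLegacy(Order order)
    {
        if (order == null) { throw new ArgumentNullException("order"); }
        // ...core logic...
    }
}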

Sometimes functions are written to take optional parameters; if the optional value is not supplied, it is null. The function then branches on whether or not the value is null in order to perform some additional piece of work. Functions written this way tend to grow and grow as new requirements come in and more optional parameters and branching logic are added. The better approach is to have overloaded versions of the function, with private auxiliary methods holding the logic shared by the overloads to eliminate code duplication, as in the sketch below. Now, in the calling scope, we do not need to pass null into any method. We can call the version of the function that does exactly what we need.
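Here is a minimal sketch of the overload approach (ReportBuilder and its formatting logic are made up for illustration):

using System;

public class ReportBuilder
{
    // Instead of one BuildReport(title, footer) that branches on a null
    // footer, expose one overload per real use case.
    public string BuildReport(string title)
    {
        return BuildCore(title, string.Empty);
    }

    public string BuildReport(string title, string footer)
    {
        if (footer == null) { throw new ArgumentNullException("footer"); }
        return BuildCore(title, "\n" + footer);
    }

    // Private auxiliary method holds the logic shared by the overloads.
    private string BuildCore(string title, string footerSection)
    {
        return "== " + title + " ==" + footerSection;
    }
}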

Stop writing functions that return null values

The flip side of not taking in nulls is not returning nulls.
If a function is supposed to return a list of things and there are no items to return, then return an empty list.
If a function is supposed to return a single item from some source (a database, cache, etc.) and it is expected that the item will be found but it is not, then throw an exception. For example, if you are searching for an item by id, then in the normal workflow you will have a valid id and a match should be found. If a match is not found, this is an exceptional case.
If a method might reasonably fail to return a value, then implement a TryGet version of the method that takes an out parameter and returns true or false. This is analogous to TryParse for integers or TryGetValue for dictionaries. In the calling scope, you will have something like this:

Widget widget;
if (TryGetWidget(someSearchString, out widget)) {
    // The widget will have a value. Do something with the widget.
}
else {
    // The search didn’t find anything.
}
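
For completeness, here is a minimal sketch of what the corresponding TryGet method might look like, assuming a hypothetical in-memory widget store and a Widget class with a Name property:

using System.Collections.Generic;
using System.Linq;

public class Widget
{
    public string Name { get; set; }
}

public class WidgetStore
{
    private readonly List<Widget> widgets = new List<Widget>();

    // Returns true and sets the out parameter when a match exists;
    // on the true branch the caller never sees a null widget.
    public bool TryGetWidget(string searchString, out Widget widget)
    {
        widget = widgets.FirstOrDefault(w => w.Name == searchString);
        return widget != null;
    }
}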

Conclusion

It really is that easy to greatly reduce the null checks and null reference exceptions in your code. Don’t assign null to variables and don’t write functions that take in or return nulls.
In general, it is the responsibility of a function to communicate its requirements and throw an exception if the requirements are not met. The calling scope has the responsibility of making sure that it is passing acceptable values into a function and of handling any exceptions that are thrown.

The ultimate goal would be to eliminate possible nulls from the entire call chain. If function A calls function B, which calls function C, and we know that function A never returns null, then we don’t need to check the values originating from A and going into C. Of course, in any real-world system there will be many places where methods you call may return null. Don’t just pass these values along into the next function call. Stop the null chain.

Ken Payson is a Software Engineer at LogicBoost, an Agile consulting firm based in Washington DC.

Lessons learned at Agile Coaches Camp 2012

December 26th, 2012 | Katie McCroskey

Agile Coaches Camp is something I look forward to every year: catching up with familiar faces and friends; hearing everyone’s horror stories, funny stories, accomplishments and successes, and most importantly – lessons learned over the year.
It is two days of education on a vast array of topics, from shaping team culture and Agile engineering practices to Scrum vs. Kanban, dealing with difficult team members, and complex enterprise Agile issues. There is a topic of interest for everyone because the group picks the topics – there are no premeditated sessions and no prearranged speakers. Whatever happens at Agile Coaches Camp was supposed to happen – the right people are there, and the conversations flow as they are meant to flow. I always leave feeling inspired.

Personally, my goal at Agile Coaches Camp was to explore Agile organizational culture and team environments. Well-built Agile teams operate seamlessly – with effectiveness and precision. Team dynamics are a crucial piece of the puzzle – there must also be respect and a willingness to help. The dominant perspective regarding Agile teams is that the whole team succeeds and fails as one. But here comes the challenge – how do you create that type of Agile team environment?

Through first-hand experience, great conversations at Agile Coaches Camp, and a few books read – I’ve come up with a few key factors that contribute to developing a strong Agile team.

First, the right people are important. It takes the right personalities, professional skills, individual drive, and willingness to work in a team environment. Another critical aspect of an Agile team is its never-ending drive to improve; the status quo is never acceptable. Constant change and continual learning are typical of an Agile environment. Stepping outside comfort zones is crucial for growth but isn’t always for everyone. It is this desire to change, grow, and learn that builds great teams and experienced people.

Another important element of an Agile team is the ability to self-organize. Natural leaders emerge, and there has to be enough trust, respect, and team buy-in for this self-organization to move in a productive direction. Simply put, one weak link in the chain can disrupt the productivity of the unit and break the bond of the entire team.
Overall, the key to successful Agile teams is the overarching mindset that the team fails and succeeds as one unit. This concept applies to Agile teams of all shapes and varieties – from a team of developers/analysts/testers/designers to an entire organization that operates with the Agile mindset.

Katie McCroskey is the Marketing Manager at LogicBoost, an Agile consulting firm based in Washington DC.

Understanding the Web’s Same Origin Policy: Restrictions, Purpose, and Workarounds

December 13th, 2012 | Ken Payson

Sooner or later every developer will have to get or send data from his or her site to a site on a different domain. Recently, I was working on a bookmarklet which required communication between my company’s site and another domain. Whenever there is web communication between two different domains, it is necessary to understand the web’s same origin policy. Wikipedia summarizes the same origin policy as:

“The policy permits scripts running on pages originating from the same site to access each other’s methods and properties with no specific restrictions, but prevents access to most methods and properties across pages on different sites.”
The thing to know and understand is that there are really two origin policies. The “DOM Origin Policy” governs reading and writing information in other windows via the DOM. The “XmlHttpRequest Origin Policy” governs Ajax communication with other domains.

DOM Same Origin Policy

The same origin policy for the DOM disallows reading from or writing to a window if the window’s location is on a different domain. This prevents a number of attacks which could be used to steal user information. A simple example would be:

  1. MaliciousSite.com has a link on its page to mybank.com.
  2. The user clicks the link, which opens mybank.com in a new window via JavaScript’s window.open.
  3. Without the same origin policy, MaliciousSite.com could then read from and write to mybank.com through its reference to the mybank.com window.

XmlHttpRequest Same Origin Policy

The XmlHttpRequest Origin Policy disallows sending HTTP requests via the JavaScript XMLHttpRequest object to a site on a different domain, so “normal” Ajax with an endpoint on a different domain is not possible. There are workarounds to allow information sharing, which I will discuss later. It is important to note that the XmlHttpRequest Origin Policy exists to prevent a different line of attack than the DOM Origin Policy: it is trying to prevent cross-site request forgery (CSRF) attacks.
When an HTTP request is made, the web browser will send the cookies belonging to the requested domain. In particular, sites typically use an authentication cookie which proves that you are logged in to a site. Based on this, if there were no XmlHttpRequest Origin Policy, a CSRF attack could work as follows:

  1. User navigates to a bad site MaliciousSite.com
  2. MaliciousSite.com makes the blind guess that the user is also logged in on another tab to MyBank.com
  3. MyBank.com exposes data via web services to authenticated users.
  4. MaliciousSite.com uses the web services of MyBank.com to obtain information. The web services on MyBank.com return the requested data because the user is authenticated and the authentication cookie was sent with the Ajax request.

Work-Arounds
Just as there are “two” origin policies, there are two corresponding sets of “workarounds” which allow for communication between windows. Let’s take a look at these methods and see how they allow for legitimate communication without reintroducing the same security holes.

Working around the DOM Same Origin Policy
In order to work around the DOM origin policy, one should use window.postMessage(). This JavaScript method is now part of the HTML5 standard but has actually been around for some time. It can be used with IE8+ and all modern versions of Firefox and Chrome. The use of window.postMessage is thoroughly documented on the web but can be summarized as:

  1. Get a reference to the window you want to communicate with, e.g., var win = window.frames[0].window.
  2. The window receiving messages sets up an event listener for the messages that will be sent.
  3. The sender calls win.postMessage(dataMessage, targetOrigin).
  4. For two-way communication, event listeners are set up on both sides.

The window.postMessage method does not allow for unauthorized sharing of information. Using window.postMessage requires both sides to cooperate on the message structure, and it requires the recipient to accept messages coming from the sender. Using the earlier example, mybank.com would never be expecting messages from MaliciousSite.com, so even if mybank.com did have an event listener set up and methods for data sharing, it would certainly ignore messages coming from an unknown origin.

Working around the XmlHttpRequest Same Origin Policy

To work around the XmlHttpRequest Origin Policy there are a few different options. The first thing you could do, if you want to use a data service from another domain, is simply create a proxy service on your own site which calls the service on the other domain and passes back the data. This involves extra server-side coding on your part, but it requires no cooperation from the other domain. The techniques I mention next involve cooperation from the other domain in how its web services are set up. If the other domain isn’t “cooperating,” writing a proxy service may be your only option.
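As a sketch, such a proxy in ASP.NET MVC might look like this (the remote URL is an illustrative assumption, and error handling is omitted):

using System.Net;
using System.Web;
using System.Web.Mvc;

public class ProxyController : Controller
{
    // The browser calls this same-origin action; the server, which is not
    // bound by the same origin policy, fetches from the other domain.
    public ContentResult GetRemoteData(string query)
    {
        using (var client = new WebClient())
        {
            string json = client.DownloadString(
                "https://api.example.com/data?q=" + HttpUtility.UrlEncode(query));
            return Content(json, "application/json");
        }
    }
}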

If a site wants to expose services that can be consumed via Ajax from another domain, there are two established options: JSONP and CORS. The JSONP technique exploits the fact that script files can be loaded from different domains. JSONP requests are structured as requests for a JavaScript file: a site returning JSONP actually returns JavaScript that calls a callback function with the JSON data as its argument. The JSONP technique works, but it is a “hack.” JSONP services could theoretically still be a CSRF attack vector, though it is much less likely. JSONP is meant to be used with cross-domain requests, and JSONP services need to be implemented differently from ordinary ones. Because of this, accidentally exposing sensitive data via JSONP services is not a real concern.

CORS is a relatively new standard that addresses the need for cross-domain Ajax in a more proper fashion. CORS stands for Cross-Origin Resource Sharing. All modern browsers can make Ajax requests to other domains so long as the target server allows it. A security handshake takes place using HTTP headers. When the client makes a cross-origin request, it includes the HTTP header Origin, which announces the requesting domain to the target server. If the server wants to allow the cross-origin request, it has to echo back the origin in the HTTP response header Access-Control-Allow-Origin. The target server can establish a security policy to accept requests from anywhere or only from specific domains. On the client, jQuery Ajax automatically supports CORS for cross-domain requests, so no additional client coding is necessary if you are using jQuery. We can see that CORS also makes CSRF attacks highly unlikely: if a site is checking for an authentication token with its web services, it will certainly not echo back an Access-Control-Allow-Origin header for an untrusted domain.
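On the server side, a minimal sketch of the handshake in ASP.NET might look like this (the trusted domain literal is an assumption; a real policy would likely come from configuration):

using System;
using System.Web;

public class MvcApplication : HttpApplication
{
    // Echo the Origin header back only for domains we trust.
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        string origin = Request.Headers["Origin"];
        if (origin == "https://trusted-partner.example.com")
        {
            Response.AddHeader("Access-Control-Allow-Origin", origin);
        }
    }
}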

In Conclusion

I hope that you found this brief introduction helpful. Understanding the rhyme and reason for the same origin policy should make the learning process easier when it comes time to dive into implementation details. Happy coding!

Ken Payson is a Senior .NET developer at LogicBoost, an agile software services and product development company based in Washington DC.

PhantomJS: A Scriptable Headless Browser

October 19th, 2012 | Ken Payson

More and more, modern web applications are moving away from post-back driven pages and embracing Ajax-intensive sites that make use of client-side view-models. While this leads to a great user experience, it raises challenges in writing tests that cover the complex functionality on the web page. Web automation tools have been around for a long time to help automate web tasks. Selenium is one of the most popular web automation frameworks; it has webdrivers for all of the major browsers. Using these drivers we can automate tasks such as opening a browser, navigating to a page, filling out a form, submitting it, and checking the results. This is a very useful thing, and it can be fun to watch a browser performing like a player piano, running through a set of tasks without you.

There is a major drawback to the current suite of web drivers, though: they are slow. The time it takes to load and render pages is too long when we have a suite of tests to run. Most of the time, the question we are asking can be phrased as whether a certain element is in the DOM once some other action is complete. Actually seeing things on the screen isn’t really necessary – doubly so when the tests are run on a build server where no one is watching. What we want is a “headless” browser – a browser that internally does the same things a standard browser does: makes requests, parses HTML, builds a DOM, understands JavaScript, handles cookies and session. In short, it behaves like a browser does, except it doesn’t actually render pages.

Enter PhantomJS. PhantomJS is a headless, JavaScript-scriptable web browser built using the WebKit engine. There have been other attempts at headless browsers in the past; the HTML browser remote driver for Selenium is one example. However, earlier headless browsers did not have a proper JavaScript engine backing them, and as a result they were limited to use with very simple pages. Because PhantomJS is based on WebKit and uses the WebKit JavaScript engine, it does not have this problem.

To get started with PhantomJS, download the latest version from PhantomJS.org. Working with PhantomJS directly can be challenging because PhantomJS is rather low level. To make working with PhantomJS easier, also download CasperJS from CasperJS.org. CasperJS is a navigation scripting and testing utility written to work with PhantomJS. It enhances the PhantomJS API so the coding is easier.

Scripting with Phantom/Casper is very easy once you learn to avoid a few pitfalls. With Phantom/Casper you can write JavaScript that is injected into the web page you are testing. Casper has a utility class for selecting and modifying elements via CSS selectors.

If you need it, it is also possible to use jQuery. If jQuery is not already part of the page, it can be dynamically injected and used. However, it is usually easiest to use the document.querySelector method that is natively available in modern browsers.

Phantom scripts are server-side JavaScript. We can send client-side JavaScript to the browser, and we can also do things server-side that we cannot do in a browser. There is a File System module that lets us read and write files, and a System module that lets us work with command-line arguments and environment variables.

Here is a simple example that will query google using supplied command line arguments. A report on the results will be written to a file.

Here is the PhantomJS script using Casper

phantom.casperPath = 'C:\\CasperJs\\casperjs-1.0.0-RC1';
phantom.injectJs(phantom.casperPath + '\\bin\\bootstrap.js');

var casper = require('casper').create();

var system = require('system');
var page = require('webpage').create();
var utils = require('utils');
var fs = require('fs');

var Debug = function(message) {
    casper.echo("\n" + message + "\n");
}

var googleHome= "http://www.google.com";

casper.start();

casper.userAgent('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89');
casper.thenOpen(googleHome, function() {
    casper.echo("url: " + this.getCurrentUrl());
});

casper.then(function() {
    Debug("SearchTerm1: " + system.args[1]);
    Debug("SearchTerm2: " + system.args[2]);
});


casper.then(function() {
    casper.evaluate(function(searchTerm1,searchTerm2) {
        
        document.querySelector('input[name="q"]').setAttribute('value', searchTerm1 + ' ' + searchTerm2);
        document.querySelector('form[action="/search"]').submit(); 
    }, {
        searchTerm1: system.args[1],
        searchTerm2: system.args[2]
    });
});


casper.then(function() {
    Debug("new url: " + this.getCurrentUrl());
});


casper.then(function() {

    var secondLink = this.evaluate(function() {
        return  __utils__.findAll('h3.r a')[1].href; 
    });
    
    var numResultsOnPage = this.evaluate(function() {
        return __utils__.findAll('h3.r a').length;
    });
    
    var fstream = fs.open('C:\\temp\\searchResults.txt', 'w');
    fstream.write("There are " + secondLink + " results on the page\r\n\r\n");
    fstream.write("The second link url is " + numResultsOnPage);
    fstream.close();
    
});

casper.run(function() {
    this.exit(); 
});

PhantomJS is run from the command line: PhantomJs …

In order to stay out of the DOS prompt, I usually create a one- or two-line batch file to run my PhantomJS script. Here is the batch file to run the example program.
cd c:\phantomjs\scripts
phantomjs GoogleSearch2.js stinky cheese

The future of PhantomJS with Selenium
One exciting development to keep an eye on is GhostDriver. GhostDriver is a Selenium WebDriver for PhantomJS. It is still in development and Selenium doesn’t fully support it yet, but when it is available (most likely in the next release, 2.26) it will be possible to write Selenium tests in C# and have them run against PhantomJS. Initial reports say that GhostDriver with PhantomJS could be twice as fast as the Selenium Chrome driver.

Ken Payson is a Senior .NET developer at LogicBoost, an agile software services and product development company based in Washington DC.

A Developer’s Take on Pair Programming

March 16th, 2012 | David Cooksey

Over the past few years at Thycotic I have spent a lot of time pair programming. Five days a week, eight-plus hours a day adds up to thousands of hours. Through my experiences in pair programming I have come to a few conclusions about the practice. These represent my take on pair programming; I make no claims as to their originality.

First: Pair programming makes programming social.

This sounds obvious, but it has by far the biggest impact on how you, as a developer, write code. You are effectively writing code as a committee of two, where each member has veto power. All the factors that developers typically do not have to deal with when working alone come into play. Do you like your pair? Does he/she like you? How about respect? Do you work in similar ways? Do you work similar hours? Will you argue incessantly over architecture? Did you or your pair have a rough night?

Second: Verbal communication is as important as technical ability.

Ideas have to be expressed clearly enough that your pair can understand them. The greatest idea in the world may never be attempted if it is not stated clearly. Equally important is the ability to offer clear reasoning when you disagree with an approach. “I don’t like that” is not sufficient.

Third: Two heads are still better than one. Usually.

The caveat to this one is the social aspect. While in most cases two people who are focused on the task can do a better job, the social aspects mean that particular people may be worse together than they are apart.

So, as a developer, what is the best way to pair program? With courtesy. Learn when to shut up and let your pair drive. Learn how often you can interject comments without causing undue irritation. Learn when to push and when to let it go. If you push for your point of view all the time, no one will want to work with you. If you always give in, no one will respect your opinion. Compromise.

What were your experiences with pair programming? If you have never tried it, what are your thoughts on the subject?

David Cooksey is a Senior .NET developer at LogicBoost, an agile software services and product development company based in Washington DC.

Pro ASP.NET MVC 3 Framework

December 27th, 2011 | Kevin Kershaw

This is a brief review of Pro ASP.NET MVC 3 Framework by Adam Freeman and Steven Sanderson. Now in its third edition, this already good book continues to get better. It provides thorough and comprehensive coverage of MVC 3 that works well both for learning the subject and as a reference. The book includes a substantial example web site that is developed through the course of three chapters. This sample covers many aspects and issues that you would encounter developing a web site using MVC 3, and within it are several interesting methods and techniques. I have selected the two I found most interesting to explore below.

Using DI And Mock Objects To Replace DB And Repository Code

Many of the projects and features I have worked on have proceeded from the database first and then built code towards the UI. The technique below allows the UI to be developed earlier in the cycle and can facilitate prototyping the UI with a lower investment in backend code. First, some infrastructure setup is needed. We will start by defining a controller factory that uses the Ninject dependency injector.

public class NinjectControllerFactory : DefaultControllerFactory
{
    private IKernel ninjectKernel;

    public NinjectControllerFactory()
    {
        ninjectKernel = new StandardKernel();
        AddBindings();
    }

    protected override IController GetControllerInstance(
        System.Web.Routing.RequestContext requestContext, Type controllerType)
    {
        return controllerType == null ? null :
             (IController)ninjectKernel.Get(controllerType);
    }
    ...
}

Wire this controller factory in by replacing the default controller factory in Global.asax:

    protected void Application_Start()
    {
        ...
        ControllerBuilder.Current.SetControllerFactory(new NinjectControllerFactory());
        ...
    }

This infrastructure is desirable in the project by itself; its existence will eliminate much trivial and annoying code. The fact that it also supports the technique we are discussing is just a bonus.

We will continue by defining a DTO that will be used by the front end.

public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public decimal Price { get; set; }
}

Next, define the repository interface that will be used by the controllers to access the data.

public interface IProductRepository
{
    IQueryable<Product> Products { get; }
}

Finally, we will set up a fake repository using Moq and wire it into the Ninject container. Add the following function to the NinjectControllerFactory defined above.

private void AddBindings()
{
    Mock<IProductRepository> mock = new Mock<IProductRepository>();
    mock.Setup(m => m.Products).Returns(new List<Product>{
        new Product{Name="Football", Price=25, Description="Standard issue ball"},
        new Product{Name="Surf board", Price=179, Description="A wave rider you will love"},
        new Product{Name="Running shoes", Price=95, Description="Move your fat butt!"}
        }.AsQueryable());
    ninjectKernel.Bind<IProductRepository>().ToConstant(mock.Object);
}

At this point you can start defining controllers and views and considering other front-end issues, leaving the database and repository implementation for later. This inverts the more typical order of construction, in which the database and backend are built first and the UI second. With a small amount of infrastructure in place, questions of UI design and the actual data requirements of each screen can be explored earlier.

I have used mock objects in tests, but it never occurred to me to use them as temporary filler in applications. This is a thought-provoking technique in the sample code.

Using Model Binder To Access Session Data

This section of the sample code creates a model binder to allow controller access to data stored in the session. This has a twofold benefit: first, it simplifies the controller code; second, it simplifies testing of those controller methods.

The example application implements a shopping cart object that is stored in session. To access this shopping cart object, a model binder is created. The effect is to decouple the controller from the session, since the controller accesses the cart via a parameter instead of directly through the session object of the HTTP context.

First is the Cart definition; its details are not important for our discussion here.

public class Cart
{
    ...
}

Next, a model binder is defined.

public class CartModelBinder : IModelBinder
{
    private const string sessionKey = "Cart";

    public object BindModel(ControllerContext controllerContext,
        ModelBindingContext bindingContext)
    {
        var cart = (Cart)controllerContext.HttpContext.Session[sessionKey];
        if (cart == null)
        {
            cart = new Cart();
            controllerContext.HttpContext.Session[sessionKey] = cart;
        }
        return cart;
    }
}

Register this new binder in Global.asax.

    protected void Application_Start()
    {
        ...
        ModelBinders.Binders.Add(typeof(Cart), new CartModelBinder());
        ...
    }

Following is an example of a controller method that accesses the cart. Notice that there are no references to the HTTP context or the session.

[HttpPost]
public ViewResult Checkout(Cart cart, ShippingDetails shippingDetails)
{
    if (cart.Lines.Count() == 0)
    {
        ModelState.AddModelError("", "Sorry, your cart is empty!");
    }
    if (ModelState.IsValid)
    {
        processor.ProcessOrder(cart, shippingDetails);
        cart.Clear();
        return View("Completed");
    }
    return View(shippingDetails);
}

The implications for testability are great. For example, consider the following test.

[Test]
public void CannotCheckoutEmptyCart()
{
    var mock = new Mock<IOrderProcessor>();
    var cart = new Cart();
    var shippingDetails = new ShippingDetails();
    var controller = new CartController(null, mock.Object);

    var result = controller.Checkout(cart, shippingDetails);

    mock.Verify(m => m.ProcessOrder(It.IsAny<Cart>(), It.IsAny<ShippingDetails>()),
        Times.Never());
    Assert.AreEqual("", result.ViewName); //default view name
    Assert.AreEqual(false, result.ViewData.ModelState.IsValid);
}

Gone is the need to stub out the HTTP context and session. I don’t even want to imagine the mocking setup that would be required to perform the above test if the controller accessed the Cart object directly in session. This is a big simplification.

Summary

Pro ASP.NET MVC 3 Framework is informative, in the broad sense, about the many aspects of MVC 3. I think the quality of the sample is indicative of the quality of the book, and in the details of that sample are many points of insight, two of which are discussed above.
