Archive for the ‘TDD’ Category

TDD Teams Write Better Software

November 16th, 2011 | Katie McCroskey

A picture is worth a thousand words, so I’ll try to keep this brief: TDD teams write better software.

The chart above is based on a formal paper covering four empirical case studies of teams using Test Driven Development. Test Driven Development has been used sporadically for decades and has been tested in academic and controlled settings numerous times. But this paper is different because it’s not about controlled experiments; it involves real people, real projects and real customers, at IBM and Microsoft, really smart companies. In Test Driven Development, a developer first writes a failing unit test to define a desired function. Then, code is written to pass the failing test. Finally, the code is refactored to meet acceptable coding standards. The developers are in control, constantly making small design and implementation decisions and getting constant feedback.
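
To make the cycle concrete, here is a minimal sketch of a single red-green round in C# with NUnit (the calculator class and test are illustrative examples, not drawn from the paper):

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // Red: written first, this test fails because Calculator.Add doesn't exist yet.
    [Test]
    public void ShouldReturnFourWhenAddingThreeAndOne()
    {
        Calculator calculator = new Calculator();
        Assert.AreEqual(4, calculator.Add(3, 1));
    }
}

// Green: the simplest code that passes. Refactoring follows once the bar is green.
public class Calculator
{
    public int Add(int first, int second)
    {
        return first + second;
    }
}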

A few details on the teams: at IBM, the developers were building device drivers, working on both new and legacy code. It was a distributed team, working in Java, and no one had any previous experience with TDD. The teams at Microsoft, on the other hand, were co-located, working in C# and C++ on three products you may have heard of: Windows, MSN and Visual Studio. None of these teams were Agile, but each collectively decided to try out TDD.

Despite the differences in project size, type and environment, the results were conclusive: Test Driven Development drastically reduced defect density (bugs, missed requirements, etc.), anywhere from 40 to 90%. In the end, the tests were considered assets: when the code is enhanced or modified later, they make it easier to catch new bugs. Although all the teams did experience an increase in development time of 15 to 35%, the extra time is offset by the improved quality and reduced maintenance costs. If you are interested in learning more, check out the formal paper.

Katie McCroskey is the Marketing Manager at LogicBoost, an Agile consulting firm based in Washington DC.

StrictMock vs. DynamicMock: What are You Testing Here Anyway?

July 16th 2009 | Jimmy Bosse

Here at Thycotic, we are TDD enthusiasts and pair programmers. Test Driven Development and Pair Programming go together like chocolate and peanut butter, especially when you tackle a brand new piece of functionality and practice some green-field Ping-Pong programming.

For those unfamiliar with the concept of Ping-Pong programming, it’s a fun way to develop a new piece of business logic when you are not sure what the best method of implementation will be.

To start, one of you writes a failing test. The other makes the test pass, and then writes the next failing test (refactoring as needed). You make that test pass and write a new failing test. The goal of the session is to write the smallest amount of code possible, then try to write a test that effectively tests the desired functionality while providing a challenge to your pair. The session is easy at first, but as the tests move toward satisfying the more complicated requirements of the business, the exercise becomes more challenging.

Once your code becomes suitably complex, you’ll start to have multiple classes with dependencies—and with these dependencies come mock tests.

Then you have to decide: StrictMock or DynamicMock?

The problem with Strict Mocks is that you are required to set up your entire mock dependency far beyond the scope of the subject/system under test (SUT).

Take the following example.

[Test]
public virtual void ShouldSubscribeToViewEventWhenConstructed()
{
    MockRepository mocks = new MockRepository();
    IView mockView = mocks.StrictMock<IView>();
    mockView.SomeEvent += delegate{};
    LastCall.Constraints(Is.NotNull());
    mocks.ReplayAll();
    IPresenter presenter = new Presenter(mockView);
    mocks.VerifyAll();
}

To implement a presenter:

public class Presenter : IPresenter
{
    private IView _view;

    public Presenter(IView view)
    {
        _view = view;
        BindToEventsOn(_view);
    }

    private void BindToEventsOn(IView view)
    {
        view.SomeEvent += SomeEventHandler;
    }

    private void SomeEventHandler()
    {
        //Do something…
    }
}

The test is now green because you’ve subscribed to the event. But now that you’ve bound to the view in your constructor, you must always expect a call in your test whenever you mock IView with a StrictMock. This will make your tests verbose and it will be difficult to determine the actual SUT in each test.
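
For contrast, here is a sketch (same Rhino Mocks record/replay style as the test above; the test name is mine) of how a DynamicMock lets a test ignore the constructor’s event subscription entirely:

[Test]
public virtual void ShouldConstructWithoutExpectingEveryViewCall()
{
    MockRepository mocks = new MockRepository();
    // A DynamicMock ignores calls that were never explicitly expected, so the
    // event subscription made in the Presenter constructor needs no setup here.
    IView mockView = mocks.DynamicMock<IView>();
    mocks.ReplayAll();
    IPresenter presenter = new Presenter(mockView);
    mocks.VerifyAll();
}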

Another issue is this: When you use a StrictMock you are essentially telling your pair what to program. Let’s say you and your pair sit down to create a calculator business object that must be able to add numbers together.

Your pair writes the following test:

[Test]
public virtual void ShouldReturnFourWhenGivenThreeAndOne()
{
    int firstNumber = 3;
    int secondNumber = 1;
    int expectedSum = 4;
    MockRepository mocks = new MockRepository();
    IAdder mockIAdder = mocks.StrictMock<IAdder>();
    IAdderFactory mockIAdderFactory = mocks.StrictMock<IAdderFactory>();
    Expect.Call(mockIAdderFactory.Create()).Return(mockIAdder);
    Expect.Call(mockIAdder.Add(firstNumber, secondNumber)).Return(expectedSum);
    mocks.ReplayAll();
    ICalculator calculator = new Calculator(mockIAdderFactory);
    int result = calculator.Add(firstNumber, secondNumber);
    mocks.VerifyAll();
    Assert.AreEqual(expectedSum, result);
}

Well, there’s nothing to think about here, is there? If this is the way you want to write tests, invest some time and write a tool that will parse your test and create your business object for you. The test tells you: “use the factory, create an adder, call the add method on the adder and return the result.” Where’s the fun in that? What are you actually testing? Does it really matter how the calculator does what it needs to do?

Actually, I think the above test could be a good one if it were created three days into the calculator object, when you were refactoring the different pieces of the calculator into distinct service objects.

But right now, all you need is a calculator that can add two numbers together:

[Test]
public virtual void ShouldReturnFourWhenGivenThreeAndOne()
{
    ICalculator calculator = new Calculator();
    Assert.AreEqual(4, calculator.Add(3, 1));
}

This should be your first failing test. I don’t care how you add it, but by golly you’d better give me back 4 when I give you 3 and 1.
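
One minimal implementation that makes this test pass might look like this (a sketch, assuming ICalculator declares nothing but Add):

public class Calculator : ICalculator
{
    public int Add(int firstNumber, int secondNumber)
    {
        return firstNumber + secondNumber;
    }
}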

I love DynamicMock because I am a believer that a test should test a very specific piece of code. Recently, however, I came across this poignant counterpoint that shattered my DynamicMock utopia. I had written tests and an object that looked something like this:

[Test]
public virtual void ShouldReturnTrueIfBigIsTrueAndBadIsFalse()
{
    MockRepository mocks = new MockRepository();
    IDataWrapper mockIDataWrapper = mocks.DynamicMock<IDataWrapper>();
    SetupResult.For(mockIDataWrapper.IsBig).Return(true);
    SetupResult.For(mockIDataWrapper.IsBad).Return(false);
    mocks.ReplayAll();
    IDad dad = new Dad();
    bool result = dad.IsABigBad(mockIDataWrapper);
    mocks.VerifyAll();
    Assert.IsTrue(result);
}

[Test]
public virtual void ShouldReturnTrueIfBigIsFalseAndBadIsTrue()
{
    MockRepository mocks = new MockRepository();
    IDataWrapper mockIDataWrapper = mocks.DynamicMock<IDataWrapper>();
    SetupResult.For(mockIDataWrapper.IsBig).Return(false);
    SetupResult.For(mockIDataWrapper.IsBad).Return(true);
    mocks.ReplayAll();
    IDad dad = new Dad();
    bool result = dad.IsABigBad(mockIDataWrapper);
    mocks.VerifyAll();
    Assert.IsTrue(result);
}

[Test]
public virtual void ShouldReturnFalseIfBigIsFalseAndBadIsFalse()
{
    MockRepository mocks = new MockRepository();
    IDataWrapper mockIDataWrapper = mocks.DynamicMock<IDataWrapper>();
    SetupResult.For(mockIDataWrapper.IsBig).Return(false);
    SetupResult.For(mockIDataWrapper.IsBad).Return(false);
    mocks.ReplayAll();
    IDad dad = new Dad();
    bool result = dad.IsABigBad(mockIDataWrapper);
    mocks.VerifyAll();
    Assert.IsFalse(result);
}

public interface IDataWrapper
{
    bool IsBig { get; }
    bool IsBad { get; }
}

public class Dad : IDad
{
    public virtual bool IsABigBad(IDataWrapper dataWrapper)
    {
        return dataWrapper.IsBig || dataWrapper.IsBad;
    }
}

My test was green and I was happy, but my pair informed me that a bug could be introduced accidentally and no one might ever notice:

public interface IDataWrapper
{
    bool IsBig { get; }
    bool IsBad { get; }
    bool IsFat { get; }
}

public class Dad : IDad
{
    public virtual bool IsABigBad(IDataWrapper dataWrapper)
    {
        return dataWrapper.IsBig || dataWrapper.IsBad || dataWrapper.IsFat;
    }
}

Because a DynamicMock will not throw an exception when IsFat is called unexpectedly, and will return the default value for bool (false), my tests all remain green. But in production my code might not work as expected.
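
Had the false/false test used a StrictMock instead (same Rhino Mocks syntax as above; the test name suffix is mine), the bug would have surfaced immediately:

[Test]
public virtual void ShouldReturnFalseIfBigIsFalseAndBadIsFalse_Strict()
{
    MockRepository mocks = new MockRepository();
    IDataWrapper mockIDataWrapper = mocks.StrictMock<IDataWrapper>();
    Expect.Call(mockIDataWrapper.IsBig).Return(false);
    Expect.Call(mockIDataWrapper.IsBad).Return(false);
    // No expectation is recorded for IsFat, so the moment the buggy
    // IsABigBad reads it, the StrictMock throws and the test goes red.
    mocks.ReplayAll();
    IDad dad = new Dad();
    bool result = dad.IsABigBad(mockIDataWrapper);
    mocks.VerifyAll();
    Assert.IsFalse(result);
}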

There is seldom a single solution that works in every situation. I have learned to find the proper place for both Dynamic and Strict Mocks in my TDD toolbox and hope that this encourages you to think more deeply about your own toolbox.

Jimmy Bosse is a Senior .NET Consultant and Team Lead at Thycotic Software, an agile software consulting and product development company based in Washington DC. Secret Server is our flagship password management software product.

Refactoring Code: A Programmer’s Challenge Part 2

April 9th 2009 | Kevin Jones

Refactoring Code: A Programmer’s Challenge Part 2

In a previous blog post, Refactoring Code: A Programmer’s Challenge Part 1, we went over refactoring basics and took a not-so-easily-tested piece of code and made it easy to test.

Our final bit of code was this:

public class DisplayPanelDecider
{
    public bool ShouldDisplayPanelOnPage(string pageName)
    {
        string[] pagesToHideElement = new[]
        {
            "/index.aspx",
            "/sitemap.aspx",
            "/404.aspx",
            "/status.aspx",
            "/online_help.aspx",
        };
        return !Array.Exists(pagesToHideElement, s => s == pageName);
    }
}

The objective of the code is clear: “Given the name of a page, should I display a panel?”

Furthermore, the code is compact and does no more or less than it should, so we can test it easily.

Some of our tests would look like this:

[TestClass]
public class DisplayPanelDeciderTests
{
    [TestMethod]
    public void ShouldDisplayThePanelIfThePageIsNotInTheExclusionList()
    {
        DisplayPanelDecider decider = new DisplayPanelDecider();
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/foobar.aspx"));
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/main.aspx"));
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/blog.aspx"));
    }

    [TestMethod]
    public void ShouldNotDisplayThePanelIfThePageIsInTheExclusionList()
    {
        DisplayPanelDecider decider = new DisplayPanelDecider();
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/index.aspx"));
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/sitemap.aspx"));
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/status.aspx"));
    }
}

An interesting side note: did you notice that the names of our tests start with the word “should”? It’s such a simple thing, but test names are important: one should be able to figure out the purpose of a test just by reading its name, and the prefix “should” forces you to think about what the test is asserting.

But as we know, software requirements grow and change. What’s happened to our software developer a few months down the line? (Put on your pretend cap!)

Well, at this point his list of pages has grown considerably. Rather than the handful we have in our code, we now need to hide the panel on many pages. As an additional requirement, it now needs to be configurable too, so compiled code is not the best solution.

A natural place to put configuration for now is the appSettings section of the web.config file, and for simplicity’s sake we’ll separate our pages with semicolons, so the entry looks like this:

<configuration>
    <appSettings>
        <add key="PagesToHidePanel"
             value="/index.aspx;/map.aspx;/404.aspx;/status.aspx;/online_help.aspx"/>
    </appSettings>
</configuration>

Now we need a class that is responsible for retrieving these page names and parsing them. Rather than throw that code into our existing class, we’ll introduce a new dependency.

The implementation I came up with looks like this:

public interface IDisplayPanelExclusionProvider
{
    string[] GetPageNames();
}

public class DisplayPanelExclusionProvider : IDisplayPanelExclusionProvider
{
    public string[] GetPageNames()
    {
        string unparsedContent =
            ConfigurationManager.AppSettings["PagesToHidePanel"];
        string[] pageNames = unparsedContent.Split(';');
        return pageNames;
    }
}

Notice I created an interface. This will have a key role later on. The next step is getting our two classes, DisplayPanelExclusionProvider and DisplayPanelDecider, talking to each other.

The constructor is a simple and useful approach to dependency injection. Our goal here is to get the DisplayPanelDecider to ask the DisplayPanelExclusionProvider, “On which pages should I not display this panel?”

We’ll modify the DisplayPanelDecider to take in an IDisplayPanelExclusionProvider. Again, notice I used the interface in this example. So our class now looks like this:

public class DisplayPanelDecider
{
    private readonly IDisplayPanelExclusionProvider _displayPanelExclusionProvider;

    public DisplayPanelDecider(IDisplayPanelExclusionProvider displayPanelExclusionProvider)
    {
        _displayPanelExclusionProvider = displayPanelExclusionProvider;
    }

    public bool ShouldDisplayPanelOnPage(string pageName)
    {
        string[] pagesToHideElement = _displayPanelExclusionProvider.GetPageNames();
        return !Array.Exists(pagesToHideElement, s => s == pageName);
    }
}

Our DisplayPanelDecider now asks the IDisplayPanelExclusionProvider for a list of pages. However, by introducing our constructor, we broke our tests! The next step is to start getting our tests working again.

Important note: even though our class examples seem contrived, we are keeping the decision-making logic separate. This matters for testability as well as for future maintenance. The key here is the single dependency, and that applies to our tests as well: we don’t want our tests testing multiple classes at once, so keep them separate.

This is where the interface saves us: we can inject our own classes for the purpose of testing.

[TestClass]
public class DisplayPanelDeciderTests
{
    private class MockDisplayPanelExclusionProvider : IDisplayPanelExclusionProvider
    {
        public string[] GetPageNames()
        {
            return new[]
                   {
                       "/index.aspx",
                       "/map.aspx",
                       "/404.aspx",
                       "/status.aspx",
                       "/online_help.aspx"
                   };
        }
    }

    [TestMethod]
    public void ShouldDisplayThePanelIfThePageIsNotInTheExclusionList()
    {
        DisplayPanelDecider decider = new DisplayPanelDecider(new MockDisplayPanelExclusionProvider());
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/foobar.aspx"));
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/main.aspx"));
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/blog.aspx"));
    }

    [TestMethod]
    public void ShouldNotDisplayThePanelIfThePageIsInTheExclusionList()
    {
        DisplayPanelDecider decider = new DisplayPanelDecider(new MockDisplayPanelExclusionProvider());
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/index.aspx"));
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/map.aspx"));
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/status.aspx"));
    }
}

Note that we are injecting a fake class that serves as a hand-rolled mock. The single purpose of this test class is to ensure that the DisplayPanelDecider decision logic works. It isn’t the DisplayPanelDecider’s responsibility to know the real list of pages from our configuration; that should be tested somewhere else. In effect, we are testing against dummy data. When we want our class to use the real provider, we just create an instance of it and hand it in. In fact, with constructor chaining we can implement a default constructor that uses the real provider:

public DisplayPanelDecider() : this(new DisplayPanelExclusionProvider())
{
}

We have the option of using the DisplayPanelDecider with the real provider, or handing in our own – which we do for testing.

The thought on most people’s minds at this point is most likely: “This seems like overkill.” If you stand back and look at it for all it is now, I might agree. But maybe the programmer we’re helping will find himself with more challenging requirements down the road. The other piece worth a closer look is the hand-rolled mock.

Rewinding a bit: in our new tests we saw a dummy class that provided fake data for the sake of testing. There are mock frameworks that remove the need for these hand-written dummy classes; we’ll examine them in a future blog post. If you’re up for some homework, check out Rhino Mocks.
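
As a teaser, the first test might look something like this with Rhino Mocks (a sketch; I am assuming the 3.x record/replay syntax, so treat the details loosely):

[TestMethod]
public void ShouldDisplayThePanelIfThePageIsNotInTheExclusionList_RhinoMocks()
{
    MockRepository mocks = new MockRepository();
    // The generated stub replaces our hand-rolled MockDisplayPanelExclusionProvider.
    IDisplayPanelExclusionProvider provider = mocks.DynamicMock<IDisplayPanelExclusionProvider>();
    SetupResult.For(provider.GetPageNames()).Return(new[] { "/index.aspx", "/map.aspx" });
    mocks.ReplayAll();
    DisplayPanelDecider decider = new DisplayPanelDecider(provider);
    Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/foobar.aspx"));
}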

I pose this question to you: What advantages and disadvantages do you see with our approach? Is it worth the effort? I’d love to know what you think.

Kevin Jones is a Senior .NET developer and Product Team Leader at Thycotic Software Ltd. Thycotic is recognized for providing Agile TDD Training and Agile .NET Consulting Services, and its flagship password management software Secret Server. On Twitter? Follow Kevin.

Refactoring Code: A Programmer’s Challenge

April 2nd 2009 | Kevin Jones

Refactoring Code: A Programmer’s Challenge

Today I encountered a question on the ASP.NET Forum that I got quite excited about: “How do I refactor this?” Refactoring is a big part of our daily coding practices and I was curious as to what other programmers would come up with. Here is the basic question, re-worded a little for brevity:

I have a master page with a control on it. However, on certain pages I don’t want the control to be shown. At first it was just one or two pages, and my approach worked just fine. However, the number of pages I need to hide this control on has grown. Can anyone offer some advice on how to refactor this code?

This was his code:
string pagename = Request.ServerVariables["PATH_INFO"].ToString().Trim().ToLower();
if (
    pagename == "/index.aspx" ||
    pagename == "/sitemap.aspx" ||
    pagename == "/404.aspx" ||
    pagename == "/status.aspx" ||
    pagename == "/online_help.aspx"
    )
{
    SomePanel.Visible = false;
}

The original poster realized that his code wasn’t maintainable, a classic reason to refactor: keep your code clean, readable, and maintainable.

The interesting thing about refactoring is that there is no de facto way to improve this code, or to do any refactoring for that matter. Refactoring has a lot of patterns that have been used over the years, but how a pattern is implemented, and evolves, is a discussion that could go on forever. The key is: keep the code simple and keep the changes small. If we really wanted to, we could write over-the-top code: put these pages in a database table and cache the table for performance. But if you refactor often enough, the changes should always be incremental.

Here’s something else to consider: the ‘if’ statement will keep getting larger every time a page is added. If I were implementing this, I might have started in the same place. I wouldn’t have needed this control visible for this page, but then I might not have needed it for the next one either, or the next one. The original code is not bad; it just doesn’t meet the new criteria.

My suggestion is to remove the OR chain and store the page names in a local string array, then check whether the current page exists in the array. This simple refactoring adds a little clarity to the code, but it’s not exactly a work of art:
string[] pagesToHideElement = new[]
{
    "/index.aspx",
    "/sitemap.aspx",
    "/404.aspx",
    "/status.aspx",
    "/online_help.aspx",
};
string pagename = Request.ServerVariables["PATH_INFO"].ToString().Trim().ToLower();
if (Array.Exists(pagesToHideElement, s => s == pagename))
{
    SomePanel.Visible = false;
}
We have isolated one of our problems: the growing list of pages we want to exclude. The developer can now just keep adding to the array. This also provides flexibility we didn’t have before: we can move our page list anywhere we like, such as into a new class and method. This is a very simple refactoring, but what else can we do? One thing the original poster didn’t consider is that this code isn’t easy to test without a full-blown integration test.
Not to derail our refactoring discussion, but let’s talk briefly about testing and how it applies to refactoring. They go hand-in-hand, especially for an Agile developer. Refactoring is a necessary part of improving code, but how do you know that everything still works? Many who oppose refactoring do so because, without tests, they have no proof that a refactoring didn’t break the code. And refactoring isn’t always easy, especially when dealing with legacy code. Most Agile developers are familiar with the “Red-Green-Refactor” motto:

[Diagram: the Red-Green-Refactor cycle]

The diagram varies, but it always says the same thing:

1. Write a failing test first (a core Test Driven Development value)
2. Make the test pass by implementing the required code
3. See the test pass
4. Refactor
5. Make sure you are still green
6. Improve the test so that it is red again
It’s the fifth step that makes testing and refactoring go together so well. Peace of mind comes with knowing that you didn’t break anything, assuming you have good testing practices (but of course you do!).

Back to refactoring. How do we test this refactored code without actually looking at the page to see whether it shows the correct content? The MVP or MVC patterns are a good idea, but let’s start small. A pattern that I am fond of for these simple cases is a decider class. The benefits of this pattern will become clear shortly.
This sort of decision-making logic doesn’t really belong in the presentation (UI) layer. It can be deferred to a decider. The decider is exactly what it sounds like: a class that decides whether the panel should be visible or not. So our decider can have a method that takes a page name and returns a Boolean indicating whether the panel should be displayed. Simple enough, right? At this point we are extracting our logic out of the page and into a new class.
public class DisplayPanelDecider
{
    public bool ShouldDisplayPanelOnPage(string pageName)
    {
        string[] pagesToHideElement = new[]
        {
            "/index.aspx",
            "/sitemap.aspx",
            "/404.aspx",
            "/status.aspx",
            "/online_help.aspx",
        };
        return !Array.Exists(pagesToHideElement, s => s == pageName);
    }
}
So now we have a piece of logic that we can test very easily. All the ASPX page has to do is create an instance and hand in the page name, and the core decision logic can be tested without any need for a browser.
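
In the code-behind, that usage might look like this (a sketch; SomePanel and the PATH_INFO lookup come from the original snippet):

protected void Page_Load(object sender, EventArgs e)
{
    string pagename = Request.ServerVariables["PATH_INFO"].ToString().Trim().ToLower();
    DisplayPanelDecider decider = new DisplayPanelDecider();
    SomePanel.Visible = decider.ShouldDisplayPanelOnPage(pagename);
}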
This gives us a new power, though. Now that we have this decision-making logic in its own method, we can make the method do whatever we like. If, in a year’s time, this approach is no longer feasible, we can alter our method to pull the page names from an XML file or a database. We’ll look at that later.


A quick recap of what we did:

  • We moved from a large OR operation to a more flexible, data-driven approach.
  • We extracted our logic out to a new class.
  • Two simple changes…and the benefits are huge:
  • We can now test this logic very easily through our favorite testing framework: NUnit, MbUnit, MSTest, etc.
  • Our logic extraction keeps the decision-making out of the ASPX code-behind, which stays focused on the page itself.
  • We have the freedom to change what this method does without affecting the page. In the future, we can make “ShouldDisplayPanelOnPage” do whatever we like.
  • Further changes to this class can be tested TDD style, so we can feel confident that future refactorings won’t break functionality.

What would you have done to improve the code? Got a better idea or suggestions?

Next we’ll look at dependency injection and what happens when this decision-making logic gets more complex.


Kevin Jones is a Senior .NET developer and Product Team Leader at Thycotic Software Ltd. Thycotic is recognized for providing Agile TDD Training and Agile .NET Consulting Services, and its flagship password management software Secret Server.

My two and a half cents – pitfalls with rounding

March 19th 2009 | Tucker Croft

True or False: are these two values the same?

1. 61.64598913 saved to database column [Age] [decimal](10, 2)

2. Math.Round(61.64598913, 2)

What does (1 == 2) return?

False. Bryant and I came across an interesting issue when comparing two values that appeared identical, but which the code insisted were different. Our research led us into the internals of .NET rounding to solve an infrequent but legitimate concern.

Scenario

The question above became my issue: I needed to know why a portion of the system would compare a value in the database to the same newly-calculated value and judge a significant difference.

The scenario, using Age as the example:

  • CalculateAge() – a value is calculated with up to 15 decimal places
  • AgeInDB – the value is stored in a database table with precision 2
    • [Age] [decimal](10, 2)
  • AgeChangeDetector – determines whether the Age has changed by comparing CalculateAge() to AgeInDB
  • If the Age has changed, it flags the value for review
  • If the Age hasn’t changed, the value doesn’t need to be reviewed

After several months and thousands of values being compared correctly, I found a single value being repeatedly marked for review even though the raw Age was not changing:

  • CalculateAge(): 61.64598913
  • AgeInDB: 61.65

At first glance, this seemed to be a simple failure to compare the values at the same precision. Since the database is at precision 2 and the newly calculated value has up to 15 decimal places, the same value would appear different if compared directly. But the comparison in the code was:

  • Math.Round(calculatedAge, 2) == AgeInDatabase

With the Math.Round function, both values were being compared at the same number of decimal places. I checked the AgeChangeDetector tests and all test cases passed with a variety of different decimal number combinations. Curious, I plugged the mysterious values into the AgeChangeDetector class and saw my assertion fail; the class detected a difference, with the AgeInDB at 61.65 and the rounded calculation at 61.64. Seeing the 61.64, I had isolated the problem to the Math.Round function.

Banker’s Rounding

Jumping on Google, I searched for “problems with Math.Round” and filtered down to a forum thread about the internal algorithm. Rounding is, at its core, a lossy operation where a number right in the middle (.5) must be given a bias and go up or down. From grade school, everyone is familiar with common rounding, which always rounds .5 up. When saving the decimal value to precision 2, the database used this method, hence the 61.65. But Math.Round uses the algorithm known as Banker’s Rounding. Banker’s Rounding strives to even things out by rounding up as often as it rounds down. In a transaction, one party would gain the half penny and one party would lose it. To achieve an unbiased treatment of the middle number, bankers round to the closest even digit. Since an even number before the .5 is as statistically likely as an odd one, the bias evens out over multiple transactions. My problem boils down to the fact that .NET 1.1’s Math.Round always uses Banker’s Rounding while my database uses the common method.
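
To see the difference concretely, here is a small sketch using the MidpointRounding overload that .NET 2.0 later added (not available on the 1.1 release I was tied to):

using System;

public static class RoundingDemo
{
    public static void Main()
    {
        // Banker's rounding: midpoints go to the nearest even digit.
        Console.WriteLine(Math.Round(2.5m));        // 2
        Console.WriteLine(Math.Round(3.5m));        // 4
        Console.WriteLine(Math.Round(61.645m, 2));  // 61.64 (4 is even)

        // Common rounding, matching the database: midpoints go away from zero.
        Console.WriteLine(Math.Round(61.645m, 2, MidpointRounding.AwayFromZero)); // 61.65
    }
}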

Solution

Knowing the problem may be half the battle, but the potential solutions didn’t seem easy. One option was to round every number in code before saving it to the database. This would ensure that all values going to (and coming from) the database would be uniform with my calculations. But that would be a big, invasive change and would add an extra level of processing on every decimal value persisted to the database (and it might need to expand to more than just the AgeInDB column). Also, the database had already saved all its numbers, and there would be no way to know whether a stored number had been rounded up because it landed exactly on .5 or because it landed above it. The other option was to change only the comparison, so I researched a means of accomplishing common rounding in the AgeChangeDetector.

In .NET 2.0, Math.Round takes an optional parameter that sets the rounding algorithm, but I am tied to .NET 1.1 in the current release. After receiving input from forums, I created a new method that rounds numbers using the rounding built into the ToString method. My function looked like this:

public decimal CustomMathRound(decimal number, int places)
{
    string decimalPad = "#." + string.Empty.PadRight(places, '#');
    string nums = number.ToString(decimalPad);
    return Convert.ToDecimal(nums);
}

I replaced the call to Math.Round with my custom function and the new test passed without disturbing any other functionality. Feeling confident that this was a workable fix, I replaced all other comparisons between a database value and a calculated value with my custom function to avoid another conflict in the future.

Follow-up

Rounding is a common task for a developer, but it should be considered thoroughly to ensure that different methods are not in play on each side of a comparison. The thousands of correct values before this issue show that comparing numbers at a given significance can catch you off guard. The problem is such an edge case that even in-depth test cases may not expose it; the developer must be aware of it beforehand and specifically test a number impacted by banker’s rounding.
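
For example, a test along these lines (a sketch assuming NUnit and the CustomMathRound helper above, and assuming ToString rounds midpoints away from zero as it did in my case) pins the behavior down:

[Test]
public void ShouldRoundMidpointValuesLikeTheDatabase()
{
    // 61.645 is an exact midpoint at two decimal places.
    // Math.Round uses banker's rounding and returns 61.64 (4 is even);
    // the ToString-based helper rounds away from zero, matching the database.
    Assert.AreEqual(61.64m, Math.Round(61.645m, 2));
    Assert.AreEqual(61.65m, CustomMathRound(61.645m, 2));
}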


Tucker Croft is a Senior .NET developer at Thycotic Software Ltd. Thycotic is recognized for providing Agile TDD Training and Agile .NET Consulting Services, and its flagship password management software Secret Server.

Please question the need for whitespace

March 18, 2008

I have blogged about this before, but I think it is a common problem worth restating since it affects developers across our industry. I noticed the following method recently, and again the curious separation of sections by whitespace popped into my head:

   1:  private void CalcHeaderOffsets()
   2:  {
   3:      this.fs = new FileStream( assemblyPath, FileMode.Open, FileAccess.Read );
   4:      this.rdr = new BinaryReader( fs );
   5:      dos_magic = rdr.ReadUInt16();
   6:      if ( dos_magic == 0x5a4d )
   7:      {
   8:          fs.Position = 0x3c;
   9:          peHeader = rdr.ReadUInt32();
  10:          fileHeader = peHeader + 4;
  11:          optionalHeader = fileHeader + 20;
  12:          dataDirectory = optionalHeader + 96;
  13:          // dotNetDirectoryEntry = dataDirectory + 14 * 8;
  14:   
  15:          fs.Position = peHeader;
  16:          pe_signature = rdr.ReadUInt32();
  17:          rdr.ReadUInt16(); // machine
  18:          numDataSections = rdr.ReadUInt16();
  19:          fs.Position += 12;
  20:          optionalHeaderSize = rdr.ReadUInt16();
  21:          dataSections = optionalHeader + optionalHeaderSize;
  22:   
  23:          sections = new Section[numDataSections];
  24:          fs.Position = dataSections;
  25:          for( int i = 0; i < numDataSections; i++ )
  26:          {
  27:              fs.Position += 8;
  28:              sections[i].virtualSize = rdr.ReadUInt32();
  29:              sections[i].virtualAddress = rdr.ReadUInt32();
  30:              uint rawDataSize = rdr.ReadUInt32();
  31:              sections[i].fileOffset = rdr.ReadUInt32();
  32:              if ( sections[i].virtualSize == 0 )
  33:                  sections[i].virtualSize = rawDataSize;
  34:   
  35:              fs.Position += 16;
  36:          }
  37:      }
  38:  }


What is the purpose of the new lines at 14, 22 and 34? Are these separate blocks of logic? Could they be better represented using “Extract Method”?
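
For illustration, here is a sketch of where “Extract Method” might take the method above: each whitespace-separated block becomes a named method (the extracted names are my own suggestions, not NUnit’s), and the blank lines disappear because the separation is now structural:

private void CalcHeaderOffsets()
{
    this.fs = new FileStream( assemblyPath, FileMode.Open, FileAccess.Read );
    this.rdr = new BinaryReader( fs );
    dos_magic = rdr.ReadUInt16();
    if ( dos_magic == 0x5a4d )
    {
        LocateHeaderOffsets();
        ReadPEHeader();
        ReadDataSections();
    }
}

private void LocateHeaderOffsets()
{
    fs.Position = 0x3c;
    peHeader = rdr.ReadUInt32();
    fileHeader = peHeader + 4;
    optionalHeader = fileHeader + 20;
    dataDirectory = optionalHeader + 96;
}

private void ReadPEHeader()
{
    fs.Position = peHeader;
    pe_signature = rdr.ReadUInt32();
    rdr.ReadUInt16(); // machine
    numDataSections = rdr.ReadUInt16();
    fs.Position += 12;
    optionalHeaderSize = rdr.ReadUInt16();
    dataSections = optionalHeader + optionalHeaderSize;
}

private void ReadDataSections()
{
    sections = new Section[numDataSections];
    fs.Position = dataSections;
    for( int i = 0; i < numDataSections; i++ )
    {
        fs.Position += 8;
        sections[i].virtualSize = rdr.ReadUInt32();
        sections[i].virtualAddress = rdr.ReadUInt32();
        uint rawDataSize = rdr.ReadUInt32();
        sections[i].fileOffset = rdr.ReadUInt32();
        if ( sections[i].virtualSize == 0 )
            sections[i].virtualSize = rawDataSize;
        fs.Position += 16;
    }
}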

One of the first things I do with code when refactoring is remove all the line breaks and make sure I am comfortable with the logical flow of the code. If lines don’t logically flow together, then I ask myself whether the two pieces belong next to each other or whether they should be different responsibilities in different methods.

Ask yourself next time you leave a blank line: is it really necessary, or is it a code smell indicating poorly separated, unrelated functionality?

FYI:  The code above is taken from NUnit (AssemblyReader.cs) and is free software licensed under the NUnit license (Copyright 2007, Charlie Poole, http://nunit.org/?p=license&r=2.4).

 

We are hiring! Do you want to write beautiful code in a Test Driven, Refactored, Agile .NET software company in the heart of Washington DC and work on cool products? Take the code test and send your resume, along with why you want to join Thycotic, to tddjobs@thycotic.com.

Refactoring example in C# and VB.NET

March 26, 2007

Our very own Bryant Smith has revamped his conversion of Martin Fowler’s refactoring example (originally in Java) to now cover both C# and VB.NET.

You can find the article here with the relevant downloads and walkthrough.

Martin Fowler’s example works nicely because it is a simple class structure that is easy to understand. It also has enough complexity to allow refactorings like Move Method to show how data and operations should be related. I also like Bob Martin’s refactoring example on generating primes, but it doesn’t have much of the OO feel since the whole thing is really just one method being broken out, with little instance data. Both examples are great for debating techniques and trying out new ideas. A book filled with refactoring puzzles would be a great resource … anyone?

Our team has a weekly Friday morning meeting where we experiment with new techniques and ideas – we have done both of the mentioned refactoring examples by breaking our team into programming pairs and then comparing their solutions at the end. This is a lot of fun and allows the team to experiment with refactoring ideas outside of the deadline crunch. Both examples have worked well in this environment and have helped to spread adoption and familiarity of refactoring techniques across the group. Of course, everyday pairing also helps but sometimes it is nice to refactor it several different ways and then compare approaches (something a client might not be too excited about! :)) – although the learning experience is fantastic.

If you are interested in joining our team where we practice Test Driven Development, Pair Programming and Refactoring daily – we are hiring.

Jonathan Cogley is the CEO and founder of Thycotic Software, a .NET consulting company and ISV in Washington DC. Our product, Secret Server, is an enterprise password management system for teams to secure their passwords. Where do you keep your passwords, or do you use the same password everywhere?
