Archive

Archive for the ‘Refactoring’ Category

The Dangers of Single Responsibility in Programming

November 9, 2009

October 16th 2009 | David Cooksey

The principle of single responsibility is an important and well-known object-oriented guideline. Systems designed without any consideration for this principle often result in the God-object anti-pattern, with predictably unfortunate results. However, taking single responsibility too far also results in code that is difficult to read and maintain.

Let’s take a look at a possible system built for a company that sells a wide variety of products. This system was originally built around a Product object, and over the years the product object has continued to grow and acquire responsibility as new features were added to the system. The IProduct interface now looks like this:

  public interface IProduct
  {
    int Id { get; set; }
    string Name { get; set; }
    int TypeId { get; set; }
    DateTime AddedDate { get; set; }
    decimal Cost { get; set; }
    decimal BasePrice { get; set; }
    decimal SalesPrice();
    decimal Price(int customerId);
    decimal Discount(int customerId);
    decimal GetShippingCharge(int customerId, string stateCode);
    int GetMinQuantity(int customerId);
    int GetNumAvailable(int customerId);
  }

This company has built its business model on special services for frequent customers, including special discounts, shipping rates, lower base prices, etc. Some customers receive lower prices on specific products in return for promises to order at least a certain quantity each time. The net result is that the full cost depends on who and where the customer is as much as it depends on the product itself.

Now imagine that a developer on this project has read about the fascinating new concept of single responsibility and decides that IProduct is responsible for too much. In fact, it’s responsible for everything. So he creates a PriceProvider that contains a GetPrice method as shown below, moving the logic of the method directly from the Product class to the PriceProvider.

    public decimal GetPrice(IProduct product, int customerId)
    {
      decimal price = product.BasePrice;
      ICustomer customer = GetCustomer(customerId);
      if (customer.GoldLevelCustomer)
      {
        price = price * (1 - GetGoldLevelDiscount());
      }
      if (ProductIsOnSale() && !FixedDiscountAgreementExists(customer, product))
      {
        decimal salePrice = product.SalesPrice();
        if (salePrice < price)
        {
          price = salePrice;
        }
      }

      return price;
    }

So far, so good. The logic is adequately complex and involves enough dependencies that it should probably exist in its own class. Initially, our developer is happy. However, as he continues to look at this method, he decides that it is doing a lot more than it should. After all, any number of business logic changes could make this class change, and a class should have only one reason to change, right? So he rolls up his sleeves and gets to work, eventually producing the following:

    public decimal GetPrice(IProduct product, int customerId)
    {
      decimal price = product.BasePrice;
      ICustomer customer = GetCustomer(customerId);
      if (goldLevelDeterminator.IsGoldLevelCustomer(customer))
      {
        price = price * (1 - goldLevelDiscountProvider.GetDiscount(product));
      }
      if (saleProvider.IsOnSale(product) && !fixedDiscountAgreementProvider.AgreementExists(customer, product))
      {
        decimal salePrice = product.SalesPrice();
        if (useSalesPriceInsteadOfPriceDeterminator.UseSalesPrice(price, salePrice))
        {
          price = salePrice;
        }
      }
      return price;
    }

The goldLevelDiscountProvider, saleProvider, and fixedDiscountAgreementProvider probably refer to their own tables, given the code structure shown, so it makes sense to split them out. However, the goldLevelDeterminator simply reads the GoldLevelCustomer property on the customer and returns it, and the useSalesPriceInsteadOfPriceDeterminator merely compares the sales price to the price.

These latter two changes are examples of implementing the principle of single responsibility at a level of granularity below that of the business requirements. It is possible that the company’s needs will change in such a way that these classes will become necessary, but they do not need their own class unless and until their complexity warrants it. The creation of two determinator classes here implies that significant logic is involved in determining whether a customer is a gold level customer, or whether the sales price should be used.

Unnecessary classes like the two mentioned above cause a number of problems. Firstly, they increase the complexity of the code. A developer reading through this class for the first time will need to open both determinators up and mentally piece their contents back into the main function instead of simply reading them. Secondly, their existence as independent entities implies that they are reusable. However, their creation was based solely on a desire to separate method calls into their own class, not a thorough investigation of how the class meshes with the rest of the classes in the project. Quite often, classes like these are not reused; in fact, their functionality is duplicated in other tiny classes used in other places.

In short, when you’re designing or refactoring systems, plan your class structure around business needs, not logical decision points. A method with two if statements should not automatically be considered as having two reasons to change. After all, those two if statements may represent a single business concept.

David Cooksey is a Senior .NET Consultant at Thycotic Software, an agile software services and product development company based in Washington DC. Secret Server is our flagship password management software product.

The Template Pattern: A Benevolent Dictator

July 22, 2009

July 22nd 2009 | Ben Yoder

The Template Pattern is unique because of the level of control maintained at the top level. An abstract class controls the steps and the possible default implementations of the algorithm, but it’s kind enough to let its subclasses modify the behavior in pre-defined methods.

Similar design patterns, and specifically the Strategy pattern, prescribe the encapsulation of individual algorithms and logic into single classes that can be called independently.

The Template Pattern is useful for avoiding code duplication and keeping code maintainable. When you copy and paste the same—or very similar—logic across code, you should encapsulate that code to prevent drift. This is when a benevolent dictator class helps clean up your code.

Drift may occur due to a change in requirements. Imagine you are working on an application that has a requirement to create an audit record whenever a user edits information on a form. During version 1.0, you created an AuditLogger class that simply writes a record to the database.

public class AuditLogger
{
    public void InsertAuditRecord(){...}
}

However, for version 2.0 you have this requirement: whenever a regular user edits information on certain forms, an email should be sent to a system admin. Additionally, system admins, due to their higher level of access, require separate security audit records to be created elsewhere. As a quick fix, you could add methods called EmailNotification() and InsertAdminAuditRecord() to AuditLogger, which could be called from the Save() method on the forms depending on the user type.

But after that’s been wrapped up, requirements change and a new power user type is added to the system. This user type requires a single audit record, but this time there is no need to send an email notifying administrators. You could create a mess on all your forms by adding more methods to AuditLogger and making decisions on each form, or you could encapsulate what differs in the auditing logic per user type, recognizing that requirements may change again.

In this case, AuditLogger currently looks like:

public class AuditLogger
{
    public void InsertAuditRecord(){...}
    public void InsertAdminAuditRecord(){...}
    public void EmailNotification(){...}
}

In refactoring this to a Template Pattern, you’ll notice that InsertAuditRecord() and InsertAdminAuditRecord() are essentially the same logical step. By default, users should write a standard audit record, but administrators should write a special audit record. So in creating your Template class, you should define just a single virtual InsertAuditRecord() method that inserts a standard record and a virtual Notify() method that sends emails by default.

public abstract class AuditLogger
{
    public void Audit()
    {
        InsertAuditRecord();
        Notify();
    }

    public virtual void InsertAuditRecord()
    {
        Console.WriteLine("Auditing User...");
    }

    public virtual void Notify()
    {
        Console.WriteLine("Emailing Admin...");
    }
}

Next, create concrete classes to define specific implementations of the methods where needed. Your standard AuditLogger will simply be a concrete class that extends AuditLogger with the default implementations, but your Admin and PowerUser loggers will override separate methods.

public class StandardAuditLogger : AuditLogger
{
}

public class AdminAuditLogger : AuditLogger
{
    public override void InsertAuditRecord()
    {
        Console.WriteLine("Auditing Admin User...");
    }

    public override void Notify()
    {
        return;
    }
}

public class PowerUserAuditLogger : AuditLogger
{
    public override void Notify()
    {
        return;
    }
}

The advantage of this is that as requirements change, your specific logic stays encapsulated, but you can continue to define default implementations. So if everyone except admins needs to send email notifications, you could pull the Notify() override out of the PowerUserAuditLogger and be done. And if another step were added to the audit process, a default method could be defined and called in the template without having to touch anything else.
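For instance, suppose a new requirement said every audit must also archive its record. Archive() below is an invented step for illustration, not something from the original requirements; only the template changes, and every existing subclass picks up the default:

```csharp
using System;

public abstract class AuditLogger
{
    public void Audit()
    {
        InsertAuditRecord();
        Notify();
        Archive(); // new step slotted into the template; subclasses are untouched
    }

    public virtual void InsertAuditRecord()
    {
        Console.WriteLine("Auditing User...");
    }

    public virtual void Notify()
    {
        Console.WriteLine("Emailing Admin...");
    }

    // Default implementation that every existing logger inherits automatically.
    public virtual void Archive()
    {
        Console.WriteLine("Archiving audit record...");
    }
}

public class StandardAuditLogger : AuditLogger
{
}
```

AdminAuditLogger and PowerUserAuditLogger would compile and run unchanged; they simply inherit the new default step.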

The Template pattern becomes less useful the more the algorithms diverge. If you find yourself overriding every single method of the abstract Template class and adding in a lot of hook methods to control the flow from the subclasses, then the Template Pattern may not be your best choice. But if it looks like your code could use some iron-fisted governing, let the Template call all the shots—and see how much simpler maintenance becomes.

Ben Yoder is a Senior .NET Consultant at Thycotic Software, an agile software consulting and product development company based in Washington DC. Secret Server is our flagship password management software product.

Refactoring Code: A Programmer's Challenge Part 2

April 9, 2009

April 9th 2009 | Kevin Jones

In a previous blog post, Refactoring Code: A Programmer's Challenge Part 1, we went over refactoring basics and took a not-so-easily-tested piece of code and made it easy to test.

Our final bit of code was this:

public class DisplayPanelDecider
{
    public bool ShouldDisplayPanelOnPage(string pageName)
    {
        string[] pagesToHideElement = new[]
        {
            "/index.aspx",
            "/sitemap.aspx",
            "/404.aspx",
            "/status.aspx",
            "/online_help.aspx",
        };
        return !Array.Exists(pagesToHideElement, s => s == pageName);
    }
}

The objective of the code is clear: “Given the name of a page, should I display a panel?”

Furthermore, the code is compact and does no more or less than it should – so we can test it easily.

Some of our tests would look like this:

[TestClass]
public class DisplayPanelDeciderTests
{
    [TestMethod]
    public void ShouldDisplayThePanelIfThePageIsNotInTheExclusionList()
    {
        DisplayPanelDecider decider = new DisplayPanelDecider();
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/foobar.aspx"));
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/main.aspx"));
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/blog.aspx"));
    }

    [TestMethod]
    public void ShouldNotDisplayThePanelIfThePageIsInTheExclusionList()
    {
        DisplayPanelDecider decider = new DisplayPanelDecider();
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/index.aspx"));
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/sitemap.aspx"));
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/status.aspx"));
    }
}

An interesting side note: Did you notice that the names of our tests start with the word “should”? It’s such a simple thing, but test names are important. One should be able to figure out the purpose of a test by reading its name. Using the prefix “should” forces you to think about the test name.

But as we know, software requirements grow and change. What’s happened to our software developer a few months down the line? (Put on your pretend cap!)

Well, at this point, his list of pages has grown considerably. Rather than the handful we have in our code, we now need to hide the panel for many pages. As an additional requirement, it now needs to be configurable too, so compiled code is not the best solution.

A natural place to put configuration for now is in the appSettings section of the web.config file, and for simplicity's sake we'll separate our pages with a semicolon, so they'll look like this:

<configuration>
    <appSettings>
        <add key="PagesToHidePanel"
             value="/index.aspx;/map.aspx;/404.aspx;/status.aspx;/online_help.aspx"/>
    </appSettings>
</configuration>

Now we need a class that is responsible for retrieving these page names and parsing them. Rather than throwing that code into our existing class, we'll introduce a new dependency.

The implementation I came up with looks like this:

public interface IDisplayPanelExclusionProvider
{
    string[] GetPageNames();
}

public class DisplayPanelExclusionProvider : IDisplayPanelExclusionProvider
{
    public string[] GetPageNames()
    {
        string unparsedContent =
            ConfigurationManager.AppSettings["PagesToHidePanel"];
        string[] pageNames = unparsedContent.Split(';');
        return pageNames;
    }
}

Notice I created an interface. This will play a key role later on. The next step is getting our two classes, DisplayPanelExclusionProvider and DisplayPanelDecider, talking to each other.

The constructor is a simple and useful approach to dependency injection. Our goal here is to get the DisplayPanelDecider to ask the DisplayPanelExclusionProvider, “On which pages should I not display this panel?”

We'll modify the DisplayPanelDecider to take in an IDisplayPanelExclusionProvider. Again, notice I used the interface in this example. So our class now looks like this:

public class DisplayPanelDecider
{
    private readonly IDisplayPanelExclusionProvider _displayPanelExclusionProvider;

    public DisplayPanelDecider(IDisplayPanelExclusionProvider displayPanelExclusionProvider)
    {
        _displayPanelExclusionProvider = displayPanelExclusionProvider;
    }

    public bool ShouldDisplayPanelOnPage(string pageName)
    {
        string[] pagesToHideElement = _displayPanelExclusionProvider.GetPageNames();
        return !Array.Exists(pagesToHideElement, s => s == pageName);
    }
}

Our DisplayPanelDecider now asks the IDisplayPanelExclusionProvider for a list of pages. However, by introducing our constructor, we broke our tests! The next step is to start getting our tests working again.

Important note: even though our class examples seem contrived, we are keeping decision-making logic separate. This is important for testability as well as for future maintenance. The key part here is single dependency, and that applies to our tests as well. We don't want our tests testing multiple classes at once: keep them separate.

This is where the interface saves us: we can inject our own classes for the purpose of testing.

[TestClass]
public class DisplayPanelDeciderTests
{
    private class MockDisplayPanelExlusionProvider : IDisplayPanelExclusionProvider
    {
        public string[] GetPageNames()
        {
            return new[]
                   {
                       "/index.aspx",
                       "/map.aspx",
                       "/404.aspx",
                       "/status.aspx",
                       "/online_help.aspx"
                   };
        }
    }

    [TestMethod]
    public void ShouldDisplayThePanelIfThePageIsNotInTheExclusionList()
    {
        DisplayPanelDecider decider = new DisplayPanelDecider(new MockDisplayPanelExlusionProvider());
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/foobar.aspx"));
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/main.aspx"));
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/blog.aspx"));
    }

    [TestMethod]
    public void ShouldNotDisplayThePanelIfThePageIsInTheExclusionList()
    {
        DisplayPanelDecider decider = new DisplayPanelDecider(new MockDisplayPanelExlusionProvider());
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/index.aspx"));
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/map.aspx"));
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/status.aspx"));
    }
}

Note that we are injecting a fake class that allows us to implement a hand rolled mock. The single purpose of this test class is to ensure that the DisplayPanelDecider decision logic works. It isn’t the DisplayPanelDecider’s responsibility to know the real list of pages from our configuration. That should be tested somewhere else. In effect, we are testing against dummy data. When we want our class to use the real provider, we just create an instance of that and hand it in. In fact, with constructor chaining we can implement a default constructor that uses the real provider:

public DisplayPanelDecider() : this(new DisplayPanelExclusionProvider())
{
}

We have the option of using the DisplayPanelDecider with the real provider, or handing in our own – which we do for testing.

Now the thought on most people's minds at this point is most likely “This seems like overkill.” If you stand back and look at it for all it is now, I might agree. However, maybe the programmer we're helping finds himself with more challenging requirements. The other part worth a closer look is the hand-rolled mock.

Rewinding a bit, in our new tests we saw a dummy class that provided fake data for the sake of testing. There are mock frameworks that remove the need for these dummy classes, but we’ll examine that in a future blog post. If you’re up for some homework, check out Rhino Mocks.

I pose this question to you: What advantages and disadvantages do you see with our approach? Is it worth the effort? I’d love to know what you think.

Kevin Jones is a Senior .NET developer and Product Team Leader at Thycotic Software Ltd. Thycotic is recognized for providing Agile TDD Training and Agile .NET Consulting Services, and its flagship password management software Secret Server.

My two and a half cents – pitfalls with rounding

March 19, 2009

March 19th 2009 | Tucker Croft

True or False: these values are the same?

1. 61.64598913 saved to database column [Age] [decimal](10, 2)

2. Math.Round(61.64598913, 2)

What does (1 == 2) return?

False. Bryant and I came across an interesting issue when comparing two values that appeared identical, but which the code insisted were different. Our research led us into the internals of .NET rounding to solve an infrequent but legitimate concern.

Scenario

The above question became my issue. I needed to know why a portion of the system would compare a value in the database to the same newly calculated value and judge the difference significant.

The scenario, using Age as the example:

  • CalculateAge() – a value is calculated with up to 15 decimal places
  • AgeInDB – the value is stored in a database column with precision 2: [Age] [decimal](10, 2)
  • AgeChangeDetector – determines whether the Age has changed by comparing CalculateAge() to AgeInDB
    • If the Age has changed, it flags the value for review
    • If the Age hasn't changed, the value doesn't need to be reviewed

After several months and thousands of values being compared correctly,  I found a single value being repeatedly marked for review even though the raw Age was not changing.

  • Calculated Age(): 61.64598913
  • AgeInDB: 61.65

At first glance, this seemed to be a simple failure to compare the values at the same precision. Since the database is at precision 2 and the newly calculated value has up to 15 decimal places, the same value would appear different if compared directly. But looking at the comparison, the code was:

  • Math.Round(calculatedAge, 2) == AgeInDB

With the Math.Round function, both values were being compared at the same number of decimal places. I checked the AgeChangeDetector tests, and all test cases passed with a variety of decimal number combinations. Curious, I plugged the mysterious values into the AgeChangeDetector class and saw my assertion fail; the class detected a difference, with the AgeInDB at 61.65 and the rounded calculation at 61.64. Seeing the 61.64, I had isolated the problem to the Math.Round function.

Banker’s Rounding

Jumping on Google, I searched for “problems with Math.Round” and filtered down to a forum about the internal algorithm. Rounding is, at its core, a lossy method where a number right in the middle (.5) must be given a bias and go up or down. From grade school, everyone is familiar with common rounding, which always rounds .5 up. When saving the decimal value to precision 2, the database used this method, hence the 61.65. But Math.Round uses the algorithm known as Banker's Rounding. Banker's Rounding strives to even things out by rounding up as often as it rounds down. In a transaction, one party would gain the half penny and one party would lose it. To achieve an unbiased treatment of the middle number, bankers round to the closest even number. Since having an even number before the .5 is as statistically likely as having an odd one, the bias evens out over many transactions. My problem boiled down to the fact that .NET 1.1's Math.Round always uses Banker's Rounding while my database uses the common method.
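The two methods only disagree at exact midpoints, which is why the bug surfaced so rarely. A quick illustration (the 61.645 value is a contrived midpoint for the demo, not the post's exact number):

```csharp
using System;

class RoundingDemo
{
    static void Main()
    {
        // Math.Round defaults to banker's rounding: midpoints go to the nearest EVEN digit.
        Console.WriteLine(Math.Round(1.5));        // 2
        Console.WriteLine(Math.Round(2.5));        // 2, where common rounding would give 3
        Console.WriteLine(Math.Round(61.645m, 2)); // 61.64, where common rounding stores 61.65
    }
}
```

On .NET 2.0 and later, Math.Round(2.5, MidpointRounding.AwayFromZero) gives the common-rounding result of 3.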

Solution

Knowing the problem may be half the battle, but none of my potential solutions seemed easy. One option was to round every number in code before saving it to the database. This would ensure that all values going to (and coming from) the database would be uniform with my calculations. But that would be a high-impact change and would add an extra level of processing on every decimal value persisted to the database (and it might need to expand beyond just the AgeInDB column). Also, I had to keep in mind that the database had already saved all its numbers, and there would be no way to know whether a number had been rounded up because it landed on .5 or because it landed above it. The other option was that only the comparison really needed to change, so I researched a means of accomplishing common rounding in the AgeChangeDetector.

On .NET 2.0, Math.Round takes an optional parameter that sets the rounding algorithm, but I am tied to .NET 1.1 in the current release. After receiving input from forums, I created a new method to round numbers using the built-in rounding in the ToString method. My function looked like this:

public decimal CustomMathRound(decimal number, int places)
{
    string decimalPad = "#." + string.Empty.PadRight(places, '#');
    string nums = number.ToString(decimalPad);
    return Convert.ToDecimal(nums);
}

I replaced the call to Math.Round with my custom function and the new test passed without disturbing any other functionality.  Feeling confident that this was a workable fix, I replaced all other comparisons between a database value and a calculated value with my custom function to avoid another change conflict in the future.

Follow-up

Rounding is a common task for a developer, but it should be considered thoroughly to ensure that different methods are not in play on each side of a comparison. The thousands of correct values before this issue prove that comparing numbers to a given significance can catch you off guard. This problem is such an edge case that even in-depth test cases may not expose it. It is a situation the developer must be aware of beforehand, and must specifically test with a number impacted by banker's rounding.


Tucker Croft is a Senior .NET developer at Thycotic Software Ltd. Thycotic is recognized for providing Agile TDD Training and Agile .NET Consulting Services, and its flagship password management software Secret Server.

The Power of Yield Return

February 19, 2009

February 19th 2009 | David Cooksey

While working on a project recently I ran into a problem that was very difficult to track down.  We have a good number of checks that validate business rules on the main business object within the system.  Sometimes these checks need to run on related business objects as well.  The problem comes into play at this point, because the checks at the time of their creation do not have a direct reference to the business object they will be run on.  Since the checks have dependencies that vary with their intended target, the convention is to push an Id onto a static stack, create the checks, run the checks, then pop the Id off of the stack.

Unfortunately, it is very easy to forget to update the stack, or to push but not pop. This results in some checks being set up and then run using the wrong dependencies, which causes all kinds of obscure behavior down the road.

Enter Yield Return.
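A minimal sketch of such a wrapper (the BusinessObject, StaticStack, and Wrap names are assumptions, since the original listing is not shown) could look like this:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical business object; the real system's type is not shown in the post.
public class BusinessObject
{
    public int Id { get; set; }
}

// Stand-in for the static stack the checks read their current Id from.
public static class StaticStack
{
    private static readonly Stack<int> Ids = new Stack<int>();
    public static void Push(int id) { Ids.Push(id); }
    public static void Pop() { Ids.Pop(); }
    public static int Current { get { return Ids.Peek(); } }
}

public static class StaticStackScope
{
    // Pushes each object's Id before yielding it and pops afterwards.
    // The finally block runs even if the caller's loop body throws,
    // because foreach disposes the iterator, which executes the finally.
    public static IEnumerable<BusinessObject> Wrap(IEnumerable<BusinessObject> items)
    {
        foreach (BusinessObject item in items)
        {
            StaticStack.Push(item.Id);
            try
            {
                yield return item;
            }
            finally
            {
                StaticStack.Pop();
            }
        }
    }
}
```

Because the yield return sits inside a try/finally, disposing the iterator (which foreach always does) runs the finally and pops the Id, even when the loop body throws.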

This is a simple wrapper that guarantees the StaticStack is updated correctly.  Even if the calling code throws an exception, the Pop() function will still be called.  Note that the real code uses a context-specific stack; the sample code presented here does not handle multi-threaded situations.

Any foreach statement that iterates over a collection of business objects can then be updated as follows.
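For example, a loop that previously had to push and pop the stack by hand shrinks to the following (StaticStackScope.Wrap and checkRunner.RunChecks are hypothetical names for the wrapper and the check execution):

```csharp
// The wrapper owns the push/pop discipline; the loop body only runs the checks.
foreach (BusinessObject related in StaticStackScope.Wrap(relatedObjects))
{
    checkRunner.RunChecks(related); // related.Id is on the stack for the duration
}
```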

What I find most interesting about this pattern is that it allows the called function to run any arbitrary setup or teardown code per object it returns, in sync with the caller's processing of the object.


David Cooksey is a Senior .NET developer at Thycotic Software Ltd.

Code Camps this weekend!

April 11, 2008

There are two code camps this weekend:

I will be speaking at the Pittsburgh Code Camp on Refactoring – a topic that is very dear to me.

Register now and come along to talk code.

 

We are hiring!  Do you want to write beautiful code in a Test Driven, Refactored, Agile .NET software company in the heart of Washington DC and work on cool products?  Take the code test and send your resume along with why you want to join Thycotic to tddjobs@thycotic.com.

Don't miss the Nova Code Camp South this weekend!

March 28, 2008

The NoVa CodeCamp South v1 will be held on March 29th 2008 in Woodbridge VA.  The speaker schedule has been posted here.

 

I am presenting two sessions:

9:00-10:15: Refactoring in C#

1:00-2:15: Web Application Testing in WatiN

Register now!

 

We are hiring!  Do you want to write beautiful code in a Test Driven, Refactored, Agile .NET software company in the heart of Washington DC and work on cool products?  Take the code test and send your resume along with why you want to join Thycotic to tddjobs@thycotic.com.