Archive for the ‘Extreme Programming’ Category

The Bridge Pattern – When a single class hierarchy is not enough

September 10th 2009 | David Cooksey

The bridge pattern allows both the implementation and the abstraction of a programming scenario to vary. Let’s take a look at a specific use case in order to understand the benefit the bridge pattern provides.

Imagine we are writing a top-down scrolling action game. The player will be able to choose from a variety of vehicles and will be up against a maze full of passive and active obstacles. In order to increase replayability, the vehicles available will include tanks, helicopters, and motorcycles, with expansions planned to include additional vehicles. Tanks will have a cannon and a machine gun, helicopters will have missiles and a machine gun, while the motorcycle will allow the player to throw grenades and wield a samurai sword. Throughout the course of the game, upgrades will alter the abilities of each weapon (better missiles, more ammo for the machine gun, swords of awesome lethality, etc).

So, how do we plan our class structure in such a way that we can treat each vehicle the same at the high level, while allowing for flexibility in both the selected vehicle and the weapons it is currently using?

Ideally, our top-level code should let us call something like playerVehicle.ShootWeapon1() without concerning ourselves with the specific vehicle or weapon the player is using.

The bridge pattern gives us the flexibility we need.

First we create a vehicle base class.

public abstract class Vehicle
{
    public IWeapon weapon1;
    public IWeapon weapon2;

    public abstract void Move();

    public void ShootWeapon1()
    {
        weapon1.Fire();
    }

    public void ShootWeapon2()
    {
        weapon2.Fire();
    }
}

By leveraging the “has-a” relationship between a vehicle and its weapons, we allow the weapons to vary. The exposed ShootWeapon1 and ShootWeapon2 methods do the same thing, in this case, as calling weapon1.Fire() or weapon2.Fire() on the vehicle directly. By making Vehicle an abstract class we leave all details of movement up to its concrete implementations.
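
The weapon side of the bridge is not shown here, so as a minimal sketch, this is what IWeapon and a couple of the concrete weapons used below might look like. Only the Fire() method is implied by the Vehicle code above; everything else is an assumption for illustration.

public interface IWeapon
{
    void Fire();
}

public class SimpleCannon : IWeapon
{
    public void Fire()
    {
        // Fire a slow, high-damage cannon shell (illustrative only).
    }
}

public class BasicMachineGun : IWeapon
{
    public void Fire()
    {
        // Fire a burst of fast, low-damage rounds (illustrative only).
    }
}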

public class Tank : Vehicle
{
    public Tank()
    {
        weapon1 = new SimpleCannon();
        weapon2 = new BasicMachineGun();
    }

    public override void Move()
    {
        // Check for physical obstacle, if no obstacle move the tank.
    }
}

public class Helicopter : Vehicle
{
    public Helicopter()
    {
        weapon1 = new AirToAirMissile();
        weapon2 = new BasicMachineGun();
    }

    public override void Move()
    {
        // Move helicopter
    }
}

As a result, both the Tank and Helicopter listed above will work as the vehicle in the following code sample.

Vehicle vehicle = new Tank();

vehicle.Move();
vehicle.ShootWeapon1();
vehicle.ShootWeapon2();

The bridge pattern allows us to change vehicles and weapons independently. This concept is extensible to as many degrees as are necessary to allow independent variation. For example, a futuristic update to the game might add varying kinds of passive or reactive shields to the vehicles. No problem, just create an IShield interface and add it to the Vehicle abstract class.
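
As a rough sketch of that extension, assuming shields only need a single method to absorb incoming damage (all names below are made up for illustration), the change might look like this:

// Hypothetical sketch: shields become a second bridge that varies
// independently of both vehicles and weapons.
public interface IShield
{
    void AbsorbDamage(int amount);
}

public abstract class Vehicle
{
    public IWeapon weapon1;
    public IWeapon weapon2;
    public IShield shield;   // the new "has-a" relationship

    public abstract void Move();

    public void TakeHit(int amount)
    {
        shield.AbsorbDamage(amount);
    }

    // ShootWeapon1() and ShootWeapon2() as shown earlier...
}

Concrete vehicles would assign a shield in their constructors just as they assign weapons, and new shield types can then be added without touching the vehicle hierarchy.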

Essentially, the bridge pattern formalizes the recognition that two concepts exist in a “has-a” relationship and need to vary independently. As such, it provides the required flexibility with no drawback other than a small increase in the complexity of the class hierarchy.

David Cooksey is a Senior .NET Consultant at Thycotic Software, an agile software services and product development company based in Washington DC. Secret Server is our flagship password management software product.

The Dangers of Single Responsibility in Programming

October 16th 2009 | David Cooksey

The principle of single responsibility is an important and well-known object-oriented guideline. Systems designed without any consideration for this principle often result in the God-object anti-pattern, with predictably unfortunate results. However, taking single responsibility too far also results in code that is difficult to read and maintain.

Let’s take a look at a possible system built for a company that sells a wide variety of products. This system was originally built around a Product object, and over the years the product object has continued to grow and acquire responsibility as new features were added to the system. The IProduct interface now looks like this:

  public interface IProduct
  {
    int Id { get; set; }
    string Name { get; set; }
    int TypeId { get; set; }
    DateTime AddedDate { get; set; }
    decimal Cost { get; set; }
    decimal BasePrice { get; set; }
    decimal SalesPrice();
    decimal Price(int customerId);
    decimal Discount(int customerId);
    decimal GetShippingCharge(int customerId, string stateCode);
    int GetMinQuantity(int customerId);
    int GetNumAvailable(int customerId);
  }

This company has built its business model on special services for frequent customers, including special discounts, shipping rates, lower base prices, etc. Some customers receive lower prices on specific products in return for promises to order at least a certain quantity each time. The net result is that the full cost depends on who and where the customer is as much as it depends on the product itself.

Now imagine that a developer on this project has read about the fascinating new concept of single responsibility and decides that IProduct is responsible for too much. In fact, it’s responsible for everything. So he creates a PriceProvider that contains a GetPrice method as shown below, moving the logic of the method directly from the Product class to the PriceProvider.

public decimal GetPrice(IProduct product, int customerId)
{
    decimal price = product.BasePrice;
    ICustomer customer = GetCustomer(customerId);

    // Gold-level customers get a percentage discount off the base price.
    if (customer.GoldLevelCustomer)
    {
        price = price * (1 - GetGoldLevelDiscount());
    }

    // The sale price applies only when no fixed discount agreement exists
    // and it actually beats the price calculated so far.
    if (ProductIsOnSale() && !FixedDiscountAgreementExists(customer, product))
    {
        decimal salePrice = product.SalesPrice();
        if (salePrice < price)
        {
            price = salePrice;
        }
    }

    return price;
}

So far, so good. The logic is adequately complex and involves enough dependencies that it should probably exist in its own class. Initially, our developer is happy. However, as he continues to look at this method, he decides that it is doing a lot more than it should. After all, any number of business logic changes could make this class change, and a class should have only one reason to change, right? So he rolls up his sleeves and gets to work, eventually producing the following:

public decimal GetPrice(IProduct product, int customerId)
{
    decimal price = product.BasePrice;
    ICustomer customer = GetCustomer(customerId);
    if (goldLevelDeterminator.IsGoldLevelCustomer(customer))
    {
        price = price * (1 - goldLevelDiscountProvider.GetDiscount(product));
    }
    if (saleProvider.IsOnSale(product) && !fixedDiscountAgreementProvider.AgreementExists(customer, product))
    {
        decimal salePrice = product.SalesPrice();
        if (useSalesPriceInsteadOfPriceDeterminator.UseSalesPrice(price, salePrice))
        {
            price = salePrice;
        }
    }
    return price;
}

The goldLevelDiscountProvider, saleProvider, and fixedDiscountAgreementProvider probably refer to their own tables, given the code structure shown, so it makes sense to split them out. However, the goldLevelDeterminator is literally calling the GoldLevelCustomer property on customer and returning it, and the useSalesPriceInsteadOfPriceDeterminator is simply comparing the sales price to the price.
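
To make the over-granularity concrete, the two determinators would amount to little more than the sketch below (the class and method names mirror the fields used above; the bodies are inferred from the description):

// Each class wraps a single expression that was perfectly readable inline.
public class GoldLevelDeterminator
{
    public bool IsGoldLevelCustomer(ICustomer customer)
    {
        return customer.GoldLevelCustomer;
    }
}

public class UseSalesPriceInsteadOfPriceDeterminator
{
    public bool UseSalesPrice(decimal price, decimal salePrice)
    {
        return salePrice < price;
    }
}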

These latter two changes are examples of implementing the principle of single responsibility at a level of granularity below that of the business requirements. It is possible that the company’s needs will change in such a way that these classes become necessary, but this logic does not need classes of its own unless and until its complexity warrants it. The creation of two determinator classes here implies that significant logic is involved in determining whether a customer is a gold-level customer, or whether the sales price should be used.

Unnecessary classes like the two mentioned above cause a number of problems. Firstly, they increase the complexity of the code. A developer reading through this class for the first time will need to open both determinators and mentally piece their contents back into the main method instead of simply reading it. Secondly, their existence as independent entities implies that they are reusable. However, they were created solely from a desire to separate method calls into their own classes, not from a thorough investigation of how those classes mesh with the rest of the project. Quite often, classes like these are never reused; in fact, their functionality ends up duplicated in other tiny classes used in other places.

In short, when you’re designing or refactoring systems, plan your class structure around business needs, not logical decision points. A method with two if statements should not automatically be considered as having two reasons to change. After all, those two if statements may represent a single business concept.

David Cooksey is a Senior .NET Consultant at Thycotic Software, an agile software services and product development company based in Washington DC. Secret Server is our flagship password management software product.

Bringing Plausible Deniability to Development: the Strategy Pattern

July 30th 2009 | David Cooksey

If the template pattern is a benevolent dictator, the strategy pattern is a politician concerned with plausible deniability: don’t tell me the details, just do it. The strategy is defined solely in terms of its inputs and outputs.

Let’s say you are writing a program that cuts checks for employees. The code that handles the physical printing of the checks is complete; all that remains is determining how much to print on the check for each person. The company employs both salaried and hourly employees, in addition to salesmen whose pay is based on commission. The company also sends out holiday checks to all employees at various times of the year, based on how the company has performed recently.

You want to be flexible, so your payment generator is designed as a Windows service that periodically polls a PaymentRequest table. It then examines the type of each payment to determine how the amount should be calculated. The next step is to write and organize implementations of the different ways to calculate payments.

The design should be flexible enough that it makes adding new payment types as simple as possible, while also providing as much flexibility as possible with respect to implementation details. You don’t want to mandate that a particular step in calculation occur, because you don’t know what future payment types might require.

This is where the strategy pattern comes into play. You can use a simple interface that defines your payment strategy to streamline your code and cut down on the number of decision points. All you really need is a block of code that looks at the payment record and decides what strategy to use. The other code should be the same regardless of the payment type. Here is some pseudo code for what this would look like:

public interface IPaymentStrategy
{
    double CalculateAmount(IPaymentRequest paymentRequest);
}

public class PaymentGenerator
{
    public void LookForPaymentRequests()
    {
        IPaymentRequest[] paymentRequests = GetNewPaymentRequests();
        foreach (IPaymentRequest paymentRequest in paymentRequests)
        {
            Process(paymentRequest);
        }
    }

    private void Process(IPaymentRequest request)
    {
        IPaymentStrategy strategy;
        switch (request.PaymentType)
        {
            case 1:
                strategy = new HourlyPayment();
                break;
            case 2:
                strategy = new SalariedPayment();
                break;
            case 3:
                strategy = new CommissionPayment();
                break;
            case 4:
                strategy = new HolidayPayment();
                break;
            default:
                strategy = new FlatPayment();
                break;
        }
        double amount = strategy.CalculateAmount(request);
        WriteCheck(amount);
    }
}

IPaymentStrategy defines a simple interface that accepts an IPaymentRequest and returns the amount calculated. The PaymentGenerator pulls in new payment requests from a table. It picks the appropriate payment calculation method based on the payment type Id and uses it to generate the correct payment amount.

If a new payment type is added, it requires no code restructuring other than the creation of a new class that implements IPaymentStrategy, a new case block, and a new row in the PaymentType table.
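
For illustration, a concrete strategy might look something like the sketch below. The HoursWorked and HourlyRate properties on IPaymentRequest are assumptions made up for the example.

// Hypothetical concrete strategy; only the IPaymentStrategy signature
// comes from the code above, the request properties are assumed.
public class HourlyPayment : IPaymentStrategy
{
    public double CalculateAmount(IPaymentRequest paymentRequest)
    {
        // Pay straight time for the hours recorded on the request.
        return paymentRequest.HoursWorked * paymentRequest.HourlyRate;
    }
}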

This code structure places no restrictions at all on the implementation details of the individual payment types. If a lot of code is shared among the payment types, inheritance, a common dependency, or any other method can be used to reduce or eliminate code duplication entirely at the discretion of the programmer.

This makes it easier to ignore the gritty details of payment calculation which a more strict pattern such as the Template Pattern would force you to consider.

Flexible

Maintainable

Plausibly Deniable

The Strategy Pattern.

David Cooksey is a Senior .NET Consultant at Thycotic Software, an agile software services and product development company based in Washington DC. Secret Server is our flagship password management software product.

The Template Pattern: A Benevolent Dictator

July 22nd 2009 | Ben Yoder

The Template Pattern is unique because of the level of control maintained at the top level. An abstract class controls the steps and the possible default implementations of the algorithm, but it’s kind enough to let its subclasses modify the behavior in pre-defined methods.

Similar design patterns, and specifically the Strategy pattern, prescribe the encapsulation of individual algorithms and logic into single classes that can be called independently.

The Template Pattern is useful for avoiding code duplication and keeping code maintainable. When you copy and paste the same—or very similar—logic across code you should encapsulate that code to prevent drift. This is when a benevolent dictator class helps clean up your code.

Drift may occur due to a change in requirements. Imagine you are working on an application that has a requirement to create an audit record whenever a user edits information on a form. During version 1.0, you created an AuditLogger class that simply writes a record to the database.

public class AuditLogger
{
    public void InsertAuditRecord(){...}
}

However, for version 2.0 you have this requirement: whenever a regular user edits information on certain forms, an email should be sent to a system admin. Additionally, system admins, due to their higher level of access, require separate security audit records to be created elsewhere. As a quick fix, you could add methods called EmailNotification() and InsertAdminAuditRecord() to AuditLogger, to be called from the Save() method on the forms depending on the user type.

But after that’s been wrapped up, requirements change again: a new power user type is added to the system. This user type requires a single audit record, but there is no need to send an email notifying administrators. You could make a mess of all your forms by adding more methods to AuditLogger and making decisions on each form, or you could encapsulate what differs in the auditing logic per user type, recognizing that requirements may change yet again.

In this case, AuditLogger currently looks like:

public class AuditLogger
{
    public void InsertAuditRecord(){...}
    public void InsertAdminAuditRecord(){...}
    public void EmailNotification(){...}
}

In refactoring this to a Template Pattern, you’ll notice that InsertAuditRecord() and InsertAdminAuditRecord() are essentially the same logical step. By default, users should write a standard audit record, but administrators should write a special audit record. So in creating your Template class, you should define just a single virtual InsertAuditRecord() method that inserts a standard record and a virtual Notify() method that sends emails by default.

public abstract class AuditLogger
{
    // The template method: it fixes the sequence of steps,
    // while subclasses may override the individual steps below.
    public void Audit()
    {
        InsertAuditRecord();
        Notify();
    }

    public virtual void InsertAuditRecord()
    {
        Console.WriteLine("Auditing User...");
    }

    public virtual void Notify()
    {
        Console.WriteLine("Emailing Admin...");
    }
}

Next, create concrete subclasses that override the individual methods where needed. Your standard AuditLogger will simply be a concrete class that extends AuditLogger with the default implementations, while your Admin and PowerUser loggers will override the methods they need to change.

public class StandardAuditLogger : AuditLogger
{
}

public class AdminAuditLogger : AuditLogger
{
    public override void InsertAuditRecord()
    {
        Console.WriteLine("Auditing Admin User...");
    }

    public override void Notify()
    {
        return;
    }
}

public class PowerUserAuditLogger : AuditLogger
{
    public override void Notify()
    {
        return;
    }
}

The advantage of this approach is that as requirements change, your specific logic stays encapsulated, while you continue to provide default implementations. If everyone except admins needs to send email notifications, you could pull the Notify() override out of PowerUserAuditLogger and be done. And if another step were added to the audit process, a default method could be defined and called in the template without touching anything else.
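
As a rough usage sketch, assuming some user object exposes its type (the IsAdmin and IsPowerUser checks below are made up for illustration), the calling code only ever deals with the template:

// Hypothetical calling code, e.g. inside a form's Save() method.
AuditLogger logger;
if (user.IsAdmin)
{
    logger = new AdminAuditLogger();
}
else if (user.IsPowerUser)
{
    logger = new PowerUserAuditLogger();
}
else
{
    logger = new StandardAuditLogger();
}

// One call runs the whole audit sequence; the subclass supplies the details.
logger.Audit();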

The Template pattern becomes less useful the more the algorithms diverge. If you find yourself overriding every single method of the abstract Template class and adding in a lot of hook methods to control the flow from the subclasses, then the Template Pattern may not be your best choice. But if it looks like your code could use some iron fisted governing, let the Template call all the shots—and see how much simpler maintenance becomes.

Ben Yoder is a Senior .NET Consultant at Thycotic Software, an agile software consulting and product development company based in Washington DC. Secret Server is our flagship password management software product.

StrictMock vs. DynamicMock: What are You Testing Here Anyway?

July 16th 2009 | Jimmy Bosse

Here at Thycotic, we are TDD enthusiasts and pair developers. Test Driven Development and Pair Programming go together like chocolate and peanut butter, especially when you tackle a brand new piece of functionality and practice some green-field Ping-Pong programming.

For those unfamiliar with the concept of Ping-Pong programming, it’s a fun way to develop a new piece of business logic when you are not sure what the best method of implementation will be.

To start, one of you writes a failing test. The other makes the test pass, and then writes the next failing test (refactoring as needed). You make that test pass and write a new failing test. The goal of the session is to write the smallest amount of code possible, then try to write a test that effectively tests the desired functionality while providing a challenge to your pair. The session is easy at first, but as the tests move toward satisfying the more complicated requirements of the business, the exercise becomes more challenging.

Once your code becomes suitably complex, you’ll start to have multiple classes with dependencies—and with these dependencies come mock tests.

Then you have to decide: StrictMock or DynamicMock?

The problem with Strict Mocks is that you are required to set up your entire mock dependency far beyond the scope of the subject/system under test (SUT).

Take the following example.

[Test]
public virtual void ShouldSubscribeToViewEventWhenConstructed()
{
    MockRepository mocks = new MockRepository();
    IView mockView = mocks.StrictMock<IView>();
    mockView.SomeEvent += delegate { };
    LastCall.Constraints(Is.NotNull());
    mocks.ReplayAll();
    IPresenter presenter = new Presenter(mockView);
    mocks.VerifyAll();
}

To implement a presenter:

public class Presenter : IPresenter
{
    private IView _view;

    public Presenter(IView view)
    {
        _view = view;
        BindToEventsOn(_view);
    }

    private void BindToEventsOn(IView view)
    {
        view.SomeEvent += SomeEventHandler;
    }

    private void SomeEventHandler()
    {
        // Do something…
    }
}

The test is now green because you’ve subscribed to the event. But now that you’ve bound to the view in your constructor, you must always expect a call in your test whenever you mock IView with a StrictMock. This will make your tests verbose and it will be difficult to determine the actual SUT in each test.

Another issue is this: When you use a StrictMock you are essentially telling your pair what to program. Let’s say you and your pair sit down to create a calculator business object that must be able to add numbers together.

Your pair writes the following test:

[Test]
public virtual void ShouldReturnFourWhenGivenThreeAndOne()
{
    int firstNumber = 3;
    int secondNumber = 1;
    int expectedSum = 4;
    MockRepository mocks = new MockRepository();
    IAdder mockIAdder = mocks.StrictMock<IAdder>();
    IAdderFactory mockIAdderFactory = mocks.StrictMock<IAdderFactory>();
    Expect.Call(mockIAdderFactory.Create()).Return(mockIAdder);
    Expect.Call(mockIAdder.Add(firstNumber, secondNumber)).Return(expectedSum);
    mocks.ReplayAll();
    ICalculator calculator = new Calculator(mockIAdderFactory);
    int result = calculator.Add(firstNumber, secondNumber);
    mocks.VerifyAll();
    Assert.AreEqual(expectedSum, result);
}

Well, there’s nothing to think about here, is there? If this is the way you want to write tests, invest some time and write a tool that will parse your test and create your business object for you. The test tells you: “use the factory, create an adder, call the add method on the adder, and return the result.” Where’s the fun in that? What are you actually testing? Does it really matter how the calculator does what it needs to do?

Actually, I think the above test could be a good one if it were written three days into the calculator object—when you were refactoring the different pieces of the calculator into distinct service objects.

But right now, all you need is a calculator that can add two numbers together:

[Test]
public virtual void ShouldReturnFourWhenGivenThreeAndOne()
{
    ICalculator calculator = new Calculator();
    Assert.AreEqual(4, calculator.Add(3, 1));
}

This should be your first failing test. I don’t care how you add it, but by golly you’d better give me back 4 when I give you 3 and 1.
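
For what it's worth, the smallest implementation that satisfies that test might be nothing more than the sketch below; the test constrains the behavior, not the design. (The ICalculator interface is assumed from the test code.)

// Minimal sketch: just enough code to make the simple test pass.
public interface ICalculator
{
    int Add(int firstNumber, int secondNumber);
}

public class Calculator : ICalculator
{
    public int Add(int firstNumber, int secondNumber)
    {
        return firstNumber + secondNumber;
    }
}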

I love DynamicMock because I am a believer that a test should test a very specific piece of code. Recently, however, I came across this poignant counterpoint that shattered my DynamicMock utopia. I had written tests and an object that looked something like this:

[Test]
public virtual void ShouldReturnTrueIfBigIsTrueAndBadIsFalse()
{
    MockRepository mocks = new MockRepository();
    IDataWrapper mockIDataWrapper = mocks.DynamicMock<IDataWrapper>();
    SetupResult.For(mockIDataWrapper.IsBig).Return(true);
    SetupResult.For(mockIDataWrapper.IsBad).Return(false);
    mocks.ReplayAll();
    IDad dad = new Dad();
    bool result = dad.IsABigBad(mockIDataWrapper);
    mocks.VerifyAll();
    Assert.IsTrue(result);
}

[Test]
public virtual void ShouldReturnTrueIfBigIsFalseAndBadIsTrue()
{
    MockRepository mocks = new MockRepository();
    IDataWrapper mockIDataWrapper = mocks.DynamicMock<IDataWrapper>();
    SetupResult.For(mockIDataWrapper.IsBig).Return(false);
    SetupResult.For(mockIDataWrapper.IsBad).Return(true);
    mocks.ReplayAll();
    IDad dad = new Dad();
    bool result = dad.IsABigBad(mockIDataWrapper);
    mocks.VerifyAll();
    Assert.IsTrue(result);
}

[Test]
public virtual void ShouldReturnFalseIfBigIsFalseAndBadIsFalse()
{
    MockRepository mocks = new MockRepository();
    IDataWrapper mockIDataWrapper = mocks.DynamicMock<IDataWrapper>();
    SetupResult.For(mockIDataWrapper.IsBig).Return(false);
    SetupResult.For(mockIDataWrapper.IsBad).Return(false);
    mocks.ReplayAll();
    IDad dad = new Dad();
    bool result = dad.IsABigBad(mockIDataWrapper);
    mocks.VerifyAll();
    Assert.IsFalse(result);
}

public interface IDataWrapper
{
    bool IsBig { get; }
    bool IsBad { get; }
}

public class Dad : IDad
{
    public virtual bool IsABigBad(IDataWrapper dataWrapper)
    {
        return dataWrapper.IsBig || dataWrapper.IsBad;
    }
}

My test was green and I was happy, but my pair informed me that a bug could be introduced accidentally and no one might ever notice:

public interface IDataWrapper
{
    bool IsBig { get; }
    bool IsBad { get; }
    bool IsFat { get; }
}

public class Dad : IDad
{
    public virtual bool IsABigBad(IDataWrapper dataWrapper)
    {
        return dataWrapper.IsBig || dataWrapper.IsBad || dataWrapper.IsFat;
    }
}

Because a DynamicMock will not throw an exception for the unexpected call to IsFat, and will return false (the default for bool), my tests all remain green. But in production my code might not work as expected.
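
For contrast, here is a sketch of the third test rewritten against a StrictMock, in the same Rhino Mocks style used above. Because a strict mock only allows the calls that were explicitly expected, the accidental IsFat call would fail this test instead of silently returning false:

[Test]
public virtual void ShouldReturnFalseIfBigIsFalseAndBadIsFalse()
{
    MockRepository mocks = new MockRepository();
    // A strict mock rejects any member access that was not set up,
    // so the sneaky call to dataWrapper.IsFat would blow up here.
    IDataWrapper mockIDataWrapper = mocks.StrictMock<IDataWrapper>();
    Expect.Call(mockIDataWrapper.IsBig).Return(false);
    Expect.Call(mockIDataWrapper.IsBad).Return(false);
    mocks.ReplayAll();
    IDad dad = new Dad();
    bool result = dad.IsABigBad(mockIDataWrapper);
    mocks.VerifyAll();
    Assert.IsFalse(result);
}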

There is seldom a single solution that works in every situation. I have learned to find the proper place for both Dynamic and Strict Mocks in my TDD toolbox and hope that this encourages you to think more deeply about your own toolbox.

Jimmy Bosse is a Senior .NET Consultant and Team Lead at Thycotic Software, an agile software consulting and product development company based in Washington DC. Secret Server is our flagship password management software product.

Pair Programming and Pandemics

June 25th 2009 | Pouya Yousefi

Pair Programming in the Time of Swine Flu

I am not a doctor. Nor do I claim to have any medical knowledge other than what I pick up from watching House. I am an “Agile Expert”, a “Test Driven Developer” and a “Pair Programmer”. That last existential statement has faced new challenges with the recent media scare over the possible spread of global pandemics such as the H1N1 virus, more commonly known as the “swine flu”.

Inherent to pair programming is the need for close proximity to another person. Although in the past our team has conducted remote pairing with some success, our day-to-day development requires two programmers to sit across from each other at the same pairing station, sometimes for several days. It does not take a genius to see that this situation is ideal for passing and spreading communicable diseases.

Our team has taken many measures to combat the possibility of our team members getting sick. We have alcohol-based hand sanitizers at every pair station, provide alcohol wipes for cleaning the keyboards and pairing areas, and allow for working from home. We hope these measures reduce the likelihood of catching or spreading any illnesses, but I believe a mental shift needs to occur for any measure to really work. That shift in thinking is this: do not come to work until you are 100% over your illness.

I am guilty of coming to work when I feel less than stellar in order to preserve time for an upcoming vacation. But the urge to get back to work to help the team can sometimes hurt the team instead. Pushing to return when your body is not at 100% can prolong your illness, make you susceptible to other illnesses, and expose others.

Kent Beck, one of the leading minds on Agile methodologies and the author of Extreme Programming Explained, writes that one of the values of XP is respect. He states that respect is the fundamental value that binds the team and drives the project toward a unified goal. Taking care of yourself should be as important as looking out for your team. The next time you “suck it up” and drag yourself to work because you think it makes you a team player, stop and think about whether staying home might be better for the team in the long run.

Pouya Yousefi is a Senior .NET Consultant at Thycotic Software, an agile software consulting and product development company based in Washington DC. Secret Server is our flagship password management software product.

The Facade Pattern – Don’t Talk to Strangers

June 19th 2009 | Ben Yoder

The Facade and Adapter patterns are two closely related and easily confused designs. Both patterns create a layer between two interacting classes but the objective of each is significantly different.

While the Adapter pattern encapsulates the communication between two classes, the goal of the Facade pattern is to simplify a set of actions that leverage a complex subsystem of code, such as a group of legacy classes or a third-party vendor’s class library or API. If you ever need to integrate a third-party system or an open source library and find yourself entangled in outside dependencies and overly complicated sets of method calls, a Facade refactoring can help.

There’s a good chance you’ve created a facade on occasion without realizing it. As an oversimplified mechanical analogy: when you start a car, you don’t go through the separate steps of combining an electrical spark with gasoline in the engine cylinder while simultaneously cranking the engine to achieve compression. Instead of worrying about the complicated series of steps involved in making a car start, you simply turn a key, and some magical action starts the car.

The engine ignition switch is our facade class. It hides the ugly workings of the engine from us and we simply say “Make Go” and it works.

    public class SparkPlug
    {
        public void Spark(){…}
    }
    public class FuelPump
    {
        public void PumpFuel() {…}
        public void ShutOffFuel(){…}
    }
    public class Starter
    {
        public void Crank() {…}
    }
    //Encapsulates starting logic
    public class IgnitionSwitchFacade
    {
        public void StartEngine()
        {
            FuelPump pump = new FuelPump();
            SparkPlug plug = new SparkPlug();
            Starter starter = new Starter();
            pump.PumpFuel();
            starter.Crank();
            plug.Spark();
        }
    }
    //Client Code that interfaces with the Facade
    class You
    {
        static void Main(string[] args)
        {
            IgnitionSwitchFacade ignitionSwitch = new IgnitionSwitchFacade();
            ignitionSwitch.StartEngine();
        }
    }

This bit of code is contrived, but it demonstrates what’s so powerful about the Facade pattern and how it differs from the Adapter.

While the Adapter is concerned about sitting between the client and an interface, the Facade’s aim is to encapsulate common logic or instructions of a subsystem for the client’s consumption.

The goal of the Facade is simplicity. It provides a common gateway to the class library which can be easily modified and called rather than dealing directly with the logic and classes it hides.

Read David Cooksey’s The Adapter Pattern, A Code Diplomat

Ben Yoder is a Senior .NET Consultant at Thycotic Software, an agile software consulting and product development company based in Washington DC. Secret Server is our flagship password management software product.