Archive for the ‘Software Development’ Category

A stranger in a strange land: A graphic designer in a .NET Agile sprint planning world

April 30, 2009 Leave a comment

April 30th 2009 | Josh Frankel

Agile planning is an effective way of gauging how much work can get done in a given period of time. By measuring tasks in relative units of effort, you can approximate the work a task will require. Our team uses a point system to avoid being influenced by estimates based on hours. Hours may seem easier to use, but they are not necessarily representative of the proportional relationship between work that has been completed and work that remains to be done.

Shortly after joining Thycotic as a Web designer, I was given a brief overview of Test Driven Development and Agile planning practices and was immediately recruited to participate in the bi-weekly sprint meetings. But with my skill set rooted firmly in the design arena and my vast development skills limited to CSS and HTML, I had absolutely no idea what was being discussed.

As the team went through the challenges and methods of each task, I realized that my grasp of programming was comparable to my grasp of Mandarin Chinese. When it came to discussing the point efforts needed for tasks, my guesses were wildly different from the rest of the team’s.

It’s been a few months now, and sprint planning meetings are a lot easier. My assessment of the point effort needed for a task is usually pretty close to the developers’ approximations. Although I understand the planning terminology, I still don’t know squat about programming; for the purpose of Agile sprint planning, I really don’t need to. I only need a grasp of what is going to be done and how difficult it is relative to other tasks. What I’m trying to figure out during each sprint meeting is how much harder the current task is compared to one that’s worth a single point.

With buzzwords like ‘synergy’ and ‘cross-functional’ being thrown around so much, isn’t it time consideration was given to how Agile techniques might be expanded and enriched by those of us outside the traditional developer role?

Josh Frankel is the junior graphic designer and marketing team member at Thycotic Software Ltd, an agile software consulting and product development company based in Washington DC.  Secret Server is our flagship enterprise password management product.
On Twitter? Follow Josh

Name a variable like you name your first-born

April 13, 2009 6 comments

April 13th 2009 | Jonathan Cogley

“You should name a variable using the same care with which you name a first-born child.”

– James O. Coplien, Denmark   (foreword to Clean Code)

This is hysterical!  I had such a good laugh on reading this line.  For those developers who don’t have children: the child naming process can take months. It usually starts in the second trimester (months three to six of pregnancy) and can still remain undecided when the child is born. Having been through this twice, I can tell you it is not an easy process.

Note that James doesn’t say naming of a child but rather your first-born, implying even more care and emphasis!  While this is obviously in jest, it does highlight how important naming and concepts are within your code.  On our team, naming a variable can sometimes take five minutes while the programming pair argues back and forth. If it takes too long, we give it some silly name (bunnyFooFoo) and move on, with the intention of revisiting the discussion during code review before committing to the source repository. Besides, who would let bunnyFooFoo go into the source repository with their initials on the commit?

Next time you whip out a string “s” or an int “i” or a DateTime “d”, give a thought to a logical name that will help others understand the code in the future.
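
A contrived before-and-after, with hypothetical names, makes the point:

// Before: the reader has to trace the code to learn what these hold.
string s = Console.ReadLine();
DateTime d = DateTime.Now;

// After: the names document themselves.
string customerEmailAddress = Console.ReadLine();
DateTime orderReceivedAt = DateTime.Now;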

Jonathan Cogley is the CEO of Thycotic Software, an agile software consulting and product development company based in Washington DC.  Secret Server is our flagship enterprise password management product.

Refactoring Code: A Programmer’s Challenge Part 2

April 9, 2009 2 comments

April 9th 2009 | Kevin Jones

In a previous blog post, Refactoring Code: A Programmer’s Challenge Part 1, we went over refactoring basics and took a not-so-easily-tested piece of code and made it easy to test.

Our final bit of code was this:

public class DisplayPanelDecider
{
    public bool ShouldDisplayPanelOnPage(string pageName)
    {
        string[] pagesToHideElement = new[]
        {
            "/index.aspx",
            "/sitemap.aspx",
            "/404.aspx",
            "/status.aspx",
            "/online_help.aspx",
        };
        return !Array.Exists(pagesToHideElement, s => s == pageName);
    }
}

The objective of the code is clear: “Given the name of a page, should I display a panel?”

Furthermore, the code is compact and does no more or less than it should – so we can test it easily.

Some of our tests would look like this:

[TestClass]
public class DisplayPanelDeciderTests
{
    [TestMethod]
    public void ShouldDisplayThePanelIfThePageIsNotInTheExclusionList()
    {
        DisplayPanelDecider decider = new DisplayPanelDecider();
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/foobar.aspx"));
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/main.aspx"));
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/blog.aspx"));
    }

    [TestMethod]
    public void ShouldNotDisplayThePanelIfThePageIsInTheExclusionList()
    {
        DisplayPanelDecider decider = new DisplayPanelDecider();
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/index.aspx"));
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/map.aspx"));
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/status.aspx"));
    }
}

An interesting side note: did you notice that the names of our tests start with the word “should”? It’s such a simple thing, but test names are important; you should be able to figure out the purpose of a test just by reading its name. Using the prefix “should” forces you to think about what the test asserts.

But as we know, software requirements grow and change. Where is our software developer a few months down the line? (Put on your pretend cap!)

Well, at this point, his list of pages has grown considerably. Rather than the handful we have in our code, we now need to hide the panel on many pages. As an additional requirement, the list now needs to be configurable, so compiled code is not the best solution.

A natural place to put configuration for now is the appSettings section of the web.config file, and for simplicity’s sake we’ll separate our pages with semicolons, so they’ll look like this:

<configuration>
    <appSettings>
        <add key="PagesToHidePanel"
             value="/index.aspx;/map.aspx;/404.aspx;/status.aspx;/online_help.aspx" />
    </appSettings>
</configuration>

Now we need a class that is responsible for retrieving these page names and parsing them. Rather than throwing that code into our existing class, we’ll introduce a new dependency.

The implementation I came up with looks like this:

public interface IDisplayPanelExclusionProvider
{
    string[] GetPageNames();
}

public class DisplayPanelExclusionProvider : IDisplayPanelExclusionProvider
{
    public string[] GetPageNames()
    {
        string unparsedContent =
            ConfigurationManager.AppSettings["PagesToHidePanel"];
        string[] pageNames = unparsedContent.Split(';');
        return pageNames;
    }
}

Notice that I created an interface; it will have a key role later on. The next step is getting our two classes, DisplayPanelExclusionProvider and DisplayPanelDecider, talking to each other.

Constructor injection is a simple and useful approach to dependency injection. Our goal here is to get the DisplayPanelDecider to ask the DisplayPanelExclusionProvider, “On which pages should I not display this panel?”

We’ll modify the DisplayPanelDecider to take in an IDisplayPanelExclusionProvider. Again, notice that I used the interface in this example. Our class now looks like this:

public class DisplayPanelDecider
{
    private readonly IDisplayPanelExclusionProvider _displayPanelExclusionProvider;

    public DisplayPanelDecider(IDisplayPanelExclusionProvider displayPanelExclusionProvider)
    {
        _displayPanelExclusionProvider = displayPanelExclusionProvider;
    }

    public bool ShouldDisplayPanelOnPage(string pageName)
    {
        string[] pagesToHideElement = _displayPanelExclusionProvider.GetPageNames();
        return !Array.Exists(pagesToHideElement, s => s == pageName);
    }
}

Our DisplayPanelDecider now asks the IDisplayPanelExclusionProvider for a list of pages. However, by introducing our constructor, we broke our tests! The next step is to start getting our tests working again.

Important note: even though our class examples seem contrived, we are keeping the decision-making logic separate. This matters for testability as well as for future maintenance. The key is isolating dependencies, and that applies to our tests as well: we don’t want our tests exercising multiple classes at once. Keep them separate.

This is where the interface saves us: we can inject our own classes for the purpose of testing.

[TestClass]
public class DisplayPanelDeciderTests
{
    private class MockDisplayPanelExclusionProvider : IDisplayPanelExclusionProvider
    {
        public string[] GetPageNames()
        {
            return new[]
                   {
                       "/index.aspx",
                       "/map.aspx",
                       "/404.aspx",
                       "/status.aspx",
                       "/online_help.aspx"
                   };
        }
    }

    [TestMethod]
    public void ShouldDisplayThePanelIfThePageIsNotInTheExclusionList()
    {
        DisplayPanelDecider decider = new DisplayPanelDecider(new MockDisplayPanelExclusionProvider());
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/foobar.aspx"));
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/main.aspx"));
        Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/blog.aspx"));
    }

    [TestMethod]
    public void ShouldNotDisplayThePanelIfThePageIsInTheExclusionList()
    {
        DisplayPanelDecider decider = new DisplayPanelDecider(new MockDisplayPanelExclusionProvider());
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/index.aspx"));
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/map.aspx"));
        Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/status.aspx"));
    }
}

Note that we are injecting a fake class, a hand-rolled mock. The single purpose of this test class is to ensure that the DisplayPanelDecider decision logic works. It isn’t the DisplayPanelDecider’s responsibility to know the real list of pages from our configuration; that should be tested somewhere else. In effect, we are testing against dummy data. When we want our class to use the real provider, we just create an instance of it and hand it in. In fact, with constructor chaining we can implement a default constructor that uses the real provider:

public DisplayPanelDecider() : this(new DisplayPanelExclusionProvider())
{
}

We have the option of using the DisplayPanelDecider with the real provider, or handing in our own – which we do for testing.
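
For example:

// In production code, the default constructor wires up the real provider:
DisplayPanelDecider decider = new DisplayPanelDecider();

// In a test, we hand in our fake:
DisplayPanelDecider testDecider =
    new DisplayPanelDecider(new MockDisplayPanelExclusionProvider());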

The thought on most people’s minds at this point is most likely “This seems like overkill.” If you stand back and look at it as it is right now, I might agree. However, maybe the programmer we’re helping finds himself with more challenging requirements down the road. The other part worth a closer look is the hand-rolled mock.

Rewinding a bit: in our new tests we saw a dummy class that provided fake data for the sake of testing. There are mocking frameworks that remove the need for writing these dummy classes by hand, but we’ll examine those in a future blog post. If you’re up for some homework, check out Rhino Mocks.
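
As a teaser, here is a rough sketch of what our stub might look like using Rhino Mocks 3.5’s arrange-act-assert syntax (an approximation, not tested here; it assumes a using Rhino.Mocks; directive):

[TestMethod]
public void ShouldNotDisplayThePanelIfThePageIsInTheExclusionList()
{
    // Rhino Mocks generates the stub for us; no hand-rolled class needed.
    var provider = MockRepository.GenerateStub<IDisplayPanelExclusionProvider>();
    provider.Stub(p => p.GetPageNames()).Return(new[] { "/index.aspx", "/map.aspx" });

    DisplayPanelDecider decider = new DisplayPanelDecider(provider);
    Assert.IsFalse(decider.ShouldDisplayPanelOnPage("/index.aspx"));
    Assert.IsTrue(decider.ShouldDisplayPanelOnPage("/blog.aspx"));
}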

I pose this question to you: What advantages and disadvantages do you see with our approach? Is it worth the effort? I’d love to know what you think.

Kevin Jones is a Senior .NET developer and Product Team Leader at Thycotic Software Ltd. Thycotic is recognized for providing Agile TDD Training and Agile .NET Consulting Services, and its flagship password management software Secret Server. On Twitter? Follow Kevin

Fun with Anonymous Types and LINQ

March 24, 2009 Leave a comment

March 24th 2009 | Ben Yoder

Two features introduced in C# 3.0, implicitly typed local variables (var) and anonymous types, have sparked some debate over their proper usage.  Some feel that they distract from code clarity. The var keyword can be used for standard object initialization, such as:

—————–
var f = new FakeClass();
—————–

or for primitive types:

—————–
var i = 1;
—————–
In these cases, var adds little to code readability.  When initializing primitive types, it really doesn’t save much typing or make a programmer’s life any easier.  But while var may not add much to standard object and type initialization, anonymous types are extremely handy for defining tuples and collections with LINQ.  For example, instead of having to define a struct or class of read-only properties, you could initialize the type like this:

—————–
var person = new { FirstName = "James", LastName = "Ingram" };
var persons = new { first = new { FirstName = "James", LastName = "Ingram" },
                    second = new { FirstName = "Kenny", LastName = "Loggins" } };
—————–

Even better, once a composite type is initialized, IntelliSense picks up on the properties, making them easy to reference:

—————–
Console.WriteLine(person.FirstName);
—————–

We can also do fun things with delegates on these composite objects.  By defining two different delegate functions and assigning them to the Join property of each person object, we can specify how each person should join their names.

—————–
Func<string, string, string> ReverseJoin =
    (first, last) => last + ", " + first;
Func<string, string, string> NormalJoin =
    (first, last) => first + ", " + last;

var persons = new { first = new { FirstName = "James", LastName = "Ingram", Join = ReverseJoin },
                    second = new { FirstName = "Kenny", LastName = "Loggins", Join = NormalJoin } };
—————–

This behavior can be nicely leveraged in LINQ queries.  Since the compiler infers the type of the left-hand side at build time (check out the IL after compiling some code with anonymous types to see for yourself), there is no need for casting or conversions.  With anonymous types and LINQ we can write something like:

—————–
Func<string, string, string> ReverseJoin =
    (first, last) => last + ", " + first;
Func<string, string, string> NormalJoin =
    (first, last) => first + ", " + last;

var persons = new[]
{
    new { FirstName = "James", LastName = "Ingram", Join = ReverseJoin },
    new { FirstName = "Kenny", LastName = "Loggins", Join = NormalJoin }
};

foreach (var person in from p in persons where p.LastName == "Ingram" select p)
{
    Console.WriteLine(person.Join(person.FirstName, person.LastName));
}
Console.ReadLine();
—————–

Anonymous types aren’t really meant to replace standard class definitions or initialization; they are much more advantageous with the set-based operations of LINQ.  The example above may be somewhat contrived, but it’s handy that, thanks to the inferred creation of the anonymously typed objects, there was no need to create any classes.  Sure, we could have created a Person class with properties for the first and last names and a method to join them, and then looped through each item in the collection to check the LastName field.  That would have been a valid way of getting the same result, but the combination of anonymous types and LINQ lets us write the same functionality more concisely.
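
For comparison, the named-class alternative alluded to above might look something like this sketch (using C# 3.0 auto-properties):

—————–
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public Func<string, string, string> Join { get; set; }
}
—————–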


Ben Yoder is a Senior .NET developer at Thycotic Software Ltd. Thycotic is recognized for providing Agile TDD Training and Agile .NET Consulting Services, and its flagship password management software Secret Server.

My two and a half cents – pitfalls with rounding

March 19, 2009 Leave a comment

March 19th 2009 | Tucker Croft

True or False: these values are the same?

1. 61.645 saved to database column [Age] [decimal](10, 2)

2. Math.Round(61.645, 2)

What does (1 == 2) return?

False. Bryant and I came across an interesting issue when comparing two values that appeared identical but which the code insisted were different. Our research led us into the internals of .NET rounding to solve an infrequent but legitimate concern.

Scenario

The above question became my issue: I needed to know why a portion of the system would compare a value in the database to the same newly calculated value and report a significant difference.

The scenario, using Age as the example:

  • CalculateAge() – a value calculated with up to 15 decimal places
  • AgeInDB – the value stored in a database column with precision 2: [Age] [decimal](10, 2)
  • AgeChangeDetector – determines whether the Age has changed by comparing CalculateAge() to AgeInDB
    • If the Age has changed, it flags the value for review
    • If the Age hasn’t changed, no review is needed

After several months and thousands of values being compared correctly, I found a single value being repeatedly marked for review even though the raw Age was not changing:

  • CalculateAge(): 61.645
  • AgeInDB: 61.65

At first glance, this seemed to be a simple failure to compare the values at the same precision. Since the database column has precision 2 and the newly calculated value has up to 15 decimal places, the same value would appear different if compared directly. But the comparison in the code was:

  • Math.Round(calculatedAge, 2) == AgeInDatabase

With the Math.Round function, both values were being compared at the same number of decimal places. I checked the AgeChangeDetector tests, and all test cases passed with a variety of decimal number combinations.  Curious, I plugged the mysterious values into the AgeChangeDetector class and saw my assertion fail; the class detected a difference, with the AgeInDB at 61.65 and the rounded calculation at 61.64.  Seeing the 61.64, I had isolated the problem to the Math.Round function.

Banker’s Rounding

Jumping on Google, I searched for “problems with Math.Round” and filtered down to a forum thread about the internal algorithm.  Rounding is, at its core, a lossy operation in which a number right in the middle (.5) must be given a bias to go up or down.  From grade school, everyone is familiar with common rounding, which always rounds .5 up.  When saving the decimal value at precision 2, the database used this method, hence the 61.65.  But Math.Round uses the algorithm known as Banker’s Rounding, which strives to even things out by rounding up as often as it rounds down.  In a transaction, one party gains the half penny and the other loses it; to treat the middle number without bias, bankers round to the closest even digit.  Since an even digit before the .5 is as statistically likely as an odd one, the bias evens out over many transactions.  My problem boiled down to the fact that Math.Round in .NET 1.1 always uses Banker’s Rounding while my database uses the common method.
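
A quick illustration, using decimal literals so the midpoints are represented exactly:

Console.WriteLine(Math.Round(61.645m, 2)); // 61.64 (4 is even, so the midpoint rounds down)
Console.WriteLine(Math.Round(61.655m, 2)); // 61.66 (rounds up to the even digit 6)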

Solution

Knowing the problem may be half the battle, but none of my potential solutions seemed easy.  One option was to round every number in code before saving it to the database, ensuring that all values going to (and coming from) the database were uniform with my calculations. But that would be a high-impact change, adding an extra level of processing every time a decimal value is persisted to the database (and it might need to expand beyond just the AgeInDB column).  I also had to keep in mind that the database had already saved all its numbers, and there would be no way to know whether a number had been rounded up because it landed on .5 or because it landed above that.  The other option was to change only the comparison, so I researched a way to accomplish common rounding in the AgeChangeDetector.

In .NET 2.0, Math.Round takes an optional MidpointRounding parameter that selects the rounding algorithm, but the current release ties me to .NET 1.1.  After gathering input from forums, I created a new method that rounds numbers using the rounding built into ToString.  My function looked like this:

public decimal CustomMathRound(decimal number, int places)
{
    // Build a custom format string such as "#.##" for the requested precision.
    string decimalPad = "#." + string.Empty.PadRight(places, '#');

    // ToString with a custom format string uses common (away-from-zero)
    // rounding, matching the database's behavior.
    string nums = number.ToString(decimalPad);
    return Convert.ToDecimal(nums);
}
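
With the midpoint value from earlier, the two approaches disagree exactly as expected:

decimal common = CustomMathRound(61.645m, 2); // 61.65, matches the database
decimal bankers = Math.Round(61.645m, 2);     // 61.64, Banker's Rounding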

I replaced the call to Math.Round with my custom function, and the new test passed without disturbing any other functionality.  Feeling confident that this was a workable fix, I replaced all other comparisons between a database value and a calculated value with my custom function to avoid another conflict in the future.

Follow-up

Rounding is a common task for a developer, but it should be considered thoroughly to ensure that different methods are not in play on each side of a comparison. The thousands of correct values before this issue prove that comparing numbers at a given significance can catch you off guard. The problem is such an edge case that even in-depth test cases may not expose it; the developer must be aware of it beforehand and specifically test a number impacted by Banker’s Rounding.


Tucker Croft is a Senior .NET developer at Thycotic Software Ltd. Thycotic is recognized for providing Agile TDD Training and Agile .NET Consulting Services, and its flagship password management software Secret Server.

A Developer’s Uphill Journey From Custom Development to Software Vendor – Part 3

March 12, 2009 Leave a comment

February 26th 2009 | Jonathan Cogley

We concluded the previous post in this series by revealing how using a professional graphic designer to polish the aesthetics of your software’s user interface benefits the overall output. This week, we conclude the series with two more characteristics that are essential to the overall success of your software:

Stability: Design for Multiple Environments

The smart ISV knows that the user experience relies on stability. A failure during the first few minutes can create a negative first impression, and the user may give up on the product. Users may have many different software configurations and settings, making stability and testing a more challenging exercise. The Thycotic team has already encountered an amazing range of issues, especially involving internationalization; for example, language settings on the database server and web server needed to be exactly the same.

“You have to support everyone!” noted David Astle, a Secret Server developer, upon realizing the endless combinations of software and settings that our audience uses. Limiting the supported configurations by setting certain system requirements may seem like a smart way to reduce quality assurance time, but it doesn’t really work. Customers will often try your software in other environments and solicit help when problems arise. Ignoring such requests, which may indicate your customers’ preferred environments and configurations, can be dangerous. Our customers quickly requested support for installing the product in a hosted web environment, something we had considered but didn’t officially support. (A thread quickly emerged on our support forums to talk through the issues.) We have also had requests to support other database platforms for our product. This was something we anticipated, and we specifically designed our product to use generic SQL wherever possible to facilitate other database platforms.

There is still a huge cost associated with expanding your supported platform: install issues; environment quirks; multiplying your quality assurance test environments; and ultimately fielding more variation in support calls. This is definitely a balancing act and something you should probably defer until you have enough customer requests to justify the cost.

Quality Assurance

Tools also play a big role in easing the adjustment to the software vendor world. Virtualization is a must for your quality assurance environment: it allows you to easily test your product on multiple supported configurations and reset as needed. It is worth the investment in time to create multiple virtual environments representing the main configurations you anticipate your customers will use. Use a tool to create an automated test script, or even a repeatable manual test script, and then simply run through the tests on all the configurations. This approach has been especially useful for ensuring quality during the product install process, which is heavily dependent on the environment and is a miserable place to fail on the customer. For our virtualization, we use VMware software and have found the snapshot feature invaluable for rerunning our tests against different builds of our software.

Unit testing tools are even more essential in ensuring high quality in the builds making it to quality assurance. The ability to run an entire regression set of unit tests (a large suite that tells you whether anything breaks when you make a change) was beneficial when making big changes, such as adding support for Microsoft Access as a database platform in addition to Microsoft SQL Server. The same was true when we began supporting Microsoft .NET 2.0. Being able to easily test all of the system’s features in an automated manner while the product is still in development is a big time- and money-saver.

Other standard software development tools, such as source control, issue tracking, automated builds and development productivity enhancement tools, were as useful in the ISV world as they had been in our custom development practice. A source control platform with solid support for branching is especially useful if you need to apply fixes to multiple versions of your software in the field. Branching lets you easily separate out your release versions from your mainline of development for your next product release.

In summary, these are the main differences we experienced between the development of custom software and packaged software:

                          Custom                          Packaged
Control of Requirements   Not usually                     Often more than you might like
Attention to Detail       Functionality first and last!   Endless refinement
Number of Users           Small                           Huge
Operating Environments    Usually controlled              More than you can guess
Stability                 Negotiable                      Essential

Building a product can be a rewarding experience, even when the financial rewards won’t let you live happily ever after in Bermuda, but it poses a different set of challenges than those of custom software development. Think carefully about your strengths and how you might adapt your best practices before trying your hand at packaged software development.


Jonathan Cogley is the founder and CEO of Thycotic Software. Test Driven Development (TDD) is the cornerstone of the Thycotic approach to software development, and the company is committed to innovating TDD on the Microsoft .NET platform with new techniques and tools. Jonathan is an active member of the developer community and speaks regularly at .NET user groups, conferences and code camps across the US. He is recognized by Microsoft as an MVP for C# and has been invited to join the select group of ASPInsiders, who interact with the product teams at Microsoft.

A Developer’s Uphill Journey From Custom Development to Software Vendor – Part 2

March 5, 2009 Leave a comment

February 26th 2009 | Jonathan Cogley

We concluded our last blog post by posing a question about meeting customer requirements in an off-the-shelf software product:

So, how can we bring customers into the software development loop and meet their real world needs?

There are things you can do as an ISV (Independent Software Vendor) to bridge the gap between a development team that generates its own requirements and the customers those requirements are meant to serve.

You can come up with personas for typical users of your new software and categorize their likes, dislikes, favorite colors and even favorite foods, but it is all fantasy until you have your first customers. Another option, advocated by many vendors, is an early-access beta program; this helps build a community around your product in the early stages and provides valuable feedback from people using your product. This option is still not ideal, since the characteristics of a beta tester may not match the profile of your typical customer in six months’ time. At this point, the cynics are probably saying that this situation isn’t so different from custom development projects, since their user requirements can be poorly defined or poorly championed too.

Our approach was to focus on the pain that our product solves. Secret Server is a web-based application that stores passwords in an encrypted database and securely shares them with other team members (or your wife, for that matter!). By delivering the core pain-relieving features, we would have a product that was genuinely useful and could then be refined and tuned based on customer feedback. This strategy put our product in our customers’ hands quickly, solved a few of their main problems, and began generating a stream of feedback to drive requirements for the next phase. In fact, our tracking system actually prioritizes issues and features requested by our customers.

In our custom development work, we always practiced what we call “just good enough”: giving the client just what they asked for in the shortest possible time while avoiding any over-engineering (read: technical guessing). This mentality was useful for our initial product development, since we could easily have blue-skied the product into non-delivery. Focus on the pain your product solves and deliver it quickly to get early customer feedback.

Aesthetics Count!

First impressions count when the user has a choice about using your software, and that impression rests on the aesthetics and quality of the user experience. Gone are the developer-designed user interfaces; they simply can’t compare to the work a true graphic designer can produce in a few hours. The implications of this decision were huge for our development team. The developers knew that a qualified professional would beautify the interface later on, so they could ignore aesthetics and focus on the functionality and automated unit testing (test-driven development) of the software. I have seen developers spend hours tweaking a user interface on custom development projects because no budget was available for a dedicated graphic designer; this costly exercise seldom produces remarkable results. The decision to use a professional early on benefited the overall output tremendously.

The user experience isn’t just pretty graphics, though. The vendor should spend serious time refining the number of clicks needed to perform tasks, the information presented on the screen, and the metaphors used to understand the system. This difficult and time-consuming task can be justified, since the results will be spread over the many users who will try and, hopefully, adopt your software. Small gains in usability can yield large rewards when marketed to the masses; the economics of this attention to detail do not pan out when there are only a small number of users for your software.

Next week’s post will explain how stability and virtualization play a vital role in quality assurance.


Jonathan Cogley is the founder and CEO of Thycotic Software. Test Driven Development (TDD) is the cornerstone of the Thycotic approach to software development, and the company is committed to innovating TDD on the Microsoft .NET platform with new techniques and tools. Jonathan is an active member of the developer community and speaks regularly at .NET user groups, conferences and code camps across the US. He is recognized by Microsoft as an MVP for C# and has been invited to join the select group of ASPInsiders, who interact with the product teams at Microsoft.