
Archive for the ‘Custom Development’ Category

Eliminating the Null Checks

June 18th, 2013 | Ken Payson

Sometimes it feels like half the code in an application is concerned with checking whether variables are null, and yet bugs still come up as a result of null reference exceptions. What can we do about this? Is there a way to spend fewer lines of code worrying about null and more time worrying about the core logic of your functions? Yes there is, and it's simple: stop coding with null!
Stop assigning null to variables

Never “initialize” a variable to null. Only assign a value to a variable after computing what its value should be. Properties on objects can be null. Declared variables need not ever be null. If there is no value, there is nothing to be done with the variable.
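For example, here is a small sketch of the difference, using illustrative names:

public class GreetingBuilder
{
    // Avoid: the variable starts life as null and is patched up later.
    public string BuildGreetingWithNull(bool isMorning)
    {
        string greeting = null;
        if (isMorning)
            greeting = "Good morning";
        else
            greeting = "Good evening";
        return greeting;
    }

    // Prefer: the variable is assigned only once its value is known.
    public string BuildGreeting(bool isMorning)
    {
        string greeting = isMorning ? "Good morning" : "Good evening";
        return greeting;
    }
}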

Stop writing functions that accept and work with null values

Null is not a value of any type; it is a null pointer. C# objects are reference types, and a reference can always be null. Value types can be made into nullable types, but unfortunately it is not possible to declare that a reference type cannot be null. This is a weakness of the type system. If you have a function that takes a string and you pass in an int, you get a compile-time error because there is a type mismatch. If you take the same function and pass in null, it compiles. Why? It should be a type violation, because null is not a string.
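A minimal sketch of the asymmetry (the method and values here are illustrative):

using System;

class NullIsNotAString
{
    static void PrintLength(string text)
    {
        Console.WriteLine(text.Length);
    }

    static void Main()
    {
        // PrintLength(42);   // compile-time error: an int is not a string
        PrintLength(null);    // compiles, but throws a NullReferenceException at runtime
    }
}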

It is the responsibility of the caller to make sure that it is giving a function data. If the caller does not have a value needed by the function, it should not be calling the function at all.

The called function needs to know it has values to work with before proceeding with the core of its logic. If the developer cannot guarantee that the function will be called with good data (perhaps because it is a public API method), then the function should validate the input at the top and throw an exception if the input values are null. If you are using .NET 4, code contracts with contract preconditions work well for this. If you are below .NET 4, you'll have to settle for writing if (myObject == null) { throw new ArgumentNullException(); }. If, on the other hand, you can guarantee that your function will be called correctly, you may skip the null checks altogether.
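A minimal sketch of both styles of precondition, using a hypothetical SendInvoice method (the code-contract version assumes the Code Contracts rewriter is enabled):

using System;
using System.Diagnostics.Contracts;

public class Invoice { }

public class InvoiceSender
{
    // .NET 4 style: a code contract precondition.
    public void SendInvoice(Invoice invoice)
    {
        Contract.Requires<ArgumentNullException>(invoice != null);
        // Core logic omitted.
    }

    // Pre-.NET 4 style: an explicit guard clause.
    public void SendReminder(Invoice invoice)
    {
        if (invoice == null)
            throw new ArgumentNullException("invoice");
        // Core logic omitted.
    }
}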

Sometimes functions are written to take optional parameters. If the optional value is not supplied, it is null, and the function branches on whether or not the value is null in order to perform some additional piece of work. Functions written this way tend to grow and grow as new requirements come in and more optional parameters and branching logic are added. The better approach is to have overloaded versions of the function, with private auxiliary methods holding the logic shared by the overloads to eliminate duplication. Now, in the calling scope, we never need to pass null into any method. We can call the version of the function that does exactly what we need, as sketched below.
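A sketch of the overload approach, with hypothetical names:

public class Report { }

public class ReportPrinter
{
    // Instead of a single Print(Report report, string watermark = null) that branches on null,
    // expose one overload per use case.
    public void Print(Report report)
    {
        PrintCore(report);
    }

    public void Print(Report report, string watermark)
    {
        ApplyWatermark(report, watermark);
        PrintCore(report);
    }

    // Private auxiliary methods hold the logic shared by the overloads.
    private void PrintCore(Report report)
    {
        // Rendering logic omitted.
    }

    private void ApplyWatermark(Report report, string watermark)
    {
        // Watermark logic omitted.
    }
}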

Stop writing functions that return null values

The flip side of not taking in nulls is not returning them.
If a function is supposed to return a list of things and there are no items to return, then return an empty list.
If a function is supposed to return a single item from some source (a database, cache, etc.) and the item is expected to be found but is not, then throw an exception. For example, if you are searching for an item by id, then in the normal workflow you will have a valid id and a match should be found. If a match is not found, that is an exceptional case.
If a method might reasonably fail to return a value, then implement a TryGet version of the method that takes an out parameter and returns true or false. This is analogous to TryParse for integers or TryGetValue for dictionaries. In the calling scope, you will have something like this:

Widget widget;
if (TryGetWidget(someSearchString, out widget))
{
    // The widget will have a value. Do something with the widget.
}
else
{
    // The search didn't find anything.
}
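On the called side, the TryGet pattern is just as simple. A minimal sketch, assuming a hypothetical WidgetCatalog backed by a dictionary:

using System.Collections.Generic;

public class Widget { }

public class WidgetCatalog
{
    private readonly Dictionary<string, Widget> _widgetsByName = new Dictionary<string, Widget>();

    public bool TryGetWidget(string searchString, out Widget widget)
    {
        // Callers branch on the bool; when it is true, widget is guaranteed to be a real instance.
        return _widgetsByName.TryGetValue(searchString, out widget);
    }
}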

Conclusion

It really is that easy to greatly reduce the null checks and null reference exceptions in your code. Don't assign null to variables, and don't write functions that take in or return nulls.
In general, it is the responsibility of a function to communicate its requirements and throw an exception if the requirements are not met. The calling scope has the responsibility of making sure that it is passing acceptable values into a function and of handling any exceptions that are thrown.

The ultimate goal is to eliminate possible nulls from the entire call chain. If function A calls function B, which calls function C, and we know that function A never returns null, then we don't need to check the values originating from A and going into C. Of course, in any real-world system there will be many places where methods you call may return a null. Don't just pass these values along into the next function call. Stop the null chain.

Ken Payson is a Software Engineer at LogicBoost, an Agile consulting firm based in Washington DC.

Understanding the Web Same-Origin Policy: Restrictions, Purpose, and Workarounds

December 13th, 2012 | Ken Payson

Sooner or later, every developer will have to get or send data from his or her site to a site on a different domain. Recently, I was working on a bookmarklet which required communication between my company's site and another domain. Whenever there is web communication between two different domains, it is necessary to understand the web's same origin policy. Wikipedia summarizes the same origin policy as:

“The policy permits scripts running on pages originating from the same site to access each other’s methods and properties with no specific restrictions, but prevents access to most methods and properties across pages on different sites.”
The thing to know and understand is that there are really two origin policies. The "DOM Origin Policy" governs reading and writing DOM information across windows. The "XmlHttpRequest Origin Policy" governs Ajax communications between different domains.

DOM Same Origin Policy

The same origin policy for the DOM disallows reading from or writing to a window if the window’s location is on a different domain. This prevents a number of attacks which could be used to steal user information. A simple example would be:

  1. MaliciousSite.com has a link on its page to mybank.com
  2. User clicks on link which opens mybank.com in a new window using javascript window.open
  3. MaliciousSite.com could then read and write to mybank.com through a reference to the mybank.com window.

XmlHttpRequest Same Origin Policy

The XmlHttpRequest Origin Policy disallows sending HTTP requests via the javascript XMLHttpRequest object to a site on a different domain, so "normal" Ajax with an endpoint on a different domain is not possible. There are workarounds to allow information sharing, which I will discuss later. It is important to note that the XmlHttpRequest Origin Policy really exists to prevent a different line of attack than the DOM Origin Policy: it is trying to prevent cross-site request forgery (CSRF) attacks.
When an HTTP request is made, the web browser will send the cookies belonging to the requested domain. In particular, sites typically use an authentication cookie which proves that you are logged in to a site. Based on this, if there were no XmlHttpRequest Origin Policy, a CSRF attack could work as follows:

  1. User navigates to a bad site MaliciousSite.com
  2. MaliciousSite.com makes the blind guess that the user is also logged in to MyBank.com in another tab.
  3. MyBank.com exposes data via web services to authenticated users.
  4. MaliciousSite.com uses the web services of MyBank.com to obtain information. The web services on MyBank.com return the requested data because the user is authenticated and the authentication cookie was sent with the Ajax request.

Work-Arounds
Just as there are “two” origin policies, there are two corresponding sets of “workarounds” which allow for communication between windows. Let’s take a look at these methods and see how they allow for legitimate communication without reintroducing the same security holes.

Working around the DOM Same Origin Policy
In order to work around the DOM origin policy, one should use window.postMessage(). This javascript method is now part of the HTML5 standard but has actually been around for some time. It can be used with IE8+ and all modern versions of Firefox and Chrome. The use of window.postMessage is thoroughly documented on the web but can be summarized as:

  1. Get a reference to the window you want to communicate with, e.g. var win = window.frames[0].window
  2. The window receiving messages sets up an event listener for the messages that will be sent.
  3. The sender calls win.postMessage(message, targetOrigin), where targetOrigin specifies the domain that is allowed to receive the message.
  4. For two way communication, event listeners are set up on both sides.

window.postMessage does not allow for unauthorized sharing of information. Using window.postMessage requires both sides to cooperate on the message structure, and it requires the recipient to accept messages coming from the sender. Using the earlier example, mybank.com would never be expecting messages from MaliciousSite.com, so even if mybank.com did have an event listener set up and methods for data sharing, it would certainly ignore messages coming from an unknown location.

Working around the XmlHttpRequest Same Origin Policy

To work around the XmlHttpRequest Origin Policy there are a few different options. The first thing you could do, if you want to use a data service from another domain, is simply create a proxy service on your site which calls the service on the other domain and passes back the data. This involves extra server-side coding on your part, but it requires no cooperation from the other domain. The next techniques that I will mention involve cooperation from the other domain in how their web services are set up. If the other domain isn't "cooperating," writing a proxy service may be your only option.
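Here is a minimal sketch of such a proxy, assuming an ASP.NET MVC controller and a hypothetical third-party endpoint URL:

using System.Net;
using System.Web.Mvc;

public class ProxyController : Controller
{
    // The browser calls this same-origin action; the server fetches the cross-domain data.
    public ContentResult GetWeather(string city)
    {
        // Hypothetical remote service; substitute the real endpoint.
        string url = "http://other-domain.example.com/api/weather?city=" + Server.UrlEncode(city);

        using (var client = new WebClient())
        {
            string json = client.DownloadString(url);
            return Content(json, "application/json");
        }
    }
}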

If a site wants to expose services that can be consumed via Ajax from another domain, there are two established options: JSONP and CORS. The JSONP technique relies on the fact that javascript files can be loaded from different domains. A JSONP request is structured as a request for a javascript file. If a site returns JSONP, it is actually returning javascript that, when evaluated, calls back into your code with the JSON data. The JSONP technique works, but it is a "hack." JSONP services could theoretically still be a CSRF attack vector, though it is much less likely. JSONP is meant to be used with cross-domain requests, and JSONP services need to be implemented differently, so accidentally exposing sensitive data via a JSONP service is not a real concern.

CORS is a relatively new standard that addresses the need for cross-domain Ajax in a more proper fashion. CORS stands for Cross-Origin Resource Sharing. All modern browsers can make Ajax requests to other domains so long as the target server allows it. A security handshake takes place using HTTP headers. When the client makes a cross-origin request, it includes the HTTP header Origin, which announces the requesting domain to the target server. If the server wants to allow the cross-origin request, it has to echo back the origin in the HTTP response header Access-Control-Allow-Origin. The target server can establish a security policy to accept requests from anywhere or to accept requests only from specific domains. On the client, jQuery Ajax automatically supports CORS for cross-domain requests, so no additional coding on the client is necessary if you are using jQuery. We can see that CORS also makes CSRF attacks highly unlikely: if a site is checking for an authentication token with its web services, it will certainly not add an Access-Control-Allow-Origin header for an untrusted domain.
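On the server side, a minimal sketch of echoing the Origin header back for a whitelisted domain, assuming an ASP.NET application with a Global.asax (preflight handling omitted):

using System;
using System.Web;

public class Global : HttpApplication
{
    // Hypothetical whitelist of domains allowed to call our services cross-origin.
    private static readonly string[] AllowedOrigins = { "https://partner.example.com" };

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        string origin = Request.Headers["Origin"];

        if (origin != null && Array.IndexOf(AllowedOrigins, origin) >= 0)
        {
            // Echo the origin back so the browser allows the cross-origin response through.
            Response.AddHeader("Access-Control-Allow-Origin", origin);
        }
    }
}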

In Conclusion

I hope that you found this brief introduction helpful. Understanding the rhyme and reason for the same origin policy should make the learning process easier when it comes time to dive into implementation details. Happy coding!

Ken Payson is a Senior .NET developer at LogicBoost, an agile software services and product development company based in Washington DC.

Do websites need to be experienced exactly the same in every browser?

March 14th, 2011 | Jimmy Bosse

Someone just asked me, “Do websites need to be experienced exactly the same in every browser?” (Go to the link now and come back. Go on, I’ll wait…)

While this was initially amusing, my amusement quickly changed to concern. What happens if we stop building sites for the lowest common denominator? I work on a project that still needs to support IE6 because every paying customer needs to be able to use the web application they pay us to provide. I guess I would like to ask the mystery producer of the site (I believe it is http://simplebits.com/) to define "experienced." If it means that every pixel renders in exactly the same way, then no. But if experience means being able to perform the same tasks, my answer is ABSOLUTELY.

When I go to YouTube, I want to watch a video of some sort, usually a dumb video my father-in-law just emailed me. When that web page comes up, I expect to be shown a video. If I am greeted with some snarky message about what a Luddite I am because I use a browser that my IT department makes me use, then in my opinion, your site is a failure. Granted, because you just called me a Luddite, I am pretty sure my opinion matters very little to you. If, on the other hand, you play a pixelated video for me because that's the best my lame IT-department-installed browser can render, and you also politely inform me that the video would be way cooler if I viewed it on my teenage daughter's i9000 giga-core quantum computer, then your site is a success. Hey, I might even annoy my daughter by invading her inner sanctum to view the video on my second mortgage of a computer and see what all the fuss is about.

In fact, just this morning I was trying to log into my health care insurance provider's site to research providers and was confronted with a message that I had locked my account. How did I do that? Well, I made the silly mistake of browsing the site with my iPad. Their support staff informed me that the site didn't work reliably in Safari and that I should use Internet Explorer. My task of doing a simple directory search now required me to go to my office and boot my computer, because someone couldn't be bothered to make an HTML login page work correctly on the iPad?

It is okay to use a given browser's full potential to make a user experience better, easier, and faster. But we must remember that, as the creators of these websites, we are by definition more advanced than the users we build them for. And if you are like me, with bills to pay and an income to generate from those sites, then you want to reach as many users as you can.

Jimmy Bosse is a Senior .NET developer and Team Lead at Thycotic Software, an agile software services and product development company based in Washington DC. Secret Server is our flagship password management software product.

Code Contracts in dot NET

June 24th, 2010 | Kevin Jones

It is often desirable for developers, particularly those who build public APIs, to communicate to third-party developers what their code expects. For example, say I have a product that allows plugins to be written for it. A plugin can call an API I provide called Print. Print takes a single string; however, the string should never be null.
Nonetheless, we can't guarantee a developer will never pass in null; there may be a bug in their application. To prevent serious errors from occurring, we need to ensure our parameter is never null. We can do this with an exception.

public void Print(string data)
{
    if (data == null)
        throw new ArgumentNullException("data", "This parameter must not be null.");

    //Implementation omitted
}

This is a very typical pattern, but it leaves us wanting more. The developer won't know he is giving the API a value that cannot be used until he attempts to run the application. Nothing is stopping the compiler from compiling.
Statically checking whether our code violates a contract is a powerful feature. An experimental language called Spec# supports this, but unfortunately, being experimental, it is difficult to use in real-world scenarios.
Enter Code Contracts. They are now built into the .NET Framework 4; all you need to do is download an add-on for Visual Studio 2008 or 2010. In the .NET Framework 4, the magic happens in the System.Diagnostics.Contracts namespace. You can download the add-on from here:
http://msdn.microsoft.com/en-us/devlabs/dd491992.aspx
Once installed, you will see a new tab on your project’s properties called “Code Contracts”.

I've configured a few key options on the Code Contracts tab. I've set "Perform Runtime Contract Checking" to Full, and checked "Perform Static Contract Checking", "Check in Background", and "Show squigglies". Next, we need to define the contracts. We can use the Contract class to express them. Our method now looks like this:

public static void Print(string data)
{
    Contract.Requires<ArgumentNullException>(data != null);
    //Implementation omitted
}

This will have the same behavior as before: If data is null, then an ArgumentNullException is thrown. However, in addition to the runtime failure, we will also see a warning when we compile and squigglies under the code.
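For example, a caller along these lines (a minimal sketch using a local variable r) gets flagged by the static checker because r might be null:

static void Main(string[] args)
{
    string r = null;
    Print(r);   // the static checker warns that the precondition data != null is unproven
}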

And of course, if we set “r” to something not null, the contract is happy! However, what happens if r is getting data from somewhere else?

static void Main(string[] args)
{
    string r = SomeMethodThatWillNeverReturnNull();
    Print(r);
}

private static string SomeMethodThatWillNeverReturnNull()
{
    return "notnullstring";
}

The contract analyzer isn't smart enough to check and see what the method actually does; as far as it knows, the method could potentially return null. We know better than that, so we can tell the Code Contracts analyzer that the return value is never null. This is done with a post condition using Contract.Ensures. A post condition is a contract that ensures a condition is met when the method is exiting. In our case, we want a post condition that says SomeMethodThatWillNeverReturnNull won't return null.

private static string SomeMethodThatWillNeverReturnNull()
{
    Contract.Ensures(Contract.Result<string>() != null);
    return "notnullstring";
}

This tells the analyzer that the method will never return null. The call to Print is now satisfied, knowing that its source of data cannot return null. If SomeMethodThatWillNeverReturnNull could return null, then the post condition would fail.
Let's go over what we are seeing. Contract.Requires indicates that the Print method requires its data parameter to be non-null before the method body executes. If the condition fails, an exception is raised at runtime, and a warning is generated at compile time.
Contract.Ensures is a post condition. It indicates that before the method exits, that condition must be met. Though it is a post condition, it is still recommended that the constraint be placed at the top of the method. Let's say we need our code to meet this contract:

  1. Print must never accept a null string.
  2. Print must never accept a string that is greater than 1000 characters.
  3. When Print is called, we must set dataWasPrinted to true.

Here is what our contract looks like:

class Program
{
    private static bool dataWasPrinted = false;

    static void Main(string[] args)
    {
        string data = GetData();
        Print(data);
    }

    private static string GetData()
    {
        Contract.Ensures(Contract.Result<string>() != null);
        Contract.Ensures(Contract.Result<string>().Length <= 1000);
        return "data from file";
    }

    public static void Print(string data)
    {
        Contract.Requires<ArgumentNullException>(data != null);
        Contract.Requires<ArgumentException>(data.Length <= 1000);
        Contract.Ensures(dataWasPrinted == true);
        try
        {
            //Implementation omitted
        }
        finally
        {
            dataWasPrinted = true;
        }
    }
}

This is a powerful concept and can be extremely useful. I wouldn't recommend putting contract validation on all of your code, just on public methods that will be exposed to others. I might also recommend using it in critical cases where a specific contract must be met. It isn't a replacement for unit tests or mock testing; rather, it is another tool in the belt that makes a software developer's life easier.

Kevin Jones is a Team Lead at Thycotic Software, an agile software services and product development company based in Washington DC. Secret Server is our flagship password management software product. On Twitter? Follow Kevin


The Danger of Single Responsibility in Programming Continued

October 16th, 2009 | David Cooksey

Doug Rohrer responded to my initial post on this topic with a good refactoring of the classes involved in a manner similar to the Strategy pattern. I agree with many of his points—the hypothetical developer certainly chose the wrong responsibilities; misunderstood the Single Responsibility Principle; and generally made the code a mess. That said, I believe that SRP is most definitely dangerous, not because of what happens when it is used correctly, but because of how easy it is to get it wrong. Misapplying the SRP can result in code that makes God objects look easy to maintain.

For clarity’s sake, I’ll go one step further—it is easy to misunderstand the sentence “A class should have only one reason to change” as a literal commandment to be applied at the line or method level. This results in disaster. One common example of how the SRP is misunderstood can be seen in this thread, where the poster asks if the SRP means that each class can have only one method. Luckily the poster received a good informative answer, but that is not the case for all developers learning about the SRP.

Here is an example of code modifications I have seen motivated by a desire to apply the SRP.

BEFORE

public class Check
    {
        private readonly IDataProvider _dataProvider;

        public Check(IDataProvider dataProvider)
        {
            _dataProvider = dataProvider;
        }

        public bool Run()
        {
            IBusinessObject data = _dataProvider.Get();

            if (data.Condition1 && data.Condition2)
            {
                string message = string.Format("Check failed, {0} {1}", data.Property1, data.Property2);
                throw new Exception(message);
            }
            return data.Property3 != data.Property4;
        }
    }

AFTER

public class Check
    {
        private readonly IDataProvider _dataProvider;
        private readonly ICheckErrorMessageProvider _checkErrorMessageProvider;

        public Check(IDataProvider dataProvider, ICheckErrorMessageProvider checkErrorMessageProvider)
        {
            _dataProvider = dataProvider;
            _checkErrorMessageProvider = checkErrorMessageProvider;
        }

        public bool Run()
        {
            IBusinessObject data = _dataProvider.Get();

            if (data.Condition1 && data.Condition2)
            {
                string message = _checkErrorMessageProvider.GetErrorMessage(data);
                throw new Exception(message);
            }
            return data.Property3 != data.Property4;
        }
    }
public class CheckErrorMessageProvider : ICheckErrorMessageProvider
    {
        public string GetErrorMessage(IBusinessObject data)
        {
            return string.Format("Check failed, {0} {1}", data.Property1, data.Property2);
        }
    }

Here, the developer asked the SRP question "Does this class have only one reason to change?", got the answer "No, it could change because the formatted text could change, or because the logic could change", and refactored the string.Format call out into its own provider. While harmless on the surface, this artificial separation of concerns does not add any value. The new class is so specific that it cannot be used anywhere else. In addition, the developer is likely to forget the CheckErrorMessageProvider name almost immediately, so if a text change is required he will most likely go to the Check class first and then go the extra level down into the string provider in order to make the change. In other words, the complexity of the code was increased for no benefit.

I believe that after correctness, simplicity is the most important programming principle. Simpler code is easier to understand when first read; easier to remember; easier to test; easier to refactor; and easier to add features to. Anything that adds complexity makes all of these tasks harder, especially on larger projects with many non-trivial sub-systems. Applying single responsibility at the line or method level diffuses business logic into a cloud of tiny classes that do next-to-nothing individually, and thoroughly obscure the logic they represent.

In conclusion, yes, the SRP is not dangerous when applied correctly. But then, most things are dangerous because of what happens when they are misused, and the Single Responsibility Principle is no exception. Handle with care!

David Cooksey is a Senior .NET Consultant at Thycotic Software, an agile software services and product development company based in Washington DC. Secret Server is our flagship password management software product.

The Dangers of Single Responsibility in Programming

October 16th, 2009 | David Cooksey

The principle of single responsibility is an important and well-known object-oriented guideline. Systems designed without any consideration for this principle often result in the God-object anti-pattern, with predictably unfortunate results. However, taking single responsibility too far also results in code that is difficult to read and maintain.

Let’s take a look at a possible system built for a company that sells a wide variety of products. This system was originally built around a Product object, and over the years the product object has continued to grow and acquire responsibility as new features were added to the system. The IProduct interface now looks like this:

  public interface IProduct
  {
    int Id { get; set; }
    string Name { get; set; }
    int TypeId { get; set; }
    DateTime AddedDate { get; set; }
    decimal Cost { get; set; }
    decimal BasePrice { get; set; }
    decimal SalesPrice();
    decimal Price(int customerId);
    decimal Discount(int customerId);
    decimal GetShippingCharge(int customerId, string stateCode);
    int GetMinQuantity(int customerId);
    int GetNumAvailable(int customerId);
  }

This company has built its business model on special services for frequent customers, including special discounts, shipping rates, lower base prices, etc. Some customers receive lower prices on specific products in return for promises to order at least a certain quantity each time. The net result is that the full cost depends on who and where the customer is as much as it depends on the product itself.

Now imagine that a developer on this project has read about the fascinating new concept of single responsibility and decides that IProduct is responsible for too much. In fact, it’s responsible for everything. So he creates a PriceProvider that contains a GetPrice method as shown below, moving the logic of the method directly from the Product class to the PriceProvider.

    public decimal GetPrice(IProduct product, int customerId)
    {
      decimal price = product.BasePrice;
      ICustomer customer = GetCustomer(customerId);
      if (customer.GoldLevelCustomer)
      {
        price = price * (1 - GetGoldLevelDiscount());
      }
      if (ProductIsOnSale() && !FixedDiscountAgreementExists(customer, product))
      {
        decimal salePrice = product.SalesPrice();
        if (salePrice < price)
        {
          price = salePrice;
        }
      }

      return price;
    }

So far, so good. The logic is adequately complex and involves enough dependencies that it should probably exist in its own class. Initially, our developer is happy. However, as he continues to look at this method, he decides that it is doing a lot more than it should. After all, any number of business logic changes could make this class change, and a class should have only one reason to change, right? So he rolls up his sleeves and gets to work, eventually producing the following:

    public decimal GetPrice(IProduct product, int customerId)
    {
      decimal price = product.BasePrice;
      ICustomer customer = GetCustomer(customerId);
      if (goldLevelDeterminator.IsGoldLevelCustomer(customer))
      {
        price = price * (1 - goldLevelDiscountProvider.GetDiscount(product));
      }
      if (saleProvider.IsOnSale(product) && !fixedDiscountAgreementProvider.AgreementExists(customer, product))
      {
        decimal salePrice = product.SalesPrice();
        if (useSalesPriceInsteadOfPriceDeterminator.UseSalesPrice(price, salePrice))
        {
          price = salePrice;
        }
      }
      return price;
    }

The goldLevelDiscountProvider, saleProvider, and fixedDiscountAgreementProvider probably refer to their own tables, given the code structure shown, so it makes sense to split them out. However, the goldLevelDeterminator is literally calling the GoldLevelCustomer property on customer and returning it, and the useSalesPriceInsteadOfPriceDeterminator is simply comparing the sales price to the price.

These latter two changes are examples of implementing the principle of single responsibility at a level of granularity below that of the business requirements. It is possible that the company’s needs will change in such a way that these classes will become necessary, but they do not need their own class unless and until their complexity warrants it. The creation of two determinator classes here implies that significant logic is involved in determining whether a customer is a gold level customer, or whether the sales price should be used.

Unnecessary classes like the two mentioned above cause a number of problems. Firstly, they increase the complexity of the code. A developer reading through this class for the first time will need to open both determinators up and mentally piece their contents back into the main function instead of simply reading them. Secondly, their existence as independent entities implies that they are reusable. However, their creation was based solely on a desire to separate method calls into their own classes, not on a thorough investigation of how each class meshes with the rest of the classes in the project. Quite often, classes like these are not reused, and in fact their functionality is duplicated in other tiny classes used in other places.

In short, when you’re designing or refactoring systems, plan your class structure around business needs, not logical decision points. A method with two if statements should not automatically be considered as having two reasons to change. After all, those two if statements may represent a single business concept.

David Cooksey is a Senior .NET Consultant at Thycotic Software, an agile software services and product development company based in Washington DC. Secret Server is our flagship password management software product.

The Design and Development Benefits of CSS

August 14th, 2009 | Tom Lerchenfeld & Josh Frankel

Tables belong in living rooms, not HTML

Long-term sustainability of code is the biggest challenge and the central concern in building an application. It's what the cyclical "O" in the Thycotic logo represents (and you thought we were into recycling), and it's the ultimate goal of every application we build.

The first challenge encountered when building a Web site or application is keeping the architecture lean while anticipating changes that may arise years down the line. Anyone who has worked with a mature Web application knows the horror of making a 'minor' change to something that sits within a series of nested tables. Nested tables are like Russian dolls, with each table yielding a smaller table inside. This is cute in crafts, but debilitating in code.

Nice UI, let’s use it for the next 10 years

Tables present just one example of designing without future flexibility in mind. Inline styles can also make it nearly impossible to change or even tweak a UI.

Cascading Style Sheets (CSS) has been implemented in many Web sites and Web apps, but its true potential lies in a back-end architecture that anticipates change and involves minimal back-end development, yet affects the UI in a profound way. That was quite a mouthful, so let me use two Web pages from the CSS Zen Garden to illustrate my point:

Want to guess what they have in common? They are both using exactly the same HTML and content.

Seriously, go check it out www.csszengarden.com.

For a back-end developer this means that generic classes and specific ids allow the overall UI to be completely controlled through CSS and other front-end techniques.

Keeping team code the same
A common problem in many businesses is that everyone works on a separate aspect of a project and then brings it together and tries to make it look uniform. Inline styles, BRs, and HTML tags can make a page look the way you want it to, but when you want to change the look of your site, you’ll find these methods can be a real pain. The problem will be even greater for your designer. Each page will be a single design, too complex to change easily and cost-effectively.

Checking for uniformity
I recently came across a test that uses a simple file reader to check Web pages for uniformity. The reader grabs all the .aspx files from the source, identifies 'bad' tags, and recommends changes. This solution saves hours of trawling through code manually. If you're daring, write the method so that it changes the tags on its own, saving even more time. If you're using unit tests, this test will fail whenever a bad tag is present, alerting a new programmer not to use it.
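A minimal sketch of such a check, written as an NUnit-style test; the source directory and the list of 'bad' tags are illustrative:

using System.IO;
using NUnit.Framework;

[TestFixture]
public class MarkupUniformityTests
{
    // Hypothetical path to the web project and the tags we no longer allow.
    private const string SourceRoot = @"..\..\..\WebSite";
    private static readonly string[] BadTags = { "<table", "<br", "<font", "style=\"" };

    [Test]
    public void PagesShouldNotContainBadTags()
    {
        foreach (string file in Directory.GetFiles(SourceRoot, "*.aspx", SearchOption.AllDirectories))
        {
            string markup = File.ReadAllText(file).ToLowerInvariant();

            foreach (string badTag in BadTags)
            {
                Assert.IsFalse(markup.Contains(badTag), "Found '" + badTag + "' in " + file);
            }
        }
    }
}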

Coding for your designer
If you use Master Pages in ASP.NET, you already know the benefits. Other languages have similar templates you can use or make. But how far should you take them? Our designer has suggested splitting our Web site into three different types of pages. This way we won’t have to recreate the layout every time we add a new page.

* Login, logout, and other similar unauthenticated pages will have one template
* Configuration and site maintenance pages will have another
* Content pages for every user will be the third template, with specific ids making them unique

The benefit to this is that when your designer writes the CSS for these pages, he can change them all at once. Of course every page will be slightly different, but that’s where specific ids come in. With this method, when you change your site layout in the future, you don’t have to do it page-by-page.

Adding to your site
If you think this sounds like too much effort, think about the time it takes you to create a brand new page. Each one will most likely have to be designed to match the look of your other pages… adding tables, spacing, BRs, inline styles… the works. If you are set on your site and you don't see much expansion in your future, you're probably OK for now. But if your Web site is in an expansion stage, consider the time it takes you to create a new page, and how drastically a templated page would decrease the effort.

Maximizing usability
I can’t walk down the street without seeing someone looking at a Web page on a 4″ phone screen. Many people now have phones that can browse the internet to one extent or another. Some Web elements show up better on phones and PDAs than others. Lists, divs, and paragraph formatting all show up great. With CSS, your site will show up effectively on a variety of small wireless devices. But tables, Flash, and images that act as spacers don’t scale well, and show up as bits and pieces of an otherwise functional Web site.

As you can see, with CSS there are no losers. It’s not really a case of whether or not you should embrace the language—it’s simply a case of how soon you can do it.

Tom Lerchenfeld is interning as a TDD / .NET Developer and Josh Frankel is a graphic designer. Both work for Thycotic Software, an agile software services and product development company based in Washington DC. Secret Server is our flagship password management software product. You can follow Josh on Twitter.