August 14th 2009 | Tom Lerchenfeld & Josh Frankel
Tables belong in living rooms, not HTML
Long-term sustainability of code is the biggest challenge and the central concern in building an application. It’s what the cyclical “O” in the Thycotic logo represents (and you thought we were into recycling), and it’s the ultimate goal of every application we build.
The first challenge in building a Web site or application is keeping the architecture lean while anticipating changes that may arise years down the line. Anyone who has worked with a mature Web application knows the horror of making a ‘minor’ change to something buried in a series of nested tables. Nested tables are like Russian dolls, each table yielding a smaller table inside. This is cute in crafts, but debilitating in code.
Nice UI, let’s use it for the next 10 years
Tables present just one example of designing without future flexibility in mind. Inline styles can also make it nearly impossible to change or even tweak a UI.
Cascading Style Sheets (CSS) is used on many Web sites and Web apps, but its true potential lies in an architecture that anticipates change: with the right markup, minimal back-end development can affect the UI in a profound way. That was quite a mouthful, so let me use these two Web pages to illustrate my point:
Want to guess what they have in common? They are both using exactly the same HTML and content.
Seriously, go check it out at www.csszengarden.com.
For a back-end developer this means that generic classes and specific ids allow the overall UI to be completely controlled through CSS and other front-end techniques.
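As a minimal sketch of what this looks like in practice (the class and id names here are invented for illustration), the markup stays fixed while the stylesheet carries the whole design:

```html
<!-- The HTML never changes between designs -->
<div id="header" class="banner">
  <h1>Monthly Report</h1>
</div>

<style>
/* theme-a.css: a generic class styles every banner on the site */
.banner { background: #336699; color: #fff; padding: 1em; }

/* a specific id lets one page depart from the generic rule;
   swapping in theme-b.css restyles everything without touching markup */
#header h1 { font-size: 1.5em; margin: 0; }
</style>
```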
Keeping team code the same
A common problem in many businesses is that everyone works on a separate aspect of a project, then brings it together and tries to make it look uniform. Inline styles, BRs, and presentational HTML tags can make a page look the way you want it to, but when you want to change the look of your site, these methods become a real pain. The problem is even greater for your designer: each page is a one-off design, too complex to change easily and cost-effectively.
Checking for uniformity
I recently came across a test that uses a simple file reader to check Web pages for uniformity. The reader grabs all the .aspx files from the source, identifies ‘bad’ tags, and recommends changes. This solution saves hours of trawling through code manually. If you’re daring, write the method so it changes the tags on its own, saving even more time. If you’re using unit tests, this check will fail whenever a bad tag is present, alerting a new programmer not to use it.
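A sketch of what such a checker might look like (the folder path and the list of ‘bad’ tags are assumptions, not our actual test):

```csharp
using System;
using System.IO;

public class UniformityCheck
{
    // Hypothetical list of tags we want to keep out of our pages.
    private static readonly string[] BadTags = { "<table", "<br", "<font", "style=" };

    public static void Main()
    {
        // Grab every .aspx file in the source tree and flag offenders.
        foreach (string file in Directory.GetFiles(@"C:\MySite", "*.aspx",
                                                   SearchOption.AllDirectories))
        {
            string content = File.ReadAllText(file).ToLowerInvariant();
            foreach (string tag in BadTags)
            {
                if (content.Contains(tag))
                {
                    Console.WriteLine("{0}: contains '{1}' - consider CSS instead",
                                      file, tag);
                }
            }
        }
    }
}
```

Wrapping the same loop in an NUnit assertion turns it into the failing test described above.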
Coding for your designer
If you use Master Pages in ASP.NET, you already know the benefits. Other languages have similar templates you can use or make. But how far should you take them? Our designer has suggested splitting our Web site into three different types of pages. This way we won’t have to recreate the layout every time we add a new page.
* Login, logout, and other similar unauthenticated pages will have one template
* Configuration and site maintenance pages will have another
* Content pages for every user will be the third template, with specific ids making them unique
The benefit to this is that when your designer writes the CSS for these pages, he can change them all at once. Of course every page will be slightly different, but that’s where specific ids come in. With this method, when you change your site layout in the future, you don’t have to do it page-by-page.
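A minimal sketch of the third template as an ASP.NET Master Page (the file and placeholder names are made up for illustration):

```aspx
<%-- Content.master: shared layout for all user-facing content pages --%>
<%@ Master Language="C#" %>
<html>
<head runat="server">
  <link rel="stylesheet" href="site.css" />
</head>
<body>
  <div id="header">...</div>
  <asp:ContentPlaceHolder ID="MainContent" runat="server" />
</body>
</html>
```

Each content page then picks up the layout and adds its own specific id:

```aspx
<%@ Page MasterPageFile="~/Content.master" %>
<asp:Content ContentPlaceHolderID="MainContent" runat="server">
  <div id="reports-page">...</div>
</asp:Content>
```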
Adding to your site
If you think this sounds like too much effort to apply, think about the time it takes you to create a brand new page. Each one will most likely have to be designed to match the look of your other pages…adding tables, spacings, BRs, inline style… the works. If you are set on your site and you don’t see much expansion in your future, you’re probably ok for now. But if your Web site is in an expansion stage, consider the time it takes you to create a new page, and how drastically a templated page would decrease the effort.
I can’t walk down the street without seeing someone looking at a Web page on a 4″ phone screen. Many people now have phones that can browse the internet to one extent or another. Some Web elements show up better on phones and PDAs than others. Lists, divs, and paragraph formatting all show up great. With CSS, your site will show up effectively on a variety of small wireless devices. But tables, Flash, and images that act as spacers don’t scale well, and show up as bits and pieces of an otherwise functional Web site.
As you can see, with CSS there are no losers. It’s not really a case of whether or not you should embrace the language—it’s simply a case of how soon you can do it.
February 19th 2009 | David Cooksey
The Power of Yield Return
While working on a project recently I ran into a problem that was very difficult to track down. We have a good number of checks that validate business rules on the main business object within the system. Sometimes these checks need to run on related business objects as well. The problem arises because the checks, at the time of their creation, do not have a direct reference to the business object they will be run on. Since the checks have dependencies that vary with their intended target, the convention is to push an Id onto a static stack, create the checks, run the checks, then pop the Id off of the stack.
Unfortunately, it is very easy to forget to update the stack, or to push but not pop. This results in some checks being set up and then run using the wrong dependencies, which causes all kinds of obscure behavior down the road.
Enter Yield Return.
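The idea, sketched here with placeholder names (StaticStack and BusinessObject stand in for the real types), is an iterator that pushes the Id before yielding each object and is guaranteed to pop it afterward:

```csharp
using System.Collections.Generic;

public static class CheckContext
{
    // Wraps a collection of business objects so the Id stack
    // is always balanced around each object's processing.
    public static IEnumerable<BusinessObject> WithContext(
        IEnumerable<BusinessObject> objects)
    {
        foreach (BusinessObject obj in objects)
        {
            StaticStack.Push(obj.Id);   // set up dependencies for this object
            try
            {
                yield return obj;       // the caller runs its checks here
            }
            finally
            {
                StaticStack.Pop();      // runs even if the caller throws
            }
        }
    }
}
```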
This is a simple wrapper that guarantees the StaticStack is updated correctly. Even if the calling code throws an exception, the Pop() function will still be called. Note that the real code uses a context-specific stack; the sample code presented here does not handle multi-threaded situations.
Any foreach statement that iterates over a collection of business objects can then be updated as follows.
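Sketched with the same placeholder names, the call site simply wraps the collection:

```csharp
foreach (BusinessObject obj in CheckContext.WithContext(relatedObjects))
{
    // StaticStack holds obj.Id for the duration of this iteration,
    // so the checks pick up the correct dependencies.
    RunChecks(obj);
}
```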
What I find most interesting about this pattern is that it allows the called function to run any arbitrary setup or teardown code per object it returns, in sync with the caller’s processing of the object.
David Cooksey is a Senior .NET developer at Thycotic Software Ltd.
I will be presenting at the WinProTeam Vienna .NET Users Group on 3/1/2006.

We use NAnt and CruiseControl.NET for all of our products and client projects. The power of a continuous build for improved quality, rapid feedback and engaging your customer is enormous! The arrival of MSBuild and its integration into the Visual Studio suite of tools adds another great option to your possible build solutions. If you are still building and deploying your application on a developer’s workstation, think again.
Here is the presentation blurb:

NAnt is a free, open source tool that provides a simple extensible XML-based format for doing all sorts of things and can even be customized! See how to: separate your development, staging and production environments – automate your test runs – archive build versions – develop custom NAnt tasks – write C# script for NAnt. MSBuild is Microsoft’s response to the need for an XML-based build tool that is an integral part of the new Visual Studio 2005 platform. Come learn how to use these tools and the enormous difference they can make to the quality of your software.
The presenter is the founder of thycotic, a .NET consulting company and ISV in Washington DC. thycotic has just released Thycotic Secret Server, a secure web-based solution to both “Where is my Hotmail password?” and “Who has the password for our domain name?”. Secret Server is the leader in secret management and sharing within companies and teams.
I presented on “Build and Deploy with NAnt” (slides/code available) at the WinProTeam Rockville .NET User Group on 11/4.
The following are some of the questions (and my answers) from the event:
- Can NAnt be used to compile projects in any .NET language? (particularly C++ projects)
Tracing the source code for the task in NAnt.VSNet.ProjectFactory (which appears to control the derived ProjectBase class that actually does the compilation) shows that only C#, VB.NET and C++ projects are currently supported.
if (projectExt == ".vbproj" || projectExt == ".csproj")
- How does the build and deploy cycle work with multiple developers?
This technique thrives in the multiple developer environment since it provides an isolated, predictable place (your integration server) to do all builds, tests, etc. This guarantees that the build is easily reproducible and independent of your development team. Your build should come directly from the source repository, which then is the interface for concurrency issues on your team. Certain protocols are still necessary, such as: don’t check in code that doesn’t compile or doesn’t pass tests.
While the technique works well for teams, a single developer can still take advantage of a build and deploy technique. The developer will gain many of the same advantages by always putting the build together in a predictable, reproducible manner (even if he doesn’t have a dedicated integration server). It also allows new versions to be quickly deployed by automating the process.
- How do you manage versions and configuration across your QA environment (test, stage, prod)?
Configuration can be managed either by using the task or by subNAnting. Configuration within the application can be handled in a number of ways (file driven, database driven, etc) with NAnt copying over configuration files as necessary.
Versions should be moved across your QA environment using the idea of a “lastknowngood” build: a successful build that has passed all tests. It would be deployed to test and verified by the QA team, then promoted to stage (and tested again), then to production, and so on. The idea is to move your build through each stage, ensuring quality at each step.
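One way this might look in NAnt (the target, property and directory names are invented for illustration):

```xml
<!-- after a green build and test run, set the build aside -->
<target name="mark.lastknowngood" depends="build, test">
  <copy todir="${builds.dir}/lastknowngood" overwrite="true">
    <fileset basedir="${build.output.dir}">
      <include name="**/*" />
    </fileset>
  </copy>
</target>

<!-- every deployment (test, stage, prod) starts from lastknowngood -->
<target name="deploy.stage">
  <copy todir="${stage.dir}">
    <fileset basedir="${builds.dir}/lastknowngood">
      <include name="**/*" />
    </fileset>
  </copy>
</target>
```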
Please note these ideas are just one way to do it. NAnt provides a wealth of options and you can always choose the best set that meets your needs, environment and customer.
On our current project we have been running our NAnt build script using a Windows Scheduled Task. While this never seemed particularly sophisticated, it did the job, especially given that our build (plus all test suites) takes around 2 hours to run! Today, I installed CruiseControl.NET for a dabble – my last experience was many moons ago, when SourceGear Vault support was still a do-it-yourself affair.
The SourceGear Vault support was flawless – very easy to configure and it worked perfectly. Getting the ccnet.config file to work was a little confusing: first, realizing that the tag actually goes into ccnet.config, not somewhere in ccnet.exe.config as a .NET developer might expect; then the silly hassle with the merge task exception because the docs show the wrong element name (files/file); and the CruiseControl.NET website with the latest documentation kept going down with proxy errors. (End of rant!) But the persistence was worth it: it monitors the source repository and automatically kicks off builds, then tracks the changes and lets you view them in the bundled ASP.NET app.
My initial concern about CruiseControl.NET was what advantage it would give to kick off builds from source file checkins when your build time is 2 hours anyway! It seems that the secret is to limit your essential build tasks to a manageable, timely set and then have your longer test suites (typically integration tests) run on a different periodic schedule.
One thing I did notice was that the ASP.NET portion of CruiseControl.NET seems a little slow, since it appears to parse the log file on every request to display summary information. This seems inefficient since the log files do not change after the build. Adding a simple OutputCache directive to the Default.aspx page caused the pages to be cached and made clicking around the UI a much more pleasant experience.
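The directive is a one-liner at the top of Default.aspx (the duration below is an arbitrary choice; VaryByParam ensures each build’s log page is cached separately):

```aspx
<%@ OutputCache Duration="300" VaryByParam="*" %>
```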
The key benefits that CruiseControl.NET provides over our scheduled task approach are:
- Doesn’t needlessly spin the integration server unless code is being changed
- Quicker feedback to developers when the build goes wrong (tying it back to the offending checkin)
- Easy visibility into the build process which otherwise is a little complicated to debug
- Summary information on unit test suites
- Simplified build versioning
If you haven’t tried CruiseControl.NET yet, take it for a whirl!
I will be presenting on NAnt and how to use it to build and deploy your software – the session will touch on the commonly used tasks and how to use them in your integration process. It promises to be a great jumpstart for the NAnt newbie and may even show the seasoned NAnt’er a thing or two – such as C# scripting in NAnt and building your own custom tasks.
NAnt is a wonderful extensible framework for doing all sorts of things – so come along …
The event will be held at the WinProTeam Rockville (Maryland) User Group on 11/4 at 6:30pm.
[Updated link - thanks Dylan]
On my current project, we have 5 different environments – bleeding, stable, test, stage and production. The application is ASP.NET-based but also has a complicated Extraction Transformation and Loading (ETL) process which synchronizes the application’s data with a legacy system. All of this requires *a lot* of configuration!
How do you manage this configuration across 5 environments with continuous integration and automated deployment?
SubNAnting! This is my contribution as a new word, but alas the technology is not mine. NAnt, which does a wonderful job of building and deploying software, also has a task which allows you to execute a NAnt script from another NAnt script while inheriting all of the launching script’s variables (if you set the inheritall attribute to true).
This allows you to specify your configuration variables in a “bootloader” NAnt script that then calls the main NAnt script which does the deployment. That way your main NAnt script is the same across all environments and only the “bootloader” script changes.
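A sketch of one such bootloader (file names, properties and paths are invented for illustration):

```xml
<!-- stage.build: a tiny per-environment bootloader -->
<project name="stage-bootloader" default="go">
  <property name="environment" value="stage" />
  <property name="web.dir" value="\\stageserver\wwwroot" />

  <target name="go">
    <!-- inheritall="true" passes the properties above down -->
    <nant buildfile="main.build" inheritall="true" />
  </target>
</project>
```

Only this small file differs per environment; main.build stays identical everywhere.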
Anyone else using this technique?
My current project (which I am very excited about!) is building an internet-facing ASP.NET application for a high-profile function. It involves building business objects to map to the database and general functions of the system and then mapping those into the UI. And now the fun part … the entire application has been built using Test Driven Development (TDD)!
- SourceGear Vault for Source Control
- Visual Studio .NET 2003
- DAL – Thycotic.Data
- Business objects – hand coded (with the help of a custom rolled generator) and using NullableTypes for exposed properties
- NAnt script for pulling out of Vault, compiling, running NUnit unit tests and pushing successful builds to integration server (running every 30 mins on a scheduled task – CruiseControl.NET doesn’t support Vault yet …)
What is the problem? TDD is new to the other developers and management. This means that occasionally there is a tendency to not test drive and just add a feature (null check, private method, etc) without a failing test. If we were always pair programming this would be less of an issue, but our deadline is too tight to lose the estimated 15% additional time required to pair program. We did pair program on really critical areas of the system – base objects, establishing our data access pattern, exception management and security. However, the areas of missed TDD code are a great risk as they stand the best chance of containing bugs and swallowing developer time (and this has already been experienced on a few occasions). In true XP style we need an automated tool to help us catch any lack of coverage …
- Integrates with our NAnt process (copy the source tree, instrument it, run the unit tests, generate a coverage report)
- Provides a metric that we can track and makes management happy by giving them numbers to confirm the TDD process (example report from the NCover website)
- Points us towards the code of least coverage (and greatest risk) to allow us to either delete it (dead code) or write a test for it (not the nicest … but better than not having a test!)
- It is automated and provides continuous feedback with no intervention required once configured
I am very impressed by NCover. We have looked at the code (which compiled and passed all its unit tests right after the download!) and it is really neat. Well done to the developers! It would be nice to know the coverage strategy it is using (it uses a set of regular expressions at the moment to find branches in C# code) … after looking at the instrumented code, it may need a little tweaking to catch all possible execution paths, but it is an amazing and very welcome product.