Friday, March 19, 2010

The Application of Pareto's Principle to the Adoption of Agile Practices - Part 1 of N

Starting this evening, I will be attending the Agile Coach Camp in Durham, NC. As the only registration fee for attending the ACC is to submit a position paper on a topic of interest to you, I submitted the following abstract.
The Application of Pareto's Principle to the Adoption of Agile Practices
If you believe in Pareto's Principle (otherwise known as the 80-20 Rule), then you believe that it can be applied literally everywhere. At their heart, Agile practices are about doing what works and ignoring the rest (at least until the time is right). In a world where people are constantly searching for silver bullets, getting distracted by zealot turf wars, and feeling the crunch of deadlines, novice adopters of Agile practices need to learn what out of "agile" is immediately important for their situation, and what they can safely ignore until a later point in time.
So I figured, why not put some more concrete thoughts together before the ACC starts. This post is the first of probably a couple on this topic, with at least one follow-up if my topic gets accepted as a discussion topic. Seeing how I average roughly 5 hits/month on this blog, it wouldn't hurt to get a little more traffic!

I also want to emphasize one important point to avoid having to defend a position that I'm not taking. This opinion of mine revolves around the adoption of agile practices. It is in no way an attempt to water down agile, as every individual practice discussed below certainly adds value and should be part of the full vision for any team attempting to go agile. This opinion simply revolves around getting the most bang for your buck up front, so that you can get all of the naysayers on board the agile train sooner rather than later.

Review of the 80-20 Rule
The 80-20 Rule is typically explained as follows: for any given "effort", the first 80% of the results you are looking for will be accomplished with only 20% of your time and energy, while the final 20% of the results will take the other 80% of your time and energy. When applying this to the adoption of agile practices in software development, the gist is that only 20% of the ideas/topics will get you 80% of what you were looking for when you decided to start using agile practices.

Review of Agile topics
For the sake of discussion, I'm going to use part of James Shore's table of contents from his book The Art of Agile Development. The following is Part II - Practicing XP from his ToC, as these are the concrete practices that his book discusses:
5. Thinking
(1. Pair Programming, 2. Energized Work, 3. Informative Workspace, 4. Root-Cause Analysis, 5. Retrospectives)

6. Collaborating
(1. Trust, 2. Sit Together, 3. Real Customer Involvement, 4. Ubiquitous Language, 5. Stand-Up Meetings, 6. Coding Standards, 7. Iteration Demo, 8. Reporting)

7. Releasing
(1. "Done Done", 2. No Bugs, 3. Version Control, 4. Ten-Minute Build, 5. Continuous Integration, 6. Collective Code Ownership, 7. Documentation)

8. Planning
(1. Vision, 2. Release Planning, 3. The Planning Game, 4. Risk Management, 5. Iteration Planning, 6. Slack, 7. Stories, 8. Estimating)

9. Developing
(1. Incremental Requirements, 2. Customer Tests, 3. Test-Driven Development, 4. Refactoring, 5. Simple Design, 6. Incremental Design and Architecture, 7. Spike Solutions, 8. Performance Optimization, 9. Exploratory Testing)
The above chunk of his ToC represents a total of 37 practices across 5 concept areas. As this opinion paper is not about paraphrasing James Shore's book, I will not attempt to describe any of the above practices more than is necessary to make my points. If you are unfamiliar with any of the valuable practices he describes, buy and read the book.

My 20% - Take 1
Seeing how James broke the 37 practices out into 5 concept areas, it's easy for me to make my first arbitrary application of the 80-20 Rule and grab Chapter 7 (Releasing) from above as the one concept area that will give you the biggest bang for your buck. As software developers, we get paid to produce working software. I've seen many a team over the past 15 years get tripped up when releasing new versions of their software. Some developers spent days to weeks just trying to get the code to compile. Others had to "wing it" during the deployment of the code to a test server (the consummate "it worked on my machine" excuse). If you can't look at your current situation and proudly say "we have the whole release process down pat and can consistently get new releases out with minimal effort", then start with this chapter.

My 20% - Take 2
Given the first application of the 80-20 Rule above to the 5 concept areas, this second application of the rule is going to be across the 37 practices (20% of 37 is 7.4, which I'll round up to 8). My 8 picks out of James Shore's 37 practices are:

#1 - Thinking - Root-Cause Analysis

James Shore's 99-word description:

When mistakes occur, blame your process, not people. Root-cause analysis helps. What allowed the mistake to happen? What will prevent them in the future? Assume people will continue to make mistakes and build fault-tolerance into your improvements.

One approach: ask "why" five times. Use it for every problem you encounter, from the trivial to the significant. You can apply some solutions yourself. Some will require team discussion, and others need coordination with the larger organization.

When mistakes become rare, avoid over-applying root-cause analysis. Balance the risk of error against the cost of more process overhead.

My take on it:
This practice is actually a twist on the 80-20 Rule in that it focuses you on the issue that actually needs to be worked on, rather than the issue that you think needs to be worked on or, worst case, the issue that you would simply like to work on.
#2 - Collaborating - Trust

James Shore's 99-word description:
For maximum productivity, team members must rely on each other for help. They must take joint responsibility for their work. Trust is essential. To improve trust, foster empathy. Learn about your teammates' joys and concerns. Sitting together helps, as does eating together. Preserve productive teams by keeping them together for multiple projects.

Organizational trust is also essential. Be energetic, deliver on your commitments, and be honest about problems. Show that you care about your stakeholders' goals. Promote your work through friendly openness. Always be honest about your achievements. Avoid the temptation to whitewash problems or misrepresent partially-done work.
My take on it:
In life as a whole, a lack of trust leads to a constant downward spiral in every relationship. In agile, we are talking about teams, including both developers and customers, and therefore we are talking about relationships. Without trust, the process is going to fail.
#3 - Collaborating - Real Customer Involvement

James Shore's 99-word description:
To widen your perspective, involve real customers. The best approach depends on your market.

Personal development: you are the real customer. Congratulations. Go forth and write algorithms.

In-house custom development: turn your real customers into on-site customers.

Outsourced custom development: get real customers to be on-site customers. If you can't, stay in touch and meet face-to-face frequently.

Vertical-market and horizontal-market software: beware of giving one customer too much control. Appoint a product manager to understand customer needs. Create opportunities to solicit feedback. Examples: customer review board, beta releases, and user experience testing.
My take on it:
In my corner of the world, teams of "consultants" attack projects with little to no subject-matter expertise, but armed with years of experience in producing working software. This demands that customers be involved throughout the process, as only the customers have the subject-matter expertise necessary for the project's success.
#4 - Releasing - "Done Done"

James Shore's 99-word description:
A story is only complete when on-site customers can use it as they intended. In addition to coding and testing, completed stories are designed, refactored, and integrated. The build script builds, installs, and migrates data for the story. Bugs have been identified and fixed (or formally accepted), and customers have reviewed the story and agree it's complete.

To achieve this result, make progress on everything each day. Use test-driven development to combine testing, coding, and design. Keep the build up to date and integrate continuously. Demonstrate progress to your customers and incorporate their feedback as you go.
My take on it:
Seeing how I called out Releasing as the single most important concept area, I'm going to go ahead and proclaim that this particular practice is the single most important practice of all. Teams must be able to demonstrate progress daily, and the sub-practices of test-driven development and continuous integration are fundamental to enabling that.
#5 - Planning - Vision

James Shore's 99-word description:
Every project needs a single vision, and the product manager must unify, communicate, and promote that vision.

Distance between visionaries and the product manager increases error and waste. If you only have one visionary, the best approach is for him to be product manager. Alternatively, use the visionary's protégé.

Some projects have multiple visionaries. They need to combine their ideas into a unified vision. The product manager facilitates discussion and consensus.

Document the vision with what success is, why it's important, and how you know you've achieved it. Post it prominently and involve your visionaries in planning and demos.
My take on it:
Without a vision, a team can never confidently answer the question "is this topic important?" And without being able to answer that question, teams can get caught up building software that they think is needed without being able to justify it, which ultimately leads to lost productivity when they answer incorrectly.
#6 - Planning - Release Planning

James Shore's 99-word description:
Maximize your return on investment by:

1. working on one project at a time;
2. releasing early and often;
3. adapting your plans;
4. keeping your options open; and
5. planning at the last responsible moment.

Use timeboxing to control your schedule. Set the release date, then manage scope to meet that date. This forces important prioritization decisions and makes the endpoint clear.

Prioritized Minimum Marketable Features (MMFs) and stories form the body of your plan. Demonstrate your progress as you develop and use that feedback to revise your plan.

To minimize rework, develop the details of your requirements at the last responsible moment.
My take on it:
Some adopters of agile think that agile is all about "no documentation and no planning". They couldn't be more wrong, which is why I'm including this practice. As James Shore points out, agile planning is about planning for, and adapting to, change.

This is in contrast to the project managers of the past who created a project plan in MS Project at the beginning of the project, and God help you if you as a developer created a situation where that project manager had to open MS Project after the project started and adjust the plan based on new information.

While I don't believe that there is anything particularly wrong with using MS Project, when using it in an agile environment the project manager maintaining the plan must update it weekly. If they can't do that, then they need to either throw out agile or throw out MS Project.
#7 - Developing - Test Driven Development (TDD)

James Shore's short description:
We produce well-designed, well-tested, and well-factored code in small, verifiable steps.
My take on it:
From a code-construction point of view, if your team is not already using TDD, adopting it will raise the level of quality of the software your team is delivering. Simple as that.
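To make the rhythm concrete, here's a minimal red-green sketch (NUnit assumed; the PriceCalculator and its discount rule are invented purely for illustration). The test is written first and fails, then the simplest production code makes it pass:

using NUnit.Framework;

namespace Your.Tests
{
    [TestFixture]
    public class PriceCalculatorTests
    {
        // Written first; it fails until Total() below exists and works.
        [Test]
        public void Total_AppliesTenPercentDiscount_AtTenOrMoreUnits()
        {
            var calculator = new PriceCalculator();
            Assert.AreEqual(90m, calculator.Total(1m, 100));
        }
    }

    public class PriceCalculator
    {
        // The simplest implementation that makes the test above pass.
        public decimal Total(decimal unitPrice, int quantity)
        {
            decimal total = unitPrice * quantity;
            return quantity >= 10 ? total * 0.9m : total;
        }
    }
}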
#8 - Developing - Incremental Requirements

James Shore's short description:
We define requirements in parallel with other work via living requirements documents and working incrementally.
My take on it:
Even on the best of the waterfall projects of days gone by that I worked on (and apparently even on the ones that I didn't), the up-front requirements phase never produced a complete set of requirements. The agile practice of incremental requirements simply accepts that as a fact and provides a framework for continually dealing with it as the norm, rather than treating it as scope creep and denouncing it as something unexpected and bad.
Closing Thoughts
Hopefully this sparks some level of discussion for this weekend's Agile Coach Camp. If not, oh well...

Wednesday, January 27, 2010

Apple iPad and Text Books

I'm probably not going to be the first one to mention this, but I want to go on record as early as possible: the single "killer app" for the newly announced iPad is college textbooks.

I've been an iPhone 3GS owner since the day it was released and have loved every second of it. That is in stark contrast to the prior years of owning Palm and Windows Mobile phones, which to say were terrible is an understatement. I've also been a huge iPod supporter for years now too, having recently bought my third one.

That said, I personally don't see any use for the new iPad. I currently use my iPhone for email and web surfing, and can see how an iPad would be nicer for those, given its bigger screen. But I would never take the iPad anywhere other than the living room couch, as the form factor is simply too big (compared to the iPhone, which fits nicely in my pants or jacket pocket). And since I can type on a regular keyboard at over 60 words per minute, I don't think the iPad and its 10" screen are going to replace my regular desktop/laptop with its 24" screen anytime soon either.

But back to the college textbook thing... if Apple were able to pull off selling college textbooks (and K-12 textbooks too) for use on this thing at, say, 25-50% of the normal price of hard-copy textbooks, the iPad would pay for itself in one or two semesters. And students would get to stop carrying around the back-breaking backpacks full of textbooks that they currently have too. And that's not even taking into account the added technological benefits of having ebooks that you can mark up as you please. Now THAT would lead to Apple selling millions upon millions of these things. Maybe that current stock price of $208 is low after all...

Friday, January 8, 2010

Review of NDepend

DISCLAIMER

Patrick Smacchia, the creator of NDepend, offered me a free license for NDepend Pro if I took the time to review the product and write this review on my blog. While I had not used NDepend prior to doing this review, given that I have been reading Patrick's blog for quite some time now, and that I'm a big fan of static code analysis (for the regular identification of problematic areas in your source code), I happily agreed to his offer.

THE REVIEW

Installation
The installation of NDepend is a simple unzip and xcopy to your desired location. Seriously, as simple as it gets. Once it's there, fire up the VisualNDepend executable and point it at some .NET/C# assemblies you have.

Analysis
I decided to run NDepend against a code base that had been handed down to me from a number of prior developers. While the customer is happy and the application has been running in a production environment for a couple of years now, the application is also a serious legacy-code situation, and a great sample project for seeing what NDepend has to offer people in my shoes.

So after loading up the half dozen assemblies in the given application, NDepend made quick work of analyzing everything and producing a nice HTML report. It also shows the results in the Visual NDepend results browser, allowing you to quickly navigate around your code base.

Usage Scenarios
After showing a peer of mine the tool and what it does, I started rattling off the following list of scenarios describing how I think NDepend can be useful to developers.

#1 - Newbie - Never used NDepend before and barely understands static analysis or any of the metrics used by NDepend.

In this scenario, much like in my code base above, NDepend will quickly point out problematic areas of your source code so that you can immediately start addressing those issues.

Now I'm sure that plenty of readers will immediately start screaming that the metrics aren't useful.

In response, I say this: suppose the documentation says that a Cyclomatic Complexity rating above 15 means a method is complex, and above 30 means it is nearly impossible to comprehend. If you're staring at a list of 20 methods with a rating above 30, including 5 with a rating above 40 and 1 with a rating above 80, you can't for a second argue that the tool isn't immediately useful in focusing you on the sections of your code base that you REALLY might want to stop and clean up. In other words, this is all about quickly focusing you on the sections of your code base that just aren't good, no matter what your opinion is on the specific threshold for a particular metric.
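For anyone fuzzy on what the metric actually counts, here's a rough sketch (the Order and Region types are invented for illustration): every decision point adds an independent path through the method, and methods that accumulate dozens of them are exactly what the tool flags.

namespace Your.Domain
{
    public enum Region { Domestic, International, Other }

    public class Order
    {
        public bool IsExpress;
        public double Weight;
        public Region Region;
    }

    public static class Shipping
    {
        // Each if, && and case below adds one independent path, which is
        // roughly what a Cyclomatic Complexity metric counts. Imagine a
        // method with 30+ of these and you see why the tool flags it.
        public static decimal Cost(Order order)
        {
            if (order == null) return 0m;
            if (order.IsExpress && order.Weight > 50) return 25m;
            switch (order.Region)
            {
                case Region.Domestic: return 5m;
                case Region.International: return 15m;
                default: return 10m;
            }
        }
    }
}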

#2 - Novice/Intermediate - You're familiar with the tool and the concepts and your code base is generally ok per the tool.

In this case, you're looking at integrating NDepend into your (hopefully pre-existing) continuous integration environment to ensure constant feedback to yourself and/or your team regarding the "health" of your code base. The value here is that NDepend comes with pre-existing plug-ins for NAnt, MSBuild, and CruiseControl.NET, which means you can very quickly integrate NDepend into your automated process.

#3 - Expert - You're already highly automated and compliant, but feel the itch for more.

In this case, NDepend really shines thanks to CQL, its own built-in query language, which allows you to run queries against the metrics collected during the analysis phase. All of the functionality that scenarios #1 and #2 leverage is simply based on a set of pre-built CQL queries. But for the expert who is obsessed with perfection, automating the perfection-seeking is even better, and being able to tweak the pre-built queries, or better yet build a series of custom queries, is priceless.
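To give a flavor of it (I'm recalling the syntax from memory, so treat this as approximate rather than gospel), a rule ends up looking something like:

SELECT TOP 10 METHODS WHERE CyclomaticComplexity > 30 ORDER BY CyclomaticComplexity DESC

and tweaking a threshold or adding another condition to a pre-built query is a one-line change.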

Gripes
It wouldn't be right to review something and not offer some advice; that's what I get paid for in my real job! Anyway, one big thing that jumped out at me was that I quickly found myself lost in the UI. I think that this is due to the specific UI controls and the docking layout used by NDepend to try and organize the eleven navigation and results windows. I actually got things so messed up that, thankfully, NDepend has a "Reset Views" menu item that puts things back to how they were when I first launched it. I also found that once I had the set of windows I found useful (for scenario #1 above) in view and the rest hidden, things were much smoother.

I think that to help resolve this in general, it would be nice to have a set of additional pre-built views (a couple of task-specific ones already exist) that line up with the three usage scenarios above.

I'd list price as an issue here too, but more from the point of view that I think Microsoft should simply pay Patrick a chunk of change and integrate NDepend into Visual Studio. Now that we've finally got people on board with automated unit testing (hopefully you're already doing this), now's the time to fully automate and enforce code quality metrics! Unfortunately, unless a developer already knows about and has respect for static analysis, they probably aren't going to hunt down and pay a couple hundred dollars for a tool like NDepend. And then you have teams such as the one I'm currently on that simply don't have the budget for ANY tools, so we aren't going to use it either.

Summary
So in the end, NDepend simply rocks! While I had issues with the UI, that's something that after 5-10 minutes of use you don't really even notice. The immediate impact that NDepend can have on a code base is priceless. Just think: all of those code reviews everyone says they should be doing but never has the time for can now be automated, and team members simply become responsible for maintaining the code-quality ratings of the code they write.

Thanks to Patrick for giving me the chance to review what turned out to be a great tool. With NDepend coupled with ReSharper and FxCop, code bases simply have no excuse to end up in the mess that so many of them are in today.

Friday, September 18, 2009

Developing Temporal/Time-Based Database Solutions

Since I've already gotten a couple of requests for this information, I guess it's time for another blog post.

The following is based on my research on the subject, after having developed a temporal/append-only solution a couple of years ago (before knowing what academics call it). Considering that Professor Richard Snodgrass of the University of Arizona is THE person I've come across who has written the most on the subject, you should check out his list of publications on the subject here first:

http://www.cs.arizona.edu/people/rts/publications.html

The key one from what I've read so far is "Developing Time-Oriented Database Applications in SQL". Most, including this one, are available as electronic downloads.

I've also noticed that Joe Celko (http://www.celko.com/books.htm) has written on the topic in his book titled "Joe Celko's Thinking In Sets - Auxiliary, Temporal, and Virtual Tables in SQL".

Now that I've apparently gotten some people's attention on this subject, perhaps it's time for me to start a series of blog posts on the subject...

Monday, August 31, 2009

Temporal Database Design, Soft Deletes, and Ayende's Razor

This is a formal follow-up to a post on Ayende's blog on "Avoiding Soft Deletes" in your database, where I questioned the lack of temporal database solutions being applied to these types of problems.

After Oren claimed the following, I felt it necessary to expand on it in a blog post of my own, rather than continuing to clutter his comments, and hopefully finally bring some traffic to my own blog :-)

Ayende’s Razor

This is a response to a comment on another post:

Oren, in all seriousness, I thought that problems that were "(a) complex, (b) hard to understand (c) hard to optimize" were the kinds that folks like you and I get paid to solve...

Given two solutions that match the requirements of the problem, the simpler one is the better.

I could just as well call that statement "Jim's Razor", as I believe in it as much as you do, Oren, so no arguments there.

But in the same vein, "wise" (i.e., experienced) software architects/developers strategically choose RDBMSs over flat text files for the same class of reasons that I believe we should be making temporal database concerns first-class citizens of the modern IT toolbox. The "additional features/functionality" gained by employing temporal databases, while never stated up front in requirements, would be priceless in the long run for business systems. Those features include, but are not limited to, the following (a rough C# sketch follows the list):
  • automatic audit logging, since nothing is ever UPDATE'd or DELETE'd, you've got a constant trail of changes
  • automatic support for infinite undo/roll-back support of data, as you simply load a prior version and then save as usual
  • automatic support for labeling of versions, much like in source/version control systems, at an individual record level, table level, "aggregate root level", or database level
  • automatic support for "back querying" a system, in search of what the situation looked like last month, last year, etc. (though raising this "aspect", as in AOP, to the ORM level would be crucial)
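As promised above, here's a rough C# sketch of the core idea (all names invented, with an in-memory list standing in for an append-only table): saves only ever insert a new version row, which is what makes the audit trail, roll-back, and back-querying features above fall out essentially for free.

using System;
using System.Collections.Generic;
using System.Linq;

namespace Your.Temporal
{
    public class VersionedRow<T>
    {
        public Guid EntityId;
        public int Version;
        public DateTime ValidFrom; // when this version became current
        public T Data;
    }

    public class AppendOnlyStore<T>
    {
        // Stand-in for a database table that is only ever INSERT'd into.
        private readonly List<VersionedRow<T>> _rows = new List<VersionedRow<T>>();

        public void Save(Guid entityId, T data)
        {
            int latest = _rows.Where(r => r.EntityId == entityId)
                              .Select(r => r.Version)
                              .DefaultIfEmpty(0)
                              .Max();
            // Never UPDATE or DELETE; just append the next version.
            _rows.Add(new VersionedRow<T>
            {
                EntityId = entityId,
                Version = latest + 1,
                ValidFrom = DateTime.UtcNow,
                Data = data
            });
        }

        // "Back query": what did this entity look like at a point in time?
        public T AsOf(Guid entityId, DateTime pointInTime)
        {
            return _rows.Where(r => r.EntityId == entityId && r.ValidFrom <= pointInTime)
                        .OrderByDescending(r => r.ValidFrom)
                        .Select(r => r.Data)
                        .FirstOrDefault();
        }
    }
}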
IMHO, switch/case statements are generally simpler than polymorphism (witness how switch/case statements are typically taught before polymorphism in academic settings), but we all know why polymorphism is the better strategy in the long run, and therefore why, as soon as we see switch/case statements propagating through our code bases, we typically change to a polymorphic strategy (see the sketch just below).
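For illustration, the contrast I have in mind looks like this (toy shape types, obviously):

using System;

namespace Your.Shapes
{
    public enum ShapeKind { Circle, Square }

    public static class AreaCalculator
    {
        // The "simpler" version: every new kind of shape means revisiting
        // every switch like this one, scattered across the code base.
        public static double Area(ShapeKind kind, double size)
        {
            switch (kind)
            {
                case ShapeKind.Circle: return Math.PI * size * size;
                case ShapeKind.Square: return size * size;
                default: throw new ArgumentException("Unknown shape kind");
            }
        }
    }

    // The polymorphic version: adding a new shape means adding exactly
    // one new class, and the existing code never changes.
    public abstract class Shape
    {
        public abstract double Area();
    }

    public class Circle : Shape
    {
        public double Radius;
        public override double Area() { return Math.PI * Radius * Radius; }
    }

    public class Square : Shape
    {
        public double Side;
        public override double Area() { return Side * Side; }
    }
}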

Again, the goal in my eyes is to raise temporal database concepts to the level of first-class citizens in the IT world, as opposed to the backwater academic debates that they are today. The major players, like Microsoft with SQL Server, have never bothered to implement the temporal extensions proposed for the ANSI SQL standard (yes, from 1994 - see this). Interestingly enough, Oracle has implemented some of them in Oracle 9i and 10g, but per the work of Snodgrass and friends, there is still room to perfect things.

This is also where the "minor players", i.e., the open source software community via projects such as NHibernate, could step in and heavily promote something like this. In the same line of thinking that leads you to choose something such as NHibernate rather than constantly rolling your own ORM, needing to roll your own temporal solution for every project should be equally unnecessary.

In closing, for all of the "pain" that implementing something as complex as this would involve, I'd love to see a platform such as Microsoft Dynamics (MS CRM) implement it all the way from the database through to the GUI, as it would clearly represent a paradigm shift in business information systems development. Of course, perhaps I should just start a company to do just this...

Monday, August 3, 2009

MS SQL Server Named Instances and Aliases For Heterogeneous Developer Environments

On the team that I'm working with, we're supporting MS SQL Server 2000, 2005, and 2008. Depending on when the particular developer joined the team, and thus when they installed the various pieces of software on their development workstation, any of the above listed versions might be the default instance (i.e., "(local)"), while the others might be installed as named instances (i.e., "(local)\SQL2008" or "(local)\SQL2K5").

Every once in a while, a developer has to work on a project with a database installed to a local database server that is on a named instance other than the rest of the development team. With the database server name stored in .config files, altering this for each developer just doesn't make much sense. Thankfully, MS SQL Server has a very simple and straightforward solution for this - aliases.

By using aliases, an application under development can be configured to use an alias in the .config file, and each developer simply needs to create an alias on their workstation pointing to their particular named instance.
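To make that concrete, here's a minimal sketch of what the application side looks like (the alias name "MyAppAlias" and the database name are made up; in a real project the connection string would of course live in the .config file rather than be hard-coded):

using System.Data.SqlClient;

namespace Your.App
{
    public static class Database
    {
        public static void Ping()
        {
            // "MyAppAlias" is the alias created in step #6 below; each
            // developer's copy of it points at their own named instance.
            using (var connection = new SqlConnection(
                "Data Source=MyAppAlias;Initial Catalog=AppDb;Integrated Security=SSPI"))
            {
                connection.Open();
            }
        }
    }
}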

Anyway, you can either decipher what MSFT KB Article 265808 has to say, or just follow the step-by-step instructions here (for MS SQL Server 2005):

#1 - Open SQL Server Configuration Manager

#2 - Confirm that TCP/IP is enabled for the named instance for which you are creating the alias.


#3 - Confirm that "Listen All" is set to "Yes" for TCP/IP for that instance.


#4 - Take note of the port number listed next to the "TCP Dynamic Ports" setting under the "IPAll" section.


#5 - Right-click the "Aliases" node and select "New Alias..."


#6 - Fill in the details for the new alias.
  • Alias Name - I like the idea of choosing a name for the alias that a) isn't already a name on the network (obviously) and b) is named for the application we are developing. It is irrelevant whether you have 2, 5, 10, or 500 aliases all pointing to the same database server (web hosting services have done this for years when hosting hundreds of websites on a single server).
  • Port No. - This is the value from "TCP Dynamic Ports" in step #4 above.
  • Protocol - Confirm TCP/IP is selected.
  • Server - This is the database server name you would "normally" use to connect to the particular database server. Note that the SQL Server "(local)" alias works as part of this solution and, with or without a named instance name, is what should go in this box.


NOTE #1 - Aliases are "local" to the specific workstation they are created on. In other words, every developer on the team will need to create the alias on their individual computer.

NOTE #2 - Yes, the dev team could standardize the way software is installed on their computers. But on my team, we get paid to do what our client wants us to do, and not to arbitrarily "streamline" and tweak every last detail of our development environment. Besides, using aliases just works, and unless someone can comment on why this is technically a bad idea, I'm not going to condone wasting my client's money.

Monday, June 29, 2009

Hotmail.com and Live.com email access to your iPhone

So being the proud new owner of an iPhone 3GS after years of dealing with the inferior Windows Mobile and Palm platforms, I'm also learning the ins and outs of "things that should be easy".

Take for instance the fact that MSFT only offers crippled POP3 access to Hotmail/Live.com, making those accounts nearly worthless on the iPhone. Thankfully, the fine folks at FluentFactory make just the thing to bring Hotmail/Live.com mail on the iPhone close to what it should be (for those of us with a many, many year history with our Hotmail accounts).

http://fluentfactory.com/mboxmail/

But that said, looking at what Google has to offer for syncing to the iPhone, I can't help but laugh!

http://www.google.com/mobile/apple/sync.html

The following paragraph copied from that page is the key:

Important! Google Sync uses the Microsoft© Exchange ActiveSync© protocol. When setting up a new Exchange ActiveSync account on your iPhone, all existing Contacts and Calendar events will be removed from your phone. Please make sure to back up any important data before you set up Google Sync.

Pretty sweet when Google licenses ActiveSync from MSFT to connect GMail to the iPhone, while MSFT refuses to offer the same functionality to Hotmail/Live Mail users, even paying ones like me! WTF MSFT?!?!?!

Tuesday, October 28, 2008

Touch Typing - i.e., eking seconds and minutes out of your day

After being forwarded a link to Stevey's Blog Rants: Programming's Dirtiest Little Secret today, I figured I'd do a blog post on this topic too. IMHO, if you sit in front of a computer all day long, especially as a computer programmer, and can't touch type faster than 60 wpm, start practicing for 30 minutes a day with one of these tools until you can.
http://www.keybr.com/ (This is the one that I like the best, but a few more follow.)

Tuesday, September 30, 2008

Incorrectly Storing Objects In The ASP.NET Session

In an effort to keep this from happening to someone else, I figured I'd write about it today. I'm surely not the first person to ever write about this, so I'm not claiming to have found something novel. I've personally never written a web application that needed to utilize the ASP.NET Session object, but apparently other folks have not learned that lesson.

Anyway, the following chunk of code was being used to store a reference to the "domain object" currently being viewed/edited.

using System.Web.UI;

namespace Your.Web
{
    /// <summary>
    /// Base page that stashes the "current" domain object in session state.
    /// </summary>
    public class ContainerBase : Page
    {
        public virtual IDomainObject DomainObject
        {
            // Returns null when nothing has been stored yet.
            get { return Session["DomainObject"] as IDomainObject; }
            set { Session["DomainObject"] = value; }
        }
    }
}

The result of doing this: open one "domain object" (DO1) for editing in a window (W1). Hit CTRL-N to open a new window (W2), which by default will still show DO1. Navigate to a new domain object (DO2) in W2, then switch back to W1 and click the Save button without any other operations. The edits in W1 that should be applied to DO1 are actually applied to DO2. D'oh!!!! That eliminates a user's ability to have two or more web browser windows open at a time. MS-DOS, here we come!

Moral of the story: don't use the Session object unless you really, truly have read everything there is about using it and are still convinced that you should. And even then, you're probably still wrong!
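If you genuinely need per-window state like the above, one alternative (a sketch, not a drop-in fix) is to keep only the identifier in ViewState, which travels with the individual page rather than being shared across every window in the session:

using System;
using System.Web.UI;

namespace Your.Web
{
    public class ContainerBase : Page
    {
        // ViewState is scoped to this specific page/window, so two open
        // browser windows can no longer stomp on each other's state.
        public virtual Guid DomainObjectId
        {
            get
            {
                object id = ViewState["DomainObjectId"];
                return id == null ? Guid.Empty : (Guid)id;
            }
            set { ViewState["DomainObjectId"] = value; }
        }
    }
}

The page then reloads the actual domain object by its id on each request, instead of caching the object itself where every window can trample it.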

For further reading, see the ASP.NET Session State Overview on MSDN.

Friday, September 26, 2008

The Overuse of the StringBuilder class in .NET

So this friend of mine Adam wrote this post today. I love the main point of the post, as it correctly abstracts the creation of the query string behind a class and away from the rest of the code consuming the query results. Why I'm writing this post is that, while his post did not directly have to do with the StringBuilder class, IMHO he is displaying what I believe is indicative of the overzealous use of the StringBuilder class.

For the sake of convenience, this is how he wrote it:

private static string BuildDefaultViewQuery()
{
    var builder = new StringBuilder();
    builder.Append("<Where>");
    builder.Append("<Eq><FieldRef Name='DefaultView' /><Value Type='Boolean'>");
    builder.Append("1");
    builder.Append("</Value></Eq></Where>");
    return builder.ToString();
}

This is how I would have written the guts of this method:

private static string BuildDefaultViewQuery()
{
    return String.Format(@"
        <Where>
            <Eq>
                <FieldRef Name='DefaultView' />
                <Value Type='Boolean'>{0}</Value>
            </Eq>
        </Where>",
        1);
}
I just happen to think that there is too much "noise" when using a StringBuilder for this kind of stuff. Yes, if you're looping through a potentially large set of data and producing a potentially large string as a result, then by all means use a StringBuilder; that is exactly what it is for. But if you're just building a relatively static string with a handful of "variables" inserted into it, use String.Format() in conjunction with a string that supports line breaks (i.e., the @"" syntax). The resulting code is so much easier to read, and it's much clearer what is really going on besides the building of a string.
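For contrast, here's the kind of case where a StringBuilder genuinely earns its keep (the field-name loop is invented for illustration): an unbounded loop where repeated string concatenation would allocate a brand new string on every pass.

using System.Collections.Generic;
using System.Text;

namespace Your.Queries
{
    public static class CamlBuilder
    {
        public static string BuildFieldRefs(IEnumerable<string> fieldNames)
        {
            // Potentially hundreds of appends: exactly the scenario
            // the StringBuilder class was designed for.
            var builder = new StringBuilder();
            foreach (string name in fieldNames)
            {
                builder.AppendFormat("<FieldRef Name='{0}' />", name);
            }
            return builder.ToString();
        }
    }
}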