Coding Horror

programming and human factors

Software Branching and Parallel Universes

Source control is the very bedrock of software development. Without some sort of version control system in place, you can't reasonably call yourself a software engineer. If you're using a source control system of any kind, you're versioning files almost by definition. The concept of versioning is deeply embedded in every source control system. You can't avoid it.

But there's another concept, equally fundamental to source control, which is much less frequently used in practice. That concept is branching. The Subversion documentation has a decent layman's description of branching:

Suppose it's your job to maintain a handbook for a particular division of your company. One day a different division asks you for the same handbook-- but with a few parts modified specifically for them, as they do things slightly differently.

What do you do in this situation? You do the obvious thing: you make a second copy of your document, and begin maintaining the two copies separately. As each department asks you to make small changes, you incorporate them into one copy or the other. But you often find yourself wanting to make the same change to both copies. For example, if you discover a typo in the first copy, it's very likely that the same typo exists in the second copy. The two documents are almost the same, after all; they only differ in small, specific ways.

This is the basic concept of a branch-- a line of development that exists independently of another line, yet still shares a common history. A branch always begins life as a copy of something, and moves on from there, generating its own history.

If you never use the branching feature of your source control system, are you really taking full advantage of it?

I find that almost every client I visit is barely using branching at all. Branching is widely misunderstood, and rarely implemented-- even though branching, like versioning, lies at the very heart of source control, and thus software engineering.

Perhaps the most accessible way to think of branches is as parallel universes. They're places where, for whatever reason, history didn't go quite the same way as it did in your universe. From that point forward, that universe can be slightly different-- or it can be radically and utterly transformed. Like the Marvel comic book series What If?, branching lets you answer some interesting and possibly even dangerous "what if" questions with your software development.

What If #43: What if Conan The Barbarian was stranded in the 20th century?

Parallel universes offer infinite possibility. They also allow you to stay safely ensconced in the particular universe of your choice, completely isolated from any events in other alternate universes. An alternate universe where the Nazis won World War II is an interesting idea, so long as we don't have to live in that universe. There could potentially be thousands of these parallel universes. Although branching offers the seductive appeal of infinite possibility with very little risk, it also brings along something far less desirable: infinite complexity.

The DC comic book series Crisis on Infinite Earths is a cautionary tale of the problems you can encounter if you start spinning off too many parallel universes.

Prior to Crisis on Infinite Earths, DC was notorious for its continuity problems. No character's backstory, within the comic books, was entirely self-consistent and reliable. For example, Superman originally couldn't fly (he could instead leap over an eighth of a mile), and his powers came from having evolved on a planet with stronger gravity than Earth's. Over time, he became able to fly, his powers were explained as coming from the sun, and a more complex backstory (the now-familiar "last survivor of Krypton" origin story) was invented. Later it was altered to include his exploits as Superboy. It was altered further to include Supergirl, the bottled city of Kandor, and other survivors of Krypton, further watering down the original idea of Superman having been the sole Kryptonian to survive the destruction of his world. There was also an issue of character aging; for instance, Batman, an Earth-born human being without super powers, retained his youth and vitality well into the 1960s despite having been an active hero during World War II, and his sidekick Robin never seemed to age beyond adolescence in over 30 years.

Crisis on Infinite Earths #7:

These issues were addressed during the Silver Age by DC creating parallel worlds in a multiverse: Earth-One was the contemporary DC Universe, which had been depicted since the advent of the Silver Age; Earth-Two was the parallel world where the Golden Age events took place, and where the heroes who were active during that period had aged more or less realistically since that time; Earth-Three was an "opposite" world where heroes were villains, and historical events happened the reverse of how they did in real life (such as, for instance, President John Wilkes Booth being assassinated by a rebel named Abraham Lincoln); Earth Prime was ostensibly the "real world," used to explain how real-life DC staffers (such as Julius Schwartz) could occasionally appear in comics stories; and so forth. If something happened outside current continuity (such as the so-called "Imaginary Stories" that were a staple of DC's Silver Age publications), it was explained away as happening on a parallel world, a premise not dissimilar to the company's current "Elseworlds" imprint.

Start juggling too many parallel universes at once, and you're bound to drop a few. In most source control systems, you can create hundreds of branches with no performance issues whatsoever; it's the mental overhead of keeping track of all those branches that you really need to worry about. Your developers' brains can't exactly be upgraded the same way your source control server can, so this is a serious problem.

I find that the analogy of parallel universes helps developers grasp the concept of branching, along with its inevitable pros and cons. But it doesn't get much easier from there. Branching is a complex beast. There are dozens of ways to branch, and nobody can really tell you if you're doing it right or wrong. Here are a few common branching patterns you might recognize.

Branch per Release
Every release is a new branch; common changes are merged between the releases. Branches are killed off only when the releases are no longer supported.


Branch per Promotion
Every tier is a permanent branch. As changes are completed and tested, they pass the quality gate and are "promoted" as merges into successive tiers.


Branch per Task
Every development task is a new, independent branch. Tasks are merged into the permanent main branch as they are completed.


Branch per Component
Each architectural component of the system is a new, independent branch. Components are merged into the main branch as they are completed.


Branch per Technology
Each technology platform is a permanent branch. Common parts of the codebase are merged between each platform.

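To make one of these concrete, here's a rough sketch of the branch per task pattern, expressed as a handful of Subversion commands driven from Python. The repository URL, task name, and commit messages are all made up for illustration; your shop's layout and conventions will certainly differ.

    import subprocess

    REPO = "https://svn.example.com/repo"  # hypothetical repository URL
    TASK = "task-1234"                     # hypothetical task branch name

    def svn(*args):
        """Run an svn command and fail loudly if it doesn't succeed."""
        subprocess.run(["svn", *args], check=True)

    # Branch: the task branch begins life as a cheap copy of the main line.
    svn("copy", f"{REPO}/trunk", f"{REPO}/branches/{TASK}",
        "-m", f"Create branch for {TASK}")

    # ...development happens in a working copy of the branch, with normal commits...

    # Merge: once the task is complete, fold the branch back into trunk.
    svn("checkout", f"{REPO}/trunk", "trunk-wc")
    svn("merge", f"{REPO}/branches/{TASK}", "trunk-wc")
    svn("commit", "trunk-wc", "-m", f"Merge {TASK} into trunk")

    # Kill the branch: it has served its purpose, and dead branches are pure mental overhead.
    svn("delete", f"{REPO}/branches/{TASK}", "-m", f"Remove merged branch {TASK}")

The other patterns are variations on the same three verbs: copy, merge, and (eventually) delete. They differ only in which branches are permanent and which direction the merges flow.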

You may notice a few emerging themes in these branch patterns:

  • All branches have a clearly defined lifecycle. They either live forever, or they are eventually killed off.
  • All branches are created with the intention of eventually merging, somewhere. A branch without a merge is pointless.
  • As we add branches, our development model gets complicated.

But that complication is often justified. The more developers you have on a project, the higher the chances are that one of those developers will check something really bad into source control and disrupt everyone else's work. It's simple statistics. People make mistakes. The more developers you have, the more mistakes you'll have. And the more developers you have, the greater the consequences when everyone's work is simultaneously disrupted by a bad checkin. So what are our options?

  1. Maximum Productivity
    Everyone works in the same common area. There are no branches, just a long, unbroken straight line of development. There's nothing to understand, so checkins are brainlessly simple-- but each checkin can break the entire project and bring all progress to a screeching halt.

  2. Minimum Risk
    Every single person on the project works in their own private branch. This minimizes risk; everyone works independently, and nobody can disrupt anyone else's work. But it also adds incredible process overhead. Collaboration becomes almost comically difficult-- every person's work has to be painstakingly merged with everyone else's work to see even the smallest part of the complete system.
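How fast does the maximum productivity option break down? Put some rough numbers on the "simple statistics" above-- the probabilities here are pure invention, just to illustrate the shape of the curve:

    # Purely hypothetical numbers: p is the chance that any single checkin
    # breaks the build for everybody else.
    p = 1 / 200            # one bad checkin in 200
    checkins_per_dev = 5   # checkins per developer per day
    for devs in (2, 10, 50):
        daily_checkins = devs * checkins_per_dev
        p_broken = 1 - (1 - p) ** daily_checkins   # chance of at least one bad checkin today
        print(f"{devs:2d} devs: {p_broken:.0%} chance the build breaks today")

With these made-up numbers, two developers break the build about one day in twenty; fifty developers break it most days. The exact figures don't matter-- the trend does.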

The answer usually lies somewhere between these two extremes. Like everything else, branching can be abused. Chris Birmele notes that branching has its own set of anti-patterns you should watch out for:

  • Merge Paranoia: Merging is avoided at all cost, due to a fear of the consequences.
  • Merge Mania: The team spends an inordinate amount of time merging software assets rather than developing them.
  • Big Bang Merge: Merging has been deferred to the very end of the development effort and an attempt is made to merge all branches simultaneously.
  • Never Ending Merge: Merge activity never seems to end; there's always more to merge.
  • Wrong Way Merge: A software asset is merged with a previous version.
  • Branch Mania: Branches are created often and for no apparent reason.
  • Cascading Branches: Branches are never merged back to the main development line.
  • Mysterious Branches: Nobody can tell you what the branches are for.
  • Temporary Branches: The purpose of a branch keeps changing; it effectively serves as a permanent "temporary" workspace.
  • Volatile Branches: An unstable branch is shared by other branches or merged into another branch.
  • Development Freeze: All development activities are stopped during branching, merging, and building new baselines.
  • Berlin Wall: Branches are used to divide the development team members, rather than divide the work they are performing.

If you've managed to read this far, perhaps you can understand why so many software development teams are completely sold on version control, but hesitant to take on branching and merging. It's a powerful, fundamental source control feature, sure, but it's also complicated. If you're not careful, the wrong branching strategy could do more harm to your project than good.

Still, I urge developers to make an effort to understand branching-- really understand it-- and explore using branching strategies where appropriate on their projects. Done right, the mental cost of the branching tax pales in comparison to the benefits of concurrent development it enables. Embrace the idea of parallel universes in your code, and you may find that you can get more done, with less risk. Just try to avoid a crisis on infinite codebases while you're at it.


Pushing Operating System Limits

Raymond Chen notes that if you have to ask where the operating system limits are, you're probably doing something wrong:

If you're nesting windows more than 50 levels deep or nesting menus more than 25 levels deep or creating a dialog box with more than 65535 controls, or nesting tree-view items more than 255 levels deep, then your user interface design is in serious need of rethought, because you just created a usability nightmare.

If you have to ask about the maximum number of threads a process can create or the maximum length of a command line or the maximum size of an environment block or the maximum amount of data you can store in the registry, then you probably have some rather serious design flaws in your program.

I'm not saying that knowing the limits isn't useful, but in many cases, if you have to ask, you can't afford it.

In general, I agree with Raymond. Asking these kinds of questions definitely raises red flags. Edge conditions should never be a goal, and if your design is skirting that close to an operating system edge condition, you're either doing something incredibly brilliant or incredibly stupid. Guess which one is more common?

However, it can also be surprising how quickly you can run into operating system limits-- even when you're not doing anything that unusual.
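Here's just how easy it is to bump into one. This little Python sketch probes one of the limits Raymond mentions-- it keeps spawning idle threads until the process refuses to create another one. The number you hit depends on the operating system, the available address space, and the default thread stack size, so treat it as an experiment, not a benchmark:

    import threading
    import time

    def probe_thread_limit(max_attempts=1_000_000):
        """Spawn sleeping threads until the OS or runtime refuses to create more."""
        threads = []
        try:
            for _ in range(max_attempts):
                t = threading.Thread(target=time.sleep, args=(3600,), daemon=True)
                t.start()
                threads.append(t)
        except RuntimeError as exc:   # typically "can't start new thread"
            print(f"Hit the wall after {len(threads)} threads: {exc}")
        else:
            print(f"Created {len(threads)} threads without hitting a limit")

    probe_thread_limit()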

When researching blog posts, I tend to open a lot of browser windows and tabs. At least twice per week, I have so many browsers and tabs open that I run into some internal limitation of the browser and I can't open any more. My system gets a little wonky in this state, too: right-clicking no longer shows a menu, and I'm prevented from launching other applications. But if I close a few errant browser windows or tabs, everything goes back to normal.

I prefer to use Internet Explorer for day-to-day browsing chores, but it appears that IE 7 is particularly vulnerable to these limitations. I ran a quick test in which I opened as many instances of the Yahoo home page as I could, with nothing else running:

Maximum number of IE7 windows I can open: 39
Maximum number of IE7 tabs I can open: 47

I don't think having 47 typical web pages open, spread across a couple instances of Internet Explorer on my three monitors, is so unreasonable. And yet that's a hard limit I run into on a semi-regular basis. It's annoying. It looks like IE6 had a similar limit; Theodore Smith found that he could only open 38 pages before new windows were frozen. Firefox fares quite a bit better in the same test:

Maximum number of Firefox 2 windows I can open: 55
Maximum number of Firefox 2 tabs I can open: 100+

These aren't hard limits in Firefox; they're practical limits. After I opened 55 Firefox windows, Vista automatically kicked me into Vista Basic mode due to Aero performance degradation. I was unable to close all the instances of Firefox, and I had to kill the task. Tabs worked better; I got bored opening new Yahoo homepage tabs after about seventy and gave up. I was able to close all the tabs without incident. I'm guessing you could have at least a hundred tabs open in Firefox before something suitably weird happened.
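If you want to reproduce this admittedly unscientific test on your own machine, the Python standard library will do the clicking for you. The URL and the count are arbitrary, and whatever your default browser happens to be takes the beating:

    import time
    import webbrowser

    # Open the same page over and over in new tabs until the browser
    # does something suitably weird (or you get bored).
    URL = "http://www.yahoo.com/"
    for i in range(1, 101):
        webbrowser.open_new_tab(URL)
        print(f"opened tab #{i}")
        time.sleep(0.5)   # give the browser a moment to keep up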

So we've learned that Internet Explorer sucks, right? Maybe. The results I saw are largely due to a key architectural difference between the two browsers. IE allows you to choose between opening web pages in the same iexplore.exe process (Open New Tab, Open New Window), or opening web pages in a new, isolated instance of iexplore.exe. Unlike IE, Firefox only allows one firefox.exe process, ever. This clearly helps it scale better. But there is a downside: if any web page crashes, it will take down the entire firefox.exe process and every other web page you have open.

I understand the need for practical limits in the operating system. Most of the limits Raymond cites are so high that they're borderline absurd. Can you imagine subjecting a user to a menu nested 25 levels deep? Open a zillion copies of notepad, and you'll eventually have problems, too. I get that. The point is to keep those operating system limits far enough above typical usage that developers and users-- even power users-- aren't likely to run into them.

I'm not sure if we're running into an application or operating system limit here; I suspect a little bit of both. Still, I'm disappointed. A limit of only 47 tabbed web pages open at any time under Internet Explorer 7 seems artificially and unacceptably low to me. The introduction of the tabbed browsing metaphor makes it much more likely that users will open lots of web pages at the same time. I'd expect the developers on the IE team to test their application for moderately heavy usage scenarios like this. It's another case of failing to test with a reasonably large data set.


Computer Display Calibration 101

If you've invested in a quality monitor for your computer, you owe it to yourself-- and your eyes-- to spend 15 minutes setting it up properly for your viewing environment. I'm not talking about a high-end color calibration, although you can certainly do that. I'm talking about basic computer display calibration 101.

The first piece of advice is essential-- make sure your LCD display is connected to your computer via a digital connection.

DVI and VGA ports

The DVI port, on the left, is the one you want. Avoid using the standard analog VGA connector, on the right. A DVI connection guarantees that your display is sent a pure, digital stream of bits, shuttled directly from your video card with no analog impurities introduced along the way.

In the bad old days of analog CRTs, we had to worry about a whole host of analog issues with the monitor itself, such as convergence, display curvature and geometry, refresh rate, bloom, resolution sizing, and so on. Every time I bought a new CRT, I'd spend a solid hour going through Nokia's classic monitor test program, adjusting monitor settings to reduce all the unavoidable analog side effects of an electron scanning CRT. It was a tweaker's paradise.

The good news is that a digitally connected LCD is much closer to perfect out of the box than any CRT ever was. There's very little tweaking necessary to get it looking its best.

Display calibration probably isn't anyone's idea of a good time, but it can be relatively painless. One of my favorite basic display calibration wizards is the one built into Windows Media Center. It's accessible via Settings, TV, Configure Your TV or Monitor. It's based on a series of brief, themed video clips that do a great job of explaining why each setting matters without bogging down in display-geek terminology. There are five sections:

  1. Onscreen Centering and Sizing
  2. Aspect Ratio (Shape)
  3. Brightness (Black & Shadow)
  4. Contrast (White)
  5. RGB Color Balance

The first two are mostly irrelevant for a digitally connected LCD display, provided you're running at the native resolution of the LCD display. I'll assume you are. The last three are the only adjustments that typically matter on a desktop LCD. I'll summarize each, along with a static screenshot from the video, so you can follow along on your display.

3. Brightness (Black & Shadow)

Locate the brightness control for your display. Adjust the brightness, making sure you can distinguish the shirt from the suit. The suit should be black, not gray. If you see a moving X, turn the brightness down until the X just disappears.

vista display calibration, brightness

On an LCD, the brightness control doesn't have quite the same meaning as it does on a CRT. If your LCD has a gamma adjustment, that will be more effective at bringing out the nearly-black details on the shirt than increasing the backlight intensity will. Also, if you're looking for that X, you won't find it; for some reason I had trouble capturing the very dark moving X in my static screenshot. I've seen a very similar calibration technique used in video games which rely on dark environments. The goal is the same-- we want to see the deepest possible blacks on our display, without losing details in the darkness.

4. Contrast (White)

Locate the contrast control for your display. Set the contrast as high as possible without losing the wrinkles and buttons on the shirt. Lower the contrast if the white cue stick does not appear straight and smooth.

vista display calibration, contrast

Digital fixed pixel displays won't have blooming problems, so you can ignore that last bit about the stick. But you can see where this is a complementary operation to the brightness adjustment we just made-- we want to see the brightest white details on our display, without blowing them out.
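If you'd like a test pattern of your own to sanity-check these settings against, you don't need fancy calibration software. The little Python sketch below writes a grayscale step wedge as a plain PGM image-- the filename, size, and step count are arbitrary. After adjusting brightness and contrast, the darkest few steps should still be distinguishable from one another, and so should the brightest few:

    def write_gray_wedge(path="gray_wedge.pgm", steps=21, width=630, height=120):
        """Write a horizontal grayscale step wedge as a binary (P5) PGM image."""
        step_width = width // steps
        row = bytearray()
        for step in range(steps):
            value = step * 255 // (steps - 1)    # 0 (black) through 255 (white)
            row.extend([value] * step_width)
        row.extend([255] * (width - len(row)))   # pad any leftover pixels with white
        with open(path, "wb") as f:
            f.write(b"P5\n%d %d\n255\n" % (width, height))   # PGM header
            f.write(bytes(row) * height)                     # identical rows

    write_gray_wedge()

Any image viewer that understands the Netpbm formats will open the resulting file; view it full-screen at the display's native resolution.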

5. RGB Color Balance

Locate the RGB color balance control for your display. If your monitor has a color temperature setting, set it to 6500K (sometimes called "Warm" or "Low"). Make sure none of the gray bars have a tinge of red, green, or blue. You may need to fine-tune brightness and contrast again after adjusting the color balance.

vista display calibration, RGB color balance

And that's it. A few minor adjustments to the Brightness, Contrast, and Color settings of your monitor are all it takes to get the most out of today's LCD displays-- to see all the colors, and the entire range of light to dark, that you paid for.

You should always start with the controls on the monitor itself. Unfortunately, some monitors won't allow you to change the brightness, contrast, and color settings in digital mode. Or perhaps you can't quite get the precision you need from the monitor's controls. Most video drivers will allow you to change these settings at the video card level.

nvidia video color settings

Be careful, however, as there are usually two sets of settings: one for video playback, and the other for your desktop itself. I'd avoid changing brightness, contrast, or color settings via the video driver unless you have no other choice. It adds another layer of complexity to an already complex situation.

The general calibration steps for an LCD television are awfully similar to the Windows Media Center wizard I outlined above. But both are still rudimentary. You'll need to do much more involved calibration for professional color work.

These calibrations are also video-centric. It's an entirely fair point to note that we are talking about LCD computer displays here, and not LCD televisions. They aren't the same thing. People spend far more time reading text than watching videos on their computer monitors-- and at a distance of two feet, not ten feet. You might find that an optimal brightness for the above test images produces a screen that's painfully bright for workaday reading tasks. This is an important point that's glossed over in most LCD reviews, but Dan covered it with aplomb in his 30" Dell LCD review:

The minimum brightness setting for the 3007WFP-HC is still pretty bloody bright. The maximum brightness is down a bit from the non-HC model, at a mere 300 candelas per square metre, but that's still outrageously bright. Not nearly as bright as sunlight on paper, but way brighter than anybody should set a normal indoor desktop monitor.

Ideally, your monitor shouldn't be any brighter than a well-lit book (something which is probably new to the 60Hz-CRT brigade who, today, don't know how to adjust their laptop's screen brightness...). But I can't turn the 3007WFP-HC down that far. Well, not without opening the thing up and fooling with the backlight power supply or something.

I've rigged up a quick-'n'-dodgy bias light behind the monitor to reduce eyestrain, and JediConcentrate and the Darken bookmarklet help to reduce the number of minutes I spend with millions of bright white pixels tanning my retinas.

Far too much default brightness is easily the number one problem I see on most LCDs these days. Keep Dan's rule of thumb in mind as you're adjusting the brightness and contrast on your LCD against the reference images. Most display calibration guides care only about accuracy, not your eyeballs. For reading purposes, your monitor shouldn't end up any brighter than a well-lit book.


Why Are Web Uploads So Painful?

As video on the web becomes increasingly mainstream, I've been dabbling a bit with video sharing myself. But I've found that publishing video content on the web is extraordinarily painful, bordering on cruel and unusual punishment. The web upload process is a serious barrier to online video sharing, and here's why:

  1. Video files are huge

    Video files are easily the largest files most users will ever create. Even at very modest resolutions and bitrates, the file size will be more than 10 megabytes for anything except the shortest of video clips. And high definition? Forget about it. That's hundreds of megabytes.

  2. Limited upstream bandwidth

    Most people have highly asymmetric internet connections: massive download bandwidth, but the tiniest trickle of upload bandwidth. This trickle of upload has to be shared among all the applications competing for bandwidth. Uploading giant video files is challenging under the best of conditions, and most people's internet connections are more like worst case scenarios for uploading. (There's a quick back-of-the-envelope calculation after this list.)

  3. Uploads are precious

    Downloads are a dime a dozen. If a download fails, who cares? There are a hundred different sources to get any particular download from. Re-downloading is fast and easy. But uploads are different. If you're uploading a video, it's likely something you have somehow edited and invested time in. Maybe it's a video you created yourself. You're uploading it with the intent of sharing. If the upload fails, you won't be able to share what you've created with anyone-- so you care intensely about that upload. Uploads are far more precious than downloads, and should be treated with appropriate respect by the browser and the server.
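To put some rough numbers on that second point-- the file size and connection speeds below are illustrative assumptions, not measurements:

    # Back-of-the-envelope upload times (illustrative numbers only).
    size_mb = 100                       # a modest, non-HD video clip
    for upstream_kbps in (256, 512, 1024):
        seconds = size_mb * 8 * 1024 / upstream_kbps
        print(f"{size_mb} MB over {upstream_kbps} kbit/s upstream: ~{seconds / 60:.0f} minutes")

That works out to roughly 53, 27, and 13 minutes respectively-- and that's assuming the upload runs at full speed, uncontested, from start to finish.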

Worst of all, our existing browser and HTTP infrastructure is absolutely horrible at handling large file uploads. I mean profoundly, abysmally bad.

Consider the upload form for Google Video. It does as little as it possibly can without actually being a vanilla HTML 4.01 form. Once I start an upload of my many-megabyte video file, there's no feedback whatsoever about what's happening. There's only a generic animated GIF and an admonishment not to close the browser window. When will my upload be done? Could be 10 minutes; could be 10 hours. Who knows?

Google video upload UI

The YouTube video upload page is slightly better; it uses a Flash-based element to provide basic percentage-done feedback on the upload.

YouTube video upload UI

Despite the spartan progress feedback, the YouTube upload page is hardly any better than the Google Video upload page. If I accidentally navigate away from the upload page-- or much to my chagrin, if I click on that "having trouble" link-- my upload is arbitrarily cancelled with no warning whatsoever. There's no hope of resuming where I left off. I have to start over from scratch, which is punishing when you're dealing with a large video file and a typical trickle-upload internet connection.

If Google Video and YouTube represent the state of the art for web-based video uploads, that's an embarrassment.

I can't find any video sharing sites that do uploads well. Large file upload seems to be a textbook case for the advantages of desktop applications over even the most modern of web applications. The Google Video page actually recommends using their desktop uploader for video files over 100 megabytes in size. Based on my abysmal user experience to date, I'm inclined to use a desktop uploader for any video file over 10 megabytes.

Large file uploads are an inhospitable wasteland on today's web. But what really drives me crazy is that it doesn't have to be this bad. Our web browsers are failing us. Current web browsers treat downloads as first-class citizens, but uploads don't even rate third-class treatment. Internet Explorer provides a nice enough download user interface:

Internet Explorer 6 download UI

Like so much about IE, this download dialog has barely changed since 1999. Firefox has an improved download dialog that handles multiple simultaneous downloads.

Firefox 2 download UI

Why can't browsers, at the very least, provide the same level of feedback about uploads as they do about downloads? The browser surely knows the size of the file I just told it to upload, and it's responsible for streaming those bytes to the server, so it should also be able to tell me when the upload will finish. Longer term, I'd like to see support for resumable uploads, just like today's browsers can resume HTTP downloads in some select scenarios.
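None of this is rocket science. Any client that streams the file itself can report progress along the way. Here's a minimal Python sketch of the idea, using the third-party requests library with a made-up file name and destination URL; it reads the file in chunks and prints percent-done as it feeds each chunk to the server:

    import os
    import requests  # third-party; pip install requests

    def upload_with_progress(path, url, chunk_size=64 * 1024):
        """Stream a file to the server in chunks, printing percent-done as we go."""
        total = os.path.getsize(path)

        def chunks():
            sent = 0
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    sent += len(chunk)
                    print(f"\r{100 * sent // total}% uploaded", end="", flush=True)
                    yield chunk
            print()

        # A generator body means chunked transfer encoding; the URL is hypothetical.
        return requests.post(url, data=chunks())

    upload_with_progress("myvideo.avi", "http://upload.example.com/videos")

Resuming is admittedly harder-- it needs cooperation from the server-- but progress reporting like this is entirely within the browser's power today.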

It's clear to me that large file uploads will become increasingly prevalent on the web as video trickles down to the mainstream. Uploads are not the freakish edge conditions they might have been in 2001. I hope future browsers can extend the same great support they have for file downloads to file uploads. But that doesn't help us today. Perhaps more sophisticated browser plugin environments-- such as Silverlight and AIR-- can enable a better user experience for these large file uploads, sooner rather than later.


Can Your Team Pass The Elevator Test?

Software developers do love to code. But very few of them, in my experience, can explain why they're coding. Try this exercise on one of your teammates if you don't believe me. Ask them what they're doing. Then ask them why they're doing it, and keep asking until you get to a reason your customers would understand.

What are you working on?

I'm fixing the sort order on this datagrid.

Why are you working on that?

Because it's on the bug list.

Why is it on the bug list?

Because one of the testers reported it as a bug.

Why was it reported as a bug?

The tester thinks this field should sort in numeric order instead of alphanumeric order.

Why does the tester think that?

Evidently the users are having trouble finding things when item 2 is sorted under item 19.

If this conversation seems strange to you, you probably haven't worked with many software developers. Like the number of licks it takes to get to the center of a Tootsie Pop, it might surprise you just how many times you have to ask "why" until you get to something-- anything-- your customers would actually care about.

It's a big disconnect.

Software developers think their job is writing code. But it's not.* Their job is to solve the customer's problem. Sure, our preferred medium for solving problems is software, and that does involve writing code. But let's keep this squarely in context: writing code is something you have to do to deliver a solution. It is not an end in and of itself.

As software developers, we spend so much time mired in endless, fractal levels of detail that it's all too easy for us to fall into the trap of coding for the sake of coding. Without a clear focus and something to rally around, we lose the context around our code. That's why it's so important to have a clear project vision statement that everyone can use as a touchstone on the project. If you've got the vision statement down, every person on your team should be able to pass the "elevator test" with a stranger-- to clearly explain what they're working on, and why anyone would care, within 60 seconds.

If your team can't explain their work to a layperson in a meaningful way, you're in trouble, whether you realize it or not. But you are in good company. Jim Highsmith is here to help. He explains a quick formula for building a project vision model:

A product vision model helps team members pass the elevator test – the ability to explain the project to someone within two minutes. It comes from Geoffrey Moore's book Crossing the Chasm. It follows the form:

  • for (target customer)
  • who (statement of need or opportunity)
  • the (product name) is a (product category)
  • that (key benefit, compelling reason to buy)
  • unlike (primary competitive alternative)
  • our product (statement of primary differentiation)

Creating a product vision statement helps teams remain focused on the critical aspects of the product, even when details are changing rapidly. It is very easy to get focused on the short-term issues associated with a 2-4 week development iteration and lose track of the overall product vision.

I'm not a big fan of formulas, because they're so, well, formulaic. But it's a reasonable starting point. Play Mad Libs and see what you come up with. It's worlds better than no vision statement, or an uninspiring, rambling, ad-hoc mess masquerading as a vision statement. However, I think Jim's second suggestion for developing a vision statement holds much more promise.

Even within an IT organization, I think every project should be considered to produce a "product." Whether the project results involve enhancements to an internal accounting system or a new e-commerce site, product-oriented thinking pays back benefits.

One practice that I've found effective in getting teams to think about a product vision is the Design-the-Box exercise. This exercise is great to open up a session to initiate a project. The team makes the assumption that the product will be sold in a shrink-wrapped box, and their task is to design the product box front and back. This involves coming up with a product name, a graphic, three to four key bullet points on the front to "sell" the product, a detailed feature description on the back, and operating requirements.

Coming up with 15 or 20 product features proves to be easy. It's figuring out which 3 or 4 would cause someone to buy the product that is difficult. One thing that usually happens is an intense discussion about who the customers really are.

Design-the-Box is a fantastic way to formulate a vision statement. It's based on a concrete, real world concept that most people can easily wrap their heads around. Forget those pie-in-the-sky vision quests: what would our (hypothetical) product box look like?

We're all consumers; the design goals for a product box are obvious and universal. What is a product box if not the ultimate elevator pitch? It should...

  • Explain what our product is in the simplest possible way.
  • Make it crystal clear why a potential customer would want to buy this product.
  • Be uniquely identifiable amongst all the other boxes on the shelf.

Consider the box for the ill-fated Microsoft Bob product as an example. How do you explain why customers should want Microsoft Bob? How would you even explain what the heck Microsoft Bob is?

Microsoft Bob, front
Microsoft Bob, back

It's instructive to look at existing product boxes you find effective, and those you find ineffective. We definitely know what our product box shouldn't look like.

Have a rock solid vision statement for your project from day one. If you don't, use one of Jim's excellent suggestions to build one up immediately. Without a coherent vision statement, it's appalling how many teams can't pass the elevator test-- they can't explain what it is they're working on, or why it matters. Don't make that same mistake. Get a kick-ass vision statement that your teammates can relate their work to. Make sure your team can pass the elevator test.

* Completely stolen from Billy Hollis' great 15-minute software addicts talk.
