Coding Horror

programming and human factors

The Large Display Paradox

As displays increase in size and prices drop, more and more users will end up with relatively large displays by default. Nobody buys 15 or 17 inch displays any more; soon, it won't make financial sense to buy a display smaller than 20 inches. Eventually, if this trend continues, everyone will have 30-inch displays on their desktops. This is clearly a good thing. You can never have enough display space. But there is one unintended consequence of large displays.

One of the advantages of small monitors, ironically, is that because they're small, they nudge users into a simpler, windowless method of working. Instead of wasting time sizing, moving, and z-ordering windows, users only need to deal with one maximized window at a time. They can flip between maximized applications in much the same way they change channels on the television. But once your display gets to 1600 x 1200 or beyond, this easy one-app-per-display model isn't feasible any more. Dan recently ran into this problem when he upgraded to a 30" LCD:

Users of 30-inch monitors face the terrible, terrible problem of how to effectively use all of that space. You don't often want to maximise a folder or document window on a screen this big; either you'll end up with a lot of white space and important program buttons separated by a vast expanse of nothing, or you'll get lines of text 300 or more characters long, which are difficult to read.

That's the large display paradox. Having all that space can make you less productive due to all the window manipulation excise you have to deal with to make effective use of it.

Personally, I'm a card-carrying member of the prestigious three monitor club, which means I'm one step ahead of Dan. At least until he doubles or triples down:

Al Gore using three 30 inch monitors

Although my displays are only 20 inches in size, I have three of them. Maximizing a window to a 20 inch, 1600 x 1200 display area is a reasonable thing to do most of the time. I also use UltraMon, which gives me the indispensable ability to drag maximized windows between monitors. I'm constantly grabbing maximized windows and "throwing" them from monitor to monitor, à la Minority Report.

Minority Report user interface

With my triple monitor setup, I have a very large display surface with a primary area of focus and secondary areas that I can "snap" items to when I want them available for reference, but out of the way. I have a natural snapping grid because I use three physical monitors. It's a side-effect of the hardware, but a crucial one that I've absolutely come to rely on.

Dan only has a single large 30 inch monitor, so he has no natural grid to snap windows to. He needs a software solution:

I've been using WinSplit Revolution to manage this problem. It's a neat little Windows utility that makes it easy to bounce (most) windows around the screen and quickly resize them to take up the amounts of screen you probably want them to occupy. Two panes, each 1280 by 1600, give you a couple of twenty inch portrait-aspect-ratio "screens" that work great for many tasks.

I run into this problem a little bit on my three 20 inch displays, but it's only a minor nuisance. I'm in serious trouble if I ever get a multiple monitor setup with displays larger than 20 inches. (I'd also need a much, much larger desk.) There's no question that maximized windows aren't effective on large displays. For larger displays, I'd need to extend the "snap grid" effect of my three monitors to each individual monitor.

That's exactly what the WinSplit Revolution app does. It's quite intuitive; you use CTRL+ALT+(numpad) to push the currently selected window towards the quadrant of the screen represented by the number. Pressing the key sequence multiple times iterates through the two or three possible sizes at that particular position. This diagram explains it better than I can in text:

window grids possible using WinSplit Revolution

As you can see, you end up with a few dozen possible grid arrangements just using the simple numpad direction metaphor. But it's still quite a bit of work; I have to select each window and then use the numeric keypad (or the popup window equivalent) to push it over where I want it to go. As of version 1.8, WinSplit Revolution is perfectly multiple monitor aware, and even offers a convenient key combo to move windows from monitor to monitor, too.
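
To make the quadrant metaphor concrete, here's a minimal sketch of the same idea in Python, driving the raw Win32 API through ctypes. This is my illustration, not how WinSplit Revolution is actually implemented: it only handles the primary monitor, ignores the taskbar's work area, and the snap_active_window helper is a name I made up for the example.

    import ctypes

    user32 = ctypes.windll.user32  # Windows only

    def snap_active_window(quadrant):
        """Snap the foreground window into one quadrant of the primary display.
        quadrant is one of 'top-left', 'top-right', 'bottom-left', 'bottom-right'."""
        hwnd = user32.GetForegroundWindow()
        screen_w = user32.GetSystemMetrics(0)  # SM_CXSCREEN
        screen_h = user32.GetSystemMetrics(1)  # SM_CYSCREEN
        w, h = screen_w // 2, screen_h // 2
        x = 0 if "left" in quadrant else w
        y = 0 if "top" in quadrant else h
        user32.MoveWindow(hwnd, x, y, w, h, True)  # True = repaint

    # Roughly the effect of one CTRL+ALT+numpad press:
    snap_active_window("top-right")

Bind calls like that to hotkeys and cycle through a few width presets per position, and you have the essence of the numpad scheme described above.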

There's also GridMove, which likewise supports multiple monitors. Just use the middle mouse button to drag a window, and you invoke the current grid template, which provides automatic snappable drop targets for that window.

GridMove animation

In the not-too-distant future, every user will have a monitor so large that maximizing a window no longer makes sense for most applications. It's too bad some kind of automatic snap grid support can't be embedded into the operating system to help users deal with large display areas. Like Dan, we're all going to need it sooner or later. Until then, these applications-- or ones like them-- can fill the gap.

Discussion

Catalogs of Data Visualization

In the spirit of Jenifer Tidwell's excellent Designing Interfaces book, there are a few great catalogs of data visualization emerging online.

Start with the oft-cited Periodic Table of Visualization Methods.

Periodic Table of Visualization Methods

There's another excellent collection at Data Visualization: Modern Approaches.

visualization, html graph

visualization, munterbund

If you're looking for visualization with a less practical, more web-oriented bent, a colleague recently discovered the FWA: Favourite Website Awards site. It's a huge catalog of websites that use interesting, unique designs and visualizations. That's where we found the Uniqlock "clock" and the giant rickshaw pointer.

At any rate, if you're a student of Tufte like I am, you might find it helpful to review a sample of what visualizations and techniques are possible (or even advisable) before plowing ahead on your next "Rich Internet Application".

Discussion

Configuring The Stack

A standard part of my development kit is Microsoft's Visual Studio. Here's what I have to install to get a current, complete version of Visual Studio 2005 on a new PC:

  1. Visual Studio 2005 Team Suite Edition
  2. Visual Studio Team Explorer (Team Foundation Client)
  3. Visual Studio 2005 Service Pack 1
  4. Visual Studio 2005 Service Pack 1 Update for Windows Vista
  5. SQL Server 2005 Express Service Pack 2
  6. Visual Studio 2005 Team Edition for Database Professionals
  7. Visual Studio 2005 Team Edition for Database Professionals Service Pack 1

Note that this is only a partial list; it doesn't include any of the other Visual Studio add-ons you might need to code against newer Microsoft technologies, such as ASP.NET AJAX, WF, or .NET 3.0.

What's wrong with this picture?

I appreciate that some of these products were released out of order, which is partially why the install is so convoluted. But if one of the disadvantages of open-source software is "configuring the stack", I'm having a hard time seeing how Microsoft's commercial stack is any easier to configure than the alternative open source stacks these days. Either the open source stuff has gotten a lot more streamlined and mature, or the Microsoft stuff is devolving into complexity. I'm not sure which it is, exactly, but the argument that choosing a commercial development stack saves you time rings increasingly hollow.

As the old adage goes, Linux is only free if your time is worthless*. But apparently your time can be worthless even if you've paid for the privilege.

* attributed to Jamie Zawinski.

Discussion

What's Worse Than Crashing?

Here's an interesting thought question from Mike Stall: what's worse than crashing?

Mike provides the following list of crash scenarios, in order from best to worst:

  1. Application works as expected and never crashes.
  2. Application crashes due to rare bugs that nobody notices or cares about.
  3. Application crashes due to a commonly encountered bug.
  4. Application deadlocks and stops responding due to a common bug.
  5. Application crashes long after the original bug.
  6. Application causes data loss and/or corruption.

Mike points out that there's a natural tension between...

  • failing immediately when your program encounters a problem, i.e. "fail fast"
  • attempting to recover from the failure state and proceed normally

The philosophy behind "fail fast" is best explained in Jim Shore's article (pdf).

Some people recommend making your software robust by working around problems automatically. This results in the software "failing slowly." The program continues working right after an error but fails in strange ways later on. A system that fails fast does exactly the opposite: when a problem occurs, it fails immediately and visibly. Failing fast is a nonintuitive technique: "failing immediately and visibly" sounds like it would make your software more fragile, but it actually makes it more robust. Bugs are easier to find and fix, so fewer go into production.

Fail fast is reasonable advice-- if you're a developer. What could possibly be easier than calling everything to a screeching halt the minute you get a byte of data you don't like? Computers are spectacularly unforgiving, so it's only natural for developers to reflect that masochism directly back on users.
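
To make the two philosophies concrete, here's a trivial sketch of a config reader written both ways. It's purely illustrative; the function names and the timeout setting are invented for the example, not taken from Jim Shore's article.

    def read_timeout_failing_slowly(config):
        # "Robust" version: silently works around a missing or garbled value.
        # The program keeps running, but a typo in the config now surfaces
        # much later, as a mysteriously wrong timeout somewhere else.
        try:
            return int(config["timeout_seconds"])
        except (KeyError, ValueError):
            return 30  # quiet guess

    def read_timeout_failing_fast(config):
        # Fail-fast version: the problem is reported immediately and visibly,
        # at the point where it actually occurred.
        if "timeout_seconds" not in config:
            raise ValueError("config is missing 'timeout_seconds'")
        try:
            return int(config["timeout_seconds"])
        except ValueError:
            raise ValueError(
                f"'timeout_seconds' must be an integer, got {config['timeout_seconds']!r}")

The second version is more annoying in the short term, which is exactly the point: the bad config gets caught in development instead of production.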

But from the user's perspective, failing fast isn't helpful. To them, it's just another meaningless error dialog preventing them from getting their work done. The best software never pesters users with meaningless, trivial errors-- it's more considerate than that. Unfortunately, attempting to help the user by fixing the error could make things worse by leading to subtle and catastrophic failures down the road. As you work your way down Mike's list, the pain grows exponentially. For both developers and users. Troubleshooting #5 is a brutal death march, and by the time you get to #6-- you've lost or corrupted user data-- you'll be lucky to have any users left to fix bugs for.

What's interesting to me is that despite causing more than my share of software crashes and hardware bluescreens, I've never lost data, or had my data corrupted. You'd figure Murphy's Law would force the worst possible outcome at least once a year, but it's exceedingly rare in my experience. Maybe this is an encouraging sign for the current state of software engineering. Or maybe I've just been lucky.

So what can we, as software developers, do about this? If we adopt a "fail as often and as obnoxiously as possible" strategy, we've clearly failed our users. But if we corrupt or lose our users' data through misguided attempts to prevent error messages-- if we fail to treat our users' data as sacrosanct-- we've also failed our users. You have to do both at once:

  1. If you can safely fix the problem, you should. Take responsibility for your program. Don't take the easy way out by placing the burden of dealing with every problem squarely on your users.
  2. If you can't safely fix the problem, always err on the side of protecting the user's data. Protecting the user's data is a sacred trust. If you harm that basic contract of trust between the user and your program, you're hurting not only your own credibility-- but the credibility of the software industry as a whole. Once they've been burned by data loss or corruption, users don't soon forgive.
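
One time-honored way to keep that trust, whatever else goes wrong, is to never overwrite the user's file in place: write the new contents to a temporary file and only swap it in once the write has fully succeeded. A minimal sketch in Python, as my illustration of the general pattern rather than anything prescribed by Mike's post:

    import os
    import tempfile

    def save_document(path, contents):
        # Write to a temp file in the same directory, then atomically swap it
        # in. If anything fails mid-write, the user's existing file is intact.
        directory = os.path.dirname(os.path.abspath(path))
        fd, temp_path = tempfile.mkstemp(dir=directory)
        try:
            with os.fdopen(fd, "w") as f:
                f.write(contents)
                f.flush()
                os.fsync(f.fileno())     # make sure the bytes are on disk
            os.replace(temp_path, path)  # atomic within the same filesystem
        except BaseException:
            os.unlink(temp_path)         # discard the partial write
            raise                        # and still fail visibly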

The guiding principle here, as always, should be to respect your users. Do the right thing.

Discussion

Speeding Up Your PC's Boot Time

I frequently hear apocryphal stories about Macs booting much faster than Windows boxes. There's a great set of Mac boot time benchmarks on the Silver Mac site that provide solid empirical data to back up those claims:

                  Intel iMac   G5 iMac    G5 iMac    Mac Mini
                  (10.4.4)     (10.4.4)   (10.4.5)   (10.4.5)
Mac sound             4.5          3.5        3.6        4.0
Apple logo            6.7         15.6       15.2       10.2
Mac OS X             31.9         34.4       34.9       22.8
Ready to use         37.9         40.8       41.6       25.8

(all times in seconds from power on)

To be clear, the standard convention for "boot time" is the time from initial power on to the time we can finally interact with the desktop. The Silver Mac benchmarks are admirably thorough, as they break out important milestones during boot: the first boot sound, the appearance of the Apple logo on the screen, the OS X loading screen, and finally the ability to interact with the desktop. The intermediate milestones help us see where the real bottlenecks are in the boot process.

For perspective, a 1986 Mac Plus boots to the desktop in eleven seconds. The modern PC it is compared to clocks in at just over a minute of boot time. It's not even remotely a fair comparison for a whole host of reasons, but it's a fun data point nonetheless. How long does it take for your car to boot? Your MP3 player? Your television? Your cell phone?

For typical PC boot times, I turn to Ed Bott's excellent blog.

                     2006 vintage   2005 vintage   2004 vintage
                     PC Desktop     PC Laptop      PC Desktop
Windows XP               1:01           1:47           0:58
Windows Vista            1:12           1:20           1:14
Ubuntu Linux 6.10        1:49            --             --

(boot times in minutes:seconds)

Wow, PC boot times really do suck, right? Well, maybe. It depends on the PC.

The "Ultimate Developer Rig" I built for Scott Hanselman boots to a clean install of Vista x64 in 22 seconds. According to Scott, 10 seconds of that is attributable to the BIOS, and the other 12 is the operating system loading from disk. It's sobering to consider that almost half of the system's total boot time is spent in the third-party motherboard BIOS-- something Microsoft has no control over.

Now, these kinds of speedy PC boot times are only attainable if you have a clean install of the operating system. A clean install is de rigueur for Apple, because they're a single-source vendor. They have the luxury of complete control over the way their operating system is shipped-- not to mention the system BIOS itself. Every Apple box should boot consistently quickly as a matter of course. It'd be a crushing disappointment if they didn't.

On a Windows box, however, you almost never get a clean install. You typically get Microsoft's operating system plus a bevy of performance-sapping craplets the third-party vendor was paid to install on your system. Your boot times are already compromised the second you break the seal on the box.

Tweaking the BIOS to improve boot time is usually out of the question. But it is possible to restore most Windows boxes to near-clean-install boot speeds, at least. The process isn't exactly rocket surgery -- just stop doing so much stuff at startup! The primary tool for turning off unnecessary startup tasks is conveniently built into both XP and Vista: MSCONFIG.

msconfig utility screenshot

In my experience, anything that wants to run at boot almost never needs to. It's generally safe to turn off almost everything in the MSCONFIG startup tab. If you have any applets that you recognize and want to run on boot, leave those; for everything else, when in doubt, turn it off. This not only speeds up your boot time, it also frees up memory on the PC. If you later decide you made a mistake, reverting is easy enough-- just run MSCONFIG again and tick the appropriate checkbox.
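
Much of what MSCONFIG's startup tab shows comes from a handful of well-known registry locations plus the Startup folder. If you'd rather audit the registry half of that list programmatically, here's a small Windows-only Python sketch; it reads only the two most common Run keys, not every startup mechanism MSCONFIG knows about.

    import winreg

    RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

    def list_run_entries(hive, hive_name):
        # Print each name/command pair registered under one Run key.
        try:
            with winreg.OpenKey(hive, RUN_KEY) as key:
                value_count = winreg.QueryInfoKey(key)[1]
                for i in range(value_count):
                    name, command, _type = winreg.EnumValue(key, i)
                    print(f"{hive_name}: {name} = {command}")
        except FileNotFoundError:
            pass  # this hive has no Run key

    list_run_entries(winreg.HKEY_CURRENT_USER, "HKCU")   # per-user startup items
    list_run_entries(winreg.HKEY_LOCAL_MACHINE, "HKLM")  # machine-wide startup items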

It's also quite common for your boot time to degrade over time as you install certain kinds of software, as noted by Adrian Kingsley-Hughes:

Sudden changes in boot times are usually quite noticeable, but what usually happens is that boot times grow slowly over time. You start off with a PC with a fresh install of Windows on it and it feels nice and fast (hopefully - if it doesn't then you're in serious trouble and things are only going to get worse, no matter how much you trash your system trying to speed it up). You then install security software and performance takes a hit. Install some big apps like Office and boot times take another nose-dive. I've seen boot times increase by over 100% over the course of setting up a new PC. It's actually quite depressing to watch.

Indeed, and the vast majority of that boot slowdown is attributable to security and anti-virus software, as documented on PC Spy. That's why I urge people to pursue other methods of securing their PCs; if you rely on commercial anti-virus, you are literally crippling your PC's performance. Anti-virus software barely works these days anyway, so it's a raw deal no matter how you slice it.

Of course, the best boot time of all is no boot time-- as Adrian so aptly points out:

How many times a day do you boot up your PC? If you [boot] more than two or three times a day on a regular basis then you're not making proper use of the features that your PC offers, such as hibernate or sleep. My systems can go for days, and sometimes weeks, without a reboot, being hibernated/put to sleep at the end of the day or during any big breaks in the work day. In fact, I like the hibernate feature a lot because it lets me shut my systems down yet leave my work open. Next time I restart the system, all my apps and documents are open and waiting for me.

Even if I did need to reboot my system a few times a day, I don't think that I'd be all that worried about boot times unless they were really long (3+ minutes) or my system was really unstable and needed rebooting several times a day. In either case, there's a problem somewhere that needs to be solved. If the system only takes a few seconds or a couple of minutes to boot up then I'm really not worried about the effect that the lost time will have on my productivity.

He's right. Maybe boot time is ultimately irrelevant; your best bet is to avoid booting altogether. Make use of those "Sleep" and "Hibernation" options in lieu of powering all the way down. Support is fairly mature for these modes, even in the wild-and-woolly PC ecosystem-- and they're many times faster than cold booting and loading up all your applications again.
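
If you want to script it, sleep and hibernate can be triggered programmatically too. Here's a Windows-only sketch calling SetSuspendState from powrprof.dll via Python's ctypes; it assumes hibernation is already enabled on the machine, and it's meant as an illustration rather than a polished tool.

    import ctypes

    def suspend(hibernate=True):
        # SetSuspendState(Hibernate, Force, DisableWakeEvents):
        # Hibernate=True writes RAM to hiberfil.sys and powers off;
        # Hibernate=False requests sleep (suspend to RAM) instead.
        return bool(ctypes.windll.powrprof.SetSuspendState(hibernate, False, False))

    if __name__ == "__main__":
        suspend(hibernate=True)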

Discussion