Coding Horror

programming and human factors

The Principle of Least Power

Tim Berners-Lee on the Principle of Least Power:

Computer Science spent the last forty years making languages which were as powerful as possible. Nowadays we have to appreciate the reasons for picking not the most powerful solution but the least powerful. The less powerful the language, the more you can do with the data stored in that language. If you write it in a simple declarative form, anyone can write a program to analyze it. If, for example, a web page with weather data has RDF describing that data, a user can retrieve it as a table, perhaps average it, plot it, deduce things from it in combination with other information. At the other end of the scale is the weather information portrayed by the cunning Java applet. While this might allow a very cool user interface, it cannot be analyzed at all. The search engine finding the page will have no idea of what the data is or what it is about. The only way to find out what a Java applet means is to set it running in front of a person.
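
To make Berners-Lee's contrast concrete, here's a minimal sketch in TypeScript-- the data shape and field names are invented for illustration, not taken from any real weather service. Because the readings are plain declarative data rather than code, a few lines can average them, and any other program could just as easily plot them or combine them with other data:

    // Hypothetical weather readings as plain declarative data.
    // The field names here are invented for illustration.
    interface Reading {
      date: string;
      highC: number;
    }

    const readings: Reading[] = [
      { date: "2007-07-01", highC: 28 },
      { date: "2007-07-02", highC: 31 },
      { date: "2007-07-03", highC: 26 },
    ];

    // Any program can analyze data in this form; none could do the same
    // if the numbers were locked inside a compiled applet's user interface.
    const avgHigh = readings.reduce((sum, r) => sum + r.highC, 0) / readings.length;
    console.log(`average high: ${avgHigh.toFixed(1)} C`);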

This was later codified in a more formal W3C document, The Rule of Least Power. I propose a corollary to this rule, which in the spirit of recent memes, I'll call Atwood's Law: any application that can be written in JavaScript, will eventually be written in JavaScript.

If you liked that article, I recommend the rest of Berners-Lee's architectural and philosophical points page. Although the content is quite old in internet time-- only two of the articles were written in the last year-- it still contains some timeless nuggets of advice and insight from the guy who invented the world wide web.

Discussion

The Non-Maximizing Maximize Button

One of my great frustrations with the Mac is the way the maximize button on each window fails to maximize the window. In a comment, Alex Chamberlain explained why this isn't broken, it's by design:

This is a textbook example of how Microsoft's programmers got the original Mac GUI wrong when they copied it for Win 3.1, and never bothered to fix it: there's no zoom button on Mac OS windows because it's unnecessary. What you're mistaking for a "maximize" button is actually a "snap window to size of contents" button. Far more useful and elegant. Once again, Microsoft has no taste and no clue when it comes to the GUI. All that money and Gates has never been able to hire a decent human factors person.

In other words, pressing the maximize button shouldn't maximize the window to the size of your monitor ...

Suck.com, in a browser window maximized to the monitor

... according to Apple, pressing the maximize button should maximize the window to the size of the content.

Suck.com, in a browser window maximized to the content

This is oddly reminiscent of the recent font smoothing debate, where Apple sided with the designers, and Microsoft sided with the realities of current hardware. Neither approach is wrong, per se; it depends what you want to emphasize and which tradeoff you think is more important.

I think the maximization problem is even more ambiguous than font rendering. With font rendering, the answers are based on objective mathematics: at low DPI you should favor the pixel grid and thus the user; at higher DPI you have enough pixels to favor the designer and render the font more accurately. And it's not an either-or distinction; the operating system could choose the font rendering strategy opportunistically depending on the capabilities of the display.
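
Here's a minimal sketch of that opportunistic approach-- the 144 DPI threshold and the strategy names are invented for illustration, not something either operating system actually exposes:

    // Choose a font rendering strategy from the display's capabilities.
    // The threshold and strategy names are illustrative assumptions only.
    type FontStrategy = "snap-to-pixel-grid" | "respect-letterforms";

    function chooseFontStrategy(displayDpi: number): FontStrategy {
      // Too few pixels: favor the pixel grid (and the reader).
      // Plenty of pixels: favor the designer's letterforms.
      return displayDpi < 144 ? "snap-to-pixel-grid" : "respect-letterforms";
    }

    console.log(chooseFontStrategy(96));  // "snap-to-pixel-grid"
    console.log(chooseFontStrategy(220)); // "respect-letterforms"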

Unfortunately, there is no optimal window maximizing strategy. As you can see in the above screenshot, we end up with a vast expanse of unwanted whitespace when suck.com is maximized to a 1600x1200 monitor. Excessively long lines are hard to read, which is why most newspapers are formatted into columns. It's also why websites with any design chops at all never let text extend the full width of the browser.

New York Times online, columnar layout

I agree in principle that windows shouldn't be larger than their maximum usable size. But I also think windows with a fixed layout shouldn't be resizable in the first place. This is the subject of an entire sidebar in Nielsen's latest book, Prioritizing Web Usability.

While the Maximize button tempts many users, they are often poorly served by it. For example, a 1024-pixel-wide window will result in overly long lines for text-heavy applications such as web browsing. The preponderance of maximized windows also makes it difficult for users to understand the multiwindow nature of modern GUIs. In theory people are supposed to work with overlapping windows but in practice they can't when windows take up the entire screen. Maximized windows deceive people into thinking of the computer as a full-screen environment rather than one with multiple, simultaneously active areas.

Fortunately, maximized windows will gradually vanish as people get bigger monitors. With a 2048-pixel-wide screen, a maximized window is so grotesquely oversized that most users will resize it and work with two or more windows at a time. Tiled windows may also enjoy a renaissance with huge screens, making it easy to deal with two to four windows simultaneously.

Here's where I think this argument starts to break down in a big way. Dealing with multiple windows is far too difficult, even for sophisticated computer users. Adding Z-order in addition to the traditional X and Y positioning is one variable too many. I don't think it's a coincidence that single window interfaces, such as the web browser or TiVo, dominate the market. Microsoft killed off the multiple document interface in Office-- a form of per-application windowing-- years ago. Can you name even one popular application with a multiple window interface?

Desktop with overlapping windows

Manipulating windows is pure excise-- extra work that stands between the user and completing their task. The more windows you have to deal with, the less work you get done, and the more time you spend sizing them, moving them, bringing them to the top, and dragging them around so they aren't overlapping. That's one reason I'm such a fan of double and triple monitor setups; more desktop space equals fewer overlapping windows, and less time spent futzing around with window layout.

That's my problem with Apple's non-maximizing maximize button. Allowing users to maximize any window to a monitor has its problems, to be sure. But Apple's method of forcing users to deal with more windows by preventing maximization is not good user interface design. It is fundamentally and deeply flawed. Users don't want to deal with the mental overhead of juggling multiple windows, and I can't blame them: neither do I. Designers should be coming up with alternative user interfaces that minimize windowing, instead of enforcing arbitrary window size limits on the user for their own good.

Discussion

Don't Be a Commodity Blogger

Jakob Nielsen's "Write Articles, Not Blog Postings" is highly critical of so-called commodity bloggers. As you might imagine, it wasn't received well by the blog community. Robert Scoble's stereotypical reaction was perhaps the worst of the bunch. In a legendary display of narcissism, Robert assumes the article is directed squarely at him, when it clearly isn't. He then treats it as a personal attack, which it clearly isn't. He piles on with retaliatory personal attacks of his own, which are totally unnecessary.

Isn't it ironic how Robert's response reinforces all the negative stereotypes of bloggers that Nielsen addresses in that very article? Maybe it really was about Robert Scoble all along. How embarrassing. For all of us.

Can we respond to Nielsen's article without degenerating into knee-jerk narcissism and ad hominem attacks? Sure we can. Larry O'Brien's response and Mike Pope's response are proof of that.

It's true that Nielsen's article is richly deserving of criticism. At its core, the blogging advice he offers is awfully simplistic-- I'd summarize it as "don't suck, and if you suck, stop sucking". Not exactly helpful. And the "mathematical modeling" he so proudly showcases in his article is questionable at best, as Steve McConnell pointed out in an email to me. First, here's Nielsen:

Even if you're the world's top expert, your worst posting will be below average, which will negatively impact on your brand equity. If you do start a blog despite my advice, at least screen your postings: wait an hour or two, then reread your comments and avoid uploading any that are average or poor. (Even average content undermines your brand. Don't contribute to information pollution by posting material that isn't above the average of other people's writings. Plus, of course, follow guidelines for blog usability.)

McConnell's response:

The number one writer out of 1000 doesn't have the same standard deviation as the general population. When Tiger Woods has a bad day, he's not worse than average. He might possibly be worse than the average player at a particular professional tournament, but all the players at the tournament are in the top fraction of 1%. So even when Tiger Woods is at his worst, he's still in the top 1%.

Nielsen again:

The fatter the report became, the more it sold. Of course, page count is only a rough indication of the amount of insight, which is what customers are really paying for. The new edition has a large number of eyetracking heatmaps, showing how users read various newsletters, and these many illustrations eat up pages ferociously. Still, there's no doubt that each report edition contains significantly more information than previous editions.

And McConnell again:

I'm reminded of the old statistical fallacy that fire stations must cause fires because there are more fire stations in those areas. If his business is growing overall, it wouldn't matter whether the page count of his report on e-mail newsletters was going up or down-- he's selling more reports because his business is growing. If he provided a correlation analysis newsletter-by-newsletter that showed he got more sales immediately following publication of a longer newsletter, that would be more compelling.

Despite the shaky statistical evidence, Nielsen is still an astute observer of human behavior. I loved his previous article on blogging, which cheekily explained why if you're reading this, you are a low value demographic. There's a similar kernel of truth in his latest article on blogging that deserves closer examination.

What matters is that the user experience is that of immersion in comprehensive treatment of a topic, as opposed to a blog-style linear sequence of short, frequent postings commenting on the hot topic of the day. It doesn't matter what software is used to host the content, the distinctions are:

  • in-depth vs. superficial
  • original/primary vs. derivative/secondary
  • driven by the author's expertise vs. being reflectively driven by other sites or outside events

Even if you find the rest of the article completely useless, take this advice to heart. I'm no fan of Chris Pirillo, but his 10 Ways to Eliminate the Echo Chamber is a more detailed form of the same advice, and should be required reading for every blogger.

As with everything else in life, what you get out of blogging is directly proportional to what you put into it. Let us know there's a unique human being in there, not just another mindless, link propagating automaton writing about the same damn current events everyone else is writing about. The world already has enough commodity bloggers. When you post a blog entry, don't forget to add the you.

Yes, it takes more effort-- but you're worth it.

Discussion

Steve Mann, Cyborg

I may have an unusual affinity for hardware, but Steve Mann is in a class of his own. He lives the hardware. He may be the world's original cyborg.

Steve Mann, an engineering professor at the University of Toronto, has lived as a cyborg for more than 20 years, wearing a web of wires, computers and electronic sensors that are designed to augment his memory, enhance his vision and keep tabs on his vital signs.

Mr. Mann has been exploring the frontiers of wearable computers since 1980.

Steve Mann, wearable computers from 1980-1998

Steve is evidently at version 7 of his wearable computer. I'm surprised he hasn't updated the rig since 1999.

There was a flurry of interest in his work in 2001, which coincided with a book and a movie made about Steve. There's a companion article on LinuxDevices from 2001 describing how to build a similar rig, but it feels ancient by today's standards.

It's hard to tell what's currently happening in the field of wearable computing. Steve's personal web page is a shambles; nothing is dated. The MIT wearable computing page hasn't been updated since 2005, and the Wearable Computing home appears to be in a state of limbo. At least Andy's wearable computing resource looks like it's being updated on a regular basis.

Somehow, I thought the cyborgs would have a more compelling web presence.

Discussion

Building a PC, Part III - Overclocking

Now that we have Scott Hanselman's computer completely built and stable-- or at least that's what our torture tests told us yesterday-- it's time to see how far we can overclock this system.

Overclocking a computer sounds complicated, but it really isn't. We'll use four tools:

 

  1. The motherboard's BIOS settings
  2. CPU-Z
  3. Prime95
  4. CoreTemp

While you're overclocking, CPU-Z will become your new best friend. It tells us exactly what's happening inside our PC. Let's start with a shot of CPU-Z showing the stock settings for this Core 2 Quad Q6600.

 

CPU-Z, stock settings, EIST low

CPU-Z, stock settings, EIST high

 

The left side is idle, and the right side is under load. Notice how the system automatically and dynamically adjusts the multiplier and voltage (the areas highlighted in red) to reduce power consumption. On the Intel platform, this technique is known as EIST, Enhanced Intel SpeedStep Technology.

First, a little basic math. This Core 2 Quad is clocked at 2.4 GHz, or 2400 MHz. The CPU speed is the front side bus speed times a multiplier.

 

Bus Speed   Front Side Bus Speed   Multiplier   CPU Speed
266 MHz     1066 MHz               x9           2400 MHz
266 MHz     1066 MHz               x6           1600 MHz
333 MHz     1333 MHz               x9           3000 MHz

 

As you can see, increasing the front side bus speed from 266 MHz to 333 MHz increases the speed of the CPU. It's also theoretically possible to increase the CPU's speed by changing the multiplier, but Intel locked down multiplier changes years ago at the silicon level. So the only way to increase the CPU's speed is to increase the front side bus speed. (There are exceptions, but they're rare.)
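
The arithmetic in that table is simple enough to sketch in a couple of lines-- note that the quad-pumped "1066 MHz" figure is just the 266 MHz base clock multiplied by four:

    // CPU speed is the front side bus base clock times the multiplier.
    function cpuSpeedMHz(busMHz: number, multiplier: number): number {
      return busMHz * multiplier;
    }

    console.log(cpuSpeedMHz(266, 9)); // 2394 -- sold as "2.4 GHz" at stock
    console.log(cpuSpeedMHz(333, 9)); // 2997 -- roughly the 3.0 GHz we're after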

Armed with that knowledge, let's go into the BIOS (by pressing DELETE while the system is booting) and start adjusting the CPU's speed. On the MSI P6N, the CPU speed options are tucked away under a menu titled "Cell Menu". This varies from BIOS to BIOS, but the gist of the settings is the same.

PC build, BIOS settings

I covered overclocking briefly last year when I built my last home computer. The principles are still the same. To overclock, we simply bump up the front side bus speed from 1066 MHz to something larger. To give ourselves additional headroom for overclocking, we also need to bump up the memory and CPU voltages a tad.

Note that this particular motherboard allows me to overclock the CPU front side bus independently of the memory front side bus. The memory bus speed is expressed as a ratio of the FSB, e.g., 5:4 or 3:2. Modern motherboards offer a wide range of ratios, so almost any memory bus speed is achievable within a few MHz. I had terrible results overclocking the memory in this system, so the memory is staying locked at its stock 800 MHz speed.
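
The ratio math itself is trivial-- here's a quick sketch of the arithmetic only, not any particular BIOS's behavior:

    // Memory clock derived from the FSB base clock and a mem:FSB ratio.
    function memoryClockMHz(fsbMHz: number, memPart: number, fsbPart: number): number {
      return fsbMHz * (memPart / fsbPart);
    }

    // A 3:2 ratio on a 266 MHz base clock is a 400 MHz memory clock, which
    // double data rate signaling markets as DDR2-800 -- the stock speed above.
    console.log(memoryClockMHz(266, 3, 2) * 2); // ~800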

Successful overclocking is a game of inches, not yards. Start small. Increase the voltage and the FSB slightly, then reboot. Remember-- we started from a stable system. If your system shows any sign of instability, no matter how small, you've definitely overclocked too far.
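
To make the game of inches concrete, here's a sketch of the loop you're effectively performing by hand-- the 5 MHz step size and the stability check below are stand-ins for rebooting and re-running your torture tests:

    // A sketch of the manual overclocking loop. The passesTortureTest stub
    // and the 5 MHz step are invented stand-ins for what you do by hand.
    function passesTortureTest(fsbMHz: number): boolean {
      // Stand-in: in reality this is a reboot plus an hour (or a night) of Prime95.
      return fsbMHz <= 333;
    }

    let stableFsbMHz = 266;            // known-good stock setting
    let candidate = stableFsbMHz + 5;  // a small bump, not a big leap
    while (passesTortureTest(candidate)) {
      stableFsbMHz = candidate;        // last setting that proved stable
      candidate += 5;                  // inch forward again
    }
    console.log(`settling on a ${stableFsbMHz} MHz front side bus`);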

And if your system fails to boot, or if you can't enter the BIOS to set things back, don't fret. This is why vendors include the "reset CMOS" function on the motherboard. On the MSI P6N, it's a small button; on other motherboards it's a jumper. If all else fails, you can also pop out the CMOS battery and let the system sit for a minute or so. That'll definitely clear the BIOS settings. However, make sure you disconnect the power before clearing the CMOS.

PC build, CMOS reset button

If you successfully boot into Windows, that doesn't mean your overclock is stable. You have to run the Prime95 torture test for at least an hour to see if things are truly working-- and ideally overnight, to be completely sure.

Overclocking ability varies widely per sample of CPU and even per motherboard. Some overclock well, some don't. It's largely the luck of the draw, although most Core 2 chips have reputations as solid overclockers for good reason. The Core 2 Quad Q6600 and MSI P6N combo in this system were able to deliver a successful overclock from 2.4 GHz to 3.0 GHz, as shown in this CPU-Z screenshot:

PC build, overclocked CPU-Z results

But is it stable? I ran four instances of the Prime95 torture test, alongside CoreTemp to make sure the CPU temperatures stayed under control. Heat is the enemy of speed-- all other things being equal, the higher the CPU temperature, the less likely it is your system will be stable. (This is also why extreme overclockers use water cooling and liquid nitrogen.) That's one reason why we have a fancy aftermarket CPU cooler.

Once the torture test is running, you can minimize Prime95. The taskbar icon should stay red if things are working properly. If you see a yellow icon, that means Prime95 bombed out and your overclock isn't stable.

PC build, overclocking torture test

After a few hours of heavy Prime95 load on each core, I was satisfied that the system was nominally stable.

Here are the benchmark results reflecting the newly overclocked CPU:

PC build, overclocking windows experience score

 

                         Stock   Overclocked
3DMark2006 (@1024x768)   7217    7398
PCMark2005               7353    8432
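
Running the percentage math on those scores-- this is just arithmetic on the table above, nothing newly measured:

    // Percentage improvement from the benchmark table above.
    const gain = (stock: number, overclocked: number): string =>
      ((overclocked - stock) / stock * 100).toFixed(1) + "%";

    console.log(gain(7217, 7398)); // "2.5%"  -- 3DMark barely moves; presumably GPU-bound
    console.log(gain(7353, 8432)); // "14.7%" -- PCMark scales much better with the CPU clock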

 

Yes, overclocking is a lot of manual effort, a lot of tedious trial and error tweaking. Is the risk of instability and all that effort worth it for a tiny speed bump? Why bother? Well, judge for yourself:

 

Core 2 Quad Q6600 (stock)         2.40 GHz   $480
Core 2 Extreme QX6800             2.93 GHz   $1,345
Core 2 Quad Q6600 (overclocked)   3.0 GHz    $480

 

I'd say saving nearly a thousand bucks is a pretty good argument in favor of overclocking.

Discussion