Coding Horror

programming and human factors

Three Monitors For Every User

As far as I'm concerned, you can never be too rich, too thin, or have too much screen space. By "screen", I mean not just large monitors, but multiple large monitors. I've been evangelizing multiple monitors since the dark days of Windows Millennium Edition:

If you're a long-time reader, you're probably sick of hearing about this stuff by now, but something rather wonderful has happened since I last wrote about it.

If you're only using one monitor, you are cheating yourself out of potential productivity. Two monitors is a no-brainer. It's so fundamental that I included it as a part of the Programmer's Bill of Rights.

But you can do better.

As good as two monitors is, three monitors is even better. With three monitors, there's a "center" to focus on. And 50% more display area. While there's certainly a point of diminishing returns for additional monitors, I think three is the sweet spot. Even Edward Tufte, in the class I recently attended, explicitly mentioned multiple monitors. I don't care how large a single display can be; you can never have enough desktop space.

Normally, to achieve three monitors, you have to either:

  1. Buy an exotic video card that has more than 2 monitor connections.
  2. Install a second video card.

Fortunately, that is no longer true. I was excited to learn that the latest ATI video cards have gone from two to three video outputs, which means you can now achieve triple monitors with a single video card upgrade! They call this "Eyefinity", but it's really just shorthand for "raising the standard from two display outputs to three".

But, there is a (small) catch. The PC ecosystem is in the middle of shifting display output standards. For evidence of this, you need look no further than the back panel of one of these newfangled triple display capable ATI video cards:

[Image: the back-panel video outputs of a triple-display-capable Radeon card]

It contains:

  • two DVI outputs
  • one HDMI output
  • one DisplayPort output

I suspect part of this odd connector layout is due to space restrictions (DVI is awfully chunky), but I've always understood DisplayPort to be the new, improved DVI connector for computer monitors, and HDMI to be the new, improved s-video/component connector for televisions. Of course these worlds are blurring, as modern high-definition TVs make surprisingly effective computer monitors, too.

Anyway, since all my monitors have only DVI inputs, I wasn't sure what to do with the other output. So I asked on Super User. The helpful answers led me to discover that, as I suspected, the third output has to be DisplayPort. So to connect my third monitor, I needed to convert DisplayPort to DVI, and there are two ways:

  1. a passive, analog DisplayPort to DVI conversion cable for ~$30 that supports up to 1920x1200
  2. an active, digital DisplayPort to DVI converter for $110 that supports all resolutions

I ended up going with the active converter, which has mixed reviews, but it's worked well for me over the last few weeks.

[Image: Accell UltraAV DisplayPort to DVI adapter]

Note that this adapter requires USB power, and given the spotty results others have had with it, some theorize that it needs quite a bit of juice to work reliably. I plugged it into my system's nearby rear USB ports which do tend to deliver more power (they're closer to the power supply, and have short cable paths). Now, I have gotten the occasional very momentary black screen with it, but nothing severe enough to be a problem or frequent enough to become a pattern. If you have DisplayPort compatible monitors, of course, this whole conversion conundrum is a complete non-issue. But DisplayPort is fairly new, and even my new-ish LCD monitors don't support it yet.

The cool thing about this upgrade, besides feeding my video card addiction, is that I was able to simplify my hardware configuration. That's always good. I went from two video cards to one, which means less power consumption, simpler system configuration, and fewer overall driver oddities. Basically, it makes triple monitors -- dare I say it -- almost a mainstream desktop configuration. How could I not be excited about that?

I was also hoping that Nvidia would follow ATI's lead here and make three display outputs the standard for all their new video cards, too, but sadly that's not the case. It turns out their new GTX 480 fails in other ways, in that it's basically the Pentium 4 of video cards -- generating ridiculous amounts of heat for very little performance gain. Based on those two facts, I am comfortable endorsing ATI wholeheartedly at this point. But, do be careful, because not all ATI cards support triple display outputs (aka "Eyefinity"). These are the ones that I know do:

Unless you're a gamer, there's no reason to care about anything other than the least expensive model here, which will handily crush any 2D or 3D desktop GUI acceleration needs you might have. As an addict, of course I bought the high end model and it absolutely did not disappoint -- more than doubling my framerates in the excellent game Battlefield: Bad Company 2 over the GTX 280 I had before.

I'm excited that a triple monitor setup is now, thanks to ATI, so easily attainable for desktop users -- as long as you're aware of the DisplayPort caveat I discussed above. I'd encourage anyone who is even remotely interested in the (many) productivity benefits of a triple monitor setup to seriously consider an ATI video card upgrade.


Usability On The Cheap and Easy

Writing code? That's the easy part. Getting your application in the hands of users, and creating applications that people actually want to use — now that's the hard stuff.

I've been a long-time fan of Steve Krug's book Don't Make Me Think. Not just because it's a quick, easy read (and it is!), but because it's the most concise and most approachable book I've ever found for teaching the fundamental importance of usability. As far as I'm concerned, if you want to help us make the software industry a saner place, the first step is getting Don't Make Me Think into the hands of as many of your coworkers as you can. If you don't have people who care about usability on your project, your project is doomed.

Beyond getting people over the hurdle of at least paging through the Krug book, and perhaps begrudgingly conceding that this usability stuff matters, the next challenge is figuring out how to integrate usability testing into your project. It's easy to parrot "Usability is Important!", but you have to walk the walk, too. I touched on some low friction ways to get started in Low-Fi Usability Testing. That rough outline is now available in handy, more complete book form as Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems.

[Image: Rocket Surgery Made Easy book cover]

Don't worry, Krug's book is just as usable as his advice. It's yet another quick, easy read. Take it from the man himself:

  • Usability testing is one of the best things people can do to improve Web sites (or almost anything they're creating that people have to interact with).
  • Since most organizations can't afford to hire someone to do testing for them on a regular basis, everyone should learn to do it themselves. And …
  • I could probably write a pretty good book explaining how to do it.

If you're wondering what the beginner's "how do I boil water?" recipe for software project usability is, stop reading this post and get a copy of Rocket Surgery Made Easy. Now.

One of the holy grails of usability testing is eyetracking – measuring where people's eyes look as they use software and web pages. Yes, there are clever JavaScript tools that can measure where users move their pointers, but that's only a small part of the story. Where the eye wanders, the pointer may not, and vice-versa. But, who has the time and equipment necessary to conduct an actual eyetracking study? Almost nobody.

That's where Eyetracking Web Usability comes in.

[Image: Eyetracking Web Usability book cover]

Eyetracking Web Usability is chock full of incredibly detailed eyetracking data for dozens of websites. Even though you (probably) can't afford to do real eyetracking, you can certainly use this book as a reference. There is enough variety in UI and data that you can map the results, observations, and explanations found here to what your project is doing.

This particular book is rather eyetracking-specific, but it's just the latest entry in a whole series on usability, and I recommend them all highly. These books are a fount of worthwhile data for anyone who works on software and cares about usability, from one of the preeminent usability experts on the web.

Usability isn't really cheap or easy. It's an endless war, with innumerable battlegrounds, stretching all the way back to the dawn of computing. But these books, at least, are cheap and easy in the sense that they give you some basic training in fighting the good (usability) fight. That's the best I can do, and it's all I'd ask from anyone else I work with.


The Opposite of Fitts' Law

If you've ever wrangled a user interface, you've probably heard of Fitts' Law. It's pretty simple – the larger an item is, and the closer it is to your cursor, the easier it is to click on. Kevin Hale put together a great visual summary of Fitts' Law, so rather than over-explain it, I'll refer you there.

The short version of Fitts' law, to save you all that tedious reading, is this:

  • Put commonly accessed UI elements on the edges of the screen. Because the cursor automatically stops at the edges, they will be easier to click on.
  • Make clickable areas as large as you can. Larger targets are easier to click on.

I know, it's very simple, almost too simple, but humor me by following along with some thought exercises. Imagine yourself trying to click on ...

  • a 1 x 1 target at a random location
  • a 5 x 5 target at a random location
  • a 50 x 50 target at a random location
  • a 5 x 5 target in the corner of your screen
  • a 1 x 100 target at the bottom of your screen
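If you want to put rough numbers on that intuition, the Shannon formulation of Fitts' law says difficulty grows as log2(distance / width + 1). Here's a back-of-the-envelope sketch of those targets; the 400-pixel cursor distance and the "effective width" values for the corner and edge targets are made up purely for illustration, so only the relative ordering matters:

// A quick sketch of the Shannon formulation of Fitts' law:
//   difficulty (in bits) = log2(distance / width + 1)
// The 400-pixel distance and the effective widths below are illustrative
// assumptions, not measurements.
using System;

class FittsSketch
{
    static double IndexOfDifficulty(double distance, double width) =>
        Math.Log(distance / width + 1, 2);

    static void Main()
    {
        const double distance = 400; // hypothetical cursor-to-target distance, in pixels

        var targets = new (string Label, double Width)[]
        {
            ("1 x 1 target", 1),
            ("5 x 5 target", 5),
            ("50 x 50 target", 50),
            ("5 x 5 target in a corner (cursor stops at the edges, so it acts much larger)", 100),
            ("1 x 100 target at the bottom edge (effectively bottomless)", 100),
        };

        foreach (var (label, width) in targets)
            Console.WriteLine($"{label}: {IndexOfDifficulty(distance, width):F2} bits");
    }
}

Smaller numbers mean faster, more reliable clicks -- which is the whole argument behind those two bullets.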

Fitts' Law is mostly common sense, and enjoys enough currency with UI designers that they're likely to know about it even if they don't follow it as religiously as they should. Unfortunately, I've found that designers are much less likely to consider the opposite of Fitts' Law, which is arguably just as important.

If we should make UI elements we want users to click on large, and ideally place them at corners or edges for maximum clickability – what should we do with UI elements we don't want users to click on? Like, say, the "delete all my work" button?

Alan Cooper, in About Face 3, calls this the ejector seat lever.

In the cockpit of every jet fighter is a brightly painted lever that, when pulled, fires a small rocket engine underneath the pilot's seat, blowing the pilot, still in his seat, out of the aircraft to parachute safely to earth. Ejector seat levers can only be used once, and their consequences are significant and irreversible.

Applications must have ejector seat levers so that users can "occasionally" move persistent objects in the interface, or dramatically (sometimes irreversibly) alter the function or behavior of the application. The one thing that must never happen is accidental deployment of the ejector seat.

[Image: unintended ejection seat lever consequences]

The interface design must assure that a user can never inadvertently fire the ejector seat when all he wants to do is make some minor adjustment to the program.

I can think of a half-dozen applications I regularly use where the ejector seat button is inexplicably placed right next to the cabin lights button. Let's take a look at our old friend GMail, for example:

[Screenshot: Gmail's Send button right next to Save Now]

I know what you're thinking. Did he click Send or Save Now? Well, to tell you the truth, in all the excitement of composing that angry email, I kind of lost track myself. Good thing we can easily undo a sent mail! Oh wait, we totally can't. Consider my seat, or at least that particular rash email, ejected.

It's even worse when I'm archiving emails.

[Screenshot: Gmail's Archive button a few pixels away from Report Spam]

While there were at least 10 pixels between the buttons in the previous example, here there are all of ... three. Every few days I accidentally click Report Spam when I really meant to click Archive. Now, to Google's credit, they do offer a simple, obvious undo path for these accidental clicks. But I can't help wondering why it is, exactly, that these two buttons with such radically different functionality just have to be right next to each other.

Undo is powerful stuff, but wouldn't it be better still if I wasn't pulling the darn ejector seat lever all the time? Wouldn't it make more sense to put that risky ejector seat lever in a different location, and make it smaller? Consider the WordPress post editor.

[Screenshot: WordPress's prominent Update button and small Move to Trash link]

Here, the common Update operation is large and obviously a button – it's easy to see and easy to click on. The less common Move to Trash operation is smaller, presented as a vanilla hyperlink, and placed well away from Update.
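The same idea carries over to any UI toolkit, not just the web. As a rough, entirely hypothetical sketch in C# WinForms (this is not WordPress's actual markup, just an illustration): the common, safe action gets a big button anchored to a corner, and the dangerous action gets a small plain link placed well away from it:

using System;
using System.Drawing;
using System.Windows.Forms;

class EditorForm : Form
{
    // Hypothetical form: the safe, common action is big and lives at a corner;
    // the dangerous action is small, plain, and far away from it.
    public EditorForm()
    {
        var update = new Button
        {
            Text = "Update",
            Size = new Size(120, 40),
            Anchor = AnchorStyles.Bottom | AnchorStyles.Right,
            Location = new Point(ClientSize.Width - 130, ClientSize.Height - 50)
        };
        update.Click += (s, e) => { /* save changes */ };

        var trash = new LinkLabel
        {
            Text = "Move to Trash",
            AutoSize = true,
            Location = new Point(10, 10) // well away from Update, easy to avoid
        };
        trash.Click += (s, e) => { /* confirm, then delete */ };

        Controls.Add(update);
        Controls.Add(trash);
    }

    [STAThread]
    static void Main() => Application.Run(new EditorForm());
}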

The next time you're constructing a user interface, you should absolutely follow Fitts' law. It just makes sense. But don't forget to follow the opposite of Fitts' law, too – uncommon or dangerous UI items should be difficult to click on!


Compiled or Bust?

While I may have mixed emotions toward LINQ to SQL, we've had great success with it on Stack Overflow. That's why I was surprised to read the following:

If you are building an ASP.NET web application that's going to get thousands of hits per hour, the execution overhead of Linq queries is going to consume too much CPU and make your site slow. There’s a runtime cost associated with each and every Linq Query you write. The queries are parsed and converted to a nice SQL Statement on every hit. It’s not done at compile time because there’s no way to figure out what you might be sending as the parameters in the queries during runtime.

So, if you have common Linq to Sql statements like the following one ..

var query = from widget in dc.Widgets
            where widget.ID == id && widget.PageID == pageId
            select widget;
var widget = query.SingleOrDefault();

.. throughout your growing web application, you are soon going to have scalability nightmares.

J.D. Conley goes further:

So I dug into the call graph a bit and found out the code causing by far the most damage was the creation of the LINQ query object for every call! The actual round trip to the database paled in comparison.

I must admit, these results seem ... unbelievable. Querying the database is so slow (relative to typical code execution) that if you have to ask how long it will take, you can't afford it. I have a very hard time accepting the idea that dynamically parsing a Linq query would take longer than round-tripping to the database. Pretend I'm from Missouri: show me. Because I am unconvinced.

All of this is very curious, because Stack Overflow uses naive, uncompiled Linq queries on every page, and we are a top 1,000 website on the public internet by most accounts these days. We get a considerable amount of traffic; the last time I checked it was about 1.5 million pageviews per day. We go to great pains to make sure everything is as fast as we can. We're not as fast as we'd like to be yet, but I think we're doing a reasonable job so far. The journey is still very much underway -- we realize that overnight success takes years.

Anyway, Stack Overflow has dozens to hundreds of plain vanilla uncompiled Linq to SQL queries on every page. What we don't have is "scalability nightmares". CPU usage has been one of our least relevant constraints over the last two years as the site has grown. We've also heard from other development teams, multiple times, that Linq to SQL is "slow". But we've never been able to reproduce this even when armed with a profiler.

Quite the mystery.

Now, it's absolutely true that Linq to SQL has the performance peculiarity both posters are describing. We know that's true because Rico tells us so, and Rico ... well, Rico's the man.

In short the problem is that the basic Linq construction (we don’t really have to reach for a complex query to illustrate) results in repeated evaluations of the query if you ran the query more than once.

Each execution builds the expression tree, and then builds the required SQL. In many cases all that will be different from one invocation to another is a single integer filtering parameter. Furthermore, any databinding code that we must emit via lightweight reflection will have to be jitted each time the query runs. Implicit caching of these objects seems problematic because we could never know what good policy is for such a cache -- only the user has the necessary knowledge.

It's fascinating stuff; you should read the whole series.

What's unfortunate about Linq in this scenario is that you're intentionally sacrificing something that any old and busted SQL database gives you for free. When you send a garden variety parameterized SQL query through to a traditional SQL database, it's hashed, then matched against existing cached query plans. The computational cost of pre-processing a given query is only paid the first time the database sees the new query. All subsequent runs of that same query use the cached query plan and skip the query evaluation. Not so in Linq to SQL. As Rico said, every single run of the Linq query is fully parsed every time it happens.
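For comparison, this is roughly what that garden variety parameterized SQL query looks like in plain ADO.NET. The connection string, table, and column names here are placeholders, and the plan caching itself happens entirely on the database server -- the point is just that the SQL text is identical on every call, so the server can hash it once and reuse the plan:

using System.Data.SqlClient;

static class WidgetQueries
{
    // Hypothetical example; only the @id parameter value changes between calls,
    // so the server matches the statement against its cached query plans.
    public static void LoadWidget(int id)
    {
        using (var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=true"))
        using (var cmd = new SqlCommand("select * from Widgets where ID = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // consume the row here
                }
            }
        }
    }
}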

Now, there is a way to compile your Linq queries, but I personally find the syntax kind of ... ugly and contorted. You tell me:

// Compile the query once up front; the delegate q can then be invoked
// repeatedly with different parameters.
Func<Northwinds, int, IQueryable<Orders>> q =
    CompiledQuery.Compile<Northwinds, int, IQueryable<Orders>>(
        (Northwinds nw, int orderid) =>
            from o in nw.Orders
            where o.OrderId == orderid
            select o);

Northwinds nw = new Northwinds(conn);
foreach (Orders o in q(nw, orderid))
{
}

Anyway, that's neither here nor there; we can confirm the performance penalty of failing to compile our queries ourselves. We recently wrote a one-time conversion job against a simple 3-column table containing about 500,000 records. The meat of it looked like this:

db.PostTags.Where(t => t.PostId == this.Id).ToList();

Then we compared it with the SQL variant; note that this is also being auto-cast down to the handy PostTag object, so the only difference is whether or not the query itself is SQL.

db.ExecuteQuery<PostTag>(
    "select * from PostTags where PostId={0}", this.Id).ToList();

On an Intel Core 2 Quad running at 2.83 GHz, the former took 422 seconds while the latter took 275 seconds.

The penalty for failing to compile this query, across 500k iterations, was 147 seconds. Wow! That's 1.5 times slower! Man, only a BASIC programmer would be dumb enough to skip compiling all their Linq queries. But wait a second, no, wait 147 seconds. Let's do the math, even though I suck at it: 147 seconds spread over 500,000 queries is about 0.3 milliseconds per query. Each uncompiled run of the query took less than one third of a millisecond longer.

At first I was worried that every Stack Overflow page was 1.5 times slower than it should be. But then I realized it's probably more realistic to make sure that any page we generate isn't doing 500 freakin' thousand queries! Have we found ourselves in the sad tragedy of micro-optimization theater ... again? I think we might have. Now I'm just depressed.

While it's arguably correct to say that every compiled Linq query (or for that matter, any compiled anything) will be faster, your decisions should be a bit more nuanced than compiled or bust. How much benefit you get out of compilation depends on how many times you're doing it. Rico would be the first to point this out, and in fact he already has:

Testing 1 batches of 5000 selects

  uncompiled: 543.48 selects/sec
  compiled:   925.75 selects/sec

Testing 5000 batches of 1 selects

  uncompiled: 546.03 selects/sec
  compiled:   461.89 selects/sec

Have I mentioned that Rico is the man? Do you see the inversion here? Either you're doing 1 batch of 5000 queries, or 5000 batches of 1 query. One is dramatically faster when compiled; the other is actually a big honking net negative if you consider the developer time spent converting all those beautifully, wonderfully simple Linq queries to the contorted syntax necessary for compilation. Not to mention the implied code maintenance.
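One way to read that inversion: what matters is how often you pay the compilation cost relative to how often the query actually runs. Here's a hedged sketch of the two patterns, reusing the hypothetical Northwinds and Orders types from the snippet above (so this won't compile without a Linq to SQL DataContext of that shape):

using System;
using System.Linq;
using System.Data.Linq;

static class OrderQueries
{
    // Compile once, reuse forever: the cost is paid a single time, so it wins
    // when the same query runs thousands of times.
    static readonly Func<Northwinds, int, IQueryable<Orders>> OrdersById =
        CompiledQuery.Compile<Northwinds, int, IQueryable<Orders>>(
            (Northwinds nw, int orderId) =>
                from o in nw.Orders
                where o.OrderId == orderId
                select o);

    public static IQueryable<Orders> GetOrders(Northwinds nw, int orderId) =>
        OrdersById(nw, orderId);

    // Compile on every call: all the overhead of compilation with none of the
    // reuse, which is how a "compiled" query can end up slower than plain Linq.
    public static IQueryable<Orders> GetOrdersRecompiled(Northwinds nw, int orderId)
    {
        var q = CompiledQuery.Compile<Northwinds, int, IQueryable<Orders>>(
            (Northwinds db, int id) =>
                from o in db.Orders
                where o.OrderId == id
                select o);
        return q(nw, orderId);
    }
}

If your hot path looks like the first method, compilation may be worth the syntactic pain; if it looks like the second, it almost certainly isn't.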

I'm a big fan of compiled languages. Even Facebook will tell you that PHP is about half as fast as it should be on a good day with a tailwind. But compilation alone is not the entire performance story. Not even close. If you're compiling something -- whether it's PHP, a regular expression, or a Linq query -- don't expect a silver bullet, or you may end up disappointed.


The Non-Programming Programmer

I find it difficult to believe, but the reports keep pouring in via Twitter and email: many candidates who show up for programming job interviews can't program. At all. Consider this recent email from Mike Lin:

The article Why Can't Programmers... Program? changed the way I did interviews. I used to lead off by building rapport. That proved to be too time-consuming when, as you mentioned, the vast majority of candidates were simply non-technical. So I started leading off with technical questions. First progressing from easy to hard questions. Then I noticed I identified the rejects faster if I went the other way – hard questions first – so long as the hard questions were still in the "if you don't know this then you can't work here" category. Most of my interviews still took about twenty minutes, because tough questions take some time to answer and evaluate. But it was a big improvement over the rapport-building method; and it could be done over the phone.

After reading your article, I started doing code interviews over the phone, using web meetings. My interview times were down to about 15 minutes each to identify people who just can't code – the vast majority.

I wrote that article in 2007, and I am stunned, but not entirely surprised, to hear that three years later "the vast majority" of so-called programmers who apply for a programming job interview are unable to write the smallest of programs. To be clear, hard is a relative term -- we're not talking about complicated, Google-style graduate computer science interview problems. This is extremely simple stuff we're asking candidates to do. And they can't. It's the equivalent of attempting to hire a truck driver and finding out that 90 percent of the job applicants can't find the gas pedal or the gear shift.
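To calibrate "extremely simple", think FizzBuzz, the canonical example from that original article. Something on the order of this little C# program (any language the candidate likes would do) is the level we're talking about:

using System;

class FizzBuzz
{
    // Print 1 through 100, but "Fizz" for multiples of 3, "Buzz" for multiples
    // of 5, and "FizzBuzz" for multiples of both.
    static void Main()
    {
        for (int i = 1; i <= 100; i++)
        {
            if (i % 15 == 0)
                Console.WriteLine("FizzBuzz");
            else if (i % 3 == 0)
                Console.WriteLine("Fizz");
            else if (i % 5 == 0)
                Console.WriteLine("Buzz");
            else
                Console.WriteLine(i);
        }
    }
}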

I agree, it's insane. But it happens every day, and is (apparently) an epidemic hiring problem in our industry.

You have to get to the simple technical interview questions immediately to screen out the legions of non-programming programmers. Screening over the telephone is a wise choice, as I've noted before. But screening over the internet is even better, and arguably more natural for code.

I still wasn't super-happy with having to start up the web meeting and making these guys share their desktops with me. I searched for other suitable tools for doing short "pen-and-paper" style coding interviews over the web, but I couldn't find any. So I did what any self-respecting programmer would do. I wrote one.

Man, was it worth it! I schedule my initial technical screenings with job applicants in 15-minute blocks. I'm usually done in 5-10 minutes, sadly. I schedule an actual interview with them if they can at least write a simple 10-line program. That doesn't happen often, but at least I don't have to waste a whole lot of time anymore.

Mike adds a disclaimer that his homegrown coding interview tool isn't meant to show off his coding prowess. He needed a tool, so he wrote one -- and thoughtfully shared it with us. There might well be others out there; what online tools do you use to screen programmers?

Three years later, I'm still wondering: why do people who can't write a simple program even entertain the idea they can get jobs as working programmers? Clearly, some of them must be succeeding. Which means our industry-wide interviewing standards for programmers are woefully inadequate, and that's a disgrace. It's degrading to every working programmer.

At least bad programmers can be educated; non-programming programmers are not only hopeless but also cheapen the careers of everyone around them. They must be eradicated, starting with simple technical programming tests that should be a part of every programmer interview.
