Coding Horror

programming and human factors

Smart Tags and Sane Keyboard Shortcuts

I constantly rename variables. It's probably the single most frequent refactoring activity I do. And that's why I love Visual Studio 2005's built-in Smart Tags feature.

If you're not familiar with smart tags, check out K. Scott Allen's post; he has some nice screenshots illustrating how it works. Here's a demo movie of it in action:

Smart tag rename demo movie

Unfortunately, as Scott points out, the smart tags are a pain to use via the keyboard. To maintain compatibility with Microsoft Office's implementation of smart tags, they chose the same keyboard shortcut:

Shift + Alt + F10

Go ahead. Just try to type that. I double dog dare you. It's the most psychotic keyboard shortcut ever.

If that's the best keyboard shortcut they can come up with, I'll stick with mousing over the approximately 5 pixel drop-down trigger area of the smart tag. This is really unfortunate, because the keyboard shortcuts in Visual Studio are generally well thought-out and usable in my experience.

Luckily, I don't have to suffer through this bad keyboard shortcut for a frequent activity. There's an alternate shortcut defined:

Ctrl + .

But that's still not very intuitive. I prefer to use ..

Alt + Down

.. which is an excellent physical map to the conceptual activity of "dropping down" a smart tag menu. You can easily set this new keyboard shortcut up via Options, Environment, Keyboard. Search for commands containing "ShowSmartTag":

Smart Tag keyboard customization

Then just map a new global shortcut. Way better!

In general, it's safer to learn the default keyboard shortcuts for an environment-- too much customization is a self-defeating exercise. But sometimes the defaults are poorly chosen and you have no alternative but to customize them. What default keyboard shortcuts are you obliged to change in your environment, and why?

Discussion

Don't Acronymize Your Users

As a commenter noted in my previous post on how not to give a presentation, I have another complaint about software development presentations that I didn't list. They're chock full of meaningless acronyms. SOAP, BI, SOA, RDBMS, SGML, CRUD, RMS, XML, ORM, FAQ. I appreciate the need for brevity on slides, but can you at least have one slide that explains what the acronym means before giving up on words altogether?

The problem of acronymization was highlighted in bold relief for me last night as I was browsing through Dell's Core Duo laptops. "Gee", I thought, "I wonder what resolution this laptop screen has?" 1600 by 1200? 1280 by 1024?

You might expect the resolution of the display to be expressed in something typical and understandable, like pixels. But you'd be wrong.

This particular laptop has a "17 inch Wide Screen XGA+ display". What the hell does that mean?

What's a "WUXGA"? You might make a stab at guessing it's a rare four-horned bovid found in Central Asia, but in fact it's one of the grotesque acronyms vendors of personal computers, display boards, monitors, and projectors use to obfuscate one of the most fundamental specifications of the gear they're selling you: just how many pixels it can display.

After the introduction of the IBM Personal Computer in 1981, the PC industry got into the (bad) habit of identifying various screen resolutions by the name of the IBM or equivalent display adaptor having the corresponding maximum resolution. At the outset, this made sense: a user whose machine was equipped with an EGA (Enhanced Graphics Adaptor) need only look for monitors labeled "EGA Compatible". As the years passed and Moore's Law worked its magic on display resolution as well as compute speed, more and more mnemonics were coined as higher resolutions appeared and were adopted, resulting in today's ridiculous alphabet soup. Many of the current acronyms are, in fact, examples of "nested acronyms" of which Lisp programmers and other über-nerds are so fond. "WUXGA", for example, is "Wide UXGA", where "UXGA" means "Ultra XGA", and "XGA" is itself an acronym for "Extended Graphics Array" -- three levels of acronyms!

I needed an entire translation page just to determine that "wide" XGA+ is probably 1366 x 768. Or thereabouts. I'm not entirely sure. Thanks a lot, Dell.
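If you strip away the marketing, most of these acronyms do boil down to fixed pixel counts. Here's a throwaway lookup table I put together -- the pixel values are the commonly accepted standards, but the `describe` helper and its naming are purely my own sketch:

```python
# Common display acronyms and their standard pixel dimensions.
# (Marketing terms like Dell's "Wide Screen XGA+" don't always map
# cleanly onto these -- which is exactly the problem.)
RESOLUTIONS = {
    "XGA": (1024, 768),
    "XGA+": (1152, 864),
    "WXGA": (1280, 800),
    "SXGA": (1280, 1024),
    "WXGA+": (1440, 900),
    "UXGA": (1600, 1200),
    "WUXGA": (1920, 1200),
}

def describe(acronym):
    """Translate a display acronym into plain, understandable pixels."""
    width, height = RESOLUTIONS[acronym.upper()]
    return f"{acronym} = {width} x {height} pixels"

print(describe("WUXGA"))  # WUXGA = 1920 x 1200 pixels
```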

Don't do this to your users. If you must use acronyms in your application, be sure the acronym is explained up front. And in at least a few other places that are easily discoverable, too.

Discussion

How Not to Give a Presentation

I hold speakers to relatively high standards. They get paid to present to large groups because they're ostensibly good communicators. And I cannot believe the beginner mistakes some of the speakers are making here at VSLive.

Based on my experiences over the last two days, here are a few sure-fire ways not to give a presentation:

  1. Begin by establishing how impressive you are. Make sure we know all about your accomplishments and any books you've written. Be sure to plug your company and/or website. After all, this presentation is about you.

  2. Present a detailed presentation agenda. Before you can get to any content at all, you must dutifully itemize the table of contents! You know how people love reading the table of contents. It builds suspense. It's exciting. It keeps the audience on the edges of their seats, wondering "when will I actually see any content in this presentation"?

  3. Every slide should be absolutely jam-packed with information. Use as many bullet points and words on your slides as possible. Feel free to slap a few helpful URLs in there, too. If you can't fit it all on one slide, try a smaller font.

    Sample slide, blurred to protect the guilty

  4. Explain everything with bullet points. Don't show the audience. Tell them. Avoid pictures or, even worse, actual demonstrations. Feel free to use several slides to properly explain things.

  5. Read every word on your slides. Audiences can't read. It's your responsibility to do all the reading for them. But don't waste their time with a bunch of elaboration. Be succinct. Say exactly what is on each slide, then move on to the next slide.

  6. If you make a mistake or something goes wrong, take a few minutes to fix it. The audience can wait. While you're fixing things up, try that NASCAR joke again. It's hilarious.

  7. Use the highest possible desktop resolution. Show off your laptop's new widescreen LCD. Besides, limited resolutions and large fonts are childish and unprofessional.

  8. Summarize everything at the end. Audiences are notoriously forgetful. Spend the last few minutes patiently recapping everything they just saw.

  9. If you run out of time at the end of your session, keep going. The audience paid good money to see your presentation, so make sure they see it all. Your time is important.

  10. Don't take any questions. The content and quality of your presentation speaks for itself.

This stuff would be funny if it wasn't happening every single day. Death by PowerPoint, indeed.

Discussion

Presentation Magnification

Here at VSLive! 2006 San Francisco, I've been sitting through a lot of presentations. Unfortunately, I've spent a disproportionate amount of that time staring at tiny, unreadable 12 and 10 point IDE text.

Presenters, please don't do this to your audiences. If you can't pre-scale the font appropriately in the application, make use of one of the many automatic magnification utilities out there.

Heck, you can even use Windows XP's built in magnifier utility: Start, Run, Magnify.

magnifier-screenshot.png

The options for magnify.exe are limited but entirely serviceable:

  • It can automatically follow:
    • the mouse cursor
    • the keyboard focus
    • text editing
  • Click and drag the magnification surface to move it. It can float as a window or it can dock to any side of the screen like the taskbar.
  • Drag the corners to resize the surface.
  • The magnification is adjustable from 1x to 9x.

In the screenshot above, I have the magnification window docked to the top of the screen. The IDE is maximized normally under it. Since magnify follows all my mouse and keyboard actions automatically, everything I do is now perfectly visible -- even from the last row of the room.

Discussion

Not All Bugs Are Worth Fixing

triage-tag.png

One thing that continually frustrates me when working with dedicated test teams is that, well, they find too many bugs.

Don't get me wrong. I want to be the first person to know about any bug that results in inconvenience for a user. But how do you distinguish between bugs that users are likely to encounter, and bugs that users will probably never see?

The first thing you do is take that list of bugs from the testers and have yourself a triage meeting:

The term "triage" was borrowed from medical triage where a doctor or nurse has to prioritize care for a large group of injured people. The main job of a software bug triage team is to decide which bugs need to be fixed (or conversely, which bugs we're willing to ship with).

Eric Sink lists four questions that need to be answered during triage to decide whether a bug should be fixed or not:

  1. Severity: When this bug happens, how bad is the impact?
  2. Frequency: How often does this bug happen?
  3. Cost: How much effort would be required to fix this bug?
  4. Risk: What is the risk of fixing this bug?
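You could even turn those four questions into a rough back-of-the-envelope score. To be clear, this is purely illustrative -- Eric doesn't propose a formula, and the weighting below is entirely my own invention:

```python
# A toy triage score: severity and frequency argue for fixing a bug;
# cost and risk argue against. All inputs on a 1-5 scale.
def triage_score(severity, frequency, cost, risk):
    """Higher score = stronger case for fixing before shipping."""
    benefit = severity * frequency
    penalty = cost + risk
    return benefit / penalty

# Two hypothetical bugs, scored and sorted by priority.
bugs = {
    "crash on save": triage_score(severity=5, frequency=4, cost=2, risk=2),
    "typo in about box": triage_score(severity=1, frequency=1, cost=1, risk=1),
}
for name, score in sorted(bugs.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Of course, no formula survives contact with a real triage meeting; the point is only that all four questions pull on the decision at once.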

Triage isn't exactly my idea of a good time. But you have to do it, because you'll always have far more bugs than you have development time. Nobody has the luxury of fixing all the bugs in their software.

Testers produce two kinds of bugs:

  1. A small subset of very serious bugs that everyone can immediately agree on. These are great. They're the kind of catches that make me thank my lucky stars that we have dedicated testers. You go, girl-slash-boy!
  2. Everything else. A vast, gray wasteland of pseudo-bugs that nobody can really agree on. Is it an inconvenience for the user? Would users really do things this way? Would a user ever run into this? Do we even care?

It's a clear win for the bugs everyone agrees on. That's usually about ten to twenty percent of the bug list in my experience. But for everything else, there's a serious problem: testers aren't real users. I'd give a bug from a customer ten times the weight of a bug reported by a tester.

The source of the bug is just one factor to consider. Bug triage isn't a science. It's highly subjective and totally dependent on the specifics of your application. In Bugs Are a Business Decision, Jan Goyvaerts describes how different triage can be for applications at each end of that spectrum:

Last July I flew to Denver to attend the Shareware Industry Conference. I flew the leg from Taipei to Los Angeles on a Boeing 747 operated by China Airlines. This aircraft has two major software systems on board: the avionics software (flight computer), and the in-flight entertainment system. These two systems are completely independent of each other, developed by different companies, to different standards.

The avionics software is the software that flies the plane. No, the pilots don't fly the plane, the flight computer does. How many bugs would you tolerate in the avionics software? How many do you think Boeing left unfixed? How many people have ever been killed by software bugs in modern airliners? Zero. A flawed flight computer would immediately ground all 747s worldwide. Boeing would not recover.

The in-flight entertainment system is a completely different story. It's not essential to the plane. It only serves to make the passengers forget how uncomfortable those economy seats really are. If the entertainment system barfs all over itself, the cost is minimal. Passengers are already out of their money, and most will choose their next flight based on price and schedule rather than which movies are on those tiny screens, if any. I was actually quite pleased with China Airlines' system, which offered economy passengers individual screens and a choice of a dozen or so on-demand movies (i.e. each passenger can start viewing any movie at any time, and even pause and rewind). That is, until the system started acting up. It locked up a few times, causing everybody's movie to pause for several minutes. Once, the crew had to reboot the whole thing. That silly Linux penguin mocked me for several minutes while the boot messages crept by. X11 showed off its X-shaped cursor right in the middle of the screen even longer. Judging from the crew's attitude about it, the reboot seemed like something that's part of their training.

Bugs also cost money to fix. In My Life as a Code Economist, Eric Sink outlines all the decisions that go into whether or not a bug gets fixed at his company:

Don't we all start out with the belief that software only gets better as we work on it? The fact that we need regression testing is somehow like evidence that there is something wrong with the world. After all, it's not like anybody on our team is intentionally creating new bugs. We're just trying to make sure our product gets better every day, and yet, somewhere between 3.1.2 and 3.1.3, we made it worse.

But that's just the way it is. Every code change is a risk. A development cycle that doesn't recognize this will churn indefinitely and never create a shippable product. At some point, if the product is ever going to converge toward a release, you have to start deciding which bugs aren't going to get fixed.

To put it another way, think about what you want to say to yourself when you look in the mirror just after your product is released. The people in group 2 want to look in the mirror and say this:

"Our bug database has ZERO open items. We didn't defer a single bug. We fixed them all. After every bug fix, we regression tested the entire product, with 100% code coverage. Our product is perfect, absolutely flawless and above any criticism whatsoever."

The group 1 person wants to look in the mirror and say this:

"Our bug database has lots of open items. We have carefully reviewed every one of them and consider each one to be acceptable. In other words, most of them should probably not even be called bugs. We are not ashamed of this list of open items. On the contrary, we draw confidence from this list because we are shipping a product with a quality level that is well known. There will be no surprises and no mulligans. We admit that our product would be even better if all of these items were "fixed", but fixing them would risk introducing new bugs. We would essentially be exchanging these bugs which we find acceptable for the possibility of newly introduced bugs which might be showstoppers."

I'm not talking about shipping crappy products. I'm not suggesting that anybody ship products of low quality. I'm suggesting that decisions about software quality can be tough and subtle, and we need to be really smart about how to make those decisions. Sometimes a "bug" should not be fixed.

To me, triage is about one thing: making life better for your users. And the best way to do that is to base your triage decisions on data from actual usage -- via exception reporting, user feedback, and beta testing. Otherwise, triage is just a bunch of developers and testers in a room, trying to guess what users might do.

Discussion