Coding Horror

programming and human factors

Troubleshooting .NET performance using Peanut Butter

Here's some excellent, concise advice on troubleshooting performance in managed code. It all starts with peanut butter, naturally:

My last entry was some generic advice about how to do a good performance investigation. I think actually it's too generic to be really useful -- in fact I think it fails my Peanut Butter Sandwich Test. I review a lot of documents and sometimes they say things that are so obvious as to be uninteresting. The little quip I have for this situation is, "Yes what you are saying is true of [the system] but it's also true of peanut butter sandwiches." Consider a snippet like this one, "Use a cache where it provides benefits," and compare with, "Use a peanut butter sandwich where it provides benefits." Both seem to work... that's a bad sign.

As promised, Rico then provides a prescriptive list of Windows Performance Monitor counters, with comments indicating what the values should look like in a healthy app. He also linked to another blog post with a bit more detail on the .NET memory performance counters specifically, which is also worth reading through.
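The spirit of such a counter list is easy to capture in a few lines: sample each counter, compare it against a healthy ceiling, and flag anything out of range. Here's a minimal sketch in Python; the counter names are real .NET CLR counters, but the threshold values are illustrative assumptions, not Rico's actual numbers:

```python
# Hypothetical healthy ceilings for a few .NET performance counters.
# The specific values are illustrative assumptions, not Rico's numbers.
HEALTHY_MAX = {
    "% Time in GC": 10.0,            # sustained GC time above ~10% is suspect
    "Contention Rate / sec": 50.0,   # heavy lock contention
}

def check_counters(sample):
    """Return the counters in a sample that exceed their healthy ceiling."""
    return {name: value for name, value in sample.items()
            if value > HEALTHY_MAX.get(name, float("inf"))}

# A single simulated sample of counter values
sample = {"% Time in GC": 42.0, "Contention Rate / sec": 5.0}
print(check_counters(sample))  # flags only the GC counter
```

On a real Windows box you'd feed this from Performance Monitor (or the `typeperf` command line tool) rather than a hand-built dictionary, but the triage logic is the same.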

Incompetence Considered Harmful

A research paper from two psychologists at Cornell offers an interesting insight:

For example, consider the ability to write grammatical English. The skills that enable one to construct a grammatical sentence are the same skills necessary to recognize a grammatical sentence, and thus are the same skills necessary to determine if a grammatical mistake has been made. In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter.

We focus on the metacognitive skills of the incompetent to explain, in part, the fact that people seem to be so imperfect in appraising themselves and their abilities. Perhaps the best illustration of this tendency is the "above-average effect," or the tendency of the average person to believe he or she is above average, a result that defies the logic of descriptive statistics.

According to the data presented in this paper, the least competent people are the ones most likely to erroneously think they are competent:

Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.

That is a paradox indeed, but our profession is rife with exactly this kind of paradox. It has parallels in several areas of software development:

  1. Wicked Problems. You can't understand the problem you're trying to solve until you've partially solved it.
  2. Iterative development. Users can't fully express what they want you to build until you build a version of the software for them to experience.
  3. Extreme skill disparities. The worst software developers are profoundly bad; the best software developers are absurdly good.

According to this paper, it's extremely likely that the authors of The Daily WTF code snippets remain blissfully unaware of the pain they are inflicting on themselves, and everyone else. As I've said before, our biggest challenge is disseminating best practices to other developers. Making fun of incompetence is amusing, but until these developers' skills are bootstrapped to a moderate level, they're going to keep pounding out more and more WTFs. And that's no laughing matter.

John Dvorak, blogging O.G.

Like Steve Broback, I spent many of my formative years in computing reading John Dvorak's magazine column.

I started enthusiastically reading John Dvorak's columns back in 1984, at my first job selling IBM PCs and Mac 128k computers from a storefront in Seattle. I have always enjoyed his candor and attitude despite the fact that he has been so wrong, so many times. I still have the 1984 column where he derides the Macintosh mouse as being like a "joystick" and how it tries to make computing like "a game".

It's true-- John Dvorak was the archetypal cranky blogger, way before blogging was even a glint in Dave Winer's eye. But as Steve wryly notes with a graph of Google search results, Dvorak now plays second fiddle to the very bloggers he derides in his latest column:

The influential bloggers should be defined here. These are people whom you've never heard of, but whom other influential A-list utopianist bloggers all know. I reckon there are about 500 of them. He (or she) influences other like-minded bloggers, creating a groupthink form of critical mass, just like atomic fission, as they bounce off each other with repetitive cross-links: trackback links, self-congratulatory links, confirmations, and praise-for-their-genius links. BOOM! You get a formidable explosion -- an A-bomb of groupthink. You could get radiation sickness if you happen to be in the area. Except for Wired online and a few media bloggers, nobody is in the area, so nobody outside the groupthink community really cares about any of this. These explosions are generally self-contained and harmless to the environment.

One thing is for sure: all those damn blogging kids need to get the hell off Dvorak's lawn. It's fascinating how the web can cause these amazing inversions of power. A guy like Dvorak, who "has been pounding the keyboard since the day the World Wide Web came online, and was one of the first and most prolific contributors of ongoing content to the Web," won't even rate the first page of your search results.

(May 2007: Steve posted an interesting update elaborating on the inversion of influence between traditional print media and bloggers.)

VM Server Hosting

My friend Josh Carlisle was kind enough to host this website during my move to California. Josh set me up with a Microsoft Virtual Server slice of Windows 2003 Standard on his Xeon 2.8 server. I'm currently running a WIMP (Windows, IIS, MySQL, Perl) configuration, which I was able to set up remotely without issue.

Although everything is generally running quite well, and the commit charge is well under 256 MB in Task Manager, I am disappointed with VM performance-- again. Intel's Xeon 2.8 GHz is basically just a rebranded Pentium 4 2.8 GHz, but that's still far more performance than I need. Unfortunately, under actual use, it performs more like a 1.4 GHz Pentium 4-- the older version with only 512 KB of L2 cache! HTTP POST operations that used to take under a second now take several seconds; installs that used to take a minute now take upwards of five minutes, and so on.

VMs are great for convenience, but the performance cost is quite a bit higher than I expected-- on both client and server. Even if you aren't emulating the x86 processor, the cost of emulating the motherboard hardware is clearly substantial, particularly for disk and video. I found this list of Virtual Server performance tips, although it's not very server-specific-- it's basically the same advice I've seen for Virtual PC. No silver bullet there: get the fastest disks you can afford, dedicate them to your VMs, and make sure you have enough memory. Virtual PC Guy also has some interesting tips for using Remote Desktop with a virtual server.
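If you suspect your VM's disk is the bottleneck, one crude sanity check is to time a sequential write inside the guest and compare the number against the same test on the host. A minimal sketch in Python-- the data size and block size here are arbitrary choices, and a real benchmark would also test random I/O:

```python
import os
import tempfile
import time

def disk_write_throughput(size_mb=64, block_kb=512):
    """Write size_mb of random data in block_kb chunks, return MB/s."""
    block = os.urandom(block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force the data to actually hit the disk
        elapsed = time.perf_counter() - start
        return size_mb / elapsed
    finally:
        os.remove(path)

if __name__ == "__main__":
    print(f"sequential write: {disk_write_throughput():.1f} MB/s")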

Success through Failure

I found this Will Wright quote, from a roundtable at last week's E3, rather interesting:

Will Wright said he's learned the most from games that seemed appealing on paper, but were failures in the marketplace. "I actually ask people when hiring how many failures they've worked on," he said, "and I'm actually more likely to hire someone based on how many failures they've experienced. I think it's the best learning system."

As a developer, the likelihood that you're working on a project that will fail is high. Every failure should be treated as a rich opportunity to learn what doesn't work, and why. Consider this famous anecdote about Thomas Edison:

I remember thinking, rather bitterly at the time, about the story of Thomas Edison's early attempts to come up with the right material for a lightbulb. He had tried a thousand different elements and all had failed. A colleague asked him if he felt his time had been wasted, since he had discovered nothing. "Hardly," Edison is said to have retorted briskly. "I have discovered a thousand things that don't work."

In fact, the difference between success and failure can ultimately hinge on how you handle failure-- as illustrated in this New Yorker article about predicting the success or failure of surgeons:

Charles Bosk, a sociologist at the University of Pennsylvania, once conducted a set of interviews with young doctors who had either resigned or been fired from neurosurgery-training programs, in an effort to figure out what separated the unsuccessful surgeons from their successful counterparts.

He concluded that, far more than technical skills or intelligence, what was necessary for success was the sort of attitude that Quest has – a practical-minded obsession with the possibility and the consequences of failure. "When I interviewed the surgeons who were fired, I used to leave the interview shaking," Bosk said. "I would hear these horrible stories about what they did wrong, but the thing was that they didn't know that what they did was wrong. In my interviewing, I began to develop what I thought was an indicator of whether someone was going to be a good surgeon or not. It was a couple of simple questions: Have you ever made a mistake? And, if so, what was your worst mistake? The people who said, 'Gee, I haven't really had one,' or, 'I've had a couple of bad outcomes but they were due to things outside my control' -- invariably those were the worst candidates. And the residents who said, 'I make mistakes all the time. There was this horrible thing that happened just yesterday and here's what it was.' They were the best. They had the ability to rethink everything that they'd done and imagine how they might have done it differently."

This should always be a key interview question when you're hiring. Software development is difficult in the best of conditions. You should always be failing some of the time, and learning from those failures in an honest way. Otherwise, you're cheating yourself out of the best professional development opportunities.
