Coding Horror

programming and human factors

Performance is a Feature

We've always put a heavy emphasis on performance at Stack Overflow and Stack Exchange. Not just because we're performance wonks (guilty!), but because we think speed is a competitive advantage. There's plenty of experimental data proving that the slower your website loads and displays, the fewer people will use it.

[Google found that] the page with 10 results took 0.4 seconds to generate. The page with 30 results took 0.9 seconds. Half a second delay caused a 20% drop in traffic. Half a second delay killed user satisfaction.

In A/B tests, [Amazon] tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue.

I believe the converse of this is also true. That is, the faster your website is, the more people will use it. This follows logically if you think like an information omnivore: the faster you can load the page, the faster you can tell whether that page contains what you want. Therefore, you should always favor fast websites. The opportunity cost for switching on the public internet is effectively nil, and whatever it is that you're looking for, there are multiple websites that offer a similar experience. So how do you distinguish yourself? You start by being, above all else, fast.

Do you, too, feel the need – the need for speed? If so, I have three pieces of advice that I'd like to share with you.

1. Follow the Yahoo Guidelines. Religiously.

The golden reference standard for building a fast website remains Yahoo's 13 Simple Rules for Speeding Up Your Web Site from 2007. There is one caveat, however:

There's some good advice here, but there's also a lot of advice that only makes sense if you run a website that gets millions of unique users per day. Do you run a website like that? If so, what are you doing reading this instead of flying your private jet to a Bermuda vacation with your trophy wife?

So … a funny thing happened to me since I wrote that four years ago. I now run a network of public, community-driven Q&A websites that do get millions of daily unique users. (I'm still waiting on the jet and trophy wife.) It does depend a little on the size of your site, but if you run a public website, you really should pore over Yahoo's checklist and take every line of it to heart. Or use the tools that do this for you, like Yahoo's YSlow and Google's Page Speed.

We've long since implemented all but one of the 13 items on Yahoo's list. But it's a big one: using a Content Delivery Network.

The user's proximity to your web server has an impact on response times. Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective. But where should you start?

As a first step to implementing geographically dispersed content, don't attempt to redesign your web application to work in a distributed architecture. Depending on the application, changing the architecture could include daunting tasks such as synchronizing session state and replicating database transactions across server locations. Attempts to reduce the distance between users and your content could be delayed by, or never pass, this application architecture step.

Remember that 80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc. This is the Performance Golden Rule. Rather than starting with the difficult task of redesigning your application architecture, it's better to first disperse your static content. This not only achieves a bigger reduction in response times, but it's easier thanks to content delivery networks.
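To make "disperse your static content" concrete, here's a minimal sketch at the application level. Everything in it is hypothetical (the CDN host, the helper name); it assumes a CDN configured to pull static files from your origin and serve them from edge locations near the user:

```typescript
// Hypothetical helper: point static asset URLs at a CDN host.
// Assumes cdn.example.com is configured to pull from your origin,
// so /img/logo.png and https://cdn.example.com/img/logo.png are the
// same file -- the CDN just serves it from an edge near the user.
const CDN_HOST = "https://cdn.example.com";
const ASSET_VERSION = "v1234"; // bump on each deploy to bust caches

export function cdnUrl(path: string): string {
  // The version token lets edges and browsers cache "forever"
  // (far-future Expires, another Yahoo rule) yet still see new deploys.
  return `${CDN_HOST}${path}?${ASSET_VERSION}`;
}

// e.g. <img src="${cdnUrl('/img/logo.png')}"> in your page templates
console.log(cdnUrl("/img/logo.png"));
// -> https://cdn.example.com/img/logo.png?v1234
```

No application rearchitecting required; you change how asset URLs are generated and let the CDN do the geographic heavy lifting.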

As a final optimization step, we just rolled out a CDN for all our static content. The results are promising; the baseline here is our datacenter in NYC, so the map below should be read as "how much faster did our website get for users in this area of the world?"

Cdn-performance-test-world-map

In the interests of technical accuracy, static content isn't the complete performance picture; you still have to talk to our servers in NYC to get the dynamic content, which is the meat of the page. But 90% of our visitors are anonymous, only 36% of our traffic is from the USA, and Yahoo's research shows that 40 to 60 percent of daily visitors come in with an empty browser cache. Optimizing this cold cache performance worldwide is a huge win.

Now, I would not recommend going directly for a CDN. I'd leave that until later, as there are a bunch of performance tweaks on Yahoo's list which are free and trivial to implement. But using a CDN has gotten a heck of a lot less expensive and much simpler since 2007, with lots more competition in the space from companies like Amazon, NetDNA, and CacheFly. So when the time comes, and you've worked through the Yahoo list as religiously as I recommend, you'll be ready.

2. Love (and Optimize for) Your Anonymous and Registered Users

Our Q&A sites are all about making the internet better. That's why all the contributed content is licensed back to the community under Creative Commons and always visible, whether or not you are logged in. I despise walled gardens. In fact, you don't have to log in at all to participate in Q&A with us. Not even a little!

The primary source of our traffic is anonymous users arriving from search engines and elsewhere. It's classic "write once, read – and hopefully edit – millions of times." But we are also making the site richer and more dynamic for our avid community members, who definitely are logged in. We add features all the time, which means we're serving up more JavaScript and HTML. There's an unavoidable tension here between the download footprint for users who are on the site every day, and users who may visit once a month or once a year.

Both classes are important, but have fundamentally different needs. Anonymous users are voracious consumers optimizing for rapid browsing, while our avid community members are the source of all the great content that drives the network. These guys (and gals) need each other, and they both deserve special treatment. We design and optimize for two classes of users: anonymous, and logged in. Consider the following Google Chrome network panel trace on a random Super User question I picked:

                     requests   data transferred   DOMContentLoaded   onload
Logged in (as me)    29         233.31 KB          1.17 s             1.31 s
Anonymous            22         111.40 KB          768 ms             1.28 s

We minimize the footprint of HTML, CSS, and JavaScript for anonymous users so they get their pages even faster. We load a stub of very basic functionality and dynamically "rez in" things like editing when the user focuses the answer input area. For logged in users, the footprint is necessarily larger, but we can also add features for our most avid community members at will without fear of harming the experience of the vast, silent majority of anonymous users.
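As a rough sketch of that "rez in" technique – hypothetical module and element names, not our actual code – the idea is just a dynamic import wired to the first focus event:

```typescript
// Ship anonymous users a bare textarea; fetch the heavy editor code
// only when someone actually starts to answer. './editor' is a
// hypothetical module that readers who never answer never download.
const answerBox = document.querySelector<HTMLTextAreaElement>("#answer");

if (answerBox) {
  answerBox.addEventListener(
    "focus",
    async () => {
      const { attachEditor } = await import("./editor");
      attachEditor(answerBox); // upgrade the plain textarea in place
    },
    { once: true } // fetch and attach only on the first focus
  );
}
```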

3. Make Performance a Point of (Public) Pride

Now that we've exhausted the Yahoo performance guidance, and made sure we're serving the absolute minimum necessary to our anonymous users – where else can we go for performance? Back to our code, of course.

When it comes to website performance, there is no getting around one fundamental law of the universe: you can never serve a webpage faster than you can render it on the server. I know, duh. But I'm telling you, it's very easy to fall into the trap of not noticing a few hundred milliseconds here and there over the course of a year or so of development, and then one day you turn around and your pages are taking almost a full freaking second to render on the server. It's a heck of a liability to start 1 full second in the hole before you've even transmitted your first byte over the wire!

That's why, as a developer, you need to put performance right in front of your face on every single page, all the time. That's exactly what we did with our MVC Mini Profiler, which we are contributing back to the world as open source. The simple act of putting a render time in the upper right hand corner of every page we serve forced us to fix all our performance regressions and omissions.

Mvc-mini-profiler-question-page

(Note that you can click on the SQL linked above to see what's actually being run and how long it took in each step. And you can use the share link to share the profiler data for this run with your fellow developers to help them diagnose a particular problem. And it works for multiple AJAX requests. Have I mentioned that our open source MVC Mini Profiler is totally freaking awesome? If you're on a .NET stack, you should really check it out.)
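The Mini Profiler is a .NET project, but the core trick ports to any stack. Here's a toy sketch of "put the render time in front of your face" as an Express middleware in TypeScript; it's my own illustration of the principle, not how the profiler actually works:

```typescript
// Toy version of "render time on every page": time each request on
// the server and stamp it into the HTML so nobody can ignore it.
import express from "express";

const app = express();

app.use((req, res, next) => {
  const start = process.hrtime.bigint(); // high-resolution start time
  const elapsedMs = () => Number(process.hrtime.bigint() - start) / 1e6;
  res.locals.renderTime = elapsedMs; // pages call this as they render
  res.on("finish", () => {
    console.log(`${req.method} ${req.url}: ${elapsedMs().toFixed(1)} ms`);
  });
  next();
});

app.get("/", (_req, res) => {
  res.send(`<h1>Hello</h1><small>${res.locals.renderTime().toFixed(1)} ms</small>`);
});

app.listen(3000);
```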

In fact, with the render time appearing on every page for everyone on the dev team, performance became a point of pride. We had so many places where we had just gotten a little sloppy or missed some tiny thing that slowed a page down inordinately. Most of the performance fixes were trivial, and even the ones that were not turned into fantastic opportunities to rearchitect and make things simpler and faster for all of our users.

Did it work? You bet your sweet ILAsm it worked:

Google-webmaster-crawl-stats-download-time

That's the Google crawler page download time; the experimental Google Site Performance page, which ostensibly reflects complete full-page browser load time, confirms the improvements:

Google-webmaster-site-performance-overview

While server page render time is only part of the performance story, it is the baseline from which you start. I cannot emphasize enough how much the simple act of putting the page render time on the page helped us, as a development team, build a dramatically faster site. Our site was always relatively fast, but even for a historically "fast" site like ours, we realized huge gains in performance from this one simple change.

I won't lie to you. Performance isn't easy. It's been a long, hard road getting to where we are now – and we've thrown a lot of unicorn dollars toward really nice hardware to run everything on, though I wouldn't call any of our hardware choices particularly extravagant. And I did follow my own advice, for the record.

I distinctly remember switching from AltaVista to Google back in 2000 in no small part because it was blazing fast. To me, performance is a feature, and I simply like using fast websites more than slow websites, so naturally I'm going to build a site that I would want to use. But I think there's also a lesson to be learned here about the competitive landscape of the public internet, where there are two kinds of websites: the quick and the dead.

Which one will you be?


Geek Transportation Systems

On my first visit to the Fog Creek Software offices in 2008, I was surprised to see programmers zooming around the office on scooters. I didn't realize that scooters were something geeks would be into, but it sure looked like fun, albeit borderline dangerous fun, on the 25th floor of an office building in Manhattan.

It turns out that having children is a great excuse to get into fun things like scooters. I didn't know much about scooters for adults, so being an obsessive geek, of course I had to research the heck out of this topic. My research turned up the Xootr MG as a top choice.

Xootr-mg-scooter

News flash: scooters are fun. Really fun!

But per my research (and now, personal experience) scooters are also surprisingly practical forms of transportation in certain situations, namely when …

  • you need to travel 1-3 miles
  • the route is not too hilly
  • it is not raining or wet
  • the route is mostly paved or has sidewalks
  • you are comfortable being "that awkward looking guy on a scooter"

Scooters are very primitive machines; that simplicity is both their greatest strength and their greatest weakness. A scooter is arguably the simplest personal wheeled vehicle there is. In these short distance scenarios, scooters tend to win over, say, bicycles because there's less setup and teardown necessary – you don't have to lock up a scooter, nor do you have to wear a helmet (though I highly recommend one). Just hop on and go! You get almost all the benefits of gravity and wheeled efficiency with a minimum of fuss and maintenance. And yes, it's fun, too!

I'm just a scooter newbie, but the Xootr MG has a few characteristics I liked a lot, including rock-solid construction, a front brake (not super efficient, but reasonably effective when combined with the rear foot fender brake), and a wide, comfortable platform for your feet. But it does take some effort to kick around – don't forget to alternate your legs – and the ride can be rough at times depending on the surface. Large bumps and very uneven surfaces are wreck material. And going uphill on a scooter, beyond the absolute wussiest and mildest of grades, is simply out of the question.

For longer distances, or if the terrain is rougher or hillier, a scooter might work, but it'd be a tough way to travel. What you need in those cases is a small, portable bicycle – one you can take with you. I've dabbled in foldable bicycles before, and we own two Dahon folding bicycles. They're great, versatile and inexpensive bikes.

New-foldable-bikes

Dahon makes fine traditional folding bicycles, but they are not quite as pick-up-and-go as I would like for short trips. As an experiment, I purchased something I've had my eye on for a long time: the Strida LT folding bicycle. Or, as I like to call it, my "mid-life crisis vehicle".

Strida-green-side

(also pictured: some cool accessories that I recommend for Strida owners: a Cateye Reflex rear LED light on the rack, a Knog Beetle silicone front LED light on the handlebars, and a Sunlite Bicycle bungee cargo net.)

The appeal of the Strida is that it folds down to an incredibly small size.

Strida-green-folded

It's almost a pogo stick in folded form. I took my Strida on a short trip into San Francisco for a speaking gig in the city, which involved riding on BART, and the Strida in practice is everything I dreamed a modern ultra-portable folding bicycle could be:

  • front and rear disc brakes; superb stoppers
  • belt drive so no grease on your hands or pants
  • built in fenders in case you encounter puddles or rain
  • comfortable, full size(ish) upright riding position
  • super-easy, crazy fast folding: five seconds, no kidding!
  • when folded, the bike can be propped by the rear rack (as pictured) or strolled along by rolling it on its wheels.

The Strida may look odd, and perhaps it is odd, but I found it to be shockingly close to an ideal go-anywhere do-anything convenience bicycle. It isn't perfect, of course:

  • My only real beef with the Strida: the seat adjustment is horrendously kludgey. Adjusting the seat height on a Strida is painfully awkward even in the garage; on the go it's not an option.
  • It is a small wheel bicycle, with all the unavoidable physical compromises that entails. It'll always be a little twitchy and not something you would want to go on a 10 or 20 mile ride with.
  • It's a single speed, and you're not supposed to stand out of the saddle for power pedaling at any time. The frame and belt drive won't take it. On anything other than a moderate uphill you'll need to hop off and walk. (There is a slightly fancier Strida that has two internal hub gears, but I know nothing about it.)
  • Because the fold involves a ball joint, it is possible to permanently damage the bike if you aren't careful and force the fold. I doubt this is a real concern for anyone who has folded a Strida more than once, but if a ham-fisted friend tries to fold your Strida to "test it out", you might be in trouble.

None of these criticisms apply to the Dahon, so hopefully you can get a sense of the dividing line between an ultra-folder and a plain old folding bicycle.

Being a geek, it's not like I spend a lot of time outdoors. But when I do venture outside, I like to travel in a manner befitting a geek. That is, with my utility belt fully equipped, and in the dorkiest, most efficient vehicle possible for a trip of that particular length. Scooters, folding bicycles, recumbents, pogo sticks … whatever it takes. If you, too, would like to geek out around town, consider adding the Xootr MG scooter and Strida LT folding bicycle to your stable of geek transportation systems.


Suspension, Ban or Hellban?

For almost eight months after launching Stack Overflow to the public, we had no concept of banning or blocking users. Like any new frontier town in the wilderness of the internet, I suppose it was inevitable that we'd be obliged to build a jail at some point. But first we had to come up with some form of government.

Stack Overflow was always intended to be a democracy. With the Stack Exchange Q&A network, we've come a long way towards that goal:

  • We create new communities through the open, democratic process defined at Area 51.
  • Our communities are maintained and operated by the most avid citizens within that community. The more reputation you have, the more privileges you earn.
  • We hold yearly moderator elections once each community is large enough to support them.

We strive mightily to build self organizing, self governing communities of people who are passionate about a topic, whether it be motor vehicles or homebrewing or musical instruments, or … whatever. Our general philosophy is power to the people.

Power-to-the-people

But in the absence of some system of law, the tiny minority of users out to do harm – intentionally or not – eventually drive out all the civil community members, leaving behind a lawless, chaotic badland.

Our method of dealing with disruptive or destructive community members is simple: their accounts are placed in timed suspension. Initial suspension periods range from 1 to 7 days, and increase exponentially with each subsequent suspension. We prefer the term "timed suspension" to "ban" to emphasize that we do want users to come back to their accounts, if they can learn to refrain from engaging in those disruptive or problematic behaviors. It's not so much a punishment as a time for the user to cool down and reflect on the nature of their participation in our community. (Well, at least in theory.)

Timed suspension works, but much like democracy itself, it is a highly imperfect, noisy system. The transparency provides ample evidence that moderators aren't secretly whisking people away in the middle of the night. But it can also be a bit too … entertaining for some members of the community, leading to hours and hours of meta-discussion about who is suspended, why they are suspended, whether it was fair, what the evidence is, how we are censoring people, and on and on and on. While a certain amount of introspection is important and necessary, it can also become a substitute for getting stuff done. This might naturally lead one to wonder – what if we could suspend problematic users without anyone knowing they had been suspended?

There are three primary forms of secretly suspending users that I know of:

  1. A hellbanned user is invisible to all other users, but crucially, not to themselves. From their perspective, they are participating normally in the community, but nobody ever responds to them. They can no longer disrupt the community because they are effectively a ghost. It's a clever way of enforcing the "don't feed the troll" rule in the community. When nothing they post ever gets a response, a hellbanned user is likely to get bored or frustrated and leave. I believe it, too; if I learned anything from reading The Great Brain as a child, it's that the silent treatment is the cruelest punishment of them all.

    I've always associated hellbanning with the Something Awful Forums. Per this amazing MetaFilter discussion, it turns out the roots of hellbanning go much deeper – all the way back to an early Telnet BBS system called Citadel, where the "problem user bit" was introduced around 1986. Like so many other things in social software, it keeps getting reinvented over and over again by clueless software developers who believe they're the first programmer smart enough to figure out how people work. It's supported in most popular forum and blog software, as documented in the Drupal Cave module.

    (There is one additional form of hellbanning that I feel compelled to mention because it is particularly cruel – when hellbanned users can see only themselves and other hellbanned users. Brrr. I'm pretty sure Dante wrote a chapter about that, somewhere.)

  2. A slowbanned user has delays forcibly introduced into every page they visit. From their perspective, your site has just gotten terribly, horribly slow. And stays that way. They can hardly disrupt the community when they're struggling to get web pages to load. There's also science behind this one: per research from Google and Amazon, every page load delay directly reduces participation. Get slow enough, for long enough, and a slowbanned user is likely to seek out greener and speedier pastures elsewhere on the internet. (A minimal code sketch of a slowban follows this list.)

  3. An errorbanned user has errors inserted at random into pages they visit. You might consider this a more severe extension of slowbanning – instead of pages loading slowly, they might not load at all, return cryptic HTTP errors, return the wrong page altogether, fail to load key dependencies like JavaScript and images and CSS, and so forth. I'm sure your devious little brains can imagine dozens of ways things could go "wrong" for an errorbanned user. This one is a bit more esoteric, but it isn't theoretical; an existing implementation exists in the form of the Drupal Misery module.
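Here's the slowban sketch promised above, as an Express middleware in TypeScript. The user lookup is hypothetical; the point is how little code "the site is mysteriously slow, for you alone" actually requires:

```typescript
// Slowban sketch: flagged users get an artificial delay on every request.
import express from "express";

const app = express();
const slowbanned = new Set(["user-123"]); // hypothetical flagged user IDs

app.use((req, _res, next) => {
  // In a real site the ID would come from the session, not a header.
  const userId = req.get("x-user-id") ?? "";
  if (slowbanned.has(userId)) {
    const delayMs = 2000 + Math.random() * 6000; // 2-8 painful seconds
    setTimeout(next, delayMs);
  } else {
    next();
  }
});

app.get("/", (_req, res) => res.send("so slow, for some..."));
app.listen(3000);
```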

Because we try to hew so closely to the real world model of democracy with Stack Exchange, I'm not quite sure how I feel about these sorts of reality-altering tricks that are impossible in the world of atoms. On some level, they feel disingenuous to me. And it's a bit like wishing users into the cornfield with superhuman powers far beyond the ken of normal people. But I've also spent many painful hours trapped in public dialog about users who were, at best, just wasting everyone's time. Democracy is a wonderful thing, but efficient, it ain't.

That said, every community is different. I've personally talked to people in charge of large online communities – ones you probably participate in every day – and part of the reason those communities haven't broken down into utter chaos by now is because they secretly hellban and slowban their most problematic users. These solutions do neatly solve the problem of getting troublesome users to "voluntarily" decide to leave a community with a minimum of drama. It's hard to argue with techniques that are proven to work.

I think everyone has a right to know what sort of jail their community uses, even these secret, invisible ones. But keep in mind that whether it's timed suspensions, traditional bans, or exotic hellbans and beyond, the goal is the same: civil, sane, and safe online communities for everyone.


The Infinite Version

One of the things I like most about Google's Chrome web browser is how often it is updated. But now that Chrome has rocketed through eleven versions in two and a half years, the thrill of seeing that version number increment has largely worn off. It seems they've picked off all the low-hanging fruit at this point and are mostly polishing. The highlights from Version 11, the current release of Chrome?

HTML5 Speech Input API. Updated icon.

Exciting, eh? Though there was no shortage of hand-wringing over the new icon, of course.

Chrome's version number has been changing so rapidly lately that every time someone opens a Chrome bug on a Stack Exchange site, I have to check my version against theirs just to make sure we're still talking about the same software. And once -- I swear I am not making this up -- the version incremented while I was checking the version.

another nanosecond, another Chrome version.

That was the day I officially stopped caring what version Chrome is. I mean, I care in the sense that sometimes I need to check its dogtags in battle, but as a regular user of Chrome, I no longer think of myself as using a specific version of Chrome, I just … use Chrome. Whatever the latest version is, I have it automagically.

For the longest time, web browsers have been strongly associated with specific versions. The very mention of Internet Explorer 6 or Netscape 4.77 should send a shiver down the spine of any self-respecting geek. And for good reason! Who can forget what a breakout hit Firefox 3 was, or the epochs that Internet Explorer 7, 8 and 9 represent in Microsoft history. But Chrome? Chrome is so fluid that it has transcended software versioning altogether.

Chrome-infinite-version

This fluidity is difficult to achieve for client software that runs on millions of PCs, Macs, and other devices. Google put an extreme amount of engineering effort into making the Chrome auto-update process "just work". They've optimized the heck out of the update process.

Rather than push out a whole new 10MB update [for each version], we send out a diff that takes the previous version of Google Chrome and generates the new version. We tried several binary diff algorithms and have been using bsdiff up until now. We are big fans of bsdiff - it is small and worked better than anything else we tried.

But bsdiff was still producing diffs that were bigger than we felt were necessary. So we wrote a new diff algorithm that knows more about the kind of data we are pushing - large files containing compiled executables. Here are the sizes for the recent 190.1 -> 190.4 update on the developer channel:

  • Full update: 10 megabytes
  • bsdiff update: 704 kilobytes
  • Courgette update: 78 kilobytes

The small size in combination with Google Chrome's silent update means we can update as often as necessary to keep users safe.

Google's Courgette -- the French word for Zucchini, oddly enough -- is an amazing bit of software optimization, capable of producing uncannily small diffs of binary executables. To achieve this, it has to know intimate details about the source code:

The problem with compiled applications is that even a small source code change causes a disproportional number of byte level changes. When you add a few lines of code, for example, a range check to prevent a buffer overrun, all the subsequent code gets moved to make room for the new instructions. The compiled code is full of internal references where some instruction or datum contains the address (or offset) of another instruction or datum. It only takes a few source changes before almost all of these internal pointers have a different value, and there are a lot of them - roughly half a million in a program the size of chrome.dll.

The source code does not have this problem because all the entities in the source are symbolic. Functions don't get committed to a specific address until very late in the compilation process, during assembly or linking. If we could step backwards a little and make the internal pointers symbolic again, could we get smaller updates?
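A toy demonstration of the quoted problem – my own sketch, nothing to do with Courgette's actual implementation: insert a single byte near the front of a "binary" and a naive position-by-position diff reports that almost everything changed.

```typescript
// Toy demo: one inserted byte shifts every later byte, so a naive
// position-based diff sees ~100% change. Real tools (bsdiff, and
// especially Courgette) work hard to see past the shift.
import { randomBytes } from "crypto";

function changedBytes(oldBuf: Buffer, newBuf: Buffer): number {
  let changed = 0;
  const len = Math.max(oldBuf.length, newBuf.length);
  for (let i = 0; i < len; i++) {
    if (oldBuf[i] !== newBuf[i]) changed++; // out-of-range reads count too
  }
  return changed;
}

const oldBin = randomBytes(16_384); // stand-in for a compiled executable
// "Add a range check": one new byte up front shifts everything after it.
const newBin = Buffer.concat([Buffer.from([0x90]), oldBin]);

console.log(changedBytes(oldBin, newBin), "of", newBin.length);
// ~16000+ of 16385: nearly every byte "changed" from one tiny edit
```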

Since the version updates are relatively small, they can be downloaded in the background. But even Google hasn't figured out how to install an update while the browser is running. Yes, there are little alert icons to let you know your browser is out of date, and you eventually do get nagged if you are woefully behind, but updating always requires the browser to restart.

Please-restart-google-chrome

Web applications have it far easier, but they have version delivery problems, too. Consider WordPress, one of the largest and most popular webapps on the planet. We run WordPress on multiple blogs and even have our own WordPress community. WordPress doesn't auto-update to each new version, but it makes it as painless as I've seen for a webapp. Click the update link on the dashboard and WordPress (and its add-ons) update to the latest version all by themselves. There might be the briefest of interruptions in service for visitors to your WordPress site, but then you're back in business with the latest update.

Wordpress-update

WordPress needs everyone to update to the latest versions regularly for the same reasons Google Chrome does -- security, performance, and stability. An internet full of old, unpatched WordPress or Chrome installations is no less dangerous than an internet full of old, unpatched Windows XP machines.

These are both relatively seamless update processes. But they're nowhere near as seamless as they should be. One click updates that require notification and restart aren't good enough. To achieve the infinite version, we software engineers have to go a lot deeper.

Twitter-google-docs-infinite-version

Somehow, we have to be able to automatically update software while it is running without interrupting the user at all. Not if -- but when -- the infinite version arrives, our users probably won't even know. Or care. And that's how we'll know we've achieved our goal.


Who Needs a Sound Card, Anyway?

The last sound card I purchased was in 2006, and that's only because I'm (occasionally) a bleeding edge PC gamer. The very same card was still in my current PC until a few days ago. It's perhaps too generous to describe PC sound hardware as stagnant; it's borderline irrelevant.

The default, built-in sound chips on most motherboards have evolved from "totally crap" to "surprisingly decent" in the last 5 years. But besides that, in this era of ubiquitous quad core CPUs nearing 4 GHz, it'd be difficult to make a plausible case that you need a discrete set of silicon to handle sound processing, even for the very fanciest of 3D sound algorithms and HRTFs.

That said, if you enjoy music even a little, I still strongly recommend investing in a quality set of headphones. As I wrote in 2005's Headphone Snobbery:

Am I really advocating spending two hundred dollars on a set of headphones? Yes. Yes I am. Now, you could spend a lot more. This is about extracting the maximum bang for your buck:

  1. Unlike your computer, or your car, your headphones will never wear out or become obsolete. I hesitate to say lifetime, but they're multiple decade investments at the very least.
  2. The number one item that affects the music you hear is the speakers. Without a good set of headphones, everything else is irrelevant.
  3. The right headphones can deliver sound equivalent to extremely high-end floorstanding speakers worth thousands of dollars.

If you're the type of person who is perfectly happy listening to 64 kilobit MP3s through a $5 set of beige headphones, that's fine. There's nothing wrong with that. Keep on scrolling; this post is not for you.

I realize that there's a fine line between audiophile and bats**t insane -- and that line better not be near any sources of interference! But nice headphones require powerful, reasonably clean output to deliver the best listening experience. This isn't high end audio crackpot snake oil, it's actual physics.

I'll let the guys at HeadRoom explain:

You may have heard of a headphone's "impedance." Impedance is the combined resistance and reactance the headphones present to the amplifier as an electrical load. High impedance cans will usually need more voltage to get up to a solid listening level, so they will often benefit from an amp, especially with portable players that have limited voltage available from their internal batteries. But low impedance cans may require more current, and will lower the damping factor between the amp and headphones. So while low impedance headphones may be driven loud enough from a portable player, the quality of sound may be dramatically improved with an amp.

The size of your headphone will give you some clues to whether an amp may be warranted. Most earbud and in ear headphones are typically very efficient and are less likely to benefit strongly from an amp. Many larger headphones will benefit, or even require, a headphone amp to reach listenable volume levels with portable players.
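To put rough numbers on the voltage point – my own back-of-the-envelope, assuming a portable player that can swing about 1 V rms – the power delivered into a headphone is

$$P = \frac{V^2}{Z}: \qquad \frac{(1.0\,\mathrm{V})^2}{32\,\Omega} \approx 31\,\mathrm{mW}, \qquad \frac{(1.0\,\mathrm{V})^2}{300\,\Omega} \approx 3.3\,\mathrm{mW}$$

So at the same volume setting, a 300 Ω can sees roughly a tenth of the power that 32 Ω earbuds do. That's the extra voltage HeadRoom is talking about, and it's exactly what a headphone amp supplies.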

Thus, once you have a set of nice headphones, you do need some kind of amplified output for them. Something like the Boostaroo, or a Total BitHead. And if you're on a laptop, these outboard solutions might be your only options.

Total-bithead

But desktops offer the option of adding a sound card. The good news is that arguably the best sound card on the planet, the Xonar DG, is all of 30 measly bucks. It's a big step up in fundamental sound quality from even the best current integrated HD audio motherboard sound chips, per this Tech Report review.

RightMark Audio Analyzer audio quality, 16-bit/44.1kHz

                    freq response   noise level   range   THD   THD + Noise   IMD + Noise   crosstalk   IMD at 10kHz   overall
Realtek ALC892 HD   5               4             4       3     1             3             5           3              4
Xonar DG            5               6             6       5     4             6             6           6              5

It also includes a little something extra of particular interest to us music loving programmers with nice headphones:

Built-in headphone amplification is something you won't find on a motherboard, but it's featured in both Xonars. On the DG, Asus has gone with Texas Instruments' DRV601RTJR, which is optimized for headphone impedances of 32-150 Ω according to the card's spec sheet. The Xense gets something considerably fancier: a TI amp capable of pushing headphones with impedances up to 600 Ω. Of course, the headphones bundled with the card are rated for an impedance of only 150 Ω. Mid-range stereo cans like Sennheiser's excellent HD 555s, which we use for listening tests, have a rated impedance of just 50 Ω. You don't need big numbers for high-quality sound.

The headphone amplification options are a bit buried in the Xonar driver user interface. To get there, select headphone mode, then click the little hammer icon to bring up the headphone amp gain settings.

Xonar-dg-audio-center-headphones

After my last upgrade, I was truly hoping I could get away with just the on-board Realtek HD audio my motherboard provides. I resisted mightily -- but the drop in headphone output quality with the onboard stuff was noticeable. Not to mention that I had to absolutely crank the volume to get even moderate loudness with my fancy-ish Sennheiser HD 600 headphones. The Xonar DG neatly solves both of these problems.

As you probably expected, the answer to the question "Who needs a sound card?" is "Almost nobody." Except those of us who invested in quality headphones. Rather than spending $30 or $150 on an outboard headphone amp, spend $30 on the Xonar DG to get a substantial sound quality upgrade and a respectable headphone amp to boot.
