Coding Horror

programming and human factors

Choosing Dual or Quad Core

I'm a big fan of dual-core systems. I think there's a clear and substantial benefit for all computer users when there are two CPUs waiting to service requests, instead of just one. If nothing else, it lets you gracefully terminate an application that has gone haywire, consuming all available CPU time. It's like having a backup CPU in reserve, waiting to jump in and assist as necessary. But for most software, you hit a point of diminishing returns very rapidly after two cores. In Quad-Core Desktops and Diminishing Returns, I questioned how effectively today's software can really use even four CPU cores, much less the inevitable eight and sixteen CPU cores we'll see a few years from now.

To get a sense of what kind of performance improvement we can expect going from 2 to 4 CPU cores, let's focus on the Core 2 Duo E6600 and Core 2 Quad Q6600 processors. These 2.4 GHz CPUs are identical in every respect, except for the number of cores they bring to the table. In a recent review, Scott Wasson at the always-thorough Tech Report presented a slew of benchmarks that included both of these processors. Here's a quick visual summary of how much you can expect performance to improve when upgrading from 2 to 4 CPU cores:

(Each row in the original review included a Task Manager CPU usage graph.)

Application                                  Improvement, 2 to 4 cores
The Elder Scrolls IV: Oblivion               none
Rainbow 6: Vegas                             none
Supreme Commander                            none
Valve Source engine particle simulation      1.8x
Valve VRAD map compilation                   1.9x
3DMark06: Return to Proxycon                 none
3DMark06: Firefly Forest                     none
3DMark06: Canyon Flight                      none
3DMark06: Deep Freeze                        none
3DMark06: CPU test 1                         1.7x
3DMark06: CPU test 2                         1.6x
The Panorama Factory                         1.6x
picCOLOR                                     1.4x
Windows Media Encoder x64                    1.6x
LAME MT MP3 encoder                          none
Cinebench                                    1.7x
POV-Ray                                      2.0x
MyriMatch                                    1.8x
STARS Euler3D                                1.5x
SiSoft Sandra Mandelbrot                     2.0x

The results seem encouraging, until you take a closer look at which applications actually benefit from quad-core: aside from the purely synthetic benchmarks, they're all rendering, encoding, or scientific applications. It's the same old story. Beyond encoding and rendering tasks, which are naturally amenable to parallelization, the Task Manager CPU graphs tell the sad tale of software that simply isn't written to exploit more than two CPUs.
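To see why encoders and renderers are the exception, consider how neatly their work divides up. Here's a minimal, hypothetical C# sketch of the pattern such applications follow; the frame array, worker count, and EncodeFrame method are illustrative stand-ins, not code from any of the benchmarked programs:

    using System;
    using System.Threading;

    class ParallelEncodeSketch
    {
        static void Main()
        {
            int[] frames = new int[1000];                  // stand-in for frames awaiting encoding
            int workerCount = Environment.ProcessorCount;  // 2 on the E6600, 4 on the Q6600
            Thread[] workers = new Thread[workerCount];

            // Each frame is independent, so we can hand an equal slice to every core.
            int sliceSize = frames.Length / workerCount;
            for (int i = 0; i < workerCount; i++)
            {
                int start = i * sliceSize;
                int end = (i == workerCount - 1) ? frames.Length : start + sliceSize;
                workers[i] = new Thread(() =>
                {
                    for (int f = start; f < end; f++)
                        EncodeFrame(frames[f]);            // no shared state, no locking needed
                });
                workers[i].Start();
            }

            foreach (Thread worker in workers)
                worker.Join();
        }

        // Placeholder for the real per-frame work.
        static void EncodeFrame(int frame) { Thread.SpinWait(100000); }
    }

A game simulation, by contrast, is largely a chain of dependent steps-- input feeds physics feeds AI feeds rendering-- which is much harder to slice evenly across four cores.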

Unfortunately, CPU parallelism is inevitable. Clock speed can't increase forever; the physics don't work. Mindlessly ramping clock speed to 10 GHz isn't an option. CPU vendors are forced to deliver more CPU cores running at nearly the same clock speed, or with very small speed bumps. In theory, adding CPU cores to the die should more than make up for the stagnant clock speeds. But in the short term, we have to choose between faster dual-core systems and slower quad-core systems. Today, a quad-core 2.4 GHz CPU costs about the same as a dual-core 3.0 GHz CPU. But which one will provide superior performance? A recent Xbit Labs review performed exactly this comparison:

                                       3.0 GHz      2.4 GHz      Improvement,
                                       dual core    quad core    2 to 4 cores
PCMark05                               9091         8853         -3%
SysMark 2007, E-Learning               167          140          -16%
SysMark 2007, Video Creation           131          151          15%
SysMark 2007, Productivity             152          138          -9%
SysMark 2007, 3D                       160          148          -8%
Quake 4                                136          117          -15%
F.E.A.R.                               123          110          -10%
Company of Heroes                      173          161          -7%
Lost Planet                            62           54           -12%
Lost Planet "Concurrent Operations"    62           81           30%
DivX 6.6                               65           64           0%
Xvid 1.2                               43           45           5%
H.264 QuickTime Pro 7.2                189          188          0%
iTunes 7.3 MP3 encoding                110          131          -16%
3ds Max 9 SP2                          4.95         6.61         33%
Cinebench 10                           5861         8744         49%
Excel 2007                             39.9         24.4         63%
WinRAR 3.7                             188          180          5%
Photoshop CS3                          70           73           -4%
Microsoft Movie Maker 6.0              73           80           -9%

It's mostly what I would expect-- only rendering and encoding tasks exploit parallelism enough to overcome the dual-core's 25% clock speed advantage (3.0 GHz vs. 2.4 GHz). Outside of that specific niche, performance will actually suffer for most general purpose software if you choose a slower quad-core over a faster dual-core.
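The break-even math is worth spelling out. The dual-core part has a 3.0 / 2.4 = 1.25x clock advantage, so the quad-core only wins when its extra cores deliver more than a 1.25x parallel speedup. Amdahl's law gives a rough idea of how parallel a program must be for that to happen. Here's a small back-of-the-envelope C# sketch; the only inputs are the clock speeds and core counts from the review above, and the model deliberately ignores caches, memory bandwidth, and everything else that matters in real life:

    using System;

    class BreakEvenSketch
    {
        // Amdahl's law: speedup on n cores = 1 / ((1 - p) + p / n),
        // where p is the fraction of the work that can run in parallel.
        static double Speedup(double p, int cores)
        {
            return 1.0 / ((1.0 - p) + p / cores);
        }

        static void Main()
        {
            double clockAdvantage = 3.0 / 2.4;   // the dual-core's 1.25x clock edge

            for (double p = 0.0; p <= 1.0; p += 0.1)
            {
                // Throughput normalized to a single 2.4 GHz core.
                double dual = clockAdvantage * Speedup(p, 2);
                double quad = Speedup(p, 4);
                Console.WriteLine("p = {0:0.0}  dual 3.0 GHz: {1:0.00}  quad 2.4 GHz: {2:0.00}",
                                  p, dual, quad);
            }
        }
    }

Under this idealized model, the quad-core only pulls ahead once roughly 60% of the program's work runs in parallel-- and most general purpose desktop software doesn't come anywhere close to that.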

However, there were some surprises in here, such as Excel 2007 and the Lost Planet "concurrent operations" setting. It's possible software engineering will eventually advance to the point that clock speed matters less than parallelism. Or the choice may eventually be made for us, if CPU vendors stop offering higher clock speeds as an alternative to more cores. But in the meantime, clock speed wins most of the time. More CPU cores aren't automatically better. Typical users will be better off with the fastest possible dual-core CPU they can afford.


Falling Into The Pit of Success

Eric Lippert notes the perils of programming in C++:

I often think of C++ as my own personal Pit of Despair Programming Language. Unmanaged C++ makes it so easy to fall into traps. Think buffer overruns, memory leaks, double frees, mismatch between allocator and deallocator, using freed memory, umpteen dozen ways to trash the stack or heap – and those are just some of the memory issues. There are lots more "gotchas" in C++. C++ often throws you into the Pit of Despair and you have to climb your way up the Hill of Quality. (Not to be confused with scaling the Cliffs of Insanity. That's different.)

That's the problem with C++. It does a terrible job of protecting you from your own worst enemy – yourself. When you write code in C++, you're always circling the pit of despair, just one misstep away from plunging to your doom.

A deep pit

Wouldn't it be nice to use a language designed to keep you from falling into the pit of despair? But avoiding horrific, trainwreck failure modes isn't a particularly laudable goal. Wouldn't it be even better if you used a language that let you effortlessly fall into The Pit of Success?

The Pit of Success: in stark contrast to a summit, a peak, or a journey across a desert to find victory through many trials and surprises, we want our customers to simply fall into winning practices by using our platform and frameworks. To the extent that we make it easy to get into trouble we fail.

Rico Mariani coined this term when talking about language design. You may give up some performance when you choose to code in C#, Python, or Ruby instead of C++. But what you get in return is a much higher likelihood of avoiding the miserable Pit of Despair – and the opportunity to fall into the far more desirable Pit of Success instead.

As Brad Abrams points out, this concept extends beyond language. A well designed API should also allow developers to fall into the pit of success:

[Rico] admonished us to think about how we can build platforms that lead developers to write great, high performance code such that developers just fall into doing the "right thing". That concept really resonated with me. It is the key point of good API design. We should build APIs that steer and point developers in the right direction.

I think this concept extends even further, to applications of all kinds: big, small, web, GUIs, console applications, you name it. I've often said that a well-designed system makes it easy to do the right things and annoying (but not impossible) to do the wrong things. If we design our applications properly, our users should be inexorably drawn into the pit of success. Some may take longer than others, but they should all get there eventually.
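To make this concrete, here's an entirely hypothetical C# illustration of API design that steers callers toward success. Neither class exists in any real library; the point is that the second design makes correct cleanup the path of least resistance:

    using System;

    // Pit of despair: the caller has to remember to call Close(), and nothing
    // in the type system reminds them. Forget it, and the connection leaks.
    class LeakyConnection
    {
        public void Open()  { /* acquire the resource */ }
        public void Close() { /* release it -- if anyone remembers to call this */ }
    }

    // Pit of success: implementing IDisposable means a 'using' block releases
    // the resource automatically, even if an exception is thrown. Doing the
    // right thing is now the easiest thing.
    class SafeConnection : IDisposable
    {
        public SafeConnection() { /* acquire the resource up front */ }
        public void Dispose()   { /* release it, guaranteed */ }
    }

    class PitOfSuccessDemo
    {
        static void Main()
        {
            // The caller can't forget the cleanup step here.
            using (var connection = new SafeConnection())
            {
                // do work with the connection
            }
        }
    }

With the first design, every caller has one more thing to forget; with the second, the language itself nudges them into the pit of success.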

If users aren't finding success on their own – or if they're not finding it within a reasonable amount of time – it's not their fault. It's our fault. We didn't make it easy enough for them to fall into the pit of success. Consider your project a Big Dig: your job is to constantly rearchitect your language, your API, or your application to make that pit of success ever deeper and wider.


Was The Windows Registry a Good Idea?

One of the hot new features introduced with Windows 95 was the Windows Registry. The Windows Registry offered a centralized database-like location to store application and system settings. No more plain text .INI files splattered all over your system. Instead, issue a few easy API calls and your application settings are safely nestled away deep inside the registry hive.

But after living with the Windows Registry for more than a decade, I'm starting to wonder if we were better off with those .INI files.

Windows Registry Editor

I understand the need to store truly system-wide settings in one place. Let the operating system store settings however it deems fit. The real problem with the registry is that it was exposed to the outside world. Instead of being a secure, central hive for only the most essential and global settings, over time the registry has slowly become a trash heap of miscellaneous junk settings for every rinky-dink application on the planet.

Woe to the poor computer user who naively attempts to manipulate the filesystem without first supplicating to the Registry Gods. Manipulating the filesystem is utterly obvious, completely intuitive, and unfortunately also the fastest way to break an application in Windows. You have to reconcile almost everything you do in the filesystem with that opaque, unforgiving binary blob of data known as the Windows Registry.

For instance, when I upgrade and reinstall Windows, most of the games I have installed on my secondary drive are instantly broken because they store cd-key and (redundant) path information in the registry. The game vendors' support teams will tell you to reinstall all your games and patches. Personally, I'd rather search forums and spelunk through the registry to manually recreate the two or three registry keys the game is looking for.
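For what it's worth, recreating those keys programmatically takes only a few lines of .NET-- assuming you already know where the game expects them, which is exactly the problem. The key path and value names below are invented for illustration:

    using Microsoft.Win32;

    class RecreateGameKeys
    {
        static void Main()
        {
            // Hypothetical key path and values; every game buries them somewhere different.
            using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\SomeGameVendor\SomeGame"))
            {
                key.SetValue("InstallPath", @"D:\Games\SomeGame");   // redundant copy of the path
                key.SetValue("CDKey", "XXXX-XXXX-XXXX-XXXX");        // the part that breaks on reinstall
            }
        }
    }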

My life would be a heck of a lot easier if per-application settings were stored in a place I could easily see them, manipulate them, and back them up. Like, say... in INI files.

There is an alternative, though. If Windows applications weren't so busy mindlessly piling all their settings on the registry garbage dump with everyone else, they could elect to follow the new, much saner Windows Vista conventions for storing application-specific data:

\Users\Jeff\AppData\Local
\Users\Jeff\AppData\LocalLow
\Users\Jeff\AppData\Roaming

Local and LocalLow are for bits of application data that are truly machine-specific (LocalLow being the variant for low-integrity processes, such as Internet Explorer in protected mode). Roaming is for non-machine-specific settings that will follow the user. That's where the lion's share of the application settings will be. It's all explained in the Roaming User Data Deployment Guide (Word doc). However, these are still user-specific settings, obviously, as they're under the \Users folder. I can't find any new Windows filesystem convention for system-level, non-user-specific settings. I suppose that's still Ye Olde Registry by default.
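In .NET, at least, applications don't even have to hard-code those paths; the Environment class hands them over, and a plain, human-readable settings file is one line away. A minimal sketch, with a made-up application name and settings:

    using System;
    using System.IO;

    class AppDataSettingsSketch
    {
        static void Main()
        {
            // Roaming: follows the user between machines (\Users\Jeff\AppData\Roaming).
            string roaming = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
            // Local: stays on this machine (\Users\Jeff\AppData\Local).
            string local = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);

            string settingsDir = Path.Combine(roaming, "MyLittleApp");
            Directory.CreateDirectory(settingsDir);

            // A plain text settings file: visible, editable, and trivially backed up.
            File.WriteAllText(Path.Combine(settingsDir, "settings.ini"),
                "[Window]\r\nWidth=1024\r\nHeight=768\r\n");

            Console.WriteLine("Settings written under " + settingsDir);
            Console.WriteLine("(Machine-local data would go under " + local + ")");
        }
    }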

It is possible to write Windows applications that don't use the registry in any way. These are some of my favorite applications. But they're also the most rare and precious of all applications in the Windows software ecosystem.

Over time, it's fair to say that I've grown to hate the Windows Registry. How do I hate it? Let me count the ways:

  • The registry is a single point of failure. That's why every single registry editing tip you'll ever find starts with a big fat screaming disclaimer about how you can break your computer with regedit.
  • The registry is opaque and binary. As much as I dislike the angle bracket tax, at least XML config files are reasonably human-readable, and they allow as many comments as you see fit.
  • The registry has to be kept in sync with the filesystem. Delete an application without "uninstalling" it-- or uninstall it with a poorly written uninstaller-- and you're left with stale registry cruft. The filesystem is no longer the system of record; it has to be reconciled with the registry somehow. It's a total violation of the DRY principle.
  • The registry is monolithic. Let's say you wanted to move an application to a different path on your machine, or even to a different machine altogether. Good luck extracting the relevant settings for that one particular application from the giant registry tarball. A given application typically has dozens of settings strewn all over the registry.

What's depressing about all of this is how prescient the UNIX conventions look in retrospect. How many billions of man-hours could we have saved by now if some early Windows NT 3.1 or 3.5 developers had decided to turn off public access to the registry and transparently redirect the public registry API calls so they followed simpler, UNIX-like filesystem storage conventions instead?


Computer Workstation Ergonomics

I spend almost every waking moment in front of a computer. I'm what you might call an indoor enthusiast. I've been lucky to avoid any kind of computer-related injury despite all that screen time, but it is a very real professional risk. I get some occasional soreness in my hands or wrists, mostly after marathon binges where I've clearly overdone it – but that's about the extent of it. All too many of my friends have struggled with long-term back pain or hand pain. While you can (and should) exercise your body and exercise your hands to strengthen them, there's one part of this equation I've been ignoring.

I've been on a quest for the ultimate computer desk for a few years now, and I've talked at length about the value of investing in a great chair. But I hadn't considered whether my current desk and chair are configured properly to fit my body. What about the ergonomics of my computer workstation?

OSHA has an official page on computer workstation ergonomics, which is a good starting point. But like most government documents, it contains a lot more detail than most people will ever need. The summary picture does give you an idea of what an ergonomic seating position looks like, though. How close is this to the way you're sitting right now?

OSHA computer workstation diagram

Microsoft doesn't get enough credit for their often innovative hardware division, which first popularized ergonomic computer input devices, starting with the Microsoft Mouse 2.0 in 1993 and following with the Microsoft Natural Keyboard in 1994. With Microsoft's long-standing interest in hardware ergonomics, perhaps it's not too surprising that their healthy computing guide is one of the best and most succinct references for ergonomic computing I've found. But you don't have to read it. I'll summarize the key guidelines for computer workstation ergonomics here, distilling the best advice from all the sources I found.

I know I've harped on this, but it bears repeating: a quality desk and quality chair will be some of the best investments you'll ever make as a software developer. They will last you for 10 years or more, and contribute directly to your work happiness every single day.

If you value your physical health, this is not an area where you want to economize. Hopefully you've invested in a decent computer desk and chair that provide the adjustability required to achieve an ergonomically correct computer workstation. Beyond the chair, you may also need to adjust the height of your desk and your monitor.

Computing ergonomics, adjustable desk and chair

1. The top of your monitor should be at eye level, with the screen directly centered in front of you, about an arm's length away.

Computing ergonomics, monitor position

2. Your desk surface should be at roughly belly button level. When your arms are placed on the desk, your elbows should be at a ~90 degree angle, just below the desk surface. The armrests of your chair should be at nearly the same level as the desk surface to support your elbows.

Computing ergonomics, desk surface

3. Your feet should be flat on the floor with your knees at a ~90 degree angle. Your seat should not be pressing into the back of your knees; if necessary, tilt it slightly forward to alleviate any knee pressure. Sit fully back in your chair, with your back and shoulders straight and supported by the back of the chair.

Computing ergonomics, legs

4. When typing, your wrists should be in line with your forearms and not bent up, down, or to the side. Your keyboard should be directly centered in front of you. Other frequently used items should be nearby, within arm's reach.

Computing ergonomics, arms

When it comes to computer workstation ergonomics, these are the most basic, most commonly repeated guidelines I saw. Ergonomics is a holistic discipline, not a science, so your results may vary. Still, I'm surprised how many of these very basic guidelines I've been breaking for so many years, without even thinking about it. I'll be adjusting my home desk tomorrow in hopes of more comfortable computing.


Widescreen and FOV

As far as I'm concerned, you can never have enough pixels on your desktop. Until a few years ago, buying a larger display meant getting more pixels in the same, standard 4:3 aspect ratio-- 640 x 480, 800 x 600, 1024 x 768, 1600 x 1200, and so forth. But widescreen monitors are increasingly popular. It's difficult to buy a larger monitor today without changing your aspect ratio to widescreen.

As the new owner of my very first non-4:3 widescreen monitor, I'm learning first hand that widescreen displays can be problematic in certain rendering contexts. The issue of scaling pre-rendered content to a widescreen display is a well-understood problem at this point; non-linear stretching techniques work reasonably well.

But when rendering dynamic 3D content, things are a bit more problematic. I just purchased the game Bioshock, which "supports" widescreen displays-- but, in fact, it doesn't. Here's a screenshot of the same scene displayed in 1600 x 1200 (4:3), and in widescreen 1920 x 1200 (16:10).

bioshock 1600x1200 overlaid with 1920x1200

It's wider, technically, but you actually see less. The sides are the same, but the top and bottom of the scene are clipped away in widescreen. In effect, the viewport is zoomed in. This is what you have to do to fit static, pre-rendered content to a widescreen format, because that content is immutable. But it's a terrible solution for dynamically rendered content in a 3D world. Instead, the developers should increase the field of view.

fov (field of view) diagram

If we adjust the FOV in Bioshock to something like 0.84 to accommodate our widescreen 16:10 aspect ratio, we can see more of the world, not less:

Bioshock FOV comparison, 4:3 vs 16:10

With the adjusted FOV, the wider screen is used to display more of the scene on the left and right edges. Makes sense, doesn't it? But this is not something you get for free-- the rendering engine must be programmed to allow and support changing the FOV.
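The underlying math is simple trigonometry: hold the vertical field of view constant and widen the horizontal field of view to match the new aspect ratio (the so-called "Hor+" approach). Here's a small C# sketch of that calculation; the 90 degree base FOV is a common default used for illustration, not Bioshock's actual value:

    using System;

    class FovSketch
    {
        // Convert a horizontal FOV designed for one aspect ratio into the
        // equivalent "Hor+" FOV for another, keeping the vertical FOV fixed.
        static double AdjustHorizontalFov(double baseFovDegrees, double baseAspect, double targetAspect)
        {
            double baseFovRadians = baseFovDegrees * Math.PI / 180.0;
            double adjusted = 2.0 * Math.Atan(Math.Tan(baseFovRadians / 2.0) * (targetAspect / baseAspect));
            return adjusted * 180.0 / Math.PI;
        }

        static void Main()
        {
            double fov43 = 90.0;   // a typical 4:3 horizontal FOV
            double fov1610 = AdjustHorizontalFov(fov43, 4.0 / 3.0, 16.0 / 10.0);
            double fov169  = AdjustHorizontalFov(fov43, 4.0 / 3.0, 16.0 / 9.0);

            Console.WriteLine("4:3   -> {0:0.0} degrees", fov43);
            Console.WriteLine("16:10 -> {0:0.0} degrees", fov1610);   // roughly 100 degrees
            Console.WriteLine("16:9  -> {0:0.0} degrees", fov169);    // roughly 106 degrees
        }
    }

Plugging in the numbers, a 90 degree horizontal FOV at 4:3 becomes roughly 100 degrees at 16:10 and 106 degrees at 16:9-- wider screen, wider view, as it should be.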

In multiplayer circles, a wider FOV is considered cheating. If you can view more of the world than your opponent, then you might be able to see them coming before they see you. But this is a moot point for Bioshock; it's a single-player game. It's definitely possible to go a little crazy with FOV if you don't have enough physical display size to justify the field of view you've chosen:

Quake with a large FOV

It's a tricky balancing act, and not many rendering engines get it right. That's probably why there's a Widescreen Gaming Forum dedicated to dealing with FOV and widescreen issues, along with at least one other website, Widescreen Gamer. As the widescreen display format becomes increasingly popular, you can expect to run into this little rendering quirk eventually.
