Coding Horror

programming and human factors

Total Users Does Not Equal Total Usage

As of August 9th, 2006, MySpace has 100 million members. For reference, the population of California is approximately 36 million, and the population of the United States is approximately 300 million.

Charts comparing MySpace membership with the populations of the United States and California

I have a hard time believing that 1 in 3 Americans could conceivably be MySpace users.

I'm not the only person to be skeptical of these inflated "member" numbers. Robert Scoble recently took issue with the claim that Windows Live Spaces is the largest blogging service on the planet. He did a little ad-hoc investigation of Windows Live Spaces. Most of the member pages he visited were virtual ghost towns.

Dare Obasanjo addresses this criticism head-on:

The number of spaces on Windows Live Spaces isn't a particularly interesting metric to me nor is it to anyone I know who works on the product. We are more interested in the number of people who actually use our service and get value added to their lives by being able to share, discuss and communicate with their friends, families and total strangers.

Kudos to Dare for cutting to the heart of this debate. Who cares how many users signed up for your free service? How many users actually use it?

To illustrate the absurdity of user counts as a metric, Dare cited the LiveJournal stats page. According to that page, LiveJournal has approximately 11 million accounts. However:

  • 1 in 3 accounts are never used.
  • 1 in 5 accounts are "active in some way".
  • 1 in 10 accounts were updated in the last 30 days.

If the LiveJournal statistics are representative of other social networking websites, then we should immediately divide the total number of "users", "spaces", or "accounts" by five. Maybe even by ten! That's a far more realistic estimate of how many people are actually using the service.

Total number of LiveJournal accounts: 11 million

Total number of active LiveJournal accounts: 1.5 million

And that's still an optimistic estimate, because it doesn't factor in multiple accounts established by the same person.
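
To make that arithmetic concrete, here's a quick back-of-the-envelope sketch in Python using the 11 million figure quoted above. The divide-by-five and divide-by-ten estimates bracket the roughly 1.5 million active accounts reported on the stats page.

    # Rough usage estimates from a headline account count, using the
    # divide-by-5 and divide-by-10 rules of thumb described above.
    total_accounts = 11_000_000

    active_estimate = total_accounts // 5    # "1 in 5 accounts are active in some way"
    recent_estimate = total_accounts // 10   # "1 in 10 accounts updated in the last 30 days"

    print(f"Divide by 5:  {active_estimate:,} accounts")   # 2,200,000
    print(f"Divide by 10: {recent_estimate:,} accounts")   # 1,100,000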

Now if only MySpace and Windows Live Spaces were as forthcoming about the actual usage of their services, instead of bandying about meaningless user counts.

DirectX Version Number Abuse

Has anyone noticed that Microsoft defines "version" a little loosely when it comes to DirectX 9.0c? Here's a screenshot of the DirectX 9.0c download page on FileHippo:

DirectX 9.0c versions

DirectX 9.0c was originally released in August 2004, according to the DirectX Wikipedia entry. But Microsoft has surreptitiously been updating DirectX 9.0c since August 2005 without incrementing the version number.

It is not known why Microsoft has not used new version numbers for the updates to DX9.0c -- including the December 2005 update, versioning could now be at DX9.0j, although this is nowhere reflected in the internal code.

So do you want version 9.0c, 9.0c, 9.0c, or perhaps... 9.0c?

The versions are all fully backwards compatible, of course, but why is Microsoft abusing version numbers this way?

It's impossible to tell which version of DirectX 9.0 you're actually running. I've installed several games over the past year that inexplicably demanded to reinstall DirectX 9.0c; now I know why.
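
Here's a small Python sketch of why the updates are invisible: the DirectX version recorded in the registry (the same number dxdiag reports) identifies the 9.0c release as a whole, and to my knowledge none of the bimonthly runtime updates change it. The registry path and value are the conventional ones on Windows XP-era systems; treat them as an assumption.

    # Read the installed DirectX version from the registry (Windows only).
    # This value identifies DirectX 9.0c as a whole; the bimonthly runtime
    # updates do not appear to change it, which is exactly the problem.
    import winreg

    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\DirectX")
    version, _ = winreg.QueryValueEx(key, "Version")
    winreg.CloseKey(key)

    print(version)  # typically "4.09.00.0904" for any flavor of DirectX 9.0c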

At least Vista stops the madness by finally changing the version number to DirectX 9.0L.

Video Card Power Consumption

With the release of Intel's Core Duo and Core 2 Duo chips, it's finally happened-- mainstream video card GPUs are about to overtake CPUs as the largest consumers of power inside your PC.

Witness this chart, derived from XBit labs' latest roundup, of video card power consumption in watts:

graph of video card power consumption in watts

Now compare it to this chart of maximum CPU power consumption in watts:

graph of CPU power consumption in watts

Notice a trend here?

The idea that your video card consumes more power than your CPU is old hat to PC gaming enthusiasts, who have always lived at the top of that video card power consumption chart. But it's about to trickle down to the mainstream; you'll need a moderately fast gaming video card to get the best-looking 3D effects in Windows Vista.

Perhaps the trick is to select a video card that offers the best bang for the watt. Here's a graph, derived from the June 2006 Digit-Life video card roundup, which divides the 3DMark2006 score* of each video card by its peak 3D power consumption.

graph of video card 3dMark2006 scores divided by peak energy watt use

No surprise that the latest and greatest video cards end up on top; they probably use the newest manufacturing technology. The GeForce 7600 GT does astonishingly well here; it provides the 12th best 3DMark06 score of all the video cards listed, while only consuming a miserly 36 watts of power under full load. The GeForce 7900 GT is even better, consuming only 33% more power to produce a 42% higher 3DMark06 score.
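
Here's a minimal Python sketch of that bang-per-watt arithmetic, using the 36 watt figure and the 33%/42% deltas mentioned above. The 3DMark06 scores are normalized to the 7600 GT rather than taken from the roundup, so only the ratios are meaningful.

    # Performance per watt, using the deltas quoted above.
    # Scores are normalized (GeForce 7600 GT = 1.0), not actual 3DMark06 results.
    cards = {
        "GeForce 7600 GT": {"score": 1.00, "watts": 36.0},
        "GeForce 7900 GT": {"score": 1.42, "watts": 36.0 * 1.33},  # +42% score, +33% power
    }

    for name, card in cards.items():
        print(f"{name}: {card['score'] / card['watts']:.4f} score per watt")

    # The 7900 GT comes out roughly 7% better per watt (1.42 / 1.33 ~= 1.07).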

I like the 7600 GT quite a lot, and I'd pick it for a well-balanced PC any day. It's fast, inexpensive, and efficient. It's even available in silent passively cooled versions. Here's one such model from Gigabyte:

Passively cooled Gigabyte GeForce 7600 GT video card

Video cards tend to have small, whiny fans that can spin up to deafening levels under load. That's why passive cooling solutions are a nice option. But you need to be extra careful when choosing a passive solution. My work PC has a passively cooled GeForce 6600, which only dissipates 28 watts under load. But it still overheated and caused faults when running 3D screen savers. I had to retrofit a slow-moving fan on it to keep it stable. Make sure you have good case airflow if you're going the passive route!

* 3DMark2006 score for shader 2.0, at 1280x1024, with 4xAA and 16xAF

The Power of "View Source"

The 1996 JavaWorld article Is JavaScript here to stay? is almost amusing in retrospect. John Lam recently observed that

JavaScript is the world's most ubiquitous computing runtime.

Ten years later, I think the answer to that question is an emphatic yes.

JavaScript is currently undergoing a renaissance through AJAX. Sure, the AJAX-ified clones of Word and Excel are still pretty lame, but they're the first baby steps on the long road to rewriting every client application in the world in JavaScript. The line between client executable and web page gets blurrier every day.

The meteoric rise in popularity of Ruby has also renewed interest in dynamic languages. And JavaScript may be the most underappreciated dynamic language of all. Unfortunately, it's difficult to separate JavaScript from all its browser environment baggage and consider it purely as a language.

But these are both relatively recent developments. They're important milestones, but they're not the full story of JavaScript's success. Not by a long shot. I attribute most of JavaScript's enormous success to one long-standing menu item on every browser:

The View Source browser menu

The view source menu is the ultimate form of open source. It's impossible to obfuscate or hide JavaScript running in a browser. The code that powers any AJAX application is right there in plain sight, for everyone to see, copy, and use. A complete set of JavaScript source code for the latest AJAX apps is never more than one HTTP download away. The people shipping these apps are literally giving away their application's source code to every user.
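
To see just how close that one download is, here's a minimal Python sketch that fetches a page and lists every external script it references; each of those is itself just another plain-text HTTP download. The URL is a placeholder.

    # List the external JavaScript files referenced by a page.
    # Every <script src="..."> is one more plain-text download away.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class ScriptFinder(HTMLParser):
        def handle_starttag(self, tag, attrs):
            if tag == "script":
                src = dict(attrs).get("src")
                if src:
                    print(src)

    url = "http://example.com/"  # placeholder URL
    page = urlopen(url).read().decode("utf-8", errors="replace")
    ScriptFinder().feed(page)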

Some people might see that as a huge business risk. I say if your business model is that dependent on clever, obfuscated source code tricks, it isn't much of a business model.

I've read several complaints that .NET code is too easy to decompile. Nonsense. It should be even easier to decompile. The real stroke of genius in JavaScript wasn't closures, or XmlHttpRequest; it was forcing people to share their source code with the community. How do you think anyone found out about XmlHttpRequest in the first place? Through reading the documentation?

The entire JavaScript development community is predicated on instant, ubiquitous access to source code. This leads to what I call "Code Darwinism": good techniques are seen immediately and reproduce promiscuously. Bad techniques never reproduce and die out.

Darwin Evolution - Skeletons

That's why I'm not afraid to bust out a copy of Reflector and perform a little ad-hoc "View Source". It's common practice to decompile binary .NET assemblies, for a whole host of entirely valid reasons:

  • You've encountered a possible bug in the code
  • You don't understand the code's behavior
  • You need to do something similar in your own code

Having the source code gives you the ability to fix your own problems-- or even someone else's problems. If you can see the source code, the binary is alive-- it can evolve.

And you can still license your software and make money, even if you're handing out the source code at the same time. According to Desaware, one of the most compelling software sales pitches is the phrase "source code included":

Providing source code is the only answer -- it's a way to say to the customer that if worst comes to worst, they can be their own alternate source. Even Microsoft has demonstrated this by providing Windows Source to certain customers, like large governments, who have the leverage to demand it. And, yes, escrow services should be sufficient for this purpose, but for some reason most customers don't like that approach. Perhaps it's lack of confidence in the long-term viability of the escrow services themselves? Or perhaps lack of faith in their own institutional memory to recall that such escrow arrangements had been made.

There are some nice side benefits of having source code available: the ability to learn from someone else's code, and the possibility of customizing components to suit specific needs, but those are smaller issues. Security is always a concern, but it is only applicable to software that has the potential to elevate the privilege of a user -- something that applies to a relatively small number of software components.

So what about the great closed source vs. open source debate? I'm never one to shy away from controversy, but that's for another time and place. What we did by releasing our software was not open source by any stretch of the imagination. Our source code is licensed to individual developers for their own use -- not for distribution. Does a true open source model make sense for the component world? I don't know. What I do know is that source code availability provides a level of peace of mind for some developers that probably cannot be matched any other way.

We should do away with the pretense of hiding code. Let's not only acknowledge that decompiling .NET code is trivial, let's embrace the power of "view source" by shipping source code along with our binaries.

Source Control: Anything But SourceSafe

Everyone agrees that source control is fundamental to the practice of modern software development. However, there are dozens of source control options to choose from. VSoft, the makers of FinalBuilder, just published the results of their annual customer survey. One of the questions it asked: "Which version control systems do you currently use, or plan to use, in the next 12 months?"

Source control adoption graph, May 2005 to August 2006

The top nine responses are charted above. I'm disheartened to see that Visual SourceSafe is still at the top of the list. If you are serious about the practice of software development, you should not be using SourceSafe. This isn't a new idea; plenty of other developers have been warning us away from SourceSafe for years:

There's simply no reason to use SourceSafe when there are so many inexpensive (and even free) alternatives that are vastly superior. The more customers I visit, and the more developers I talk to, the more I believe that SourceSafe poisons the minds of software developers. Note that I include our own shop, Vertigo Software, in that criticism.

  • SourceSafe gives you the illusion of safety and control, while exposing your project to risk.
  • SourceSafe teaches developers bad habits: avoid branching, exclusive locks, easy permanent deletions.

SourceSafe was a perfectly adequate source control system in the late 90s. Unfortunately, it was never updated architecturally to reflect modern source control practices. Even the latest version, SourceSafe 2005, absolutely reeks of 1999. And, to be fair, some of the same criticisms apply to CVS. CVS is no longer a modern source control system, either; it doesn't even support the concept of atomic checkins.

One of my biggest hurdles has been unlearning all the bad things SourceSafe taught me about source control. Source control is the absolute bedrock of software engineering. It's as fundamental as it gets. If my knowledge in this area isn't deep, wide, and fundamentally sound, can I really call myself a software engineer?

So, how do we learn modern source control?

  1. Start with Eric Sink's Source Control HOWTO. Eric admits he's biased, since his company created SourceGear Vault, but he's up front about it. He has truly lived and breathed the topic of source control, and it shines through in his excellent writing.
  2. The online Subversion manual is well worth your time. The first few introductory chapters, starting with Chapter 2: Basic Concepts, are wonderful primers.
  3. Chris Birmele's paper on Branching and Merging is the best introduction I've found to this essential source control task. There are dozens of ways to branch, and no single correct way to do it. Get familiar with your options so you know what the tradeoffs are with each one; there's a small branching sketch just after this list.
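
Here's that branching sketch: a minimal example of the copy-based branching style Subversion uses, driven from Python via subprocess. The repository URL and branch name are placeholders, not anything from the resources above.

    # Create a branch in Subversion: "svn copy" makes a cheap server-side
    # copy of trunk, and the commit that creates it is atomic.
    # The repository URL and branch name below are placeholders.
    import subprocess

    repo = "http://svn.example.com/repos/myproject"
    branch = "feature-x"

    subprocess.run(
        [
            "svn", "copy",
            f"{repo}/trunk",
            f"{repo}/branches/{branch}",
            "-m", f"Create branch '{branch}' from trunk",
        ],
        check=True,
    )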

Visual SourceSafe was most Microsoft developers' first introduction to any kind of source control at all. That's great. But holding on to SourceSafe's archaic source control conventions is doing more damage than good these days. Make the switch to Team System, Subversion, or any other modern source control system of your choice.

But whatever you do, please don't use Visual SourceSafe. Think of the children.
