Coding Horror

programming and human factors

Building Tiny, Ultra Low Power PCs

In previous posts, I've talked about building your own desktop PC, and building your own home theater PC. I'm still very much in love with that little HTPC I built. Not only does it have a modern dual-core CPU and fantastic high-definition-capable integrated video -- it's an outstanding general purpose media sharing server, too. But the real punchline is that I eventually got that box down to an insanely low 44 watts at idle. That's in the ballpark for a powerful laptop, and far better than your garden variety desktop PC, which will draw somewhere between 100 and 200 watts of power.

44 watts is impressive, but what if you want to build a PC that uses even less power -- radically less?

That's when you turn to something like AMD's Geode platform in the Nano-ITX form factor. It uses five watts of power at idle. That's nearly nine times less power than the HTPC build I was so proud of!

Nano-ITX motherboard

This is the JetWay J8F9 AMD Geode LX800 motherboard. I can't say "this is actual size" with a straight face without knowing the size and aspect ratio of your monitor, but it's probably darn close. The actual dimensions are just under five inches on each side. It may not look like much, but consider the specs:

  • 500 MHz AMD x86 Geode LX 800 CPU
  • 200 pin SO-DIMM memory slot, 1 GB DDR-400 max
  • Two ATA-100 drive connections
  • mini-PCI expansion slot
  • CompactFlash memory card slot
  • onboard audio / VGA / fast ethernet / USB

This thing is, for all intents and purposes, a complete, standalone x86 PC that fits in the palm of your hand and sips five watts of power. Well, assuming you have an enormous hand.

You will need memory and a storage device, of course. You could pick up a laptop hard drive, but another clever thing about this board is that it allows you to use a cheap CompactFlash card as your storage medium -- for the optimal low power, no moving parts install.

  1. AMD Geode LX 800 Nano ITX Motherboard/CPU Combo $154
  2. 512MB 200-pin SO-DIMM DDR-400 $20
  3. 4GB compact flash card $14
  4. 12vdc AC/DC external wall wart $18

So we can put together our own tiny utility PC for right at 200 bucks. Not bad. Unbox it, snap in the memory and CF card, plug in the wall wart, and you're ready to install and boot your operating system of choice. It's that simple.

Naturally, you won't get barn-burning performance, but if you remember the Pentium II 300 MHz systems of yesteryear, you'll know what to expect. You may recall those now-ancient boxes were still able to do some pretty amazing things in their day. I would not build an ultra-low power PC assuming it will be tolerable for day-to-day web browsing and email reading, unless you're comfortable using text mode or command-line interfaces exclusively.

This must be a market segment JetWay specializes in; they have a surprisingly large number of Mini-ITX motherboards to choose from. I don't think you'll find anything more power-efficient than the Geode LX 800 model, but there are some less expensive choices that come close. Lots of variety!

If the 5" x 5" profile of the Nano-ITX is far too large for your tastes, how do you feel about Pico-ITX? It's even smaller at 10cm x 7.2cm.

VIA EPIA PX10000 Pico-ITX motherboard

I've been following the ultra low power, tiny form factor PC segment for quite a few years now. With the emergence of Intel's Atom and "netbooks" like the ASUS Eee, it's a segment that is dangerously close to becoming mainstream. If you're interested, mini-itx.com is still one of the best sources of hands-on reviews, information, and community projects. It's fun stuff.

What could you do with a tiny, highly efficient x86 PC that boots up in under a minute?

Discussion

Web Development as Tag Soup

As we work with ASP.NET MVC on Stack Overflow, I find myself violently thrust back into the bad old days of tag soup that I remember from my tenure as a classic ASP developer in the late 90s. If you're not careful, bordering on manically fastidious, in constructing your Views, you'll end up with a giant mish-mash of HTML, JavaScript, and server-side code. Classic tag soup: difficult to read, difficult to maintain.

I don't mean tag soup in the sense of badly formed HTML, or the malformed world we live in. I mean tag soup in the sense of mixing HTML markup and server-side code. Now you can double your pleasure: badly formed HTML, meet badly written code.

The tag soup problem seems to be endemic to all modern web development stacks. I see that Ruby on Rails apps have the same problem; here's a slice of representative RHTML from Typo, a Ruby blogging engine.

Ruby RHTML markup and code

Do you find this readable? Can you see where the code begins and the markup ends? Are you confident you could change the code structure without breaking the HTML, or change the HTML structure without breaking the code?

Sometimes editing this stuff makes me feel like I'm playing Operation. I have to ever so carefully maneuver my metal tweezers into one tiny slice of code or HTML and make my changes without touching the edges and setting off that blasted electrical buzzer.

Operation game

I'm not trying to single out Rails or Typo here; I could easily show you an ASP.NET MVC view that's just as confusing (or as "clear", I suppose, if you find that sort of thing perfectly readable). Tag soup is everywhere; take a look at the Python Django framework templates:

<h1>Archive for {{ year }}</h1>
{% for date in days %}
{% ifchanged %}<h3>{{ date|date:"F" }}</h3>{% endifchanged %}
<a href="{{ date|date:"M/d"|lower }}/">{{ date|date:"j" }}</a>
{% endfor %}

Perhaps when it comes to mixing HTML and server-side code, some form of soup is unavoidable, a necessary evil. The soup can be quite palatable; maybe even delicious. It's certainly possible to write good tag soup and bad tag soup.

But I have to wonder: is there a better way? Is there something beyond RHTML, Views, and Templates? What examples would you point to of web development stacks that avoided degenerating into yet more hazardous, difficult to maintain tag soup? Is there anything truly better on the horizon?

Or is this year's newer, fancier, even-more-delicious iteration of tag soup as good as it ever gets for web development?

Discussion

Dealing With Bad Apples

Robert Miesen sent in this story of a project pathology:

I was part of a team writing a web-based job application and screening system (a "job kiosk", the customer called it), and my team and our customer signed on to implementing this job kiosk using Windows, Apache, PHP 5, and the Zend Framework -- everyone except one of our team members, whom I will refer to as "Joe". Joe kept advocating the use of JavaScript throughout the technology deliberation phase, even though the customer made it quite clear that he expected the vast majority of the job kiosk to be implemented using a server-side technology and that all the validation should be done using server-side technology.

The fact that the customer signed off on this, however, did nothing to deter Joe from advocating JavaScript -- abrasively. Every time our project hit a bump in the road, Joe would go off on some tirade about how much easier our lives would be if we were only writing this job kiosk in JavaScript. Joe would constantly bicker about how we were doing this all wrong because we weren't doing it in JavaScript, not even bothering to learn the technologies we were actually using, and, whenever fellow teammates would try to gently bring him back into the fold (usually via email), Joe would just flame the poor guy. At the height of Joe's pro-JavaScript bigotry, he would regularly belt off comments like, "Well, if we had only done it in JavaScript," to such an extent that the team would have been better off if he had just quit (or been reassigned or fired).

After reading this story, I had to resist the urge to lean forward, hand placed thoughtfully under my chin, brow furrowed, and ask -- have you tried JavaScript?

Robert thought this story was a cautionary tale about technology dependence, but I see something else: a problem team member, a classic bad apple.

an apple goes bad

I'm sure "Joe" had the best of intentions, but at the point where you're actively campaigning against the project, and working against your teammates -- you're a liability to the project.

The cost of problem personnel on a project is severe, as noted in Chapter 12 of McConnell's Rapid Development: Taming Wild Software Schedules.

If you tolerate even one developer whom the other developers think is a problem, you'll hurt the morale of the good developers. You are implying that not only do you expect your team members to give their all; you expect them to do it when their co-workers are working against them.

In a review of 32 management teams, Larson and LaFasto found that the most consistent and intense complaint from team members was that their team leaders were unwilling to confront and resolve problems associated with poor performance by individual team members (Larson and LaFasto 1989). They report that, "more than any other single aspect of team leadership, members are disturbed by leaders who are unwilling to deal directly and effectively with self-serving or noncontributing team members." They go on to say that this is a significant management blind spot because managers nearly always think their teams are running more smoothly than their team members do.

How do we identify problem personnel? It's not as difficult as you might think. I once had a friend describe someone on his team as -- and this is a direct quote -- "a cancer". At the point where you, or anyone else on your team, are using words like cancer to describe a teammate, you have a serious project pathology. You don't have to be friends with everyone on your team, although it certainly helps, but a basic level of personal and professional respect is mandatory for any team to function normally.

Steve McConnell outlines a few warning signs that you're dealing with a bad apple on your team:

  1. They cover up their ignorance rather than trying to learn from their teammates. "I don't know how to explain my design; I just know that it works." or "My code is too complicated to test." (These are both actual quotes.)

  2. They have an excessive desire for privacy. "I don't need anyone to review my code."

  3. They are territorial. "No one else can fix the bugs in my code. I'm too busy to fix them right now, but I'll get to them next week."

  4. They grumble about team decisions and continue to revisit old discussions long after the team has moved on. "I still think we ought to go back and change the design we were talking about last month. The one we picked isn't going to work."

  5. Other team members all make wisecracks or complain about the same person regularly. Software developers often won't complain directly, so you have to ask if there's a problem when you hear many wisecracks.

  6. They don't pitch in on team activities. On one project I worked on, two days before our first major deadline, a developer asked for the day off. The reason? He wanted to spend the day at a men's clothing sale in a nearby city -- a clear sign he hadn't integrated with the team.

Let me be quite clear on this point: if your team leader or manager isn't dealing with the bad apples on your project, she isn't doing her job.

You should never be afraid to remove -- or even fire -- people who do not have the best interests of the team at heart. You can develop skill, but you can't develop a positive attitude. The longer these disruptive personalities stick around on a project, the worse their effects get. They'll slowly spread poison throughout your project, in the form of code, relationships, and contacts.

Removing someone from a team is painful; it's not fun for anyone. But realizing you should have removed someone six months ago is far more painful.

Discussion

The Ultimate Software Gold Plating

Some developers love to gold plate their software. There are various shades of ... er, gold, I guess, but it's usually considered wasteful to fritter away time gold plating old code in the face of new features that need to be implemented, or old bugs that could be squashed.

Developers are fascinated by new technology and are sometimes anxious to try out new features of their language or environment or to create their own implementation of a slick feature they saw in another product -- whether or not it's required in their product. The effort required to design, implement, test, document, and support features that are not required lengthens the schedule.

But gold plating your code isn't all bad. Perhaps the most remarkable tale of successful developer gold plating I've ever read is the one Blake Patterson outlines:

Not long ago I purchased a new-in-box Atari Jaguar, complete with Jeff Minter's psychedelic sequel to Tempest, Tempest 2000. It's an amazing game that's been ported to many other platforms, but the consensus is that none are as solid as the Jaguar original. Having played several of the ports, I'd have to agree.

tempest 2000

An interesting thing about "the world's first 64-bit console" -- its controller was, as the Brits would say, fairly pants. It was large, sported a calculator-button array for game overlays (like the Intellivision controller), had no shoulder buttons, and featured only a D-pad for directional control. (ed: certainly one of the weirdest members of the game console controller family tree, to be sure)

atari jaguar controller

As the arcade original is controlled with a rotary spinner knob, the D-pad falls rather short of providing ideal game control.

tempest spinner

But, of course, being such a savvy chap, Jeff Minter realized this.

Jeff wrote in support for an analog rotary controller ... that did not exist. Neither Atari nor third party manufacturers produced such a controller in the Jaguar's heyday. Jeff, as I understand it, hacked his own together by wiring an Atari paddle controller into a Jaguar controller. In the years since the Jaguar's passing, a few small operations have offered modified Jaguar controllers with spinners wired into them for purchase.

Jeff Minter's an interesting historical figure in the computer gaming community, as the author of several 8-bit computer era game classics. I've talked about his long-standing interest in audio visualization here once before. He's still creating games today; his latest is the Xbox Live downloadable title Space Giraffe. Jeff has a blog that he updates fairly regularly.

Still, I'm amazed that Jeff added code to a commercially shipped console game to support a completely optional homebrew spinner controller of his own creation. That's the very definition of "not required". This code lay dormant in the game until a handful of enthusiasts, fourteen years later, cobbled together custom controllers to play the game as it was originally intended by the author.

If that isn't the ultimate case of gold plating your software, I don't know what is. My hat is off to you, Mr. Minter.

Discussion

Maybe Normalizing Isn't Normal

One of the items we're struggling with now on Stack Overflow is how to maintain near-instantaneous performance levels in a relational database as the amount of data increases. More specifically, how to scale our tagging system. Traditional database design principles tell you that well-designed databases are always normalized, but I'm not so sure.

Dare Obasanjo had an excellent post When Not to Normalize your SQL Database wherein he helpfully provides a sample database schema for a generic social networking site. Here's what it would look like if we designed it in the accepted normalized fashion:

social network database example, normalized

Normalization certainly delivers in terms of limiting duplication. Every entity is represented once, and only once -- so there's almost no risk of inconsistencies in the data. But this design also requires a whopping six joins to retrieve a single user's information.

select * from Users u
inner join UserPhoneNumbers upn
on u.user_id = upn.user_id
inner join UserScreenNames usn
on u.user_id = usn.user_id
inner join UserAffiliations ua
on u.user_id = ua.user_id
inner join Affiliations a
on a.affiliation_id = ua.affiliation_id
inner join UserWorkHistory uwh
on u.user_id = uwh.user_id
inner join Affiliations wa
on uwh.affiliation_id = wa.affiliation_id

(Update: this isn't intended as a real query; it's only here to visually illustrate the fact that you need six joins -- or six individual queries, if that's your cup of tea -- to get all the information back about the user.)

Those six joins aren't doing anything to help your system's performance, either. Full-blown normalization isn't merely difficult to understand and hard to work with -- it can also be quite slow.

As Dare points out, the obvious solution is to denormalize -- to collapse a lot of the data into a single Users table.

Social database example, denormalized

This works -- queries are now blindingly simple (select * from users), and probably blindingly fast, as well. But you'll have a bunch of gaping blank holes in your data, along with a slew of awkwardly named field arrays. And all those pesky data integrity problems the database used to enforce for you? Those are all your job now. Congratulations on your demotion!
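
To make that concrete, here's a rough sketch of what such a denormalized Users table might look like. The column names here are my own invention for illustration, not Dare's actual schema, but they show the "field arrays" and the nullable holes I'm talking about:

-- hypothetical denormalized Users table (SQL Server flavored; column names assumed)
-- repeated data gets flattened into "field arrays", and anything a given
-- user doesn't have is simply left null
create table Users
(
    user_id       int identity primary key,
    full_name     nvarchar(100) not null,
    phone_number1 varchar(20) null,
    phone_number2 varchar(20) null,
    phone_number3 varchar(20) null,
    screen_name1  nvarchar(50) null,
    screen_name2  nvarchar(50) null,
    affiliation1  nvarchar(100) null,
    affiliation2  nvarchar(100) null,
    work_history1 nvarchar(200) null,
    work_history2 nvarchar(200) null
)

-- retrieval is now a trivial single-table query; no joins required
select * from Users where user_id = 42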

Both solutions have their pros and cons. So let me put the question to you: which is better -- a normalized database, or a denormalized database?

Trick question! The answer is that it doesn't matter! Until you have millions and millions of rows of data, that is. Everything is fast for small n. Even a modest PC by today's standards -- let's say a dual-core box with 4 gigabytes of memory -- will give you near-identical performance in either case for anything but the very largest of databases. Assuming your team can write reasonably well-tuned queries, of course.

There's no shortage of fascinating database war stories from companies that made it big; Tim O'Reilly compiled a whole series of them. I do worry that these war stories carry an implied tone of "I lost 200 pounds and so could you!"; please assume the tiny-asterisk disclaimer results may not be typical is in full effect while reading them.

There's also the High Scalability blog, which has its own set of database war stories.

First, a reality check. It's partially an act of hubris to imagine your app as the next Flickr, YouTube, or Twitter. As Ted Dziuba so aptly said, scalability is not your problem, getting people to give a shit is. So when it comes to database design, do measure performance, but try to err heavily on the side of sane, simple design. Pick whatever database schema you feel is easiest to understand and work with on a daily basis. It doesn't have to be all or nothing as I've pictured above; you can partially denormalize where it makes sense to do so, and stay fully normalized in other areas where it doesn't.
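
As one sketch of what partial denormalization might look like in practice: keep the normalized tables as the source of truth, but cache a single frequently-read value directly on Users. The primary_phone, phone_number, and is_primary columns here are hypothetical, and the syntax is SQL Server flavored:

-- cache the user's primary phone number on Users so the common read path
-- skips the join entirely; UserPhoneNumbers remains the source of truth
alter table Users add primary_phone varchar(20) null

update u
set u.primary_phone = upn.phone_number
from Users u
inner join UserPhoneNumbers upn
on upn.user_id = u.user_id
and upn.is_primary = 1

The common case becomes a plain select against Users, while the fully normalized phone data is still there for the rare query that needs all of it.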

Despite copious evidence that normalization rarely scales, I find that many software engineers will zealously hold on to total database normalization on principle alone, long after it has ceased to make sense.

When growing Cofax at Knight Ridder, we hit a nasty bump in the road after adding our 17th newspaper to the system. Performance wasn't what it used to be and there were times when services were unresponsive.

A project was started to resolve the issue, to look for 'the smoking gun'. The thought was that the database, being as well designed as it was, could not be the issue, even though our classic symptom was rapidly growing numbers of db connections right before a crash. So we concentrated on optimizing the application stack.

I disagreed and waged a number of arguments that it was our database that needed attention. We first needed to tune queries and indexes, and be willing to, if required, pre-calculate data upon writes and avoid joins by developing a set of denormalized tables. It was a hard pill for me to swallow since I was the original database designer. Turned out it was harder for everyone else! Consultants were called in. They declared the db design to be just right - that the problem must have been the application.

After two months of the team pushing numerous releases thought to resolve the issue, to no avail, we came back to my original arguments.

Pat Helland notes that people normalize because their professors told them to. I'm a bit more pragmatic; I think you should normalize when the data tells you to:

  1. Normalization makes sense to your team.
  2. Normalization provides better performance. (You're automatically measuring all the queries that flow through your software, right? There's a sketch of one way to start right after this list.)
  3. Normalization prevents an onerous amount of duplication or avoids risk of synchronization problems that your problem domain or users are particularly sensitive to.
  4. Normalization allows you to write simpler queries and code.
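
About that measurement question in the second item: if you happen to be running SQL Server 2005 or newer (an assumption on my part; your database may vary), its built-in query statistics are a nearly free way to see which queries actually hurt. A minimal sketch:

-- top 10 statements by average elapsed time, straight from SQL Server's
-- own execution statistics (times are reported in microseconds)
select top 10
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count as avg_elapsed_microseconds,
    qs.total_logical_reads / qs.execution_count as avg_logical_reads,
    substring(st.text, 1, 200) as query_text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
order by avg_elapsed_microseconds desc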

Never, never should you normalize a database out of some vague sense of duty to the ghosts of Boyce-Codd. Normalization is not magical fairy dust you sprinkle over your database to cure all ills; it often creates as many problems as it solves. Fear not the specter of denormalization. Duplicated data and synchronization problems are often overstated and relatively easy to work around with cron jobs. Disks and memory are cheap and getting cheaper every nanosecond. Measure performance on your system and decide for yourself what works, free of predispositions and bias.
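
To put a concrete shape on the "cron jobs" hand-wave: a scheduled job (SQL Agent, cron, whatever you have) can periodically recompute duplicated values from the normalized source of truth. Here phone_count is a hypothetical denormalized column on Users, and the syntax is SQL Server flavored:

-- nightly job: recompute a denormalized per-user count from the
-- normalized table, repairing any drift that crept in during the day
update u
set u.phone_count = x.cnt
from Users u
cross apply
(
    select count(*) as cnt
    from UserPhoneNumbers upn
    where upn.user_id = u.user_id
) x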

As the old adage goes, normalize until it hurts, denormalize until it works.

Discussion