Coding Horror

programming and human factors

Chickens, Pigs, and Really Inappropriate Terminology

Here's a description of the daily Scrum meeting in the Scrum process:

During the month-long sprints, the team holds daily meetings-- the daily Scrum. Meetings are typically held in the same location and at the same time each day. Ideally the daily Scrums are held in the morning, as they help set the context for the coming day's work. Each participant in the Daily Scrum is known as either a chicken or a pig, depending on his involvement in the project.

Scrum has some serious naming problems, starting with Scrum itself. Here's another one. Chickens? Pigs? The whole thing is completely lost on me. Evidently it's based on a joke from Schwaber and Beedle's Agile Software Development with Scrum:

Chicken:
Let's start a restaurant!

Pig:
What would we call it?

Chicken:
Ham n' Eggs!

Pig:
No thanks. I'd be committed, but you'd only be involved!

In other words, pigs sacrifice their lives for the project, whereas chickens only have to give up their eggs. It's amusing, I suppose, but just try explaining it to the people coming to your Daily Scrum meetings.

Pig vs. Chicken

I agree that everyone participating in the project should have "skin in the game". But not literally. Pride in your project is one thing, but implying that you'd give your very life to see the project succeed is just a little too macho for my tastes. And it gets worse. Jeff Sutherland, one of the co-creators of Scrum, explains that the chicken term is meant to be derogatory:

The real issue is who is committed to the project and accountable for deliverables. They get to talk at the daily meeting. They are the pigs and their butts are on the line. We could call them contributors if we don't like pigs.

People who are not committed to the project and are not accountable for deliverables at the meeting do not get to talk [at the daily meeting]. They are excess overhead for the meeting. They might be called eavesdroppers if you don't like chickens. Whatever we call them it should have a negative connotation because they tend to sap productivity. They are really suckers or parasites that live off the work of others. Sorry to be politically incorrect but others should come up with a euphemism that conveys the right balance between being "nice" and communicating clearly that eavesdroppers must minimize their impact on the productivity of the team.

If you look at most corporate meetings you will see 50-80% excess overhead. These are the meetings that Scrum eliminates on day 1 if done properly.

Most of the excess overhead will claim they need to know what is going on because it impacts their work in some way. They don't need to know what is going on in the Scrum. They need to be able to see a visual representation of the backlog that is updated daily, preferably automatically on the web. At the end of the Sprint, they get to go to a demo where they can see exactly what went on, can provide their input, and can influence the next Sprint. This is where they can provide a real contribution.

I see where Jeff is coming from here. I really do. I have a deep respect for project managers* who nobly throw themselves on meeting grenades so the team can actually get work done. The number one goal of any competent PM is to shield their team from as much of this organizational overhead as possible. But the use of derogatory in-joke terminology harms the cause by making it harder for outsiders to take Scrum seriously. And I wonder: how do you diplomatically break the news to a chicken that thinks it's a pig?

Luckily, the very same wiki page provides some alternative terminology that better communicates what's going on in the daily Scrum meeting:

  • Players, Spectators
  • Contributors, Observers
  • Committed, Interested
  • Forwards, Backs (continuing with the rugby theme)
  • Active, Passive

Although I don't agree with all of it, there are some solid software development principles in Scrum. It's a shame that stupid stuff like chickens and pigs get in the way.

* or, in the spectacularly bad parlance of Scrum, ScrumMasters.

Discussion

Opting Out of Linked In

From the Wikipedia entry on Linked In:

It is not possible to remove yourself from LinkedIn. Instead, you have to file a customer support ticket.

This blurb neatly summarizes everything that's wrong with the Linked In service.

I've been a member of Linked In for almost two years now. I dutifully entered my credentials and kept them up to date. The only other interaction I've had with the service since then has been a continual stream of link requests. I'm selective about who I approve, limiting it to people I've actually met in real life. And the net benefit of this selectivity? As far as I can tell, zilch. Nada. Nothing. I did get a cold call from a headhunter once based on my Linked In profile, but I don't consider that a benefit.

Has this service ever been useful to anyone? I'm telling you, Linked In is the digital equivalent of a chain letter. If you really want to contact a friend of a friend (of a friend), just pick up the phone or send an email. If the only way you can reach someone is through this nutty online social pyramid scheme, you don't deserve to be taken seriously. And I can guarantee that you won't be.

Linked Out

Consider carefully: who really benefits from your participation in Linked In? I'll tell you who benefits: Linked In.

If you can't immediately point to a few direct benefits you personally get from participating in Linked In, then why do it? Why build Linked In's marketing database with your valuable time and information?

From this point on, I'm opting out of Linked In. Like Russell Beattie, I've found that there really is no there there. If you're a member of Linked In and you're not seeing direct personal benefits, I urge you to close your Linked In account as well. It's high time we put an end to this glorified chain letter of a service.

Discussion

The Field of Dreams Strategy

We have a tendency to fetishize audience metrics in the IT industry. Presenters stress out about their feedback ratings and measure themselves by how many attendees they can attract for a presentation. Bloggers obsessively track their backlinks, pagerank, and traffic numbers. I see it a lot, and it's strange to me. I don't chase those numbers. I couldn't even tell you how many readers I have, or what my presentation ratings were. I don't mean to sound glib, but I don't care. Audience metrics aren't the reason I write, and they aren't the reason I present. They're incidental.

Conan O'Brien made an interesting observation in a recent interview when asked about his audience:

There's a temptation to overthink the whole thing. I've had a Field Of Dreams philosophy to this: If you build it, they will come. I still have no idea.

Baseball Field

I don't look at research. I don't look at who's watching, or when they're watching. I've never been interested in any of that. I'm interested in doing what I think is funny. For the last 13 years, that seems to have worked for me. If I go to 11:30 and do what I think is funny, and someone comes and tells me it isn't getting enough people in the tent, I'd say, "Well, that's all I can do." If I'm looking at spreadsheets and time-lapse studies of viewing patterns, I think I'm wasting my time. What I should be worried about the first night I host The Tonight Show is, "How can I make this a funny show?" The second night, "All right, let's make another funny show doing some different stuff." You do it one show at a time. And if you're lucky, eight years later, you've alienated a nation.

Similarly, Rob Caron once commented:

The day I care about keeping my blog readers happy is the day I'll stop blogging. Who needs the added stress?

There's certainly value in audience metrics. But it's easy to overanalyze, too. Instead of obsessing over who does and doesn't link to you, concentrate on writing a blog entry you'd like to read. Instead of worrying about audience feedback, focus on delivering a presentation you'd like to attend.

You should trust your gut more than any metrics. Build it, and they will come.

Discussion

Chess: Computer v. Human

I recently visited the Computer History Museum in nearby Mountain View, which has a new exhibit on the history of computer chess. Despite my total lack of interest in chess as a game, computer chess has a special significance in the field of computer science. Chess remains the most visible and public benchmark of the relentless increase in computer speed over the last 50 years.

Chessboard

There are two general strategies available to computer chess programs:

  1. Brute force search or Type A. All possible positions are examined.
  2. Strategic AI or Type B. Only good positions are examined.

As it turns out, computers have a hard time with the concept of "good". That's why the history of computer chess is dominated by Type A programs. The most famous Type A chess computer is probably IBM's Deep Blue, which went through a number of incarnations before it defeated a reigning world chess champion in 1997.
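To make the Type A / Type B distinction concrete, here's a toy sketch in Python. This is illustrative only, not a real chess engine, and all names in it are my own: both functions search the same game tree, but the Type B version trusts a static evaluation heuristic to discard all but the most promising successors at each node.

```python
# Toy sketch (not a real chess engine): the same game tree searched two ways.
# "Type A" recurses into every successor; "Type B" keeps only the moves a
# heuristic evaluator considers promising.

def minimax_type_a(state, depth, maximizing, successors, evaluate):
    """Brute force: examine every legal successor position."""
    children = successors(state)
    if depth == 0 or not children:
        return evaluate(state)
    pick = max if maximizing else min
    return pick(minimax_type_a(c, depth - 1, not maximizing,
                               successors, evaluate)
                for c in children)

def minimax_type_b(state, depth, maximizing, successors, evaluate, keep=3):
    """Selective: score successors statically, then recurse into only
    the `keep` most promising ones."""
    children = successors(state)
    if depth == 0 or not children:
        return evaluate(state)
    # The hard part in practice: "good" is decided by a fallible heuristic.
    children = sorted(children, key=evaluate, reverse=maximizing)[:keep]
    pick = max if maximizing else min
    return pick(minimax_type_b(c, depth - 1, not maximizing,
                               successors, evaluate, keep)
                for c in children)
```

The catch is exactly the "concept of good" problem: if the static evaluator misranks a position, the Type B search never sees the winning line that Type A would have found by exhaustion.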

I recently built myself a new PC based on the latest Intel Core 2 Duo chip. According to the Fritz Chess Benchmark, my current home PC is capable of evaluating approximately 4.45 million chess positions per second. The figure is actually expressed as 4452 kilonodes per second (kN/s), a common unit of measurement for chess engines.

4.45 million chess positions per second sounds impressive, until you compare that with the Deep Blue timeline:

Year Positions/sec
1985 50k
1987 500k
1988 720k
1989 2 million
1991 7 million
1996 100 million
1997 200 million

The fastest desktop PCs are more than 15 years behind Deep Blue in computer chess. Of course, Deep Blue was built using large arrays of custom hardware designed for the sole purpose of playing chess, so it's a little unfair to directly compare it to a general-purpose, commodity CPU.
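Assuming perfectly linear scaling -- which real parallel chess search doesn't achieve, so treat this as a rough lower bound on the machine count -- the back-of-the-envelope arithmetic looks like this:

```python
# Rough arithmetic only: real parallel chess search scales sublinearly,
# so the true number of machines needed would be higher.
deep_blue_nps = 200_000_000  # Deep Blue, 1997: positions evaluated per second
desktop_nps = 4_452_000      # one Core 2 Duo desktop, per the Fritz benchmark

machines = deep_blue_nps / desktop_nps
print(f"~{machines:.0f} desktops to match Deep Blue's raw speed")  # ~45
```

Call it roughly 45 commodity desktops, which is at least in the same ballpark as the Opteron cluster described below.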

Chess is an inherently parallelizable problem. The dual- and quad-core CPUs on the Fritz Chess benchmark results page scale almost perfectly with core count: each doubling of cores nearly doubles the results of a single-core CPU at the same clock speed. You could certainly string together a bunch of these fast commodity desktops and build your way up to Deep Blue numbers. This fascinating ExtremeTech article on building desktop chess computers indicates it would take 24 dual-socket, dual-core 2.2 GHz Opteron machines to match Deep Blue. Or at least it would have in January 2006, when that article was written. But something more significant than commodity hardware scaling is going on here -- Type B chess programs are finally emerging:

Despite its vastly inferior brute force, the Deep Blitz machine could already be a match for Deep Blue because of improvements in chess software. Deep Fritz is able to evaluate lines of play to a similar depth because it successfully narrows its search only to the strongest lines of play.

The data suggest that Deep Blue spent a lot of time evaluating bad moves but overcame this weakness through brute force. In a match between Deep Blue and the Deep Blitz machine running Deep Fritz or Deep Shredder, it seems unclear which machine would win. Obviously, Kasparov did not evaluate 200 million chess positions per second when he defeated Deep Blue in game 1 of the 1997 match, thus the 200 million positions per second number is not a requirement to play chess at the world championship level. It seems likely that Deep Fritz, which is more efficient at filtering out weak moves, is a far more 'intelligent' chess program compared with Deep Blue's software.

The latest computer chess ratings are determined solely by computer vs. computer play. Bram Cohen thinks the derived ratings from computer play are enough to crown the computer chess programs champs over human grandmasters, too:

What's the best tournament chess game of all time? If by 'best' one means 'best played' then I'm afraid the answer is Zappa vs. Fruit. In this most recent world computer tournament, Zappa scored an astounding 10.5 out of 11, a better performance than any human has ever had in a human world championship, and against a stronger field than any human world champion has ever faced. Fruit came in a clear second, so this is the only tournament game we have between the two strongest chess players ever created. Of course, you'll soon be able to buy the commercial version of Zappa and have it play against itself, resulting in a string of games most of which are better than any game ever played between two humans. Welcome to chess in the 21st century.

Some humorous notes: Zappa and Fruit were both written by lone grad students in under two years. Dark horses obliterating the field is a common thing in AI. Zappa's lone draw was ironically against the program which lost every other game in the entire tournament.

Now that computers are clearly better than humans at chess, the question arises, can computers attempt to guess the strength of a game's play based on the moves in that game? And can we use that method to evaluate 'classic' games? Do we really want to?

I think this is a dubious claim, particularly since it's not based on any actual data from computer versus human games. Although Deep Blue did beat Kasparov in 1997, there's ample evidence that Kasparov was the better player and he psyched himself out during the match. All subsequent rematches between human world champions and computer chess programs have resulted in ties-- including two with Kasparov himself in 2003. Statistician Jeff Sonas believes computers will never consistently defeat the best human players:

Computers are becoming more and more dominant against everyone but the top 200 players in the world. That is leading to an overall performance rating for computers that is getting higher and higher. However, the players in the top-200 are holding their ground even against the latest and greatest computers. Perhaps that group will soon shrink down to only the top-100, or the top-50, but not inevitably, and not irreversibly. As you can see from my previous graphics, there is no sign that the top-200 players are losing ground at all against the top computers.

The top 20 humans (the 2700+ crowd) are managing a long string of drawn matches against computers, and the rest of the top-200 is averaging the same 35% to 40% score that they did a few years ago. So, amazing as it may seem, I don't see any evidence that the top computers are suddenly going to take over the chess world. Of course the top computers are improving, mostly through hardware and software upgrades, but somehow the top humans are improving just as fast, in other ways.

We are at a unique point in chess history, an unprecedented state of dynamic balance. The top computers have caught up with the top grandmasters, and right now I'm not convinced that the computers will zoom past the grandmasters. Everything depends on whether computers or grandmasters can improve faster, starting today. It may even be that the top humans can figure out how to improve their own anti-computer play, to the point that they will pull ahead again. Perhaps Garry Kasparov can lead the way once more.

I tend to agree. We may have reached an inflection point. The problem space of chess is so astonishingly large that incremental improvements in hardware speed and search algorithms are unlikely to result in meaningful gains from here on out.

Computer chess programs may have long ago crushed the rest of us in their inevitable Moore's Law march, but the best 200 chess players in the world are still holding their ground.

Discussion

Blog Advertising: Yea or Nay

I've recently been approached by several different people inquiring about advertising on my blog.

It doesn't cost me anything to run this blog. I used to host it myself on my cable modem, and my employer, Vertigo Software, generously donated hosting when I outgrew the limited upstream bandwidth of a cable modem.

I do have a bit of advertising on the blog already, through my Amazon affiliate links. That seemed like a natural fit for my recommended reading list when I originally put it together. But there's never a visible advertisement. The affiliate links are indistinguishable from a normal link to a book on Amazon, which is usually quite useful.

I can understand wanting to recoup hosting costs-- if I had any-- but Scott Hanselman asks: what about the cost of your time writing all those blog entries?

Neil Young: Sponsored by Nobody

I'm not opposed to advertising. I won't pretend that I don't like money, particularly here in the United States where money is synonymous with freedom.

But advertising responsibly is difficult.

  • Stand behind the products you're indirectly selling. They should be products or services you yourself recommend. Some of the more selective blogs join targeted ad networks with products they hand-pick, such as the deck.

  • Limit the number of ads you take. Using the Ronco spray-on monetization plan and filling your page with as many types of advertising and affiliate programs as possible smacks of desperation. Even worse, it makes your website look like a tacky Nascar joke.

    Closeup of advertising decals on NASCAR vehicle

  • Realize that advertising changes the nature of your blog. The first ads you take convert your blog from a non-profit to a commercial venture. It's no longer a hobby; you're being paid to blog. It's work. And unless you're accepting only random ads, there are also new avenues for conflicts of interest.

At least for my blog, I don't think the benefits of advertising outweigh the negatives. I like the idea that every time I write an entry, I did so purely for my own reasons, whatever they are, and not because I needed to drive ad revenue.

Discussion