Coding Horror

programming and human factors

Welcome Back Comments

I apologize for the scarcity of updates lately. There have been two things in the way:

  1. Continuing fallout from International Backup Awareness Day, which meant all updates to Coding Horror from that point onward were hand-edited text files. Which, believe me, isn't nearly as sexy as it … uh … doesn't sound.

  2. I am presenting and conducting a workshop at Webstock 2010 in New Zealand. This is a two week trip I'm taking with the whole family, including our little buddy Rock Hard Awesome, so the preparations have been more intense than usual.

On top of all that, according to the program, I just found that my presentation involves interpretive dance, too. Man. I wish someone had told me! My moves are so rusty, they've barely improved since Electric Boogaloo. But hey, at least I don't have to sing Andrews Sisters songs like poor Brian Fling.

And then, of course, there's that crazy Stack Overflow thing I'm always yammering on about. Very busy there, our team is expanding, and we have big plans for this year, too.

But, there is hope!

Thanks to the fine folks at Six Apart – and more specifically the herculean efforts of one Michael Sippey – Coding Horror is now hosted in the TypePad ecosystem. Which means, at least in theory, better "cloud" type reliability in the future. (cough)

One accidental bit of collateral damage was that comments, by necessity, were disabled during this two month period. At first, I was relieved. This may seem a bit hypocritical, since I originally wrote A Blog Without Comments is Not a Blog. And I still believe it too. But as I prophetically noted in the very same post:

I am sympathetic to issues of scale. Comments don't scale worth a damn. If you have thousands of readers and hundreds of comments for every post, you should disable comments and switch to forums, probably moderated forums at that. But the number of bloggers who have that level of readership is so small as to be practically nil. And when you get there, believe me, you'll know. Until then, you should enable comments.

I guess you can put this in the "nice problems to have" category, but let me tell you, it's not so nice of a problem when it's on your plate. At a large enough scale, comments require active moderation or they rapidly go sour. People get mean, the crazies come out in full force, and the comments start to resemble an out of control trailer park reality show brawl. It's fun, I suppose, but in a way that drives out all the sane people. Left unchecked, the best you can hope for is to end up head resident at the sanitarium. And that's a hell of a way to go out.

Howlers

(the above is from Mike Reed's amazing Flame Warriors series, by the way. Well worth your time if you haven't seen it already.)

The degeneration of comments was a shame, because it undermined my claim that comments are awesome.

It's an open secret amongst bloggers that the blog comments are often better than the original blog post, and it's because the community collectively knows far more than you or I will ever know.

The best part of a blog post often begins where the blog post ends. If you are offended by that, I humbly submit you don't understand why blogs work.

Why would I have bothered to found Stack Overflow with Joel Spolsky if I didn't believe in the power of community – that none of us is as dumb as all of us? Honestly, a lot of the design of Stack Overflow comes from my personal observations about how blog comments work. But my creaky old Coding Horror comments offered none of the fancy voting and moderation facilities that make Stack Overflow work. And without ample free personal time and attention from me to weed the comment garden, the comments got out of control.

Most of all, I blame myself.

I got some amazing emails in lieu of comments on my last few blog posts, and it positively kills me that these emails were only seen by two sets of eyes instead of the thousands they deserve. That's a big part of why I hate email silos. And really, email in general.

But there was another unanticipated side effect of having comments disabled that Stephane Charette pointed out to me in email.

Here is an interesting "silver lining" to the crash you had. Without comments, it forces us, your faithful readers, to think more about what you have to say.

In a way, things are back to how your blog used to be. In recent years, the huge influx of comments means that we – or just I? – end up spending 1/4 of my time reading what you wrote, and then merging in what everyone else wrote. Depending on how I feel about the topic and your approach to the issue, the weight values may be very different than 50/50. But regardless, I always have to consider when clicking on my Coding Horror bookmark: "Is now the right time to check if he has a new entry? Do I have enough time to read through a hundred comments? Should I wait until later tonight when the kids are in bed to go read his latest article?"

I never thought about it until recently. Your crash is what brought this up to light. Like tonight, when I saw your new headline in my iGoogle page, I didn't have to consider whether or not it was the right time. I read the article, and then thought for myself. I didn't let other people's comments steer my thoughts. How nice!

I'm not certain why it works like this. Often, the sheer number of comments distracts from what you wrote, but for some reason, it is impossible not to at the very least scroll through what people say. In a way, your blog has ended up like a slashdot article, with a paragraph or two of content at the top, and then everyone wanting to insert their $0.02.

Thinking for yourself. Now there's a novel idea. In the reverberating echo chamber that is the internet, I think we would all do well to remind ourselves of that periodically.

He's also right that the psychic burden of all those comments was weighing not just on readers, but on me, the writer, too. That's why I had a false sense of freedom when comments were disabled. You mean I can say whatever I want, and nobody can contradict me underneath my very own post? Revolutionary!

There are some absolute gems of insight and observation in comments, but sometimes extracting them was too much like pulling teeth. At the same time, I felt obligated to read all the comments on every post of mine. If I was asking people to read the random words I'm spewing all over the internet, how could I not extend my commenters the same courtesy? That's just rude.

It seems the only thing worse than comments being on was comments being off. It started to feel empty. As if I was in an enormous room, presenting to an eerily mute audience.

So, while I am very glad to have comments back, and I welcome dialog with the community, there will be … changes. For the benefit of everyone's mental health.

  1. No more anonymous comments. While I would prefer to allow anonymous comments, it's clear that at this scale I don't have the time to deal with them properly. If you want to say something, you'll need to authenticate. If what you have to say isn't worth authenticating to post, it's probably best for both of us if you keep it to yourself anyway.

The good news is that the TypePad commenting system supports a veritable laundry list of authentication mechanisms -- OpenID (naturally), Twitter, Facebook, Google, Yahoo, and many others. So authenticating to post a comment should only present a mild, but necessary, barrier to conversation.

  2. Comment moderation will be more stringent. If you don't have something useful and reasonably constructive to say in your comment, it will be removed without hesitation. You can be as critical of me (or, better still, my arguments and ideas) as you like, but you must convince me that you're contributing to the conversation and not just yelling at me or anyone else.

I'm not looking for sycophants, but shrill argument is every bit as bad. When you comment here, try to show the class something interesting they can use. That's all I'm asking.

It feels good to be back. Thanks to Six Apart for making it happen.

And, most of all, thanks to you for reading.


Cultivate Teams, Not Ideas

How much is a good idea worth? According to Derek Sivers, not much:

It's so funny when I hear people being so protective of ideas. (People who want me to sign an NDA to tell me the simplest idea.) To me, ideas are worth nothing unless executed. They are just a multiplier. Execution is worth millions.

To make a business, you need to multiply the two. The most brilliant idea, with no execution, is worth $20. The most brilliant idea takes great execution to be worth $20,000,000. That's why I don't want to hear people's ideas. I'm not interested until I see their execution.

I was reminded of Mr. Sivers' article when this email made the rounds earlier this month:

I feel that this story is important to tell you because Kickstarter.com copied us. I tried for 4 years to get people to take Fundable seriously, traveling across the country, even giving a presentation to FBFund, Facebook's fund to stimulate development of new apps. It was a series of rejections for 4 years. I really felt that I presented myself professionally in every business situation and I dressed appropriately and practiced my presentations. That was not enough. The idiots wanted us to show them charts with massive profits and widespread public acceptance so that they didn't have to take any risks.

All it took was 5 super-connected people at Kickstarter (especially Andy Baio) to take a concept we worked hard to refine, tweak it with Amazon Payments, and then take credit. You could say that that's capitalism, but I still think you should acknowledge people that you take inspiration from. I do. I owe the concept of Fundable to many things, including living in cooperative student housing and studying Political Science at Michigan. Rational choice theory, tragedy of the commons, and collective action are a few political science concepts that are relevant to Fundable.

Yes, Fundable had some technical and customer service problems. That's because we had no money to revise it. I had plans to scrap the entire CMS and start from scratch with a new design. We were just so burned out that motivation was hard to come by. What was the point if we weren't making enough money to live on after 4 years?

The disconnect between idea and execution here is so vast it's hard to understand why the author himself can't see it.

I wouldn't call ideas worthless, per se, but it's clear that ideas alone are a hollow sort of currency. Success is rarely determined by the quality of your ideas. But it is frequently determined by the quality of your execution. So instead of worrying about whether the Next Big Idea you're all working on is sufficiently brilliant, worry about how well you're executing.

The criticism that all you need is "super-connected people" to be successful was also leveled at Stack Overflow. In an email to me last year, Andy Baio – ironically, the very person being cited in the email – said:

I very much enjoyed the Hacker News conversation about cloning the site in a weekend. My favorite comments were from the people that believe Stack Overflow is only successful because of the Cult of Atwood & Spolsky. Amazing.

I don't care how internet famous you are; nobody gets a pass on execution. Sure, you may have a few more eyeballs at the beginning, but if you don't build something useful, the world will eventually just shrug its collective shoulders and move along to more useful things.

One of my all time favorite software quotes is from Wil Shipley:

This is all your app is: a collection of tiny details.

In software development, execution is staying on top of all the tiny details that make up your app. If you're not constantly obsessing over every aspect of your application, relentlessly polishing and improving every little part of it – no matter how trivial – you're not executing. At least, not well.

And unless you work alone, which is a rarity these days, your ability to stay on top of the collection of tiny details that makes up your app will hinge entirely on whether or not you can build a great team. Great teams are the building blocks of any successful endeavor. This talk by Ed Catmull is almost exclusively focused on how Pixar learned, through trial and error, to build teams that can execute.

It's a fascinating talk, full of some great insights, and you should watch the whole thing. In it, Mr. Catmull amplifies Mr. Sivers' sentiment:

If you give a good idea to a mediocre group, they'll screw it up. If you give a mediocre idea to a good group, they'll fix it. Or they'll throw it away and come up with something else.

Execution isn't merely a multiplier. It's far more powerful. How your team executes has the power to transform your idea from gold into lead, or from lead into gold. That's why, when building Stack Overflow, I was so fortunate to not only work with Joel Spolsky, but also to cherry-pick two of the best developers I had ever worked with in my previous jobs and drag them along with me. Kicking and screaming if necessary.

If I had to point to the one thing that made our project successful, it was not the idea behind it, our internet fame, the tools we chose, or the funding we had (precious little, for the record).

It was our team.

The value of my advice is debatable. But you would do well to heed the advice of Mr. Sivers and Mr. Catmull. If you want to be successful, stop worrying about the great ideas, and concentrate on cultivating great teams.


The Great Newline Schism

Have you ever opened a simple little ASCII text file to see it inexplicably displayed as onegiantunbrokenline?

text file opened in notepad

Opening the file in a different, smarter text editor results in the file displayed properly in multiple paragraphs.

text file opened in notepad2

The answer to this puzzle lies in our old friend, invisible characters that we can't see but that are totally not out to get us. Well, except when they are.

The invisible problem characters in this case are newlines.

Did you ever wonder what was at the end of your lines? As a programmer, I knew there were end of line characters, but I honestly never thought much about them. They just … worked. But newlines aren't a universally accepted standard; they are different depending on who you ask, and what platform they happen to be computing on:

  Platform         Ending   Escape   Hex
  DOS / Windows    CR LF    \r\n     0x0D 0x0A
  Mac (early)      CR       \r       0x0D
  Unix             LF       \n       0x0A
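
To make the table concrete, here's a quick Python sketch (my own illustration, not anything from the original post) that prints the raw bytes each convention produces:

    # Print the raw bytes behind each platform's newline convention.
    conventions = {
        "DOS / Windows": "\r\n",
        "Mac (early)": "\r",
        "Unix": "\n",
    }
    for platform, newline in conventions.items():
        sample = f"line one{newline}line two{newline}"
        print(f"{platform:>13}: {sample.encode('ascii')!r}")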

The Carriage Return (CR) and Line Feed (LF) terms derive from manual typewriters, and old printers based on typewriter-like mechanisms (typically referred to as "Daisywheel" printers).

manual typewriter

On a typewriter, pressing Line Feed causes the carriage roller to push up one line -- without changing the position of the carriage itself -- while the Carriage Return lever slides the carriage back to the beginning of the line. In all honesty, I'm not quite old enough to have used electric typewriters, so I have a dim recollection, at best, of the entire process. The distinction between CR and LF does seem kind of pointless -- why would you want to move to the beginning of a line without also advancing to the next line? This is another analog artifact, as Wikipedia explains:

On printers, teletypes, and computer terminals that were not capable of displaying graphics, the carriage return was used without moving to the next line to allow characters to be placed on top of existing characters to produce character graphics, underlines, and crossed out text.

So far we've got:

  • Confusing terms based on archaic hardware that is no longer in use, and is confounding to new users who have no point of reference for said terms;
  • Completely arbitrary platform "standards" for what is exactly the same function.

Pretty much business as usual in computing. If you're curious, as I was, about the historical basis for these decisions, Wikipedia delivers all the newline trivia you could possibly want, and more:

The sequence CR+LF was in common use on many early computer systems that had adopted teletype machines, typically an ASR33, as a console device, because this sequence was required to position those printers at the start of a new line. On these systems, text was often routinely composed to be compatible with these printers, since the concept of device drivers hiding such hardware details from the application was not yet well developed; applications had to talk directly to the teletype machine and follow its conventions. The separation of the two functions concealed the fact that the print head could not return from the far right to the beginning of the next line in one-character time. That is why the sequence was always sent with the CR first. In fact, it was often necessary to send extra characters (extraneous CRs or NULs, which are ignored) to give the print head time to move to the left margin. Even after teletypes were replaced by computer terminals with higher baud rates, many operating systems still supported automatic sending of these fill characters, for compatibility with cheaper terminals that required multiple character times to scroll the display.

CP/M's use of CR+LF made sense for using computer terminals via serial lines. MS-DOS adopted CP/M's CR+LF, and this convention was inherited by Windows.

This exciting difference in how newlines work means you can expect to see one of three (or more, as we'll find out later) newline characters in those "simple" ASCII text files.

animated line endings comparison

If you're fortunate, you'll pick a fairly intelligent editor that can detect and properly display the line endings of whatever text files you open. If you're less fortunate, you'll see onegiantunbrokenline, or a bunch of extra ^M characters in the file.

Even worse, it's possible to mix all three of these line endings in the same file. Innocently copy and paste a comment or code snippet from a file with a different set of line endings, then save it. Bam, you've got a file with multiple line endings. That you can't see. I've accidentally done it myself. (Note that this depends on your choice of text editor; some will auto-normalize line endings to match the current file's settings upon paste.)
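
If you want to audit a suspect file, a few lines of script will do it. Here's a minimal sketch in Python (illustrative only) that tallies every flavor of terminator it finds:

    import re
    from collections import Counter

    def line_ending_census(path):
        """Count CRLF, bare CR, and bare LF terminators in a file."""
        with open(path, "rb") as f:
            data = f.read()
        # Match CRLF first so its CR and LF aren't also counted separately.
        return Counter(re.findall(rb"\r\n|\r|\n", data))

    # A result like Counter({b'\r\n': 412, b'\n': 3}) flags a mixed file.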

This is complicated by the fact that some editors, even editors that should know better, like Visual Studio, have no mode that shows end of line markers. That's why, when attempting to open a file that has multiple line endings, Visual Studio will politely ask you if it can normalize the file to one set of line endings.

Visual Studio - Inconsistent Line Endings dialog

This Visual Studio dialog presents the following five (!) possible sets of line endings for the file:

  1. Windows (CR LF)
  2. Macintosh (CR)
  3. Unix (LF)
  4. Unicode Line Separator (LS)
  5. Unicode Paragraph Separator (PS)

The last two are new to me. I'm not sure under what circumstances you would want those Unicode newline markers.
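
For what it's worth, those last two are the Unicode characters U+2028 (LINE SEPARATOR) and U+2029 (PARAGRAPH SEPARATOR). Python's str.splitlines() happens to recognize them alongside the classic three, so a sketch of a "normalize everything" routine is short:

    def normalize_newlines(text, newline="\n"):
        # str.splitlines() breaks on CR LF, CR, LF, and the Unicode
        # separators U+2028 (LS) and U+2029 (PS), among a few others.
        return newline.join(text.splitlines()) + newline

    sample = "one\r\ntwo\rthree\u2028four\u2029five"
    print(repr(normalize_newlines(sample)))  # 'one\ntwo\nthree\nfour\nfive\n'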

Even if you rule out Unicode and stick to old-school ASCII, like most Facebook relationships … it's complicated. I find it fascinating that the mundane ASCII newline has so much ancient computing lore behind it, and that it still regularly bites us in unexpected places.

If you work with text files in any capacity -- and what programmer doesn't -- you should know that not all newlines are created equal. The Great Newline Schism is something you need to be aware of. Make sure your tools can show you not just those pesky invisible whitespace characters, but line endings as well.


A Democracy of Netbooks

As a long time reader of Joey DeVilla's excellent blog, Global Nerdy, I take exception to his post Fast Food, Apple Pies, and Why Netbooks Suck:

The end result, to my mind, is a device that occupies an uncomfortable, middle ground between laptops and smartphones that tries to please everyone and pleases no one. Consider the factors:

  • Size: A bit too large to go into your pocket; a bit too small for regular day-to-day work.
  • Power: Slightly more capable than a smartphone; slightly less capable than a laptop.
  • Price: Slightly higher than a higher-end smartphone but lacking a phone's capability and portability; slightly lower than a lower-end notebook but lacking a notebook's speed and storage.
To summarize: Slightly bigger and pricier than a phone, but can't phone. Slightly smaller and cheaper than a laptop, but not that much smaller or cheaper. To adapt a phrase I used in an article I wrote yesterday, netbooks are like laptops, but lamer.

This is so wrongheaded I am not sure where to begin. I happen to agree with Dave Winer's definition of "netbook":

  1. Small size.
  2. Low price.
  3. Battery life of 4+ hours. Battery can be replaced by user.
  4. Rugged.
  5. Built-in wifi, 3 USB ports, SD card reader.
  6. Runs my software.
  7. Runs any software I want; no platform vendor to decide what's appropriate.
  8. Competition. Users have choice and can switch vendors at any time.

Netbooks are the endpoint of four decades of computing -- the final, ubiquitous manifestation of "A PC on every desk and in every home". But netbooks are more than just PCs. If the internet is the ultimate force of democratization in the world, then netbooks are the instrument by which that democracy will be achieved.

No monthly fees and contracts.

No gatekeepers.

Nobody telling you what you can and can't do with your hardware, or on their network.

To dismiss netbooks as like laptops, but lamer is to completely miss the importance of this pivotal moment in computing -- when pervasive internet and the mass production of inexpensive portable computers finally intersected. I'm talking about unlimited access to the complete sum of human knowledge, and free, unfettered communication with anyone on earth. For everyone.

It's true that smartphones are slowly becoming little PCs, but they will never be free PCs. They will forever be locked behind an imposing series of gatekeepers and toll roads and walled gardens. Anyone with a $199 netbook and access to the internet can make free Skype videophone calls to anywhere on Earth, for as long as they want. Meanwhile, sending a single text message on a smartphone costs 4 times as much as transmitting data to the Hubble space telescope.

I don't care how "smart" your smartphone is, it will never escape those corporate shackles. Smartphones are simply not free enough to deliver the type of democratic transformation that netbooks -- mobile PCs cheap enough and fast enough and good enough for everyone to afford -- absolutely will.

That's why I love netbooks. In all their cheap, crappy glory. And you should too. Because they're instruments of user power.

The truly significant thing is this -- the users took over.

Let me say that again: The users took over.

I always say this is the lesson of the tech industry, but the people in the tech industry never believe it, but this is the loop. In the late 70s and early 80s the minicomputer and mainframe guys said the same kinds of things about Apple IIs and IBM PCs that Michael Dell is saying about netbooks. It happens over and over again, I've recited the loops so many times that every reader of this column can recite them from memory. All that has to be said is that it happened again.

Once out, the genie never goes back in the bottle.

Netbooks aren't an alternative to notebook computers. They are the new computers.

Cheap and crappy? Maybe those early models were, but having purchased a new netbook for $439 shipped, it is difficult for me to imagine the average user ever paying more than $500 for a laptop.

acer aspire 1410

For the price, this is an astonishingly capable PC:

  • Dual Core 1.2 GHz Intel CULV Celeron processor
  • 2 GB RAM
  • Windows 7 Home Premium
  • 11.6" screen with 1366 x 768 resolution
  • Thin (1") and light (3.5 lbs)
  • Good battery life (5 hours)
  • 3 USB ports, WiFi, webcam, gigabit ethernet

Windows 7 is a fine OS, but this machine would surely be cheaper without the Microsoft Tax, too.

The Acer Aspire 1410 isn't just an adequate netbook, it's a damn good computer. At these specifications, it is a huge step up from those early netbook models in every way. But don't take my word for it; read the reviews at netbooked and Liliputing. (Caveat emptor -- there are lots of 1410 models, and the newer dual core CPU version is the one you want.)

Of particular note is the CPU. While the Intel Atom is a technological coup, I don't feel current Atom CPUs deliver quite enough performance for a modern, JavaScript-heavy, video-intensive internet experience. It is quite clear that Intel is intentionally hobbling newer iterations of the Atom CPU in the name of market segmentation, and to prevent too much netbook price erosion.

That's why the current Intel CULV CPUs are far more attractive options -- they're dramatically faster, and have become power-efficient marvels. I hooked up my watt meter to this Aspire 1410 and was surprised to find it consuming between 13 and 16 watts of power in typical use -- while my wife was browsing the web in Firefox, over a wireless connection, with multiple tabs open. I fired up the Prime95 torture test to force the CPU to 100% load, and measured 21 watts with one CPU core fully loaded, and 26 watts when both were. These are wall measurements, which reflect power conversion inefficiencies of at least 20%, so real consumption was between 10 and 20 watts. I was wondering why it ran so cool; now I know. It barely uses enough power to generate any heat!
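
For the curious, the back-of-the-envelope arithmetic behind that estimate, assuming the power brick wastes roughly 20% (an assumed figure, not a measured one):

    # Wall readings scaled by an assumed ~80% adapter efficiency to
    # estimate what the machine itself draws.
    efficiency = 0.80
    readings = {
        "browsing, low": 13,
        "browsing, high": 16,
        "one core loaded": 21,
        "both cores loaded": 26,
    }
    for scenario, wall_watts in readings.items():
        print(f"{scenario}: {wall_watts} W at the wall "
              f"~ {wall_watts * efficiency:.1f} W at the machine")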

Modern netbooks are not cheap and crappy. They're remarkable computers in their own right, and they're getting better every day. Which makes me wonder:

A recurring question among Apple watchers for decades has been, “When is Apple going to introduce a low-cost computer?”

Steve Jobs answered that decades-old complaint by stating, "We don't know how to build a sub-$500 computer that is not a piece of junk."

They may be pieces of junk to Mr. Jobs, but to me, these modest little boxes are marvels -- inspiring evidence of the inexorable march of powerful, open computing technology to everyman and everywhere.

We have produced a democracy of netbooks. And the geek in me can't wait to see what happens next.


Responsible Open Source Code Parenting

I'm a big fan of John Gruber's Markdown. When it comes to humane markup languages for the web, I don't think anyone's quite nailed it like Mr. Gruber. His philosophy was clear from the outset:

Markdown is intended to be as easy-to-read and easy-to-write as is feasible.

Readability, however, is emphasized above all else. A Markdown-formatted document should be publishable as-is, as plain text, without looking like it's been marked up with tags or formatting instructions. While Markdown's syntax has been influenced by several existing text-to-HTML filters – including Setext, atx, Textile, reStructuredText, Grutatext, and EtText – the single biggest source of inspiration for Markdown's syntax is the format of plain text email.

If you're an ASCII-head of any kind, you will feel immediately at home in Markdown. It was so obviously designed by someone who has done a lot of writing online, as it apes common plaintext conventions that we've collectively been using for decades now. It's certainly far more intuitive than the alternatives I've researched.

With a year and a half of real world Markdown experience under our belts on Stack Overflow, we've been quite happy. I'd say that Markdown is the worst form of markup except for all the other forms of markup that I've tried. Of course, tastes vary, and there are plenty of viable alternatives, but I'd promote Markdown without hesitation as one of the best – if not the best – humane markup options out there.

Not that Markdown is perfect, mind you. After exposing it to a large audience, both Stack Overflow and GitHub independently discovered that Markdown had three particular characteristics that confused a lot of users:

  1. URLs are never hyperlinked without placing them in some kind of explicit markup.
  2. The underscore [_] can be used to delimit bold and italic, but also works for intra-word emphasis. While a typical use like "_italic_" is clear, there are disturbing and unexpected side effects in phrases such as "some_file_name" and "file_one and file_two".
  3. It is paragraph and not line oriented. Returns are not automatically converted to linebreaks. Instead, paragraphs are detected as one or more consecutive lines of text, separated by one or more blank lines.

Items #1 and #2 are so fundamental and universal that I think they deserve to be changed in the Markdown specification itself. There was so much confusion around unexpected intra-word emphasis and the failure to auto-hyperlink URLs that we changed these Markdown rules before we even came out of private beta. Item #3, the conversion of returns to linebreaks, is somewhat more debatable. I'm on the fence on that one, but I do believe it's significant enough to warrant an explicit choice either way. It should be a standard configurable option in every Markdown implementation that you can switch on or off depending on the intended audience.
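
To see why item #2 bites people, consider a toy version of the underscore rule. This is an illustration of the behavior, not code from any real Markdown implementation:

    import re

    def naive_emphasis(text):
        # Classic rule: _..._ becomes emphasis anywhere, even inside words.
        return re.sub(r"_(.+?)_", r"<em>\1</em>", text)

    print(naive_emphasis("_italic_"))        # <em>italic</em>  (as expected)
    print(naive_emphasis("some_file_name"))  # some<em>file</em>name  (surprise!)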

Markdown was originally introduced in 2004, and since then it has gained quite a bit of traction on the web. I mean, it's no MediaWiki (thank God), but it's in active use on a bunch of websites, some of them quite popular. And for such a popular form of markup, it's a bit odd that the last official update to the specification and reference implementation was in late 2004.

Which leads me to the biggest problem with Markdown: John Gruber.

I don't mean this as a personal criticism. John's a fantastic writer and Markdown has a (mostly) solid specification, with a strong vision statement. But the fact that there has been no improvement whatsoever to the specification or reference implementation for five years is … kind of a problem.

There are some fairly severe bugs in that now-ancient 2004 Markdown 1.0.1 Perl implementation. Bugs that John has already fixed in eight 1.0.2 betas that have somehow never seen the light of day. Sure, if you know the right Google incantations you can dig up the unreleased 1.0.2b8 archive, surreptitiously posted May 2007, and start prying out the bugfixes by hand. That's what I've had to do to fix bugs in our open sourced C# Markdown implementation, which was naturally based on that fateful (and technically only) 1.0.1 release.

I'd also expect a reference implementation to come with some basic test suites or sample input/output files so I can tell if I've implemented it correctly. No such luck; the official archives from Gruber's site include the naked Perl file along with a readme and license. The word "test" does not appear in either. I had to do a ton more searching to finally dig up MDTest 1.1. I can't quite tell where the tests came from, but they seem to be maintained by Michel Fortin, the author of the primary PHP Markdown implementation.

But John Gruber created Markdown. He came up with the concept and the initial implementation. He is, in every sense of the word, the parent of Markdown. It's his baby.

Henry aka 'Rock Hard Awesome' taking a bath

As Markdown's "parent", John has a few key responsibilities in shepherding his baby to maturity. Namely, to lead. To set direction. Beyond that initial 2004 push, he's done precious little of either. John is running this particular open source project the way Steve Jobs runs Apple – by sheer force of individual ego. And that sucks.

Since then, all I can find is sporadic activity on obscure mailing lists and a bit of passive-aggressive interaction with the community.

On 15 Mar 2008, at 02:55, John Gruber wrote:

    I despise what you've done with Text::Markdown, which is to more or less make it an alias for MultiMarkdown, almost every part of which I disagree with in terms of syntax additions.

Wow, that's pretty strong language. I'm glad I'm provoking strong opinions, and it's nice to see you actively contributing to Markdown's direction ;)

Personally, I don't actually like (or use) the MultiMarkdown extensions. As noted several times on list, I do not consider what I've done to in any way be a good solution technically / internally in it's current form, and as such Markdown.pl is still a better 'reference' implementation.

However I find it somewhat ironic that you can criticise an active effort to actually move Markdown forwards (who's current flaws have been publicly acknowledged), when it passes more of your test suite than your effort does, and when you haven't even been bothered to update your own website about the project since 2004, despite having updated the code which can be found on your site (if you dig) much more recently than this.

I despise copy-pasted code, and forks for no (real) reason - seeing another two dead copies of the same code on CPAN made me sad, and so I've done something to take the situation forwards. Maybe if you'd put the effort into maintaining a community and taking Markdown.pl forwards at any time within the last 4 years, you wouldn't be in a situation where people have taken 'your baby' and perverted it to a point that you despise. If starting with Markdown.pl and going forwards with that had been an option, then that would have been my preferred route - but I didn't see any value in producing what would have been a fifth perl Markdown implementation.

It's almost at the point where John Gruber, the very person who brought us Markdown, is the biggest obstacle preventing Markdown from moving forward and maturing. It saddens me greatly to see such negligent open source code parenting. Why work against the community when you can work with it? It doesn't have to be this way. And it shouldn't be.

I think the most fundamental problem with Markdown, in retrospect, is that the official home of the project is a static set of web pages on John's site. Gruber could have hosted the Markdown project in a way that's more amenable to open source collaboration than a bunch of static HTML. I'm pretty sure SourceForge was around in late 2004, and there are lots of options for proper open source project hosting today – GitHub, Google Code, CodePlex, and so forth. What's stopping him from setting up shop on any of those sites with Markdown, right now, today? Markdown is Gruber's baby, without a doubt, but it's also bigger than any one person. It's open source. It belongs to the community, too.

Right now we have the worst of both worlds. Lack of leadership from the top, and a bunch of fragmented, poorly coordinated community efforts to advance Markdown, none of which are officially canon. This isn't merely inconvenient for anyone trying to find accurate information about Markdown; it's actually harming the project's future. Maybe it's true that you can't kill an open source project, but bad parenting is surely enough to cause it to grow up stunted and maybe even a little maladjusted.

I mean no disrespect. I wouldn't bring this up if I didn't care, if I didn't think the project and John Gruber were both eminently worthy. Markdown is a small but important part of the open source fabric of the web, and the project deserves better stewardship. While the community can do a lot with the (many) open source orphan code babies out there, they have a much, much brighter future when their parents take responsibility for them.
