A Group Is Its Own Worst Enemy
Dare Obasanjo recently wrote about the failure of Kuro5hin, which was originally designed to address perceived problems with the Slashdot model:
[Kuro5hin allowed] all users to create stories, vote on the stories and to rate comments. There were a couple of other features that distinguished the K5 community such as diaries but the democratic aspect around choosing what was valuable content was key. K5 was a grand experiment to see if one could build a better Slashdot and for a while it worked, although the cracks had already begun to show within the first year.
Five years later, I still read Slashdot every day but only check K5 out every couple of months out of morbid curiosity. The democracy of K5 caused two things to happen that tended to drive away the original audience. The first was that the focus of the site ended up not being about technology mainly because it is harder for people to write technology articles than write about everyday topics that are nearer and dearer to their hearts. Another was that there was a steady influx of malicious users who eventually drove away a significant proportion of K5's original community.
Besides the malicious users, one of the other interesting problems we had on K5 was that the number of people who actually did things like rate comments was very small relative to the number of users on the site. Anytime proposals came up for ways to fix these issues, there would often be someone who dismissed the idea by stating that we were "seeking a technical solution to a social problem". This interaction between technology and social behavior was the first time I really thought about social software.
This is, unfortunately, a pattern I've also observed in the various online communities I've participated in. And it's a very old pattern indeed. Clay Shirky's essential A Group Is Its Own Worst Enemy dates this phenomenon all the way back to 1978:
In the Seventies, a BBS called Communitree launched, one of the very early dial-up BBSes. This was launched when people didn't own computers, institutions owned computers. Communitree was founded on the principles of open access and free dialogue. "Communitree" – the name just says "California in the Seventies." And the notion was, effectively, throw off structure and new and beautiful patterns will arise.
And, indeed, as anyone who has put discussion software into groups that were previously disconnected has seen, that does happen. Incredible things happen. The early days of Echo, the early days of usenet, the early days of Lucasfilm's Habitat, over and over again, you see all this incredible upwelling of people who suddenly are connected in ways they weren't before.
And then, as time sets in, difficulties emerge. In this case, one of the difficulties was occasioned by the fact that one of the institutions that got hold of some modems was a high school. And who, in 1978, was hanging out in the room with the computer and the modems in it, but the boys of that high school. And the boys weren't terribly interested in sophisticated adult conversation. They were interested in fart jokes. They were interested in salacious talk. They were interested in running amok and posting four-letter words and nyah-nyah-nyah, all over the bulletin board.
And the adults who had set up Communitree were horrified, and overrun by these students. The place that was founded on open access had too much open access, too much openness. They couldn't defend themselves against their own users. The place that was founded on free speech had too much freedom. They had no way of saying "No, that's not the kind of free speech we meant." But that was a requirement. In order to defend themselves against being overrun, that was something that they needed to have that they didn't have, and as a result, they simply shut the site down.
Now you could ask whether or not the founders' inability to defend themselves from this onslaught, from being overrun, was a technical or a social problem. Did the software not allow the problem to be solved? Or was it the social configuration of the group that founded it, where they simply couldn't stomach the idea of adding censorship to protect their system? In a way, it doesn't matter, because technical and social issues are deeply intertwined. There's no way to completely separate them.
As a community grows, these types of rules – neither social nor technical, but a hybrid of both – become critical to its survival. If moderators fail to step in, the damage can be fatal:
Geoff Cohen has a great observation about this. He said "The likelihood that any unmoderated group will eventually get into a flame-war about whether or not to have a moderator approaches one as time increases."* As a group commits to its existence as a group, and begins to think that the group is good or important, the chance that they will begin to call for additional structure, in order to defend themselves from themselves, gets very, very high.
I've seen it play out exactly like this, with reluctant moderators whose hands are forced by outcry from the users. All of Clay's articles are worth reading; I'd follow up with Communities, Audiences, and Scale, which is particularly relevant to blogs and other community-driven websites – it proposes that social software, as we typically think of it, may not scale after all.
Which reminds me of a quote from Scrubs:
Cox: Thanks to your little gesture, she (Dr. Clock) actually believes that the Earth is full of people who are deep down filled with kindness and caring!
Kelso: Well, that's absurd. People are bastard-coated bastards with bastard filling.
Cox: Exactly!
If you're building software with social components, plan from the start for the worst kinds of behavior from your users. At least lay the groundwork for technological and social controls to handle those inevitable issues, or you'll eventually regret it.
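What that groundwork looks like will vary, but the shape is simple: route every user action through a choke point where limits and moderation can be tightened later, without a rewrite. Here's a minimal sketch in Python; the CommentGate class, its thresholds, and its method names are all hypothetical illustrations, not anything from Shirky or Obasanjo:

```python
from collections import defaultdict, deque
import time

class CommentGate:
    """Toy moderation layer: every post and flag passes through hooks
    where controls can be tightened later. All names and thresholds
    here are hypothetical illustrations."""

    def __init__(self, max_posts_per_minute=5, hide_after_flags=3):
        self.max_posts_per_minute = max_posts_per_minute
        self.hide_after_flags = hide_after_flags
        self._recent = defaultdict(deque)  # user -> timestamps of recent posts
        self._flags = defaultdict(set)     # comment_id -> users who flagged it

    def allow_post(self, user, now=None):
        """Technological control: rate-limit users who post too fast."""
        if now is None:
            now = time.time()
        window = self._recent[user]
        while window and now - window[0] > 60:  # drop posts older than a minute
            window.popleft()
        if len(window) >= self.max_posts_per_minute:
            return False  # running amok; make them wait
        window.append(now)
        return True

    def flag(self, comment_id, flagging_user):
        """Social control: hide a comment once enough distinct users flag it."""
        self._flags[comment_id].add(flagging_user)
        return len(self._flags[comment_id]) >= self.hide_after_flags
```

Even if the thresholds start out absurdly permissive, having the hooks in place means "No, that's not the kind of free speech we meant" becomes a configuration change instead of a shutdown.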
* Not related to Godwin's law.