Coding Horror

programming and human factors

Setting up Subversion on Windows

When it comes to readily available, free source control, I don't think you can do better than Subversion at the moment. I'm not necessarily advocating Subversion; there are plenty of other great source control systems out there -- but few can match the ubiquity and relative simplicity of Subversion. Beyond that, source control is source control, as long as you're not using Visual SourceSafe. And did I mention that Subversion is ... free?

Allow me to illustrate how straightforward it is to get a small Subversion server and client going on Windows. It'll take all of 30 minutes, tops, I promise. And that's assuming you read slowly.

The first thing we'll do is download the latest Subversion Windows binary installer. At the time of writing, that's 1.4.6. I recommend overriding the default install path and going with something shorter:

c:\svn

Note that the installer adds c:\svn\bin to your path, so you can launch a command prompt and start working with it immediately. Let's create our first source repository, which is effectively a system path.

svnadmin create "c:\svn\repository"

Within that newly created folder, uncomment the following lines in the conf/svnserve.conf file by removing the pound character from the start of each line:

anon-access = none
auth-access = write
password-db = passwd

Next, add some users to the conf/passwd file. You can uncomment the default harry and sally users to play with, or add your own:

harry = harryssecret
sally = sallyssecret

As of Subversion 1.4, you can easily install Subversion as a Windows service, so it's always available. Just issue the following command:

sc create svnserver binpath= "c:\svn\bin\svnserve.exe --service -r c:\svn\repository"
displayname= "Subversion" depend= Tcpip start= auto
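
If you want to confirm the service registered correctly before going any further, here's a quick (and entirely optional) sanity check:

sc query svnserver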

It's set to auto-start so it will start up automatically when the server is rebooted, but it's not running yet. Let's fix that:

net start svnserver

Note that the service is running under the Local System account. Normally, this is OK, but if you plan to implement any Subversion hook scripts later, you may want to switch the service identity to an Administrator account with more permissions. This is easy enough to do through the traditional Windows services GUI.

Subversion service screenshot
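
If you'd rather script that identity change than click through the Services snap-in, sc can do it too. A sketch -- the account name and password below are placeholders for whatever administrative account you choose:

sc config svnserver obj= ".\SomeAdminAccount" password= "thatAccountsPassword"

You'll need to stop and restart the service for the new identity to take effect.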

Now let's verify that things are working locally by adding a root-level folder in source control for our new project, aptly named myproject.

set SVN_EDITOR=c:\windows\system32\notepad.exe
svn mkdir svn://localhost/myproject

It's a little weird when running locally on the server, as Subversion will pop up a copy of Notepad with a place for us to enter commit comments. Every good programmer always comments their source control actions, right?

Subversion local commit, notepad comment

Enter whatever comment you like, then save and close Notepad. You'll be prompted for credentials at this point; ignore the prompt for Administrator credentials and press enter. Use the credentials you set up earlier in the conf/passwd file. If everything goes to plan, you should be rewarded with a "committed revision 1" message.

svn mkdir svn://localhost/myproject
Authentication realm: <svn://localhost:3690>
Password for 'Administrator': [enter]
Authentication realm: <svn://localhost:3690>
Username: sally
Password for 'sally': ************
Committed revision 1.

Congratulations! You just checked your first change into source control!
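
If you'd like independent confirmation, listing the repository root should now show the new folder -- optional, but reassuring:

svn list svn://localhost/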

We specified svn:// as the prefix to our source control path, which means we're using the native Subversion protocol. The Subversion protocol operates on TCP port 3690, so be sure to poke an appropriate hole in your server's firewall, otherwise clients won't be able to connect.
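
How you open that port depends on your firewall, but on the built-in Windows Firewall (Vista / Server 2008 and later) a rule along these lines should do it -- a sketch; the rule name is arbitrary, and older systems use the netsh firewall syntax instead:

netsh advfirewall firewall add rule name="Subversion" dir=in action=allow protocol=TCP localport=3690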

Now that the server's good to go, let's turn our attention to the client. Most people use TortoiseSVN to interact with Subversion. Download the latest 32-bit or 64-bit Windows client (1.4.8.12137 as of this writing) and install it. The installer will tell you to reboot, but you don't have to.

Now create a project folder somewhere on your drive. I used c:\myproject. Tortoise isn't a program so much as a shell extension. To interact with it, you right click in Explorer. Once you've created the project folder, right click in it and select "SVN Checkout..."

Tortoise 'SVN Checkout...'

Type svn://servername/myproject/ for the repository URL and click OK.

Tortoise checkout dialog

Tortoise now associates the c:\myproject folder with the svn://servername/myproject path in source control. Anything you do on your local filesystem path (well, most things -- there are some edge conditions that can get weird) can be checked back in to source control.

There's a standard convention in Subversion to start with the "TTB folders" at the root of any project:

Because Subversion uses regular directory copies for branching and tagging (see Chapter 4, Branching and Merging), the Subversion community recommends that you choose a repository location for each project root -- the "top-most" directory which contains data related to that project -- and then create three subdirectories beneath that root: trunk, meaning the directory under which the main project development occurs; branches, which is a directory in which to create various named branches of the main development line; tags, which is a collection of tree snapshots that are created, and perhaps destroyed, but never changed.
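
If you'd rather create these folders from the command line, svn mkdir can add all three in a single atomic commit. A sketch, reusing the svn://servername/myproject URL from the checkout step:

svn mkdir -m "Add trunk, branches, and tags folders" svn://servername/myproject/trunk svn://servername/myproject/branches svn://servername/myproject/tags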

Of course, none of this means your developers will actually understand branching and merging, but as responsible Subversion users, let's dutifully add the TTB folders to our project. Note that we can batch up as many changes as we want and check them all in atomically as one unit. Once we're done, right click the folder and select "SVN Commit..."

Tortoise 'SVN Commit...'

In the commit dialog, indicate that yes, we do want to check in these files, and we always enter a checkin comment-- right? right?

Tortoise Commit dialog

You'll have to enter your server credentials here, but Tortoise will offer to conveniently cache them for you. Once the commit completes, note that the files show up in the shell with source control icon overlays:

Tortoise folder with source control icon overlays

And now we're done. Well, almost. There are a few settings in Tortoise you need to pay special attention to. Right click and select "TortoiseSVN, Settings".

  1. See that hidden ".svn" folder? These folders are where Subversion puts its hidden metadata schmutz so it can keep track of what you're doing in the local filesystem and resolve those changes with the server. The default naming convention of these folders unfortunately conflicts with some fundamental ASP.NET assumptions. If you're an ASP.NET 1.x developer, you need to switch the hidden folders from ".svn" to "_svn" format, which is on the General options page (see the note after this list for the command-line equivalent). This hack is no longer necessary in ASP.NET 2.0 or newer.

  2. I'll never understand why, but by default, Tortoise tries to apply source control overlays across every single folder and drive on your system. This can lead to some odd, frustrating file locking problems. Much better to let Tortoise know that it should only work its shell magic on specific folders. Set this via "Icon Overlays"; look for the exclude and include paths. I set the exclude path to everything, and the include path to only my project folder(s).

    tortoise exclude and include paths
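
As I recall, the command-line Subversion client has an equivalent switch to the ".svn" / "_svn" setting mentioned in item 1: the SVN_ASP_DOT_NET_HACK environment variable. Setting it to any value makes the client use "_svn" working copy folders; treat the exact spelling as something to verify against your client's release notes:

set SVN_ASP_DOT_NET_HACK=1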

Unfortunately, since Tortoise is a shell extension, setting changes may mean you need to reboot. You can try terminating and restarting explorer.exe, but I've had mixed results with that.

And with that, we're done. You've successfully set up a Subversion server and client. A modern client-server source control system inside 30 minutes -- not bad at all. As usual, this is only intended as the gentlest of introductions; I encourage you to check out the excellent Subversion documentation for more depth.

I find Subversion to be an excellent, modern source control system. Any minor deficiencies it has (and there are a few, to be clear) are more than made up by its ubiquity, relative simplicity, and robust community support. In the interests of equal time, however, I should mention that some influential developers -- most notably Linus Torvalds -- hate Subversion and view it as an actual evil. There's an emerging class of distributed revision control that could eventually supersede all the existing centralized source control systems like Subversion, Vault, Team System, and Perforce.

I'm skeptical. I've met precious few developers that really understood the versioning concepts in the simple centralized source control model. I have only the vaguest of hopes that these developers will be able to wrap their brains around the vastly more complicated and powerful model of distributed source control. It took fifteen years for centralized source control usage to become mainstream, so a little patience is always advisable.


Mousing Surface Theory

This post, and its comments, were updated in 2015 to reflect current choices and opinions.

Hi there. I want to talk to you about ducts.*

Sorry, when I said ducts, I meant mousepads.

As I have a long-standing mouse fetish, you might not be surprised to learn that I also fetishize the humble mousepad. It's all perfectly healthy. Really.

Let's start with the obvious: do you even need a mousepad? It's a fair question. Are you using a traditional mouse? Maybe you're using a trackball, trackpad, trackpoint, or something else with the word "track" in it. If so, then thanks for reading this far. Come back for my next post.

For the rest of us using standard computer mice, consider the following questions:

  1. Is your mousing surface uneven?
  2. Does your mousing surface have an inconsistent texture?
  3. Does your mousing surface interfere with the optical LED or laser sensors in modern mice?
  4. Are you concerned that your present mousing surface will be damaged or marred from extended mousing?
  5. Do you struggle to find enough room to move your mouse?

If you answered "yes" to any of these questions, you should probably have a mousepad. The average desktop often does not provide a consistent mousing surface; a well-designed mousepad does. That is its purpose – to stake out a consistent, reliable, and durable mousing surface on your desktop.

Believe me, I'd love to be a minimalist and go without any kind of mousepad, but I always end up needing one. Here at home, for example, I started wearing a permanent mark into my beloved Ikea Jerker desk with all my mousing. I've also found that extended mousing leaves behind an unpleasant-but-cleanable residue, and I'd rather clean the mousepad than my desk.

Now that we've established the need for a mousing surface, it's time to decide exactly what you want:

  • a wrist rest?
  • raised and thick or low-profile and thin?
  • smooth or textured?
  • metal, glass, cloth, or plastic?
  • small, medium, large, or obscenely large in size?
  • square, rectangular, circular, or some other shape?

And that's before we get into issues of color and style. If you consider the above questions, you can narrow it down substantially. I do have two general recommendations, however.

I'm a big fan of the Razer Vespula. If you're OK with a relatively large mousepad, this 10.2" x 12.6" model is one of my favorites. It's built on a low-profile hard plastic base to resist bending, with soft rubber feet on all sides, as well as a thin rubber mat you can place underneath so there's no slippage.

Razer Vespula

It's also reversible: one side is "speed" (smooth), the other "control" (textured). And it bundles an optional wrist rest sized to nestle perfectly against the bottom of the pad.

I bought my Vespula way back in 2011 and it's still going strong. Many years later, I believe the general idea of a reversible, double sided mousepad with a thin metal or hard plastic core is close to the best of all worlds. With that in mind, I can also recommend the Corsair MM600 and the Perixx DX-5000XL. They are all big, though. Imagine an iPad or something a bit larger sitting next to your keyboard.

If you're looking for something more basic, I can also recommend XTrac, specifically their hard surface mousepads. They're very thin, rubber backed, and come in a variety of sizes. At one point I had one literally glued to my desk with removable spray adhesive.

xtrac mousepads

They're all thin, but the Logic "skin" models are super thin. Extreme thinness can make the XTrac a natural extension of your desk.

XTrac mouse surface thickness comparison

I don't recommend the cloth / fabric branch of the XTrac family tree – or any cloth mousing surface, for that matter.

I am not proposing either of the above as the final mousing surface solution, but I have used both extensively. There are plenty of other great choices; I've heard people say very nice things about the unusual circular WOW!PAD, for example.

I could also talk about how I regularly lubricate my mouse's feet and mousepads, but then I'd worry that people might think I've gone too far with my mouse fetish.

XTrac Mad Wax

Oops.

It's my hope that after reading this, you'll be able to tell a well-designed, quality mousing surface from those cheap, floppy, disintegrating fabric things that are mousepads in name only.

* Do your ducts seem old-fashioned? Out of date? Central Services' new duct designs are now available in hundreds of different colors to suit your individual tastes. Hurry now, while stocks last, to your nearest Central Services showroom. Designer colors to suit your demanding tastes.


UI-First Software Development

We're currently in the midst of building the new web property I alluded to in a previous post. Before I write a single line of code, I want to have a pretty clear idea of what the user interface will look like first. I'm in complete agreement with Rick Schaut here:

When you're working on end-user software, and it doesn't matter if you're working on a web app, adding a feature to an existing application, or working on a plug-in for some other application, you need to design the UI first.

This is hard for a couple of reasons. The first is that most programmers, particularly those who've been trained through University-level computer science courses, learned how to program by first writing code that was intended to be run via the command line. As a consequence, we learned how to implement efficient algorithms for common computer science problems, but we never learned how to design a good UI.

Of course, UI is hard for developers -- far harder than coding. It's tempting to skip the tough part and do what comes naturally -- start banging away in a code window with no real thought given to how the user will interact with the features you're building.

Remember, to the end user, the interface is the application. Doesn't it make sense to think about that before firing up the compiler?

It's certainly true that there are limitations on how the UI can be built based on the technology you're using. Just because some pixels can be arranged a certain way in Photoshop doesn't mean that can magically be turned into a compiling, shippable product in any sane timeframe. To ameliorate that problem, take advantage of visual design patterns. If you're building a GUI application, use a palette of widgets common to your GUI. If you're building a web application, use a palette of HTML, CSS, and DOM elements from all over the web. Let the palette enforce your technology constraints.

It shouldn't be difficult to sit down with a few basic tools and slap together a rough mockup of how the user interface will look. However, it is extremely important at this point to stay out of technical development environments when mocking your user interface, or the temptation to turn the model into the product may be too strong for your team to resist. Try to avoid the prototype pitfall.

So how do we prototype the UI without relying on our development tools? One way is simple paper prototyping.

paper prototype

The book Paper Prototyping: The Fast and Easy Way to Design and Refine User Interfaces is an excellent introduction to paper prototyping. You can interactively browse sections of this book at Amazon, through Google Books, and the book's own dedicated web site.

There's a certain timelessness to paper prototyping that holds a deep appeal, as Jakob Nielsen points out:

Paper prototyping has a second benefit, besides its impact on your current design project's quality. It will also benefit your career. Consider all the other books you've read about computers, Web design, and similar topics. How much of what you learned will still be useful in ten years? In twenty years? In the immortal words of my old boss, Scott McNealy, technology has the shelf life of a banana.

In contrast, the paper prototyping technique has a shelf life closer to that of, say, paper. Once you've learned paper prototyping, you can use it in every project you do for the rest of your career. I have no idea what user interface technologies will be popular in twenty years, but I do know that I'll have to subject those designs to usability evaluation, and that paper prototyping will be a valuable technique for running early studies.

Paper prototypes are usually pitched in terms of doing low-fi usability studies, and rightly so. But I find a paper prototype tremendously helpful even if I'm the only one that ever sees it. I need to create an image in my mind of what I'm building, as it will be seen by the world, before I start pouring the concrete to make it real.

If you need any more convincing that paper prototyping is an incredibly valuable tool-- even for mere developers-- consider the advice of Jared Spool's company, User Interface Engineering.

I also recommend reading through Common Concerns about Paper Prototyping if you're still on the fence.

But what happens when you outgrow paper prototyping? Jensen Harris, one of the principal UI designers on the Office 2007 team, first introduced me to PowerPoint prototyping:

We use PowerPoint as kind of a better version of [Office 2007] paper prototypes. This technique has several advantages: prototypes can be made to feel somewhat interactive, because the content is electronic it can be modified more easily than paper, and (best of all) the usability participant uses the mouse and is on the computer, so it feels natural to them.

Of course, it doesn't have to be PowerPoint. Use whatever tool you like, as long as it's not a development tool. You don't want something too powerful. What you want is mild interactivity while remaining simple and straightforward for quick iterative changes. That's the logical next step up from paper prototyping.

PowerPoint prototype example

It's a lot easier to share this digital artifact on a distributed team than it is to share a bunch of physical paper. If you're curious about the nuts and bolts of PowerPoint prototyping, it's worth digging into further.

The pursuit of UI-First software development is more important than any particular tool. Use paper, use PowerPoint, use Keynote, use whatever makes sense to you. As long as you avoid, in the words of Manuel Clement, pouring concrete too early.

How does your team practice UI-First software development?


Core War: Two Programs Enter, One Program Leaves

Our old pal A. K. Dewdney first introduced the world to Core War in a series of Scientific American articles starting in 1984. (Full page scans of the articles, including the illustrations, are also available.)

Core War was inspired by a story I heard some years ago about a mischievous programmer at a large corporate research laboratory I shall designate X. The programmer wrote an assembly-language program called Creeper that would duplicate itself every time it was run. It could also spread from one computer to another in the network of the X corporation. The program had no function other than to perpetuate itself. Before long there were so many copies of Creeper that more useful programs and data were being crowded out. The growing infestation was not brought under control until someone thought of fighting fire with fire. A second self-duplicating program called Reaper was written. Its purpose was to destroy copies of Creeper until it could find no more and then to destroy itself. Reaper did its job, and things were soon back to normal at the X lab.

(The story of Creeper and Reaper seems to be based on a compounding of two actual programs. One program was a computer game called Darwin, invented by M. Douglas McIlroy of AT&T Bell Laboratories. The other was called Worm and was written by John F. Shoch of the Xerox Palo Alto Research Center. Both programs are some years old, allowing ample time for rumors to blossom.)

Core War, surprisingly, is still around. The current hub appears to be at corewar.co.uk. You can download simulators for a variety of operating systems there. Here's how a Core War battle works:

Core War has four main components: a memory array of 8,000 addresses, the assembly language Redcode, an executive program called MARS (an acronym for Memory Array Redcode Simulator) and the set of contending battle programs. Two battle programs are entered into the memory array at randomly chosen positions; neither program knows where the other one is. MARS executes the programs in a simple version of time-sharing, a technique for allocating the resources of a computer among numerous users. The two programs take turns: a single instruction of the first program is executed, then a single instruction of the second, and so on.

What a battle program does during the execution cycles allotted to it is entirely up to the programmer. The aim, of course, is to destroy the other program by ruining its instructions. A defensive strategy is also possible: a program might undertake to repair any damage it has received or to move out of the way when it comes under attack. The battle ends when MARS comes to an instruction in one of the programs that cannot be executed. The program with the faulty instruction -- which presumably is a casualty of war -- is declared the loser.

Let's see it in action using one of the simulators. What you're watching here is a round-robin tournament between the Imp [yellow], Mice [blue], Midget [white], and Piper [green] programs.

Core War, animated

The winner is Piper [green], with 2 wins, 0 losses, and 1 tie.

These programs are written in an assembly-like dialect known as Redcode. Here's the source code for Midget:

;redcode
;name Midget
;author Chip Wendell
;strategy stone (bomber)
;history Third place at the 1986 ICWS tournament
Bomb    dat     #0,      #-980   ; the bomb: data, fatal for any process that executes it
Spacer  equ     28               ; distance between successive bombs
Start   mov     Bomb,    @Bomb   ; copy the bomb to wherever Bomb's B-field points
        sub     #Spacer, Bomb    ; step the pointer back by Spacer addresses
        jmp     Start,   #0      ; and loop forever
        end     Start

The Redcode instruction set is deliberately simple. There are two variants, ICWS-88 with 10 instructions and 4 addressing modes, and ICWS-94 with 19 instructions and 8 addressing modes.

DAT  data                 DJN  decrement and jump if not zero
MOV  move / copy          SPL  split
ADD  add                  CMP  compare
SUB  subtract             SEQ  skip if equal
MUL  multiply             SNE  skip if not equal
DIV  divide               SLT  skip if lower than
MOD  modulus              LDP  load from private space
JMP  jump                 STP  save to private space
JMZ  jump if zero         NOP  no operation
JMN  jump if not zero
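
To get a feel for just how small a battle program can be, here's the most famous example from Dewdney's original articles: the Imp, a single instruction that endlessly copies itself into the next address, marching through the core one cell at a time.

;redcode
;name Imp
;author A. K. Dewdney
mov 0, 1        ; copy this instruction (address 0, itself) to the next address (1)
end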

It's structured so that there is no "killer app"; three broad strategies are possible, each with its own strengths and weaknesses.

  1. Paper or Replicator

    Try to fill the core with copies of your program, so you are harder to kill.

  2. Rock or Bomber

    Attack by writing illegal instructions throughout the core-- but not on your own program's memory.

  3. Scissors or Scanner

    Attempt to identify enemy programs lurking in the core, then target writes to eliminate them.

Of course, combinations of the above strategies are possible as well. As you might imagine after 25 years of battlefield evolution, some modern Core War programs are quite baroque by now.

It's not particularly useful, but it is a programming game, after all. It's also a fascinating bit of computer science history. If you're interested in participating in the venerable sport of Core War, it's still very much alive and kicking. The top 10 links for Core War newbies is a great place to get started.


Let That Be a Lesson To You, Son: Never Upgrade.

(Update: This piece originally ran on April Fools' day; although the content of the post is not an April Fools' joke, the retro styling definitely was. View a screenshot of how this post looked on April 1, 2008)

I occasionally follow Jamie Zawinski's blog. Jamie's an interesting guy. In the process of researching an earlier post, I discovered that he played a significant role in unearthing the classic Worse is Better paper:

About a year later [1991] we hired a young kid from Pittsburgh named Jamie Zawinski. He was not much more than 20 years old and came highly recommended by Scott Fahlman. We called him "The Kid." He was a lot of fun to have around: not a bad hacker and definitely in a demographic we didn't have much of at Lucid. He wanted to find out about the people at the company, particularly me since I had been the one to take a risk on him, including moving him to the West Coast. His way of finding out was to look through my computer directories - none of them were protected. He found the EuroPAL paper, and found the part about worse is better. He connected these ideas to those of Richard Stallman, whom I knew fairly well since I had been a spokesman for the League for Programming Freedom for a number of years. JWZ excerpted the worse-is-better sections and sent them to his friends at CMU, who sent them to their friends at Bell Labs, who sent them to their friends everywhere.

Or, perhaps you've read the classic Teach Yourself Programming in Ten Years? That was written by Peter Norvig, who is now the director of research at Google. It refers to Mr. Zawinski thusly:

One of the best programmers I ever hired had only a High School degree; he's produced a lot of great software, has his own news group, and made enough in stock options to buy his own nightclub.

I think you'll agree that it's fair to call Jamie Zawinski a world class software engineer. Jamie's blog documents, in great detail, how he runs his DNA Lounge club in San Francisco. It's a great read, full of fascinating, often geeky backstage details. The DNA Lounge is powered by open source software, including various flavors of Linux. Sometimes this can be painful. In 2006, Jamie ran into serious problems with the Linux sound architecture:

You may have noticed that the audio archives have only had one channel for the last few weeks. You would probably assume that's a simple matter of replacing a cable; turns out, not. As far as we can tell, the audio going into the computer is stereo, and somewhere in there, it drops (most of) the right channel. So, bad connector, right? No, we've tried four different sound cards, and checked the mixer settings. At this point it seems like the last time we (accidentally) upgraded ALSA, it introduced some software bug that is making one channel go away. I can't even fathom how such a bug could exist, but that's Linux for you.

We seem to have solved the "missing right channel" problem. It was, in fact, a software problem. We were running Fedora 4, and when we installed the latest patches on March 31, that's when the right channel vanished. We tried downgrading to the version of the kernel and ALSA as of three months ago, and that didn't fix it. But, Jonathan took all the sound cards home and tried them in his machine, and they all worked fine there. He was running Fedora 5. So we upgraded to that, and the problem went away.

That's right: upgrading to the latest FC4: breaks the world. Giving up on FC4 and going to FC5: un-breaks it. Nicely done, guys.

For years I've had it drummed into my head that you always have to keep your systems patched, if you aren't running the latest security fixes, the script kiddies will eat you alive, running a six month old OS is like leaving your front door wide open, blah blah blah. Well you know what? F**k that noise. I'm done upgrading anything ever. The next time I get this s**t into a state that seems even remotely stable, I'm never touching it again. If we get hacked, oh well. I have backups. It has got to be less work to recover from than constantly dealing with this kind of nonsense.

The DNA Lounge provides streaming audio and video webcasts of whatever is going on any time the club is open. So problems like this are especially troubling -- Jamie's business depends on this stuff working.

I was particularly disturbed to find this recent entry:

I spent a solid four days trying to upgrade the kiosks from Red Hat 9 + LTSP 4.3 (vintage 2003) to... something newer. In this case, Ubuntu 7.10 + LTSP 5, since it seems like that's what the cool kids are running these days. Why would I do such a thing? Well, one reason is that the Firefox 3 beta would neither install nor compile on RH9 (missing libraries), and another was that the kiosks are a little crashy (they reboot themselves pretty regularly for no adequately explored reason), and also, it's "just kinda old", which some people will tell you might mean, maybe, kinda, less secure. So I figured I'd give it a shot.

Well, since this is not my first rodeo, when I say "upgrade" what I really mean is "do a fresh install on a spare drive."

So, after four days of this nonsense, I gave up, and just put the old drive back in. "Nonsense" in this case is defined as: the upgrade made the machines be even crashier than before (they can barely stay up for an hour) and it's a far worse kind of crashy: it's the kind of crashy where you have to press the shiny red button to make them come back to life, instead of them being able to do that themselves.

So, f**k it. They'll be running a 2003 version of Linux forever, because I frankly have better things to do with my time.

I can't fault Jamie's approach. A clean install of an operating system on a new hard drive -- for kiosks running controlled hardware, no less -- that's as good as it gets.

Apparently, Linux is so complex that even a world class software engineer can't always get it to work.

I find it highly disturbing that a software engineer of Jamie's caliber would give up on upgrading software. Jamie lives and breathes Linux. It is his platform of choice. If he throws in the towel on Linux upgrades, then what possible hope do us mere mortals have?
