Coding Horror

programming and human factors

The Day Performance Didn't Matter Any More

OSNews published a nine-language performance roundup in early 2004. The results are summarized here:

(times in seconds; lower is better)

               int    long  double   trig    I/O   Total
Visual C++     9.6    18.8     6.4    3.5   10.5    48.8
Visual C#      9.7    23.9    17.7    4.1    9.9    65.3
gcc C          9.8    28.8     9.5   14.9   10.0    73.0
Visual Basic   9.8    23.7    17.7    4.1   30.7    85.9
Visual J#      9.6    23.9    17.5    4.2   35.1    90.4
Java 1.3.1    14.5    29.6    19.0   22.1   12.3    97.6
Java 1.4.2     9.3    20.2     6.5   57.1   10.1   103.1
Python/Psyco  29.7   615.4   100.4   13.1   10.5   769.1
Python       322.4   891.9   405.7   47.1   11.9  1679.0

It's not a very practical benchmark, but it does tell us a few things. It's no surprise that C++ is at the head of the pack. But the others aren't terribly far behind. What I find really interesting, though, is how most of the languages clump together in the middle. There's no significant performance difference between Java and .NET if you throw out the weirdly anomalous trig results.
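For flavor, the whole benchmark is just a handful of tight loops, one per column. Here's roughly the shape of the thing, a minimal sketch of my own in TypeScript rather than the actual OSNews code; the loop counts, the modulus, and the Date.now() harness are all my assumptions:

    // Minimal micro-benchmark sketch: time one tight loop per category.
    // Loop counts and operations are hypothetical, not OSNews's actual code.
    function timeIt(label: string, fn: () => number): void {
      const start = Date.now();
      const result = fn(); // keep the result live so the loop can't be skipped
      console.log(`${label}: ${(Date.now() - start) / 1000}s (checksum ${result})`);
    }

    timeIt("int", () => {
      let x = 0;
      for (let i = 0; i < 100_000_000; i++) x = (x + i) % 9973;
      return x;
    });

    timeIt("trig", () => {
      let x = 0;
      for (let i = 0; i < 10_000_000; i++) x += Math.sin(i);
      return x;
    });

Benchmarks like this measure raw operation throughput and almost nothing else, which is exactly why they're not very practical.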

However, there is one language definitely bringing up the rear – Python. That's because it's an interpreted language. This is explained in Code Complete:

Interpreted languages tend to exact significant performance penalties because they must process each programming-language instruction before creating and executing machine code. In the performance benchmarking I performed for this chapter and chapter 26, I observed these approximate relationships in performance among different languages:
Language      Type of Language  Execution Time Relative to C++
C++           Compiled          1:1
Visual Basic  Compiled          1:1
C#            Compiled          1:1
Java          Byte code         1.5:1
PHP           Interpreted       > 100:1
Python        Interpreted       > 100:1

Clearly, the performance penalty for interpreted languages is extreme. How extreme? If you have to ask, you probably can't afford it.
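What makes an interpreter slow is the dispatch: it re-decodes every instruction each time it executes it, while a compiler pays that cost once, ahead of time. Here's a toy illustration of that overhead, my own sketch in TypeScript and not anything from Code Complete:

    // A toy "interpreted" sum: every iteration builds and dispatches on an
    // instruction object, mimicking an interpreter's per-instruction decode.
    type Instr = { op: "add"; operand: number };

    function interpretedSum(n: number): number {
      let acc = 0;
      for (let i = 0; i < n; i++) {
        const instr: Instr = { op: "add", operand: i }; // decode, every time
        switch (instr.op) {
          case "add":
            acc += instr.operand;
            break;
        }
      }
      return acc;
    }

    // The same computation as straight-line code, with no dispatch at all.
    function directSum(n: number): number {
      let acc = 0;
      for (let i = 0; i < n; i++) acc += i;
      return acc;
    }

Both functions return the same number; the first just does extra bookkeeping on every step. Scale that bookkeeping up to a real language and you get ratios like the ones in McConnell's table.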

One of the biggest stigmas for early Visual Basic developers was that our code wasn't compiled. It was interpreted. Interpreted executables were yet another reason so-called "professional" developers didn't take VB seriously. It was too slow. This finally changed when we got compiled executables in 1997 with VB 5.

The most commonly used interpreted language today, however, is JavaScript. And JavaScript is the very backbone of Web 2.0. How is this feasible if interpreted code is a hundred times slower than compiled code? Consider this ancient 1996 JavaScript benchmark page:

(times in seconds)

           1996    2006   speedup
primes     0.15    0.02        8x
pgap       3.13    0.06       52x
sieve      5.05    0.02      252x
fib(20)    2.15    0.03       72x
tak       10.44    0.08      131x
mb100      8.4     0.2        42x

In ten years, JavaScript performance has improved a hundredfold. But so what, right? Computers get faster every year. Well, our computers are now so fast that – with very few exceptions – we don't care how much interpreted code costs any more.
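Those rows are all classic toy kernels. fib(20), for instance, is just naive doubly-recursive Fibonacci wrapped in a timer. Here's my reconstruction in TypeScript (the 1996 page's actual code isn't reproduced here):

    // Naive doubly-recursive Fibonacci: the traditional fib(20) kernel.
    function fib(n: number): number {
      return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    const start = Date.now();
    const result = fib(20); // 6765
    console.log(`fib(20) = ${result} in ${(Date.now() - start) / 1000}s`);

Run it in a modern browser and it finishes in a few hundredths of a second or less, just as the table shows.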

What many pundits don't realize is that the viability of interpreted JavaScript for mainstream applications is a relatively recent development. Consider this JavaScript benchmark of "ridiculously long algorithms and looping statements". The top three results are all of 2003 vintage:

AMD 1900+ 1.6 GHz 12.25 sec
P4 Mobile 2.2 GHz 15.48 sec
P4 Celeron 1.4 GHz 17.43 sec

The slowest computer I own, a 1.2 GHz Pentium M laptop purchased in 2003, completes this test in 13.64 seconds. The one I'm currently typing on completes it in around 7 seconds. So even in the last three years, we've almost doubled the speed of JavaScript.

I don't expect this trend of doubling performance to continue. I think JavaScript is about as fast as it can get now without some kind of really advanced dynamic compilation scheme. If you browse the results of BenchJS, a more recent JavaScript test suite, I think you'll agree that performance has plateaued. We might reduce a typical run from 6 seconds to 4 seconds over the next two years, but that's minor compared to the 100x speedup we've already had.

Written by Jeff Atwood

Indoor enthusiast. Co-founder of Stack Overflow and Discourse. Disclaimer: I have no idea what I'm talking about. Find me here: https://infosec.exchange/@codinghorror