Interview with Donald Knuth

InformIT has a very good interview with Donald Knuth (by way of Artima).

A couple of things resonated with me:

As to your real question, the idea of immediate compilation and “unit tests” appeals to me only rarely, when I’m feeling my way in a totally unknown environment and need feedback about what works and what doesn’t. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be “mocked up.”

I understand the need for unit tests and I use them all the time; around half the code I have written so far on my current project is test code. But I spend most of my time working on the actual code, thinking through the structure, the flow, and the data, and only a small amount of time on test code. I usually work on the big picture first and then focus on the details. I also spend a lot of time ‘refactoring’ (my definition of ‘refactoring’ is restructuring code in light of things that were not apparent when you started). I rarely jump straight into coding, because more often than not it leads to dead ends. Not that dead ends are a bad thing, but you want to avoid them.
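To make the "half my code is test code" ratio concrete, here is a minimal sketch of the kind of thing I mean: a small function paired with roughly as many lines of test. The function and its name are hypothetical, just an illustration, not code from my actual project.

```python
import unittest

def parse_version(s):
    """Split a dotted version string like '1.4.2' into integer parts,
    so versions compare correctly as tuples ((1, 9, 0) < (1, 10, 0))."""
    return tuple(int(part) for part in s.split("."))

class TestParseVersion(unittest.TestCase):
    def test_basic_parsing(self):
        self.assertEqual(parse_version("1.4.2"), (1, 4, 2))

    def test_numeric_not_lexicographic_ordering(self):
        # String comparison would get this wrong: "1.9.0" > "1.10.0".
        self.assertLess(parse_version("1.9.0"), parse_version("1.10.0"))

if __name__ == "__main__":
    unittest.main()
```

The test code here is about the same size as the code under test, which matches my experience: the tests earn their keep, but they are still the smaller share of the thinking.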

Another thing I do is spend a lot of time not coding. On average I code about six hours a day; the rest of the time is spent thinking and reading other people’s code.

I also must confess to a strong bias against the fashion for reusable code. To me, “re-editable code” is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace.

I am not completely sure what to make of this. I think the tension here is between the generic and the specific. You can build generic code which will work for lots of applications, but that code will never be optimized for any specific application; or you can build very specific code which will be highly optimized for a single application. I like the idea of reusable code mainly because I am lazy and want the best return on my investment. On the other hand, I have been known to spend days on very small chunks of code to make them run as fast as possible.
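Sorting makes the generic-versus-specific trade-off concrete. A generic comparison sort like Python's built-in `sorted` works on anything comparable in O(n log n); a counting sort is faster for one narrow case but useless outside it. A minimal sketch, assuming non-negative integers in a known small range:

```python
def counting_sort(values, max_value):
    """Specialized sort: assumes every value is an int in [0, max_value].

    Runs in O(n + max_value), beating generic O(n log n) comparison
    sorts for this narrow class of inputs, at the cost of working
    for nothing else (no floats, no strings, no negatives).
    """
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result

data = [3, 1, 4, 1, 5, 2, 0]
print(counting_sort(data, 5))  # same ordering sorted(data) would give
```

The generic tool is the lazy, high-return choice almost everywhere; the specialized one is what those days spent on a small hot chunk of code tend to produce.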

Perhaps this falls in with the tenet that there is no such thing as portable code, only code that is ported.

I don’t want to duck your question entirely. I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they’re trying to pass the blame for the future demise of Moore’s Law to the software writers by giving us machines that work faster only on a few key benchmarks! I won’t be surprised at all if the whole multithreading idea turns out to be a flop, worse than the “Titanium” approach that was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write.

And this I took issue with. The reality of chip design is that we are hitting some very real limits on speed, size, power usage, and heat dissipation, and I think multicore chips are a very good way to keep speeding up processing within those limits. While it is true that most applications don’t parallelize well, and that writing parallel code is hard, multicore chips work very well for multiprocessing, which is what most operating systems do these days.
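The "most applications don't parallelize well" point is usually quantified with Amdahl's law: if a fraction p of a program's work can be parallelized across n cores, the best possible speedup is 1 / ((1 - p) + p / n). A small sketch (the parallel fractions are illustrative, not measurements):

```python
def amdahl_speedup(p, n):
    """Upper bound on speedup when a fraction p of the work
    parallelizes perfectly across n cores (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a mostly-parallel single program gains little from many cores,
# which is why multicore pays off more for multiprocessing (many
# independent programs) than for parallelizing any one of them.
for p in (0.5, 0.9, 0.99):
    print(f"p={p}: 4 cores -> {amdahl_speedup(p, 4):.2f}x, "
          f"16 cores -> {amdahl_speedup(p, 16):.2f}x")
```

A program that is 90% parallelizable tops out at a 6.4x speedup on 16 cores, and its serial fraction caps it at 10x no matter how many cores you add; independent processes, by contrast, scale with the core count.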

