Remote control

I came across this interesting little rumor about the next versions of iTunes and the iPhone/iPod touch software. It seems that you will be able to use the iPhone or iPod touch as a remote control for iTunes playback.

Also use the new Remote application for iPhone or iPod touch to control iTunes playback from anywhere in your home — a free download from the App Store.

I think this is a great idea, as all my music is stored on my desktop computer but I like to play it back through my AppleTV in my living room (away from my desktop computer). Currently I control that either by streaming from my laptop (not ideal) or by using remote desktop to access my desktop computer from my laptop. Either way I need my laptop, and it would be really nice to use my iPhone instead.

Yes, I could copy my music to my AppleTV, or allow my AppleTV to access my music library remotely, but I have good reasons not to do that, mainly that the syncing controls in iTunes are pretty basic.

What I would really like to see is a better way to enter text on my AppleTV, with a Bluetooth keyboard for example. Using the remote to spell out movie titles really sucks, especially since I can’t rent HD movies from my computer.

Superfluous designable.nib files

Apparently Apple left some superfluous designable.nib files in their apps in Mac OS X 10.5:

While Apple may likely be expanding the use of background file compression to save space in Snow Leopard, today’s Mac OS X Leopard is unnecessarily overweight due to an error Apple made when packaging the system, according to a developer who asked to remain anonymous. Leopard apps all contain superfluous designable.nib files that should have been removed in the Golden Master. “Mail alone has around 1400 of these files, taking up almost 200 MB of disk space,” he noted.

I have to wonder why those files were left there; surely they would have been omitted from the target as part of a production build, but that may be easy to overlook. I also wonder whether they were present in Mac OS X 10.4.x, and why Apple has not deleted them in any of the 10.5 updates we have had to date.

If it is Apple’s intention to create a single code base of Mac OS X for all their devices (I recall that this was announced at WWDC ’08), then it would make sense to slim things down as much as possible so that they can target devices with as little as 4GB of storage (such as one of the original iPhones).

Actually, I went into a secondary Mac OS X 10.5 installation I have and removed all the designable.nib files from the Applications folder with no ill effects. Of course, that Applications folder is ~40GB, so losing ~500MB did not really slim things down that much.
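If you want to see how much space those files take before deleting anything, a directory walk will tell you. Here is a rough sketch using POSIX nftw(); the /Applications path and the report-only behaviour are my own choices, so treat it as illustrative rather than as the exact procedure I used:

/* Report designable.nib files under /Applications and the space they use.
** A sketch only: the remove() call is commented out on purpose.
*/
#define _XOPEN_SOURCE 500
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <ftw.h>

static long long llBytesFound = 0;

static int iVisitFile(const char *pcPath, const struct stat *psStat,
		int iType, struct FTW *psFtwInfo)
{
	const char *pcName = strrchr(pcPath, '/');
	pcName = (pcName != NULL) ? (pcName + 1) : pcPath;
	(void)psFtwInfo;

	if ( (iType == FTW_F) && (strcmp(pcName, "designable.nib") == 0) ) {
		printf("%10lld  %s\n", (long long)psStat->st_size, pcPath);
		llBytesFound += psStat->st_size;
		/* remove(pcPath); */
	}
	return 0;	/* keep walking */
}

int main(void)
{
	if ( nftw("/Applications", iVisitFile, 32, FTW_PHYS) != 0 ) {
		perror("nftw");
		return 1;
	}
	printf("Total: %lld bytes\n", llBytesFound);
	return 0;
}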

Packing bits

I was very interested to read this post by Kristian Nielsen on bit-aligned storage.

He compares the time it takes to fetch numbers from memory and sum them, with the numbers stored in their native format, packed to a fixed number of bits, and packed to a variable number of bits. Interestingly there was little difference between the native format and the fixed-bit packing, which makes sense to me; I have found bit-shifting to be a very cheap operation on modern CPUs.

This was also interesting to me because you usually need to store lots of numbers when you are doing full-text indexing. While I have not found the need to pack numbers in memory, I always do so when storing them to disk or transmitting them over the network.

This has two benefits:

The first is that packed numbers are much quicker to read from disk (and write, but that is less important) than unpacked numbers, especially if the numbers are small (which they will be if you use deltas as much as possible). This is really not new; I ran my first tests on a Centris 610 running A/UX.

The second benefit is that you store the numbers in an architecture-neutral format, because you don’t have to deal with big-endian vs. little-endian differences. While this is less of an issue today, because the i386/AMD64/EM64T platform is pretty much ubiquitous, it was much more of an issue a decade ago.

The storage mechanism is pretty simple: you grab the lower 7 bits, store those, and set the high bit if there are more bits to read. In code:

/* Masks for compressed numbers storage */
#define UTL_NUM_COMPRESSED_CONTINUE_BIT				(0x80)
#define UTL_NUM_COMPRESSED_DATA_MASK				(0x7F)
#define UTL_NUM_COMPRESSED_DATA_BITS				(7)


/* Macro to get the number of bytes occupied by a compressed 32 bit integer.
** The integer to evaluate should be set in uiMacroValue and the number of 
** bytes is placed in uiMacroByteCount
*/
#define UTL_NUM_GET_COMPRESSED_UINT_SIZE(uiMacroValue, uiMacroByteCount) \
	{	\
		if ( uiMacroValue < (1UL << 7) )	\
			uiMacroByteCount = 1;	\
		else if ( uiMacroValue < (1UL << 14) )	\
			uiMacroByteCount = 2;	\
		else if ( uiMacroValue < (1UL << 21) )	\
			uiMacroByteCount = 3;	\
		else if ( uiMacroValue < (1UL << 28) )	\
			uiMacroByteCount = 4;	\
		else	\
			uiMacroByteCount = 5;	\
	}

/* Macro for reading a compressed integer from memory. The integer is 
** read starting at pucMacroPtr and is stored in uiMacroValue; pucMacroPtr
** is left pointing just past the last byte read
*/
#define UTL_NUM_READ_COMPRESSED_UINT(uiMacroValue, pucMacroPtr) \
	{	unsigned char ucMacroByte = '\0';	\
		ASSERT(pucMacroPtr != NULL);	\
		uiMacroValue = 0;	\
		do { 	\
			/* Make room for the next 7 bits and add them in */	\
			uiMacroValue = uiMacroValue << UTL_NUM_COMPRESSED_DATA_BITS;	\
			ucMacroByte = *(pucMacroPtr++);	\
			uiMacroValue += (ucMacroByte & UTL_NUM_COMPRESSED_DATA_MASK); 	\
		}	\
		while ( ucMacroByte & UTL_NUM_COMPRESSED_CONTINUE_BIT );	\
	}



/* Macro for compressing an unsigned integer and writing it to memory.
** The integer in uiMacroValue is written starting at pucMacroPtr, which
** is left pointing just past the last byte written
*/
#define UTL_NUM_WRITE_COMPRESSED_UINT(uiMacroValue, pucMacroPtr) \
	{	unsigned int uiMacroLocalValue = (uiMacroValue);	\
		unsigned int uiMacroByteCount = 0;	\
		unsigned int uiMacroBytesLeft = 0;	\
		unsigned char ucMacroByte = '\0';	\
		ASSERT(pucMacroPtr != NULL);	\
		UTL_NUM_GET_COMPRESSED_UINT_SIZE(uiMacroLocalValue, uiMacroByteCount); \
		for ( uiMacroBytesLeft = uiMacroByteCount; uiMacroBytesLeft > 0; uiMacroBytesLeft-- ) { 	\
			/* Grab the low 7 bits for this byte */	\
			ucMacroByte = (unsigned char)(uiMacroLocalValue & UTL_NUM_COMPRESSED_DATA_MASK);	\
			/* Every byte except the final one in memory gets the continue bit */	\
			if ( uiMacroBytesLeft != uiMacroByteCount )	{ \
				ucMacroByte |= UTL_NUM_COMPRESSED_CONTINUE_BIT;	\
			}	\
			/* Bytes are filled in back to front, least significant bits last */	\
			pucMacroPtr[uiMacroBytesLeft - 1] = ucMacroByte;	\
			uiMacroLocalValue >>= UTL_NUM_COMPRESSED_DATA_BITS;	\
		}	\
		pucMacroPtr += uiMacroByteCount; 	\
		ASSERT(uiMacroLocalValue == 0);	\
	}

I use macros here because they get called a *lot*, and so are faster than functions (even inline ones, I think).
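To make the usage concrete, here is a minimal round-trip sketch (my own, not from the original library): it delta-encodes a small sorted list of document IDs (values picked purely for illustration), writes the deltas with the macros above, and reads them back. The simple ASSERT definition is a stand-in.

#include <stdio.h>
#include <assert.h>

#define ASSERT(x)	assert(x)

/* ... the UTL_NUM_* definitions from above go here ... */

int main(void)
{
	unsigned int puiDocIDs[] = {12, 150, 163, 80000};	/* sorted document IDs */
	unsigned char pucBuffer[32];
	unsigned char *pucWritePtr = pucBuffer;
	unsigned char *pucReadPtr = pucBuffer;
	unsigned int uiLast = 0, uiDelta = 0, uiValue = 0, uiI = 0;

	/* Write the deltas - smaller gaps pack into fewer bytes */
	for ( uiI = 0; uiI < 4; uiI++ ) {
		uiDelta = puiDocIDs[uiI] - uiLast;
		UTL_NUM_WRITE_COMPRESSED_UINT(uiDelta, pucWritePtr);
		uiLast = puiDocIDs[uiI];
	}
	printf("4 integers packed into %ld bytes\n", (long)(pucWritePtr - pucBuffer));

	/* Read the deltas back and reconstruct the document IDs */
	uiLast = 0;
	for ( uiI = 0; uiI < 4; uiI++ ) {
		UTL_NUM_READ_COMPRESSED_UINT(uiValue, pucReadPtr);
		uiLast += uiValue;
		printf("%u\n", uiLast);
	}
	return 0;
}

The four 32-bit values, 16 bytes unpacked, come out at 7 bytes packed here: the deltas of 12, 138, 13 and 79837 take 1, 2, 1 and 3 bytes respectively.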

Turning scripts into applications

I came across a neat little tool for Mac OS X called Platypus, which turns scripts into applications.

From the web site:

Platypus is a developer tool for the Mac OS X operating system. It can be used to create native, flawlessly integrated Mac OS X applications from interpreted scripts such as shell scripts or Perl and Python programs. This is done by wrapping the script in an application bundle directory structure along with an executable binary that runs the script. Platypus thus makes it possible for you to share your scripts and programs with those unfamiliar with the command line interface, without any knowledge of the Mac OS X APIs — a few clicks and you will have your own Mac OS X graphical program. Creating installers, maintenance applications, login items, launchers, automations and droplets is very easy using Platypus.
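To make that concrete, the bundle Platypus produces looks roughly like this (the names here are illustrative; the Contents/MacOS plus Contents/Resources layout is the standard Mac OS X application bundle structure):

MyScript.app/
	Contents/
		Info.plist           bundle metadata (name, identifier, icon)
		MacOS/
			MyScript         the native executable that runs the script
		Resources/
			script           your shell, Perl or Python script
			MyScript.icns    the application icon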

A nice alternative to running scripts off the command line.

Isopods


This fish is interesting, but what is more interesting is the isopod clinging to it just below its eye. It looks like a parasite but in fact it is just hitching a ride, for life.

Juvenile isopods are free-floating, but as they mature they will ‘catch’ a fish, hook themselves into place, and spend the rest of their lives there, feeding off whatever comes their way. Sometimes fish will have one on each side of their heads. Isopods may also mate and spawn while on the fish, but I am not sure about that.

While there is nothing parasitic about this relationship, there is nothing symbiotic about it either; the isopod is just catching a free ride for life.

The interesting thing is that I had not seen them at all before diving this one site in Turks & Caicos, and then I saw quite a few. I caught another fish with one, which you can see here.

Make it go faster!!

I have been enjoying listening to the StackOverflow podcasts by Jeff Atwood and Joel Spolsky. As a developer it is always good to get different (and sometimes controversial) perspectives on things.

Their latest podcast, episode 10, covered a wide range of topics, the last of which was Ruby and its suitability for enterprise use. The show notes on this subject read:

On Ruby performance, scaling, “enterpriseyness” and whether or not this is even the right question to ask. Shouldn’t we be thinking of this in terms of the solution first, and the language as a side-effect of that?

Which I think is right on: you need to look at the task you want to accomplish and choose the right tool for the job. The problem is that some people get ‘wedded’ to a single language and use it for every problem they encounter, winding up with some very good implementations and some very bad ones.

I tend to approach the language choice using a variety of parameters that really boil down to the suitability of the language for the task: how well the language deals with the problem being solved (such as Perl for text processing), scalability and performance, and maintainability.

One comment Jeff Atwood made about scalability was about looking at the number of machines you need to scale up your operation. If you can only run one process per machine, scaling by an order of magnitude will be a lot more painful than if you can run 10 processes per machine, which I think is right on the money. It may be much more cost-effective to spend more time upfront developing your app in a language such as C (or a derivative) than to put something together quickly in Perl or Ruby and have to buy much more hardware later on.

Twitter advice

I know that opinions are a dime a dozen on the internet, and I generally don’t pay any attention to them unless they are informed opinions, and I generally don’t comment on them either. But this post by Michael Arrington on TechCrunch about Twitter bugged me.

Here’s why:

Experts I’ve spoken with say these are reasonable precautions to take, although they question why more slave servers weren’t set up in the past (“it takes ten minutes,” said one anonymous source). But as a Twitter user, I’m glad to see they’re preparing for the surge.

Nothing ever “takes ten minutes”. You can’t just pick a server and bung it into production as a slave server without thinking about what you are going to do with it, and hence what load you are going to place on it. Typically ‘lesser’ machines are co-opted into being slave servers, and people expect them to be able to replicate, keep up with replication, and take a read load. Oh, and not need any admin either. You can “take ten minutes” to do this and have the system bite back at the worst possible time, or you can think about your needs, do the right thing, and have the system work properly for a long time. Your pick.

The smartest thing Twitter could have done would be to hire former Chief Architect Blaine Cook back as a consultant to keep an eye on things for the day (he seems to be the only person that can keep his crazy architecture actually live). But from what we’ve heard that hasn’t happened.

No, the smartest thing Twitter could have done (and most probably did) would have been to make sure that there is redundancy in their engineering team, so that things don’t come to a grinding halt if someone leaves, goes on vacation, is fired, or gets hit by that bus-driving psycho. Having critical knowledge locked up in a single person is bad for that person, bad for their co-workers, bad for the company, and bad for the shareholders.
