Scaling Hadoop

This blog post on the Yahoo! Developer Network about scaling Hadoop is very interesting.

Interesting that they were using only 1.6% of the 14.25 PB capacity of the HDFS cluster. Even more interesting is that they already have 226 dead nodes (and 4,049 live ones), which works out to roughly a 5% attrition rate so far. Whatever "so far" means in this case; I would be interested to know the time period it covers.
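The attrition figure is easy to check from the node counts given in the post:

```python
# Attrition rate from the reported node counts:
# dead nodes as a fraction of all nodes (dead + live).
dead_nodes = 226
live_nodes = 4049

attrition_pct = dead_nodes / (dead_nodes + live_nodes) * 100
print(f"{attrition_pct:.1f}%")  # about 5.3%
```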

In other Hadoop news, the "Hadoop + Python = Happy" project looks interesting. Basically, it is a framework for writing map-reduce programs for Hadoop using Jython (a Python implementation that runs on the JVM, which is how it can call into Hadoop's Java APIs).
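To give a flavour of the programming model such frameworks expose, here is a word-count sketch in plain Python. Note this is not Happy's actual API, just the classic map-reduce pattern (a map phase emitting key/value pairs, a reduce phase aggregating by key) that it wraps around Hadoop:

```python
# Illustrative map-reduce word count in plain Python.
# The function names (map_phase, reduce_phase) are my own,
# not part of Happy or Hadoop.
from collections import defaultdict

def map_phase(lines):
    # Emit a (word, 1) pair for every word in the input.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Sum the values for each key.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

lines = ["Hadoop scales", "Hadoop and Python"]
result = reduce_phase(map_phase(lines))
# result == {"hadoop": 2, "scales": 1, "and": 1, "python": 1}
```

On a real cluster the framework handles the shuffle between the two phases and distributes the work; the programmer only supplies the map and reduce logic.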
