On building a small cluster
Treethinkers reader Nick left a comment on one of my earlier posts asking for some details about the cluster that I built for my lab, so this post delivers them. I’ll start by outlining some general information about the cluster, then list the specific parts I used (note that this was two years ago, so good choices would likely be different today), and finish with a couple of general thoughts on building and maintaining your own cluster.
Our cluster is a small machine intended to crunch through moderate numbers of phylogenetic analyses and to serve as a resource for projects where it’s convenient to have more administrative access than you typically get on large shared clusters. It comprises 4 compute machines and a head node. Each compute machine has two 6-core Xeons, 500 GB of storage, and 24 GB of memory. Because these processors are hyper-threaded, each chip with 6 physical cores has 12 threads available, meaning the 4 compute machines have 96 threads in total. I built it using pretty standard commodity parts available from your favorite internet-based vendor. Many of these parts are tailored to the gaming market, which is actually a little annoying…lots of fancy LEDs lighting everything up. I built the head node from a cheap barebones PC that I bought from Newegg. It provides a lot of storage and has plenty of power for compiling, transfers, and other maintenance tasks. This cluster is far from blazing fast, but it’s a good workhorse for us, roughly on par with 4 high-end Mac Pros from a couple of years ago. It’s small enough not to cause any cooling problems, and it can run on a single 20-amp breaker. In short, I built it trying to strike a balance between processing power and difficulty of setup and maintenance.
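If you want to sanity-check the thread math above for your own build, it’s just multiplication:

```shell
# 4 compute nodes, each with 2 six-core Xeons; hyper-threading gives 2 threads per core
nodes=4; cpus_per_node=2; cores_per_cpu=6; threads_per_core=2
echo $(( nodes * cpus_per_node * cores_per_cpu * threads_per_core ))  # prints 96
```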
Head node
- Shuttle SH67H3 Barebones PC
- Intel Core i3-2100 Sandy Bridge 3.1GHz Processor
- Crucial Ballistix Sport 16GB (2 x 8GB) 240-Pin DDR3 SDRAM
- 2TB Hard Drive
- ASUS 24X DVD Burner
- TP-LINK TG-3468 Network Adapter
Compute nodes (the part list below is for a single node; we built four of these)
- EVGA SR-2 Intel HPTX Motherboard (note that this board is HUGE and won’t fit in most normal cases)
- 2 x Intel Xeon E5645 Westmere-EP 2.4GHz Server Processor (I was simply looking for a sweet spot between price and performance in choosing this chip…I’m sure the best choice would be different now)
- 2 x Intel BXRTS2011LD Liquid Coolers
- 12 x G.Skill 2GB 240-Pin DDR3 SDRAM (for 24GB total)
- WD Caviar Blue 500GB 7200RPM Hard Drive
- EVGA GeForce GTX 550 Ti Video Card
- Rosewill Bronze Series RBR1000-M 1000W Power Supply
- ASUS 24X DVD Burner
- Rosewill Blackhawk Gaming Case (the cheapest of the very small number of HPTX cases that I could find at the time)
Networking and storage
- NetGear 8 Port Gigabit Switch
- a bunch of Cat 6 cable for the internal network
- some spare backup power supplies that I had lying around
- Synology DS413 with 4 x 3TB Hard Drives in RAID 5 (for 9TB usable storage)
The cost of each of the above isn’t really relevant anymore, since two years have elapsed since I bought everything. The total cost was well below $10k though, and that should remain true today if one were to design an updated machine.
There’s not much to say about assembly. It’s exactly the same as putting together any other computer, just more of them, plus an extra ethernet card in the head node for the internal network. The biggest challenge was actually just carving out enough time to focus on it without interruption (doable, but not a straightforward task in the life of a starting professor). In the end, I found a quiet Sunday and Monday, cleared my schedule, and worked on it until I had 5 computers and a giant pile of empty boxes and bubble wrap.
We run Rocks and did a standard installation that mostly follows the documentation. Rocks is great in that it makes much of the configuration and administration very simple. The only step that differs much from a typical Linux installation is that you’ll want to enable PXE booting (booting from the network) on each compute node before setting it up. There is a long list of things I would like to tweak about our setup, but they aren’t a priority at the moment.
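For anyone curious what adding nodes looks like in practice, the Rocks workflow is roughly the following (a sketch from memory; check the Rocks documentation for the exact options in your version):

```shell
# On the head node, start node discovery, then PXE-boot each compute node in turn;
# Rocks detects the new machine and kicks off its installation automatically
insert-ethers --appliance compute

# Once the nodes are installed, verify that they're registered and reachable
rocks list host
rocks run host compute "uptime"
```

The nice part is that the compute nodes are installed and configured over the network, so you never have to sit in front of each one with an install disc.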
Building a cluster from parts has some clear tradeoffs. I’m happy that we chose this route in my lab, and it’s worked well for us, but I don’t think it’s always a good choice.
1. Building your own cluster is much, much less expensive than buying, say, a similar number of Mac Pros, or buying a turnkey cluster from a commercial vendor. This is attractive if you’re on a tight budget and just need the machines. That said, compute time is often freely and easily available on other resources (e.g., CIPRES, XSEDE, or local university clusters). In the context of starting up a lab, buying versus building machines doesn’t represent a lot of money either way, and the cost is small relative to all the thermocyclers, centrifuges, and freezers that you might want in a standard molecular biology lab. If your limitations are primarily financial, getting time on an existing cluster is likely the best route.
2. There is nothing about building a cluster that saves time. Problems arise that need to be dealt with, and there is no professional administrator who will help you out. In my experience, this hasn’t been onerous, and most of the problems that arise are small and easily solved. It typically only causes stress when a problem crops up at an inopportune time (e.g., while trying to finish preliminary results before a grant deadline).
3. Those negatives aside, this is a fun thing to do if you enjoy tinkering.
4. Along these lines, it’s also a useful educational experience. After all, any problem that arises is just another learning opportunity. This is probably the single biggest reason that I’m glad we have our own in-house machine. It’s useful for students (and me) to try out new things, test code and pipelines, and generally play around without the worry of causing problems on a larger shared resource.
Back during my undergrad, my department bought a small single capillary sequencer (an ABI 310 for those that remember those days). We had no core facility to run it, and consequently, I was responsible for all of my own sequencing, which meant I learned that machine inside and out: how to set it up, how to tear it down and clean it, how to dilute the BigDye (and how far you could push this), etc. This turned out to be immensely useful later on in graduate school, when I was trying to get some problematic sequencing reactions to work at a core facility. Having that knowledge about what was going on in the sequencing reaction and the sequencer itself helped me to ask the right questions and narrow down the problems quickly. I view having our cluster in the same way. Once you’ve set up, say, SGE yourself, it becomes a bit easier to intuit where things are going wrong with your analyses on larger shared clusters.
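As a concrete example of the sort of thing you pick up by running your own scheduler, an SGE submission script looks roughly like this (the job name, file names, and the parallel environment name are made up here; the parallel environment in particular depends on how your cluster is configured):

```shell
#!/bin/bash
#$ -N mrbayes_run        # job name
#$ -cwd                  # run from the submission directory
#$ -pe mpi 12            # request 12 slots from a parallel environment named "mpi" (hypothetical)
#$ -o mrbayes_run.out    # file for standard output
#$ -e mrbayes_run.err    # file for standard error

# hypothetical MPI run of MrBayes on 12 slots
mpirun -np 12 mb alignment.nex
```

You submit with `qsub` and monitor with `qstat`; once you’ve debugged why a script like this sits in the queue or dies on startup on your own machine, the same failure modes on a shared cluster are much less mysterious.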
5. Would I do it again? Yes, it is an interesting and useful resource to have around. Its value as a learning/training tool for people working in my lab is something that I find particularly compelling (although I suspect my students might say otherwise after they have spent a week fighting with MPI). Even though we have access to much larger shared computing resources now (and have shifted the bulk of our analyses over to them), the cluster still gets a lot of use. It has also served as a nice resource for others in the department who need to do some quick assemblies or analyses.
To sum up, I would basically only recommend that others go this route if they are already interested and want to. There really isn’t any reason that building your own machine is a better choice than finding computation time elsewhere, unless it is something you would find satisfying in its own right.
Our lab also has a mixture of in-house cluster (Mac Pros + HTCondor) and shared computing resources, and I agree with your points above about potential benefits and costs. A couple of other advantages of having some in-house resources that I’ve found: 1) your people are going to have a machine anyway for email, writing, etc. You can get a higher-end one and put it into a cluster; I reserve a few cores for the user and let the others be used for computing. 2) noob protection. Everyone makes mistakes when starting out: I broke my advisor’s cluster, twice, back in the old days when MrBayes would by default ask “Continue run: Y/N?” and repeat the prompt if you didn’t answer. This made the log files of many parallel runs expand to fill all available hard drive space (there were no user space quotas). I now know (usually) not to make mistakes like that, and shared clusters usually have quotas and other safeguards, but there are still ways to run into trouble: a new user could accidentally burn through your lab’s shared HPC runtime quota quickly, for example. A lab cluster can be a safe space to make mistakes while learning.
Thanks for the thoughts, Brian. Setting up Condor is high on the list of things that I would like to do!
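For anyone else eyeing the HTCondor route Brian describes, a minimal submit description file looks something like this (the executable and file names are placeholders):

```
executable   = run_analysis.sh
arguments    = alignment.phy
output       = job.out
error        = job.err
log          = job.log
request_cpus = 4
queue
```

You hand this to `condor_submit` and watch the queue with `condor_q`; the `request_cpus` line is also how you’d implement Brian’s trick of leaving a few cores free for the machine’s primary user.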