Tyan's PSC comes packed with forty cores, Infiniband, and much more inside the cube than at first meets the eye.
For those sitting on the edge of their seats waiting for more quad-core mojo, you'll need to wait another month. What could be so important as to divert me from pushing the multi-core cluster envelope? How about a truck pulling up to your house the Friday before Christmas and delivering a new Tyan Personal Supercomputer (PSC, http://www.tyanpsc.com), complete with forty cores, Infiniband, and lots of other goodies? Of course, everything else takes a back seat.
The first thing to do is thank the wife. "Honey, you shouldn't have! I thought we agreed 'No clusters over ten cores for Christmas this year.'" Coming back to my senses, I realized that the good people at Tyan promised me an evaluation version of the PSC. Darn. Eventually, I'll have to send it back. And I'll have to settle for new clothes (again).
What's In The Box?
After I removed the system from its crate, I had the black box shown in Figure One. The PSC measures 21" x 14" x 28" (52.7 cm x 35.6 cm x 70 cm). Although the system is "personal," it does weigh in at 150 pounds. The chassis has wheels and two large handles on the top in case you need to move it. Two people are required to lift the system, however.
There's much more inside this cube than meets the eye. Let's start with the computational parts.
There are five dual socket motherboards (Tyan Tempest S53720) in the chassis. Four compute nodes are vertically aligned to provide dense packing, and one head node is aligned horizontally so that the user can have access to the PCI slots. The design allows the head node to be outfitted with extra hardware, as in many bigger clustered systems. Using quad-core Xeons (5300 series), the PSC can fit up to forty cores and up to 60 GB of RAM (using 12 GB of FBD DDR II 533/667 per motherboard). In addition to the motherboards, there are two Ethernet switches, an optional Infiniband switch, and a KVM console. The entire unit uses three 600 Watt power supplies that can all be run off the same circuit. One of the supplies is dedicated to the head node. In addition, the system is extremely quiet.
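The headline capacity numbers follow directly from the per-board limits just quoted. A quick sanity check of the arithmetic, using only figures from this article:

```python
# Capacity arithmetic for the PSC, based on the figures in this article.
boards = 5             # four compute nodes plus one head node
cores_per_board = 8    # two sockets x quad-core Xeon 5300 series
ram_per_board_gb = 12  # maximum FBD DDR II 533/667 per motherboard

total_cores = boards * cores_per_board
total_ram_gb = boards * ram_per_board_gb
print(total_cores, total_ram_gb)  # 40 cores, 60 GB
```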
The first thing that strikes most people is the clean front panel of the PSC. A close-up is shown in Figure Two. The thing to keep in mind is that the PSC is a head node plus compute nodes. There are two places to connect a monitor: on the left side are video and USB ports for the head node, and on the right side are multiplexed USB and video ports for connecting to the compute nodes. The four switches in the middle indicate whether each node is powered and allow you to toggle the KVM between the nodes.
Under the left-side head node ports, there are vertical power/reset/HDD panels for each node, including the head node. There is also a head node DVD drive under the control panels. To the right are the hard drive bays. There are seven drive bays that can be assigned to any motherboard by simply routing a cable. If each compute node has a drive, then the head node has three remaining bays to use. Using a disk-less compute node configuration (like the Warewulf toolkit), you could give all the bays to the head node, where a nice fat storage array could be built.
At the bottom is a door that opens to two motherboard trays. Each tray holds two motherboards (facing each other). Figure Three shows one of the trays partially removed. Complete removal requires disconnecting all the cables. Each tray has two large (and quiet) fans pushing air through the system. The fans only operate if the motherboard has been powered up. If you look closely, the motherboard trays also have USB connectors in the corners. I assume these are connected to the motherboards and provide additional USB connectivity. (I did not test this capability.)
Hand Me the Screwdriver
If you're a certified cluster geek, like me, you can't be satisfied sitting on the outside looking in. So, after investigating the front panel, what is the next thing to do? Power the system up? Not just yet. It's screwdriver time.
A look inside to see how things are connected is always interesting. Figures Four and Five are right and left side views of the PSC with the side covers removed. The first thing you may notice is lots of cables, which is understandable when you think about how much "connectivity" is in this box. Figure Six is a view looking in from the back. You can see the top level with the drive bays and the head node motherboard. Though not visible, a KVM is tucked above the drive bays. You can also see the head node has "head room" above the motherboard for adding PC cards. The compute nodes and power supplies are under the head node "penthouse."
Figure Five is the left side view looking toward the back of the PSC. Again, you can see the head node motherboard on top, power supplies, and cables. If you look closely at Figure Five, you should see a small shelf above the first power supply and fan. There are actually two shelves that hold two Ethernet switches and an optional Infiniband switch. Access to the switches is through the back panel, shown in Figure Six. The two small horizontal panels can be removed to expose the switches.
Also notice the fans in Figure Six. Recall that there are four front fans in addition to the eight rear fans (including the power supply fans), making a total of twelve fans for the entire PSC. The amount of noise generated by the fans is actually quite small (big fans are quieter than small fans). The designers realized that the PSC was not going to be in a server room, but probably in a lab or office environment where noise matters.
As mentioned, the PSC has quite a bit of connectivity inside the box. There's a KVM network supporting the four compute nodes, and there are two possible networking options, Gigabit Ethernet and Infiniband.
Each motherboard has three Ethernet connections (two Gigabit Ethernet and one Fast Ethernet). The head node uses the Fast Ethernet connection to connect to the local LAN, which can be seen as the bluish-green connector in the upper left corner of Figure Six. The other two on-board Gigabit Ethernet connections are used to create two separate networks. Each network has its own switch and external connection on the back of the PSC (the white connectors shown in Figure Six). Presumably, the two Ethernet networks can be used to separate the messaging network from the administration network (monitoring and NFS). Of course, it may be possible to bond the networks as well.
The other networking option is Infiniband. Though you cannot see it from the pictures, each motherboard has a single port Mellanox Infiniband HCA installed. The PSC also has room for a small, eight-port Infiniband switch above the Ethernet switches. The inclusion of an Infiniband option is important because multi-core now puts much more contention on the interconnect than in the past. Recall each motherboard may have up to eight cores and each core may need to talk to other cores in the PSC. Depending on the application, the Gigabit Ethernet network may become a bottleneck if it must support eight conversations at the same time. In addition, if one chooses the Infiniband option, the second Gigabit Ethernet network is not installed.
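To see why Gigabit Ethernet can become a bottleneck for multi-core nodes, here is an illustrative back-of-the-envelope calculation. It assumes a single 1 Gbit/s link per board shared evenly among eight simultaneously communicating cores; that is a worst-case simplification for this article, not a measurement:

```python
# Worst-case per-core bandwidth share on one Gigabit Ethernet link,
# if all eight cores on a board communicate off-board at once.
link_gbps = 1.0  # one Gigabit Ethernet link per board (per network)
cores = 8        # up to eight cores per motherboard (two quad-core Xeons)

share_mbps = link_gbps * 1000 / cores
print(share_mbps)  # 125.0 Mbit/s per core in this worst case
```

Compared to Fast Ethernet, that share is still respectable, but it is a far cry from what an Infiniband fabric can offer each core.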
At some point, you stop and wonder, "How much does this cube of cores cost?" An important but vague question. The cost can vary depending upon what you put in the box.
The base model PSC includes ten 2.33 GHz dual-core Woodcrest Xeons (20 cores total), 2 GB FBDIMM RAM per motherboard (10 GB total), an 80 GB SATA II HDD per motherboard, KVM, and Gigabit Ethernet networking for $16,500. If you calculate a per-node cost, it's $3,300 per node, which is in the expected price range for a rack-mount cluster. Considering all you need to do is remove the system from its container and plug it into a wall outlet, that's a pretty good price.
The base price can be increased several ways. First, the amount of memory and drive size can be increased up to 8 GB RAM/250 GB HDD per node. The Infiniband option is reasonably priced at $3,650 (or $730 per port). The price can be further increased by $2,345 by adding Windows Compute Cluster Server 2003 (WCCS). There is no pricing on the quad-core (Clovertown) system that I am testing. And to maintain the low power signature, the processor speed cannot be increased for the time being. If you max out your twenty cores (46.6 GHz of aggregate processing power), let me know.
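The per-node price and the aggregate clock figure above are simple arithmetic on numbers from this article, checked here:

```python
# Base-model pricing and aggregate clock, from the figures in this article.
base_price = 16_500  # base model PSC, 20 dual-core Woodcrest cores
nodes = 5            # one head node plus four compute nodes

per_node = base_price / nodes
print(per_node)  # 3300.0 dollars per node

cores = 20
clock_ghz = 2.33
aggregate_ghz = cores * clock_ghz
print(round(aggregate_ghz, 1))  # 46.6 GHz of aggregate clock
```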
Space does not permit me to talk about performance. There are plenty of benchmarks out there for Woodcrest processors, so you can easily get an idea of how the PSC should perform. I will offer some benchmarks in the future as well.
As for software, all of the popular, freely-available Linux cluster packages should work without any problems. Of course, thereâ€™s also the WCCS option, if you choose to go that route. I installed Fedora Core 6 without any issues. You should be aware that some of the newer Intel Ethernet chip-sets, like those found on the PSC, need updated drivers, and thus older kernels may not recognize the Ethernet ports.
Finally, I believe this is just the beginning of things to come. There will certainly be AMD-based systems as well as other options over the next year. I will be coming back to the PSC (and other such systems) real soon as the personal supercomputer market matures. If I start now, I'll have plenty of time to get one on my Christmas list for next year.