Hey everyone, some of you know me from the Freebuild section of the forums. I haven't been terribly active around here, but I thought you guys might find this interesting (and it also doubles as an excuse! haha).

Quote:
For the past 7-8 months I have been working on a project to build the first (and, as far as I know, only) portable laptop cluster computer (more about cluster computers here: https://en.wikipedia.org/wiki/Computer_cluster) out of 16 laptops which were donated to the SUNY Geneseo Distributed Systems Lab by NYISO (where I worked over the summer). This week we finished the project and got to display it at SUNY Geneseo's "GREAT Day", a day dedicated to the display of student research and projects. I grew prouder of my project throughout the day as more and more students and faculty inquired about and investigated the wall. The president of the college also took interest in it, and even brought it up during his GREAT Day closing speech. I wish to share what I've accomplished with all of you.

The cluster is designed to coordinate all of its computers in the accomplishment of a task. The cluster has 32 CPUs, and all of them can be orchestrated to act in unison. This means that if you were to run some program on this cluster it would complete 32 times faster than it would on a single machine with a single CPU. This offers vast possibilities in data processing, and some of our other clusters are already doing this type of thing for several departments at Geneseo. Overall the cluster has an estimated 205 gigaflops of computing power.

As well, the cluster has a video wall aspect to it. These screens are attached to one laptop in the cluster, which allows us to display a single image across the whole video wall, or we can split up the screens however we please and display one thing on half of the screens, and another on the other half, etc.

There are several advantages to my design over the standard cluster computer design. First and foremost is the portability of the entire package. It's designed to be (relatively) lightweight and small, and it rolls on wheels for maximum portability; the whole cluster can be moved by one person with ease. A standard cluster (not to mention one with a video wall attached) will never be moved once it's installed. This allows us to bring the cluster to whoever wants to use it, rather than them coming to us. It also lets us make the best use of the video wall, because the wall can be put in a public place to attract interest to the user's project.
Secondly is the (again, relatively) environmentally friendly nature of the cluster. Whereas a desktop machine or server (which is typically what clusters are made from) has a 500 watt (or more) power supply, each laptop has a 90 watt power supply. That is 8000 watts for a standard 16-node cluster versus 1440 watts for this cluster, a significant decrease. The power usage is so low, in fact, that we need only one power plug in a standard wall outlet to power the entire cluster, including peripheral devices (network switch, etc.).
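(For reference, the power comparison above works out like this; a minimal sketch using the post's nameplate power-supply ratings, not measured draw:)

```python
# Back-of-envelope power comparison using the figures from the post above.
# These are nameplate power-supply ratings, not measured consumption.
DESKTOP_PSU_W = 500   # assumed per-node desktop/server power supply
LAPTOP_PSU_W = 90     # per-laptop power supply
NODES = 16

desktop_total_w = DESKTOP_PSU_W * NODES
laptop_total_w = LAPTOP_PSU_W * NODES

print(desktop_total_w, laptop_total_w)                           # 8000 1440
print(f"reduction: {1 - laptop_total_w / desktop_total_w:.0%}")  # reduction: 82%
```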
The laptop cluster is also extremely light on cooling requirements. Normally a cluster room requires some sort of high-volume air conditioning, as well as technicians to care for such systems. With this laptop cluster, so long as it is not in an air-tight closet, it will probably not generate enough heat to warrant air conditioning. When we measured the temperature around the cluster, it was only about 4 degrees above room temperature.

I'm very excited about this cluster and its future; we already have the SUNY Geneseo library asking to use it for displays in the library lobby, and some art professors here have expressed interest in using the cluster to display art in a gallery on campus.

I've attached some photos of the cluster from GREAT Day and one of it in the lab, next to our other video wall cluster. I've also attached the poster I had printed for GREAT Day. I have also been accepted to, and will be taking the poster to, the Consortium for Computing Sciences in Colleges - Northeast conference on the weekend of April 27th.

Thank you for reading, let me know if you have any questions, and please feel free to forward this to anyone you think might be interested.


Here are the pics:

[photo attachments]

What software packages are you using for task-management/scheduling across the cluster?
That's pretty cool!
This looks like it was a fun project to work on; thanks for sharing it with us. I work in distributed systems myself, so I have a few comments and questions about it; please excuse any implied criticism.

1) You make server vs. laptop arguments in terms of power usage; if I'm reading correctly, you calculate that a 32-core server cluster would use 8kW, versus 1.44kW for your 32-core laptop cluster? I'd say that 300W-500W might be a reasonable estimate for a server PSU, but you need to keep in mind that this isn't per-core power. For example, I work with a cluster of twenty machines with one to two quad-core CPUs each. In fact, our most powerful servers are 1/2U servers with two quad-core hyperthreaded CPUs each, for 16 logical cores each, or 32 logical cores (roughly 16 × 1.2 cores' worth of work) in a 1U space, powered by somewhere between 800W and 1kW of power supply. I think that's something to keep in mind as far as the power comparison goes.

2) Have you tried doing a GFlop per kW comparison?
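(For what it's worth, a rough version of that comparison can be sketched from the numbers already in the thread: 205 GFLOPS estimated, and 16 × 90W laptop power supplies. Both are estimates, not benchmarked or metered figures.)

```python
# Rough GFLOPS-per-kW estimate for the laptop cluster, using the
# thread's own numbers (estimates, not benchmarked measurements).
est_gflops = 205.0          # estimated throughput from the first post
est_kw = 16 * 90 / 1000.0   # 1.44 kW of nameplate laptop PSUs

print(f"{est_gflops / est_kw:.0f} GFLOPS/kW")  # 142 GFLOPS/kW
```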

3) Out of curiosity, why did you choose to power all the screens from one laptop instead of having each one handle a screen? I would think you could limit extra required hardware that way.

Again, great job!
Elf: We don't really have much in the way of task management or scheduling for the cluster. We had one, but it broke with recent updates. One of my next projects is to play with Rocks and other cluster management tools.

comicIDIOT: Thanks!

Kerm: 1) I make a desktop vs laptop argument, since we mostly use custom built desktops. Clearly servers use less power than either of those.
Of course the power isn't dependent on cores. I calculated 500W (for a desktop machine) × 16 nodes = 8000W. Obviously this isn't a scientific number in any way. Sadly none of our machines have two CPUs.
2) Not at all, I have not benchmarked the actual GFLOPS throughput or the true kW usage, only done some rough calculations. I'm currently trying to find the equipment for a kW measurement, and a benchmark is at the top of my to-do list.
3) Each screen is attached to a laptop, not sure if that didn't come across correctly or what. Having one laptop handle all the screens would have been silly, and would have defeated the entire point of the video wall.
CyberPrime wrote:
Of course the power isn't dependent on cores. I calculated 500W (for a desktop machine) × 16 nodes = 8000W. Obviously this isn't a scientific number in any way. Sadly none of our machines have two CPUs.
Fair enough. I believe that your power argument is unfair to server-based clusters, but that will come out in...
Quote:
2) Not at all, I have not benchmarked the actual GFLOPS throughput or the true kW usage, only done some rough calculations. I'm currently trying to find the equipment for a kW measurement, and a benchmark is at the top of my to-do list.
...this.
Quote:
3) Each screen is attached to a laptop, not sure if that didn't come across correctly or what. Having one laptop handle all the screens would have been silly, and would have defeated the entire point of the video wall.
Ah, that makes sense, and I agree. I had been misled by this:
Quote:
As well, the cluster has a video wall aspect to it. These screens are attached to one laptop in the cluster
I guess you were saying that one laptop generates the image to be displayed, but they all work together on the actual displaying.
Quote:
This means that if you were to run some program on this cluster it would complete 32 times faster than it would on a single


Wrong.

The software only runs faster if you modify the source to take advantage of the additional 31 processors, and even then it only speeds up by as much as the fraction of the code you can parallelize.
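That ceiling is exactly what Amdahl's law describes: if a fraction p of a program can be parallelized, the best possible speedup on n processors is 1/((1-p) + p/n). A minimal sketch:

```python
# Amdahl's law: overall speedup is limited by the serial fraction of the code.
def amdahl_speedup(p: float, n: int) -> float:
    """p = parallelizable fraction (0..1), n = number of processors."""
    return 1.0 / ((1.0 - p) + p / n)

print(round(amdahl_speedup(1.0, 32), 1))  # 32.0 -- perfectly parallel code
print(round(amdahl_speedup(0.9, 32), 1))  # 7.8  -- 90% parallel: nowhere near 32x
```

So even a program that is 90% parallel tops out well short of the 32x figure on 32 CPUs.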

Second, why the heck did you do this with laptops? You could have gotten double speed for the same price if you'd gone with desktop parts. (This would also have reduced the amount of parts for the computers to the bare essentials)
Or, just use a set of raspberry pis. Cheap and portable.

But that would take a while.
willrandship wrote:
Or, just use a set of raspberry pis. Cheap and portable.

But that would take a while.


Raspberry Pis would be a really cool thing to do, with about zero practical applications. You have a nice slow processor, costing many times more than an equivalent array of processors would (yes, I know they are cheap, but the much inferior specs offset the cost). There's also the RAM problem: you want the largest amount of RAM you can get, in order to minimize network calls.
Well, I was referring to it with the assumption of having working OpenCL. The GPU runs wonderfully quickly compared to most mobile graphics.
seana11 wrote:
Second, why the heck did you do this with laptops? You could have gotten double speed for the same price if you'd gone with desktop parts. (This would also have reduced the amount of parts for the computers to the bare essentials)
Read the article, Captain Arrogant. The laptops were donated to the project.
I think the video wall is an awesome feature, by the way. How did you hook all the screens to one laptop? VNC? I wouldn't think the video card in one could handle them all.
willrandship wrote:
I think the video wall is an awesome feature, by the way. How did you hook all the screens to one laptop? VNC? I wouldn't think the video card in one could handle them all.


They aren't hooked to one laptop, it appears they're running in tandem via a network switch and probably some program either they wrote or used. One laptop probably serves as the master computer and the other are slaves (input) but it appears each laptop powers it's own screen with it's GPU.
That's pretty cool, I have to say. It was the line "These screens are attached to one laptop in the cluster" that made me think they were attached to one GPU.
comicIDIOT wrote:
willrandship wrote:
I think the video wall is an awesome feature, by the way. How did you hook all the screens to one laptop? VNC? I wouldn't think the video card in one could handle them all.


They aren't hooked to one laptop, it appears they're running in tandem via a network switch and probably some program either they wrote or used. One laptop probably serves as the master computer and the other are slaves (input) but it appears each laptop powers it's own screen with it's GPU.
*its *its. Yes, I said the exact same thing in my post; I think that got overlooked. Smile I was confused by the exact same line as willrandship in the first post.
It makes more sense to do some sort of network-based video control, but I'd imagine it would have terrible latency. Of course, you might have worse latency from one laptop trying to handle everything, especially if it's the master node as well.
  