7 September 2007, 00:12 by mark hoekstra

happy times! building server number two ^_^

Although my current server is still running very happily (uptime is now 462(!) days), I’ve been collecting parts for a second server ever since I got my current one. Or, better put, the current server (now something like two years ago) was part of a bigger deal that came with lots of server/Xeon parts, but not one complete machine could be built from them… Back then it took me almost three months of trading to build my current server out of it, and I’ve had (spare) parts lying around ever since. Around the same time I got the innards of my current workstation (also a dual Xeon, because I already had the CPUs), and just today I finally got the last parts I needed for my second server, just before everything becomes obsolete! *^_^*


Anyway, getting a second server hasn’t been high on the priority list these last two years. I’ve just been collecting parts I’ve stumbled upon (for the right price).

So, what is it?

The chassis (the last thing I bought) is a Chenbro 21706 with an optional SCSI U320 backplane and a single Fortron 650 W PSU.

I’ve got four Maxtor Atlas 10K V disks, 73GB each, and even though they’re not new as in ‘just rolled out of the factory’, none of these drives has seen service before. All four are from different production dates/batches, which makes them very nice for a RAID array, right? (As a matter of fact, I only recently changed my mind about that: RAID arrays of identical disks from one batch are a baaaad idea ;-)) For that RAID array I’ve got an Intel SRCU42L hardware RAID controller, which I used in my workstation before; there I’ve now got two 15K SCSI disks without RAID (36GB for the OS and 73GB for data).

The motherboard is an Intel SE7501WV2, which is (almost) the same board I’ve got in my current server; the FSB on this one is 533MHz while it’s 400MHz on my current server. There’s 4GB of memory on there and I’ve got two Intel Xeon SL73L 2.4GHz/533FSB CPUs.

...and that’s about it! ^_^


From an upgrade point of view, this server will not be *that* much quicker than my current one (I guess), although disk I/O gets a nice upgrade (from two IDE drives mirrored in softRAID to hardware RAID10 with four drives on SCSI U320). The memory gets doubled (from 2GB to 4GB) and the FSB goes from 400MHz to 533MHz. So it’s already older (but proven) technology; that’s how I like it and that’s what I could get/afford. I love the whole package, even though you could probably build a single Core2Duo system in a 1U enclosure with more oomph if you only look at raw calculations per second. Still, I wouldn’t trade this machine for something like that. Besides that, my current server will remain in service and this one gets stacked on top of it! That’s the biggest upgrade of them all. *^_^*

(not exactly how Moore meant it, but ey! ^_^)
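To put some rough numbers on that disk upgrade: usable capacity and redundancy differ per RAID level. A quick back-of-the-envelope sketch (illustrative Python arithmetic only, nothing that actually runs on either server):

```python
# Back-of-the-envelope RAID arithmetic for the drives mentioned above.

def raid1_capacity(size_gb: int) -> int:
    """Mirror: every disk holds the same data, so usable space is one disk."""
    return size_gb

def raid10_capacity(disks: int, size_gb: int) -> int:
    """Striped mirrors: half the disks' worth of space is usable."""
    return disks // 2 * size_gb

def raid0_capacity(disks: int, size_gb: int) -> int:
    """Pure striping: all space usable, but zero redundancy."""
    return disks * size_gb

# Old server: two IDE drives in a softRAID mirror.
# New server: four 73 GB SCSI disks in hardware RAID10.
print(raid10_capacity(4, 73))  # 146 GB usable, survives one failure per mirror pair
print(raid0_capacity(4, 73))   # 292 GB raw, but one dead disk kills the array
```

So RAID10 gives up half the raw space for redundancy plus the striping speedup, which is where the disk I/O upgrade over the old two-disk mirror comes from.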

It’ll allow me all kinds of funky configurations. I’ll start off by putting this site as the only thing on this machine and from there on I’ll review my current server, maybe put some fresh disks in there and have that whole machine as a hot spare…

Even though my current server runs Gentoo (and probably will for a while), this new server will start off running OpenBSD. (With the amount of t-shirts I’ve got from them, I don’t have any other choice! ^_^)

Ah well, we’ll see how it goes… four Xeons for powering one blog, madness!!! ^_^

(quad core xeon, geek technique style :D)


  1. Han @ 8 September 2007, 17:24 :

    grr I’m not speaking to servers atm!

    Could you fit any more fans in that thing though?! I could do with a few in my desktop actually, it’s practically melting!

  2. Dave @ 9 September 2007, 07:16 :

    Congrats on the fine piece of hardware. I love to hang around on recycle day at my work. So, I think you might have the blower fans on the procs backwards. If I see what I saw in the pics, the proc fans exhaust at the hard drive backplane exhaust, that is, unless you have reversed the blower polarity, or reversed the hard drive backplane polarity. Front-to-back cooling, right?

  3. markie @ 9 September 2007, 12:29 :

    >So, I think you might have the blower fans on the procs backwards.

    Weeeell, I’m not so sure about that. When I first tested the motherboard with CPUs and fans without the case, I had them the other way around and I couldn’t feel any air coming out of those ‘exhausts’. Then I took a closer look and wasn’t at all sure air should be coming out of this. It’s like a rotary pump and the only intended way I can see this should work is taking air in at the exhaust and blowing it on the CPU, just like any other CPU-cooler out there.

    I’ve been googling for an answer but the only thing I can find is this picture

    Anyway, it’s my plan to keep an eye on that, and I even doubt whether I should have fans on the CPUs directly or build some air duct which directs the air coming from those 80mm fans over the heatsinks (sans the blowers).

    We’ll see, this machine will be around my home for a while before I colocate it, so, plenty of time to test all that. ^_^

    (Now if only the manufacturer would’ve been so smart to draw one arrow on that cooler... and oh, I had to knock off those ‘legs’ from the cooler as well to make this fit... my motherboard doesn’t have those big Nocona-like holes in it (big, but not big enough) and the heatsink stayed a couple of mm above the CPU... sigh...)

  4. brainfart @ 9 September 2007, 14:32 :

    Nice in a way… but then again not so nice, from an ecological standpoint.
    How much do you pay for electricity each year? What do you do to make up for all that “wasted” energy? (I’m sure you don’t see it as wasted.)
    Or is that not an issue to you?

  5. markie @ 9 September 2007, 14:45 :

    >from an ecological standpoint.
    >Or is that not an issue to you?

    Sure, that’s an issue, and also something I put my mind to every now and then. This particular machine is going to be colocated, so I’m not so worried about the power it uses at my home. (That’s just honest, right?)

    Green servers are a trend, for sure, but be careful not to exaggerate this issue all of a sudden. This is hardware which was top-of-the-bill two to three years ago; there are datacenters filled to the brim with this kind of technology, and even though it consumes quite some electricity, in my case everybody can see what I do with my gear. If you want to cut electricity, take a look for instance at a regular company, where almost no one is worried about electricity, leaving PCs on while they’re on holiday and that kind of madness (really, I’ve seen people do that because they didn’t know how to have an out-of-office reply without having their PC turned on…).

    Anyway, my workstation is also this ‘old tech’ dual Xeon electricity-guzzling monster, and I’m not making this up, but I’ve resisted making it totally silent, because then I would probably leave it turned on. Now, when I’m done working behind it, I turn it off, because when I’m not working behind my workstation, the noise irritates me…

    So, when will I be turning to green servers and workstations? When the first ones become obsolete, that’s when ^_^

  6. Dave @ 10 September 2007, 23:51 :

    True! I’ve seen servers with both ducts around the procs to channel air through them, and a 2-4 inch raised heat sink depending on the server height. I could only find this cooler: http://www.keenzo.com/pimg/Image/Products/500/CYCLONEBLOWER941.jpg

    Unknown if that’ll help at all :) When I built my own servers back in the day, I bought the blowers from RadioShack, and reversing the polarity made the fan move about the same volume of air.

    For an answer to Brainfart: I’m in a similar boat. I run a surplus Dell 2650 at home that contains several virtual machines (W2K3, XP, FreeBSD, and RHEL 4). True, it isn’t used most of the day, but it has a VM that logs the firewall messages 24/7 and requires a bit of HP to do it. It’s also there for redundant disk storage and remote access. True, it could be built on much “cheaper-to-run” hardware, but hardware that still has the HP to run several VMs and enough disk storage for flat files.

    I tried to do my part and pulled the second PSU (only 500 watts each) and the second proc for less power draw.


  7. Sebastiaan @ 22 November 2007, 20:50 :

    Assuming you’ve already figured it out by now, but 100% of ‘those kinds’ of CPU coolers use the ‘plastic opening’ as the outlet. Besides that, about any blower-type fan uses that principle.
    But why on earth are you using 1U coolers in a 2U chassis that could easily keep the CPUs cooled with regular 2U passive heatsinks? If one of the blowers on the CPUs fails for some reason, I see the system having a hard time keeping it cool, with the regular airflow in the chassis not being directed over the heatsink in any way, and the blower’s plastic part even hampering that effect some more. It’ll throttle the CPU down and keep running fine, but still not an ideal situation, I’d say.

    Besides that, nice server to add to the other one ;).

As mentioned in the Message from Mark's family this site has been made static. This means that it will be no longer possible to comment on his ideas and projects, but that we all can continue to cherish his creativity.

previous: itanium desk

next: fixing some romantic damage