Ross Vincent and the Thureon Armarac
Bill Detwiler has nothing to disclose. He doesn't hold investments in the technology companies he covers.
Bill Detwiler is Managing Editor of Tech Pro Research and the host of Cracking Open, CNET and TechRepublic's popular online show. He was most recently Managing Editor of TechRepublic Pro. Prior to joining TechRepublic in 2000, Bill was an IT manager, database administrator, and desktop support specialist in the social research and energy industries. He holds bachelor's and master's degrees from the University of Louisville, where he has also lectured on computer crime and crime prevention.
I've been in the recording studio industry for 30 years, where cabling has always been a nightmare, but with some simple modifications our racks are now extremely efficient, quiet, and always well cooled. Since all recording equipment is 19" rackmount, and those cases sell for a fraction of the cost of "server" cases, you can add cable separators from any electrical supply house plus fans (or liquid cooling) and run a 12-space server unit for around $1,000 CDN. Very clean, cool, insulated, and small: about 30 x 24 x 20.
Granted, 1U/2U devices are lighter than 4U/5U ones, but they can still be heavy. An HP DL380 with redundant power and a full complement of disks can weigh over 50 kg. How does the server mount into the vertiblade so that an engineer doesn't need muscles like Rambo?
I don't know about you, but it seems a little strange to draw hot air and dust in from the top of the cabinet and push it out of the bottom. Is this to heat the floor and keep the engineer standing at the KVM warm, or was the demo model upside down?
Is it just me, or wouldn't it make much more sense to have the intake at the bottom and the exhaust at the top? Since the servers' heat will naturally rise, why fight it with this design? It seems counterintuitive to me.
The locks at the front of the cabinet are wafer locks with the blind key code stamped on the face of the cylinder. Each lock is held in place with two Torx security screws; I have two of those screwdrivers in my desk drawer. The locks for the clamshells are wafer-tumbler drawer locks. There are only 243 possible combinations for these locks, and lots of keys will fit the keyways. This cabinet should use high-security cylinders, especially considering the asking price. It's not good physical security when you can defeat a lock with a couple of paper clips. Nice-looking cabinet otherwise.
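For what it's worth, that 243 figure works out to a tiny keyspace. A quick sketch of the arithmetic, assuming five wafer positions with three possible depths each (my reading of the number, not a published spec for these locks):

```python
# Keyspace of a simple wafer-tumbler lock: depths raised to the number
# of wafer positions. Assumed geometry (not from any Thureon spec):
# 5 wafers, 3 depths each.
positions = 5
depths = 3
keyspace = depths ** positions
print(keyspace)  # 243
```

High-security cylinders increase both the number of positions and the depths per position, and often add a sidebar or secondary locking element, which is how their keyspaces run into the millions rather than the hundreds.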
Let's see: they put the intake fans on the top and the exhaust fans on the bottom. Wouldn't it make more sense the other way around? They are pushing against the natural convection flow; I would think they'd get better cooling by going with the flow.
I would *LOVE* to have a home version of this, but it would need adjustable internal wire shelving instead of racks, because my network and audio/video gear at home is not rackmount. The price point would also need to be somewhat lower for the consumer market. Such a home version might be a *GREAT* place to put assorted network and entertainment electronics.
If the server equipment were inverted to draw cool air in at the bottom, access to the "front" of the servers would be more difficult for the engineer, e.g. changing CDs or using USB ports. We've designed this unit to be an enjoyable experience to install, configure, and maintain, and I'm afraid getting down on my knees to look at the indicator lights doesn't add kindly to the end-user experience!
There is a reason why: http://techrepublic.com.com/5208-6230-0.html?forumID=102&threadID=225050&messageID=2250487
In my opinion, here's why they put the intake on top and the exhaust on the bottom. If this is for an industrial environment, chances are hygiene may not be top-notch, or there may be other things in the area besides computer parts; at a factory with dirty, dusty, grimy floors, a bottom intake would collect much more dust. While that may be a bit extreme, the same happens in an office. Another thing to note is the placement of servers in a building; server rooms are not always the best-kept rooms in the world. At any rate, the top intake will keep more dust out of the system, but will make the servers run hotter. However, it's customizable, right? Maybe you could choose the intake and exhaust arrangement before ordering.
The pics in the gallery were base units. From what I gather on the company website at http://www.thureon.com, these are highly configurable and customizable units. They even have an option for air-conditioned cooling, so I'm sure the locks can be upgraded beyond the base spec as well, if the price is right.
I don't quite understand why someone would pay $10k for some sheet metal and molded plastic. We bought a used 9-foot fully enclosed rolling rack off Craigslist for $250. Cars cost $10k.
The Armarac has a threefold value proposition:
1. The Armarac takes up 4 sq ft of floor space; a normal half-height rack requires a room of 100 sq ft to secure.
2. A managed service provider can use the Armarac as a secure demarcation point for their equipment on a customer site.
3. A small business owner will typically never pay for a "server room" but still needs to keep his assets physically secure.
The hinged racks and integrated keyboard and monitor in the Thureon are nice, but I'm not sure it offers a lot of benefits over much cheaper wall-mount cabinets. CPI makes a nice 6U wall-mount cabinet ("ThinLine II") for under $400. We use them for networking equipment (UPS and switch) in remote buildings, but they could also hold a 1U or 2U server.
If you don't have a closet in a small office and need security it really may make sense. Costs for construction/remodel, cooling, rack, etc... it all adds up fast.
I too thought it was backwards. But I was informed by the HVAC people that most people are unaware that cold air sinks. If the cold air is being output from ceiling vents then it needs to be sucked into the units when it is coolest. They said that ideally, it would be best for the cold air from the ceiling to sink in front of *closed* racks, SUCKED THROUGH the units and then the rising heat to be vented to the return ducts *behind* the rack. However, this requires much preplanning by HVAC personnel familiar with the cooling requirements of high-density electronic components. A good server room needs to literally be designed from the ground (electrical grounding rods and ESD protection) -- up (eliminating heat build-up at the ceiling).
This is one of our most common questions: "Why is the equipment upside down? Doesn't heat rise?" The answer is simply engineer-friendly usability. It's too awkward to change CDs, use USB, or see the indicator lights if the servers are inverted. Because the forced-ventilation fans both draw air in at the top and draw it out at the bottom (in conjunction with the equipment's own fans), we easily overcome the inherent thermal-rise characteristic.
Amazing - I had the same thought as I read this - Good example of book learning (engineering degree) without good real world experience and logical thinking.
Since my area of expertise is not engineering, I figured there might be factors I was unaware of as far as the airflow in this enclosure goes. I called a friend who is an HVAC engineer to pick his brain a bit. He agrees that the ventilation system for this enclosure seems counterproductive. When he designs server rooms, the intake is at the bottom and the exhaust is at the top. Fans really get the air moving, but a chimney effect (convection sans fans) is also effective. Standard blade server chassis do exhaust to the rear of the unit, though, so from that perspective the air is moving in the same direction. I'm continuing research; I'll keep you posted. http://www.gartnerwebdev.com
Yeah, it does seem like it would make more sense to move air the other way, but as someone said, servers almost always pull air in through the front, which here faces upward. I don't see it being much trouble to switch all the fans in the rack and servers to blow in the other direction, though; just flip them over. That's probably what I would do. ;-)
I had exactly the same thought at first. But I think they did it that way because most servers pull air in through the front and blow it out the back. When the servers are mounted vertically, the air is pushed downward. Mounting the servers upside down would make it tricky to view the status LEDs, insert DVDs, etc.
I agree, but I believe they chose to do this because of the way the servers sit in the enclosure. You wouldn't want the intake fans of the enclosure pushing cool air onto the exhaust fans of the servers or whatever other equipment is in there.
Your comment makes sense: not only would airflow improve, but it would draw the cooler air from the floor. Who needs a heat pump! Nice unit, though.
Wouldn't that make more sense in an industrial environment, though? Yes, you're pushing against natural convection; however, you're not sucking up as much dust and debris.
OK, you answered my first question; one pic says there is an active cooling option. Also, server-room cooling is often hot/cold aisle, where cool air comes from the floor, but these units are likely to be deployed not in server rooms but in wiring closets, where A/C, if present, is likely a ceiling vent. My questions:
1) Is there an interface to your environmental monitoring (or anyone else's) so you could get alerts on overheating, power outage, etc.?
2) Can you display temperature, power status, and other faults on the front?
3) Maybe I missed it, but is there a mini-UPS available for this rack? It would be nice if there were a built-in spot for one; server rooms won't always need one if they have a larger off-rack UPS, but this unit is less likely to be in a server room.
4) Can it be wired for a dual power path to the gear?
Go to Home Depot and buy yourself a wall cabinet. Then put a cooling intake vent on the bottom and a 120 mm vent on top for the fan, and get some fancy screen for the intake. The exhaust doesn't matter, since it's on top, 84" off the floor. I did this with a floor cabinet at my desk and there are no overheating issues. You will have to turn your computer sideways, since the wall cabinet is only 12" deep. Go about 21" wide so you only have one door; go wider if you have more equipment, and/or put some of it on the shelf above the PC server. You could build it yourself, as I did, if you enjoy woodworking. Suggested dimensions: 21" x 36" x 12" if bought, or deeper if hand built.
Well, they are sort of the same... but then a home server is sort of the same as what these are designed to protect...
Well, I'm not sure they would want to work on a home version. I believe this product is targeted more at businesses than home users. Servers and switches are expensive in comparison to this wall mount; what I'm saying is that a company can more easily afford this cost than a home user. To be honest, even if there were a home version, I think you would be wasting a lot of money on something not suitable for home users.
I work in a very dusty environment, and the dust I see is never greatest on the bottom of anything. Everything gets circulated into the air, and it all settles on the top of the closest object (in this case, the server).
I find this is an ongoing failing in the IT business. Physical security hardware isn't properly addressed. Locks have robust written standards, too. UL437 is a high security lock, cylinder and restricted key standard that's been around for 40 years. No lock is impenetrable, but there is reasonably priced product out there that makes the bad guys work harder. Going to a lock with keys that can't be cut by the corner store clerk represents a decent improvement in security. A pick and bump resistant cylinder represents a huge increase in security. Spending $250 on buying and installing UL-listed cylinders and keys for an $8K unit that houses critical business equipment is a small investment.
Not sure how a half-height rack with a locking cabinet would take up 100 sq ft of floor space. It seems more like 8 sq ft (2 ft x 4 ft), and then you have 20U to work with. To me, the main benefits of the Armarac over less expensive cabinets are:
- The built-in KVM switch and console, which is worth close to $2,000. (I don't know why they're so expensive, but it seems they are.)
- Hanging the servers on a hinge makes them easier to access.
- Separate access to a tape drive (it's not totally clear how that works, but it's a good idea).
It's probably possible to accomplish most of those things with cheaper components, but it wouldn't be as clean and sleek as the Armarac. If you need the features and have the money, it might be worth it.
I checked out that CPI unit. Pretty cool site; it lets you configure a unit and produce drawings for the install. Not nearly as "Star Trekky" as the Thureon, though!
That would explain why every other rack in the industry blows the hot air out the top and back, and draws the sunk cool air from the front and bottom. Yes, cool air sinks, and if hot air is rising it will push the cool air aside like a bubble in water. Draw the cool air in from the bottom and blow the hot air out the top. These units -are- designed backwards. Period.
Except for all that warm air coming out the bottom that immediately turns around, floats up, and gets sucked into the top of the box again... Are there lab tests of the internal temperature of one of these with a full complement of equipment under load?
Of course, you could mount the servers with intake side down and then the airflow would go bottom to top. I don't look at LED lights much on the servers, mostly monitor them through KVM. If you need to see the LEDs, use a mirror.
As sean.esquinaldo@... indicated above, it is because of the way servers and switches are designed. Direct from Dale Holland, one of the folks at Thureon: "...The reason that the intake is from the top and exhaust from the bottom is that the servers and switches are also mounted top(front of server) to bottom(back of server). Servers require their air to come in from the front and their internal fans blow the hot air out the back of the server." So, it's official now - no more need to speculate :) http://www.gartnerwebdev.com
Actually right now I have most of my network gear in a wicker desk, which works fairly well. It's the kind of open wicker with the strands in a loose basketweave pattern so there are diamond-shaped openings maybe two centimeters wide by one centimeter high. Not only do the openings in the wicker allow ventilation, but also with a little wiggling I can pass most types of cords through the openings in the wicker. Not quite as elegant as a purpose-built cabinet with forced-air ventilation and filtering, but it gets the job done for now.
A home version of this would be nice. Currently it seems the best home server rack is a two-post rack with short-depth servers. This product would be great if it offered adjustable fan speeds to lower the sound output and included RJ45-only network connections. I think with products like Windows Home Server coming out later this year, the home rack-mount industry will finally realize there is a market for home users.
The key to our design is forced ventilation through the entire Armarac; with banks of fans at the top and bottom working with the equipment's own internal fans, we easily overcome the nominal thermal-rise characteristic. However, in a dusty or harsh environment we recommend the filtration kit, which adds a particle filter under the top vent panel and swaps the top fans for a higher-volume model. This combination creates positive air pressure within the Armarac so that no dust or grit can enter the enclosure without being caught by the filters. With this kit we recommend the environmental management control unit, which reports on top and bottom temperature (and humidity) as well as the pressure drop, to alert the engineer when it's time to clean the filter (the current model has SNMP and a web interface).
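The filter-service alert described above could be as simple as two threshold checks. A minimal sketch of that logic; the function name, thresholds, and units are my own assumptions, not Thureon's actual firmware or SNMP interface:

```python
# Hypothetical filter-service check for a top-intake enclosure:
# alert when the pressure drop across the particle filter climbs
# (filter clogging) or the top-to-bottom temperature rise grows
# (airflow falling off). All thresholds are illustrative only.

def needs_filter_service(pressure_drop_pa: float,
                         temp_top_c: float,
                         temp_bottom_c: float,
                         max_drop_pa: float = 50.0,
                         max_rise_c: float = 10.0) -> bool:
    """True if either the filter pressure drop or the intake-to-exhaust
    temperature rise (top intake, bottom exhaust) exceeds its threshold."""
    thermal_rise = temp_bottom_c - temp_top_c
    return pressure_drop_pa > max_drop_pa or thermal_rise > max_rise_c

# A clogging filter trips the alert even with a modest thermal rise:
print(needs_filter_service(62.0, temp_top_c=22.0, temp_bottom_c=28.0))  # True
```

In practice, a controller like the one described would publish the same readings over SNMP, so an external monitoring system could apply whatever thresholds the site prefers.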
The Armarac model on show at Interop was the base model with "standard" exterior locks. We have a "lockless" option, which requires the remote management controller unit and controls access to all doors internally using solenoids and SNMP. Armaracs are configurable for almost any environment (for example, we're currently developing our "extreme" outdoor edition).
...my bad for not spelling that out in my post. What I meant was: in general, the intake and exhaust areas of a data center need to be physically isolated. With proper hot/cold isolation, the cold intake air naturally sinks all the way down to the floor in front of the servers and is then ingested into them. The hot exhaust air is blown out the back of the servers, rises naturally to the top rear of the rack, and finally exits to the return ducts. Attempting to push cold air UP is like suspending a ping-pong ball in the exhaust of a vacuum: you can only push it so high, then it stops. To push it higher you need a more powerful ($$$) blower. Properly designed, there would be no "bubble in water" effect, as the hot and cold sides of the servers are completely isolated.

Question: without hot/cold isolation, what prevents rising hot air emitted from servers at the bottom of a rack from mixing with the falling cool air (as it will "push the cool air aside") headed for the servers at the top of the rack? I'm sure cold air COULD be force-fed high enough to reach the top of an almost 7-foot rack using additional and/or more powerful blowers, but how much more noise (acoustical and electrical) would that introduce into the data center? You have to draw an engineering design line somewhere. Working with natural convection is far easier (and cheaper in the long run, when properly designed) than fighting the natural laws of thermodynamics. Mother nature always wins; if you can't fight 'em, join 'em.

I'm looking at this from a brand-new data center point of view. I know that when retrofitting old data centers you have to work with what you've got and make the best of it. You use the best engineering solutions available, budget allowing. We need to step away from the "that's the way we've always done it" mentality. The days of the old data centers with a single mainframe are gone. Today a single rack can hold up to 47 1U servers.
Stacking 2, 4, 8, or more such racks into the same area once occupied by the old mainframe will need an entirely different cooling solution than was used years ago. A major paradigm shift is necessary. Heat removal from today's data centers requires new engineering solutions, best worked out before the first rack is assembled in the room. Thoughts, comments, put-downs, etc.?
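To put some rough numbers on the "fighting convection" debate running through this thread: the buoyancy (stack-effect) pressure across a rack scales roughly as ΔP ≈ ρ·g·h·ΔT/T. A quick back-of-envelope sketch with illustrative values; these numbers are my own assumptions, not measurements of any unit discussed here:

```python
# Stack-effect pressure across a ~2 m rack with a 10 K hot/cold
# temperature difference: the buoyancy that fans either fight or exploit.
rho = 1.2          # air density near 20 C, kg/m^3
g = 9.81           # gravitational acceleration, m/s^2
height = 2.0       # rack height, m
delta_t = 10.0     # hot-to-cold temperature difference, K
t_ambient = 293.0  # ambient absolute temperature, K

delta_p = rho * g * height * (delta_t / t_ambient)
print(round(delta_p, 2))  # ~0.8 Pa
```

Under 1 Pa of buoyancy is small next to the tens of pascals a typical axial fan can develop, which suggests forced ventilation in either direction can dominate natural convection inside a single enclosure; the aisle-scale hot/cold mixing concerns raised above are a separate question.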
They could have drawn cool air in from the bottom and ducted it up to the top-facing server intakes, then ducted the warm exhaust air from the bottom back up to the top for venting. That would probably add thousands to the cost of this 'affordable' box.
I have a base cabinet in a desk with the intake grille in the door and an open back. A wall cabinet would need the intake in the side and a fan in the top. So you don't need pictures; you need blueprints from CAD. Besides, I don't think TechRepublic has anywhere to upload them anyway.
But it's so pretty..! I wonder if they've got plans to make a sealed / NEMA / positive pressure type unit. I can see them making big gains in the oil and gas industry where I'm at.