Servers

The server crash that wasn't


The first sign of trouble was the pager going off and the frantic calls to my extension. "Tim, the server is down. Help!" Those are dreaded words that a network administrator never wants to hear. I run to the server room to check things out. I had been working on another server a few minutes earlier, copying some archive e-mail files off to an external USB drive. The application the users were complaining about was on a different server. This just doesn't make sense.

Being the professional that I am, I quickly start at the physical layer. The server is running; I can log in, but it's not seeing the rest of the network. Bad NIC? Bad switch port? Nah...just a loose network cable. This is not supposed to happen. Could vibration have knocked it out? No, I quickly realized that this particular cable was one that I had intended to replace a long time ago. It had a broken connector clip. I must have knocked it loose a few minutes earlier.

Quickly, I plugged it in and made a mental note to replace that cable some evening when most of the users are off-line. For now, they are back up and running. Or are they? The calls continue: "We're still getting the same Btrieve errors." We use an old application that runs on P-SQL (Pervasive Btrieve, for those who remember it from the old Novell days). It hates lost connections and throws up immediately. I'm hoping the database has not become corrupted.

I warn everybody that I'm going to reboot the application server. It doesn't matter much that I warn them, as there is nothing they can do; they have already lost their connections. I can only hope they didn't lose any critical data. A few minutes later all is well. The fire is out. The users are happily working, and nobody remembers that there was a problem. My heart slows down and I get back to other tasks. Disaster averted. No server crash.

Ultimately, the problem was my fault and should not have happened. A loose cable and a nudge while plugging in a USB drive on the server below it set the wheels in motion. I've had worse things happen, like losing two drives out of a RAID array when a fan failed and monitoring was not turned on. How about you? What's your server crash or network failure horror story? I would love to read about it. A simple loss of a DSL line doesn't count; that happens too frequently.

75 comments
Double DeBo

My whole network came crashing down for over 12 hours. About 2 years ago we bought 3 new ProLiant servers, really nice ML370 G4s: redundant PSUs, NICs, and all with 641 RAID controllers. I set everything up as RAID 5 with a 200GB tape backup. I thought that nothing could go wrong; boy, was I wrong.

About 3 months after I got the servers all up and running, I came in at 8am and got bombarded with phone calls that no one could connect to the network and the main file shares were unavailable. I checked all the network connections; good to go there. Then I happened to catch a small yellow light out of the corner of my eye. Oh great, a hard drive was out on the master controller, and wouldn't you know it, that was also the DHCP and DNS server. Lucky for me I had also promoted one of the other ProLiants to a DC; all I had to do was grab the master roles from the MC and off I go. Great... NOT.

To make a long story short, once I got the new HDD I restarted for the RAID to rebuild, and things just got worse from there. While the RAID was going bad, it had also corrupted the registry hive for the admin account. I ended up having to redo the server from scratch, but without the benefit of the HP SmartStart software; no biggie for me, I've done this before, I thought. The worst thing of all: our nightly backup ran every night, and we had restored user files here and there, so we assumed it was all good. WRONG. Unknown to us, every night when we were backing up files and the system state, the system state was corrupt. Come to find out the hard drive had been going out for months, and we were never alerted by the monitoring software that we had a hardware failure until it was too late.

Well, after 36 hours at work everything was finally back up and running correctly with "NO" data loss, just a lot of hours by other employees. Too bad I'm salary; I would have made some great overtime from that one. Oh well, life goes on.

banx

A week into my new job I was in the server room with an old hand about to retire, who decided to pull out an old router that was sitting there doing nothing. Immediately upon holding the device up, the whole network crashed! Our boss rushed in, all hands and swear words, the phones were all ringing, and this old network guy is standing there with a small router and two cables, looking sheepish and stating, "But it wasn't connected." The boss just shouted, "Put it back!" On further investigation, it transpired that when removing the old piece of kit he had stood on the 4-way adapter switch powering the network routers, thus turning them off. Lesson learned: never leave a 4-way power adapter lying on the floor where someone can stand on the off button.

Ray Collazo

My 60-year-old Dad loves to come by and visit me in my server room. Most of the time he sits there quietly on the other side of my desk, in the chair next to the server console, and doesn't do a thing. (I can already hear it now from my colleagues: NO VISITORS IN THE SERVER ROOM! NOT EVEN YOUR PARENTS!) At the time, we were using those older-style Belkin UPSes with the rocker switches on them, one of which happened to be sitting right on the table where my Dad was sitting. This particular one supplied backup power to the company's main 10/100 switch and the DSL line.

I had to leave for a bit, my Dad just waiting around, my subordinate working away on his stuff. I was out in the office, doing some work on a printer I believe, when suddenly people started complaining about their computers coming up with database connection errors. At first I thought it was only one or two computers, and then suddenly everyone in the office was wondering what was going on: they couldn't connect to the servers, nor could they get out to the Internet. I immediately started working my way back to my office. My Dad was standing outside my door looking rather sheepish, and my subordinate quickly filled me in that he had turned off the battery backup. I asked him, in as controlled a voice as I could manage, what in the world he was doing, and he admitted to having been curious about what that big red light was for, which he "tapped with his finger" and it turned off. I shepherded him out the door with a "Dad, I think you better be going" just as the flood of calls started hitting my phone. Since then, all of our battery backups have been replaced with ones that have momentary shutoff switches.

lferreira

Well, my story is just a little bad. I work at an IT company, and one day a server belonging to one of our clients crashed. Not such a big deal... I live in a country with an average temperature of 25°C and a huge amount of dust, and the server had been running for 4 years. Probably dust (I hoped). Oh boy... I was wrong. The RAID 5 holding the data of dozens of companies owned by the holding was not working.

After a day spent trying to figure out why the hell it was not working, I got the strange idea of swapping the PSU. Well, there was good news: the Linux RAID 5 was now giving different errors. After several hours, another idea: let's boot a live Linux CD and try to configure the RAID 5. After putting in the DVD drive and the live Linux disc, the system would not start (I started to think some specific piece of hardware was not allowing the live Linux to boot). So I removed the DVD drive and started the system again. More bad news... it wasn't booting at all. Seeing that, I grabbed the disks and assembled them in a test server that we have. The new system didn't start either. I was wondering how bad disks could still have been working in the old server. So I started checking every disk individually, and eventually I found the bad one. No problem, RAID 5 will still boot. After putting the 3 good disks in the new server, I started assembling the RAID 5. Great... it didn't work. mdadm wouldn't start up the array.

A couple of hours later I got the idea of checking whether the 3 "good" drives were actually in good condition. So, HDD Regenerator on all 3 drives... One week later, I had the final results: 2 of the 3 drives had a total of ~60k bad sectors (obviously, meanwhile I was making images of the resulting drives). After that, I assembled the 3 images, ran mdadm, and voilà: the data was there (almost everything, but the important stuff was there).

So, in the end, the problem was: 1 bad PSU, 1 bad motherboard, 1 bad disk, and 2 disks in imminent failure. Short story: they bought a new server, and we got to love Linux for running in those conditions.
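
For anyone staring at the same kind of mess, that last step (bringing the array up from images of the surviving members rather than from the flaky disks themselves) can be sketched roughly as below. The image paths, device names, and even the choice of losetup plus mdadm --assemble are illustrative assumptions on my part, not the exact commands from the recovery above:

```python
#!/usr/bin/env python3
"""Rough sketch only: assemble a degraded Linux software RAID 5 from disk images.
All paths and array details are hypothetical."""
import subprocess

IMAGES = ["/recovery/disk1.img", "/recovery/disk2.img", "/recovery/disk3.img"]  # hypothetical

def run(cmd):
    """Run a command, echo it, and return its stdout."""
    print("+", " ".join(cmd))
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return result.stdout.strip()

# Attach each image file to a loop device so mdadm can treat it as a disk.
loop_devices = [run(["losetup", "--find", "--show", img]) for img in IMAGES]

# Assemble the degraded array from the loop devices; --run starts it even
# though one member of the original set is missing.
run(["mdadm", "--assemble", "--run", "/dev/md0"] + loop_devices)

# Mount read-only and copy the data off before doing anything else.
run(["mount", "-o", "ro", "/dev/md0", "/mnt/recovered"])
print("Array is up; copy your data out of /mnt/recovered now.")
```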

fulton.dr

I work for the Canadian Air Force, managing multiple servers, some with a low security classification, others with a much higher classification. I've had to deal with several incidents where a classified document was placed onto a lower-classified system. When this occurs, the lower-classified system and all of its backup media assume the classification of the document. The newly classified server then has to be taken offline and secured, along with the tapes going back to the date when the document was mistakenly placed on the server. Now that the server is secured, the users need to verify and copy each piece of data to CD. They then copy the verified data to a new server that has been built to take the place of the old one. The procedure takes several weeks; luckily, from a tech perspective the involvement is minor, and it's a good way of getting approval to buy new hardware.

iain.sloan

Many years ago, the company I then worked for had an old IBM server running OS/2. One day our esteemed IT Manager (my boss at the time) decided he was going to clear out some files from a folder on the drive. He sat down and proceeded to type "del *.* /s" and hit Enter. The words that then left my mouth will remain unrecorded, as the command had been typed at the root of the drive and we were in the process of watching our data disappear. After he lifted his head from his hands and the shocked expression had left his face, he asked if our backup had worked the previous night. Not the way I like to test our backups. Good times... good times :P

joem

Two weeks earlier I had migrated our data onto a new server, which is our DC, AD, and data server. I took off to Canada a week later for a fishing trip, and the day I got back I went into work and changed the backup tape. The next morning, 15 minutes after I got in, the server was down. Nothing when I pushed the power button. I did some quick troubleshooting and soon noticed that one of the capacitors on the motherboard was leaking :( I called our local warranty repair center, and thank God they had a motherboard for that exact model on hand :) They rushed the 65 miles to our location and by noon we were up and going again. I learned to stick around a little longer after a migration like that.

benp

I am an admin at a little machine shop that does over $90M a year and growing; we have our troubles but nothing too bad lately. I also take care of a small doctor's office that I used to work for. The owner called up and said their software wasn't working. I asked her a few questions to get an idea of the problem: she could print and surf the web, so the server was running, but she could not get to any mapped drives. So I remoted into the server and found that the D: drive had disappeared. I was puzzled; no drive D: in My Computer or Disk Management, and Disk Management was showing 98GB free on disk 0.

Using the Dell OpenManage console I ran a "resync" on the RAID 5, then a "rescan", and nothing. The resync took 6 hours to complete, so it was after 5:00pm; I was done with my real job and went back to work on it. I called Dell; they had me reboot and check the RAID controller (it was fine), and we checked recent Windows updates (nothing there to blame). Dell basically said sorry, it's gone, we don't know why or how to get it back. Gee thanks... how much was that support contract on this server??? Looking at the tape backup, we were going to lose about 2.5 weeks of stuff if I resorted to that.

I told the owner what Dell had to say, but I was not giving up so easily, so I said I'd go home, get my CDs, and come back. I came back and ran Active Disk Partition Recovery (3.0 I think); within 15 seconds it found the partition, so I ran the preview and everything was there. I exited the program, saving the MFT table, rebooted, and all was well again. I don't really know what happened to the D: drive, but I hope it never does that again.

rjcirtwell

My horror story (a.k.a. "should have known better") happened about 4 months after starting work at my first network admin job. The small bank I work for had its own Exchange server that was running on PC-class hardware (a Dell GX260). Then came the great Northeast blackout. I had checked to make sure I was getting backups, but I had never tested the restore. Sure enough, the hard drive died on the PC running Exchange 5.5. My first thought was, "Now they will find out that I don't know what I'm doing!" I allocated hardware, server-class this time and connected to a UPS, fired up the backup software, and attempted the restore. When the restore didn't work I called support, thinking I hadn't followed a step. The support person walked me through it and said, "That was supposed to work." About 6 hours later we discovered a flaw in the restore process that required the programmers to rewrite stuff. It took about 4 days and about 15 attempts, but everything came back. I still have this job (love it) and I don't complain (much) when I do restores just as a test.

TheSwabbie

Yup - been there, done it, bought the T-shirt, ain't goin' back. I don't think there's a network admin out there who hasn't done this at least ONE TIME in their career.

cmyers

How about two fiber cuts and a "LOW LIGHT" on the fiber that brought down my entire WAN for about 8 hours EACH time, all in the span of 1 WEEK!! So much for the new, improved and "reliable" backbone our ISP promised us when we migrated a few months earlier. The kicker is that I was at training in another state, so all I could do was watch the drama unfold on my BlackBerry. (Come to think of it, that might have been a stroke of luck!!) Ugh. That was in September, and I am still dealing with the ramifications...

ricbek

One Saturday our accounts receivable person called me at home to tell me that our server was down. We have an SBS 2003 box with about 20 users. I tried to VPN in but couldn't, so I drove in to work to find out what was going on. When I got there I found that the server really was down and would only boot up about half the time; when it did come up, it would go back down again fairly quickly. I took the server home with me, and while I was trying to boot it up I noticed that the TV and the stereo were both suffering from severe static. It turned out that the power supply had gone bad and was backfeeding into my electrical system. I replaced the power supply with a better one and have not had any problems since. If I hadn't taken the server home, it might have taken considerably longer to diagnose the problem.

reisen55

One fine day in 1999, as a system administrator, I came off the elevator and was greeted by an early-morning employee who told me that the server had crashed. I ran into my office, threw my stuff down, ran up 2 flights of stairs, ran into the server room... and everything was fine. The employee had forgotten his password, f'crissake. I told him NEVER to do that to me again.

However... the BEST server crash story of them all: at MicroAge of Mahwah, NJ, we received a frantic call from Avon in Suffern that the network was 'gone' - not down, literally gone. So up we go. Novell 3.11 at that point in time. The server had nothing on it at all. Clean drive. An employee had done the damage. The sysadmin left to get coffee and left his console, with ADMIN rights, up and running. While he was out, the employee walked past his office, saw the terminal, and just had to delete some folders on his drive. He ran DELTREE on the root drive of the Novell server. DELTREE did its job perfectly well. All over Avon, 250 terminals went quiet.

kroser

While checking whether a replaced tape drive was working, I decided to write the /etc/hosts file to the tape and then restore it. The write seemed to work OK. As I started restoring the file I realized that this is not a file to be messed with (the hosts file was pretty extensive on this box), so I hit Ctrl-C, rewound the tape, wrote a temporary file from /tmp, and restored that OK. Soon afterwards, the sh@t hit the fan and all kinds of things started breaking. Printers did not work, terminals could not connect. What I didn't realize was that by the time I hit Ctrl-C, the hosts file had already started being re-created and now had no content. A mad rush to restore from the previous day's backup followed. All in all, about 20 minutes of downtime, but it almost cost me my job.

JohnOfStony

At 19:00 I received a phone call from a friend, Peter, asking how I was managing with no power. I replied that the power was on, but I noted that he lives very close to where I work, so I phoned the technical director to warn him that trouble might lie ahead, depending on how well the UPSs held up during the power outage. When the power came back on, nearly 4 hours later, all the computers at work had died due to UPS batteries running flat. I got a phone call explaining that one of the servers (called "Main" for valid reasons) had permanently died. Our business is a 24/7 operation, and while there is a quiet period during the evening, things start happening at around 01:30. The permanent death of "Main" was discovered at around 23:30, and there were three of us to figure out what to do. We found a PC that was doing very little, installed all the vital Main software and jobs onto it, not forgetting the backed-up databases etc., renamed it "Main", and finished getting everything up and running with less than 5 minutes to spare before scheduled tasks started at 01:30. In the morning, no one would have guessed at the solid efforts of the three of us during those 2 frantic hours to ensure that nothing was lost. That's the closest I've ever come to a major disaster.

n.lloydshrimpton

Only a few years ago the IT Manager and network admin installed a new UPS, plugged it in, and all was working fine. Only a few days later I came into work and there was a flurry of activity (some of these guys had not moved that fast in years). All power to the server room was out and additional cables were being run to get some critical systems back up and running. As it turned out, the UPS had been plugged into 15-amp wall sockets but was drawing up to 25+ amps. All that was left of the wall socket was a pile of molten plastic. Very lucky; it could have been a lot worse.

Trog Martian

After 3 business DSL providers failed to provide decent uptime, I finally convinced management it was time for a quality T1. Two weeks after an uneventful T1 installation, I started receiving reports that some people could not retrieve their (hosted) e-mail. Yet it was only SOME people. After much investigation, which included manually telnetting to port 110, I realized that everyone having a problem had a file attachment in their next e-mail. Hmmm. Using a test account, I sent myself various attachments. JPGs came in fine. As did Word docs, MP3s and PDFs. But as soon as the POP3 client tried to retrieve that Excel spreadsheet... Pow. No go.

To make a long story short, after pulling out hairs, hex-editing files and scrutinizing Base64 encoding, I came to realize that Excel spreadsheet headers convert to large repeating sequences of characters. My guess was this resulted in some signaling problem on the T1 circuit that defied all normal bit-pattern testing. EVERYTHING worked fine except for the Base64 encoding of Excel spreadsheets. 0.o After MUCH arguing with the telco, who could find NO problems, they finally sent me a grizzled old tech (white beard and all) who actually sat down and listened to the pattern of symptoms I described. He ran some local low-level tests on the circuit, identified the problem, and switched us to an entirely different 25-pair trunk from the street to the smart jack. Problem solved. That is one of the wilder ones I've had to diagnose.
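
For what it's worth, the repeating-pattern effect is easy to demonstrate: old binary .xls files contain long runs of identical padding bytes, and Base64 turns any byte pattern that repeats on a 3-byte boundary into an exactly repeating character sequence. A minimal Python sketch (the sample bytes below are made up for illustration, not taken from the actual spreadsheets):

```python
import base64

# A long run of identical bytes (like the zero padding inside a binary .xls
# header) encodes to a long run of a single Base64 character.
print(base64.b64encode(b"\x00" * 30).decode())          # 40 'A' characters in a row

# Any 3-byte pattern repeated N times encodes to the same 4-character group N times.
print(base64.b64encode(b"\xff\x00\xff" * 10).decode())  # "/wD/" repeated 10 times
```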

nickfreno

This wasn't a server crash, but we had a computer sending out so many packets at once that it was killing our Internet connection. It would happen at random times throughout the day. We set up a proxy firewall and another Internet connection to work around the problem, and finally found the rogue computer on our network almost three months later.

jeffw

We are a mid-sized police department with 1 full-time and 2 part-time IT positions. Last year our IT admin heard a strange noise coming from the server room. Upon entering, he found a water pipe had burst and was spraying water on our PDC, e-mail server, and 2 data servers. After powering them down, the next step was to stop the leak. It took about an hour to track the pipe and stop the water, and about 3 hours to clean up. All the servers had water standing in them, as did our tape backup unit, which had about a gallon in it. After contacting our insurance company, 4 new servers were ordered. We built the new servers and transferred all of the old data over, and we only lost 6 small files! It took about 2 weeks to rebuild our network and get everyone back up and running smoothly. Word of advice: check your ceilings for water pipes! Our building was remodeled in 2004 and no pipes were supposed to be installed in that room. The architect decided to run 2 pipes through there to save some money! We now have water deflectors in the server room.

hayesg

I have 4 server racks, all with BEEFY APC UPSes. Almost all the servers in the racks have at least dual power supplies. Thinking I was smart, I plugged one power supply from each server into two different power strips (so server 1 in rack 1 was attached to power strip 1 in rack 1 and power strip 1 in rack 2). Even if I had a total UPS failure in a rack, I should have had enough time to swap it out, etc. The plan was FLAWLESS! Or was it? I had failed to do the simple math and look at the specs. Yes, the UPS could handle up to 80 amps of load at 120V, but that was spread across 4 breakers... and I had ALL the servers on single 20-amp circuits.

It all started one morning when one server went offline. I didn't think TOO much of it; it was a small server that could stand to be down for a short time while I got ready for work and made the 15-minute drive in. I got to work and (unfortunately) forgot about the server. About 8:30, I remembered, and without thinking of my power design, went to power the server back on, which was in rack 2. When I did, all of a sudden rack 1 went dark. Then rack 3 went dark, and then rack 4. What the heck? How on earth did all the racks fail at once? That simply can't be...

It took about 5 to 10 minutes to figure out what had happened: powering that server back on had spiked the draw on the now-burdened UPS over the tripping threshold. That circuit shut down, transferring its load to rack 1, which shut down and in turn transferred some of its load to rack 3, which shut down, and so on. So... learn from me... go look at your UPSes and make sure you are using multiple breakers... :^)
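
To put rough numbers on it (the wattages below are hypothetical, not hayesg's actual loads): a 120V/20A breaker gives you about 2,400VA, and a handful of dual-PSU servers can sit just under that until one more power-up pushes it over, which is exactly the cascade described above. A quick back-of-the-envelope sketch:

```python
# Hypothetical back-of-the-envelope check: does one 120V / 20A branch circuit
# survive powering on one more server? (All wattages are illustrative only.)

BRANCH_VOLTS = 120
BREAKER_AMPS = 20
PLANNING_LIMIT = 0.8 * BREAKER_AMPS   # common rule of thumb: plan for 80% of the breaker

servers_on_circuit = [400, 450, 500, 480, 420]   # watts drawn per server (made up)
new_server = 420                                 # the server being powered back on

def amps(watts):
    return watts / BRANCH_VOLTS

before = amps(sum(servers_on_circuit))
after = amps(sum(servers_on_circuit) + new_server)

print(f"Before power-on: {before:.1f} A (planning limit {PLANNING_LIMIT:.0f} A)")
print(f"After power-on:  {after:.1f} A -> {'breaker trips' if after > BREAKER_AMPS else 'ok'}")
```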

IrishMike

Well, mine wasn't as bad as any of you guys', but about 5 months ago I noticed when I logged in one morning that I only had 1 e-mail (I usually had 5-6 from the firewall and Backup Exec) and I couldn't actually open it. And this was for the whole company! I rushed down to the Exchange server and it looked fine. All services running OK, except that in the Event Viewer there was an unknown error with our anti-virus. Now, considering this is my FIRST ever IT job (I worked with aquariums before, looking after the fish) and my IT manager had run away to Ireland (stupidly long story!!!), it was left completely down to me. Our company relies on e-mail a great deal, and without it we don't get any new jobs, which isn't great! Basically there was an error in the anti-virus update, which stopped Exchange from letting any real e-mail through. So, to cut a long story short, I had to wipe the updates off, reconnect the server to the anti-virus admin server, and get the new fix in place. Once I managed that... the UPS started failing! We have a small setup of only 6 servers, and 2 of them were connected to the failed UPS, so I had to take them down (one was the SQL server and the other was the AD server!) and get them working again. Not bad considering I had only been in IT for about 3 months beforehand, with only a CompTIA A+ and Network+ to speak of.

mike_patburgess

I was the last engineer out of many to be called in to resolve a server issue that affected trans-Atlantic communications. The problem would happen at a certain time of day and at night. The customer and their "high-priced consultant" had a series of listings, dumps, etc. for me to view, mentioning something about a protocol problem that was affecting the communications. Pushing all of the recyclable stuff aside, I asked to be shown the system and the comm lines. Well, it turned out that the two high-speed lines were just a twisted pair tied to a block on a post in the center of the room. When the cleaner came in twice a week to damp-mop the raised floor in the computer room, her mop would sweep against the wires, which were very loose on the tie-down posts. I tightened them up and the problem went away, yet the customer was very disturbed, saying that it couldn't be that simple. "Well, you can call me back if the problem resurfaces and we will take a different approach," I said. He never called back.

The Listed 'G MAN'

Check your critical systems manually. Don't just rely on the monitoring software!

franco.pinasco

If you have a system with that many failed components, I would ask why, as it's very unlikely that all of those failed at the same time... Maybe the failed PSU caused the other components to break? Maybe the electricity is not stabilized? Grounded? Check that the output of your UPS/battery is OK. You wouldn't want any other server having the same issues!

GSG

You're lucky they had one in stock. I had a RAID controller die on a Saturday, and none were in stock. I had to buy a seat on an airplane for it since we couldn't overnight it. I have a picture of a flight attendant strapping the RAID controller into a seat and coming by to ask if it wanted coffee, beer, wine, or a soda.

iain.sloan

I feel your pain, my friend. We had a company doing some cable installs where I work, and they assured us they knew where everything was buried and would hand-dig 10 ft either side to be safe. They didn't. They managed to cut my T1s (4 of them), two separate fiber runs, and the main incoming cable TV line for the facility, then told us they were not liable to pay for the repair. They found it highly amusing when dealing with my guys on site. Then I arrived, and they weren't so funny. :)

judd.hodgson

An outlet rated for 15 amps takes the usual two vertical parallel blades on an AC plug. A 25-amp cord plug should have one vertical and one horizontal blade, and I believe 30 amps is two horizontal blades. Whoever installed the plug did not wire to NEC code.

GSG

This isn't a server crash, but every day I'd get a call about some special equipment that had lost its programming. This equipment had batteries inside that would last about 8 hours, but after that, the programmed keys and shortcuts were gone. Well, the doctors were complaining, and it went all the way to administration, who was going to buy all new equipment at $3K per unit. Come to find out, housekeeping was unplugging them every night on every unit to plug in a sweeper and wasn't plugging them back in. They'd stay unplugged until a doctor needed to use one; then he'd plug it in and find no shortcut keys, because it had been unplugged for 12 hours.

NickNielsen

On the source end of the adapter switch. :^0

lferreira

The funny thing is that the server didn't have a UPS, but it was still always working. The problem was that they didn't care about that until it was too late. The bad PSU slowly started to destroy all the other parts. The amazing thing is that the OS (Linux) kept correcting the problems and booting properly; by the time of the failure it was already too late. After the disaster they put a UPS on the server, but they didn't use a grounded plug. When I saw that, I warned them, but it's still the same, and a couple of days ago, the last time I was there, they had removed the UPS from the server because there was another computer that needed a UPS. Talking with the president of the company, he said he would talk to someone about putting it back (no response so far)... My hands are clean in this matter. This brings another thing to mind: why do the bosses always think they know what's best for them, and when things like this happen, we are the ones to blame? A reflection on this matter is needed... (or a head shot for the bosses :-) )

fulton.dr

99.99% of the documents on the server are unclassified. The users still need the data. Therefore they verify each document before it goes onto CD. The file names are also recorded.

NickNielsen

New water lines were being installed in the neighborhood next to mine, but there was a buried 200-pair telephone cable in the way. The telephone company came out and flagged the cable route at 5-foot intervals. Needless to say, the backhoe operator cut the cable. When the telephone tech showed up, the project supervisor tried to tell him the lines weren't properly marked. The telephone man pointed to the line of flags marching across the job site AND to the flag sitting in a piece of turf in the backfill pile and asked, "What's that?"

NickNielsen

All UPS vendors put the appropriate NEMA connector on the power cords for their products. APC, for example, provides a NEMA 5-15P as the input power connector for all UPSes up through 1500VA (http://tinyurl.com/g5vnw). The input power connector for their 120VAC-input 2200VA and 3000VA UPSes is the NEMA L5-30P twist-lock (http://tinyurl.com/2h8nee). Unless, of course, some dummy cut the L5-30P off and put on a 5-15P, but that would void all kinds of warranties, and we all know NOBODY would do that! ;) Edit: links

ganyssa

Every night for weeks we lost a server in one of our outlying offices. After going over all of the logs and checking everything we could think of, we finally realized that the new cleaning crew was unplugging the UPS so they could vacuum the back hallway.

NickNielsen

See "Boxed Out" above if you already haven't. We use UPS in four different sizes. I carry one of each in parts stock and [b]always[/b] carry spare batteries for each with me. Edit: I've found all kinds of things connected to server or network UPSs: power strips, battery chargers, stereos, you name it, I've found it. Even a T630 laser printer on a 350VA SmartUPS! :0 I've also found things [u]not[/u] connected to UPS that should have been. I've stopped being amazed by how badly things can get f*cked up by the ignorant.

The Listed 'G MAN'

A 4-way adapter is more likely to fail as a total unit (all 4 connections go at once, UPS or not) than a single adapter switch on a UPS. Mind you, if the UPS fails anyway... replace it with your backup unit(s). I have 3 just waiting!

The Listed 'G MAN'

It was your line, "Now that the server is secured, the users need to verify and copy," which made me think that you went to all the trouble of securing the server only to let users copy things from it to CD and perhaps walk out the door.

gforsythe7

I don't think they CUT it; they just got one of those NEMA 5-15P to NEMA L5-30P adapters. My dad has a similar one for his 5th-wheel camper. 8)

GSG

Our HVAC unit in the server room had some issues, and we found out we needed replacement parts after ambient temps in the server room hit 105 degrees. I'm sure you know what happens at that temp. Unbeknownst to IT, the part came in and maintenance took the HVAC down to replace it. The temperature quickly climbed to 105 again, and all the servers just stopped working. If they had told us, we could have performed graceful shutdowns and kept a couple of key clinical systems up and running.