
The worst server room decisions ever made by management

In past blogs, I've shared about my experiences with my company's relocation and associated server room move. It appears that a fair number of TechRepublic members have their share of enriching (or incredulous) anecdotes as well.

That feedback has inspired me, so I'm listing some of the worst server room decisions made by management that I, myself, have encountered over the years. Feel free to chip in with your own stories!

Attempting to get eight hours of run-time on UPS alone (Investing in the unnecessary)

Annual electrical maintenance required a total shutdown of power for about eight hours on a Sunday. Unknown to IT, one department had a staffer coming in every Sunday to do the online electronic filing of certain shipping documents.

Never mind that nobody in that department saw the notices of the impending power shutdown on every elevator door over the entire week, or noticed the company-wide e-mail blast. The lone staffer arrived as usual that fateful afternoon to a darkened office. Obviously, he failed to do the requisite filing, resulting in a compound fine being imposed on the company.

The company's General Manager was upset and asked why the uninterruptible power supply (UPS) investment still resulted in non-functional servers. When it was pointed out that the existing UPS could keep the dozen servers running for only 15 to 20 minutes, the order was given (over my objections) to purchase sufficient UPS capacity to last through a "full day" of power outages.

Preliminary estimates with engineers from APC indicated that we would need two 42-U racks packed with UPS units and extended-run batteries to meet the desired runtime. The cost? $20,000.
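For a sense of why the quote balloons like that, here's a rough back-of-envelope sketch in Python. The per-server wattage, inverter efficiency, and cooling factor are illustrative assumptions on my part, not figures from APC or from our actual kit:

```python
# Back-of-envelope UPS sizing for an extended outage.
# All numbers below are illustrative assumptions, not measured values.

servers = 12
watts_per_server = 400        # assumed average draw per server
runtime_hours = 8             # the "full day" management asked for
inverter_efficiency = 0.9     # assumed UPS inverter efficiency

load_watts = servers * watts_per_server
battery_wh = load_watts * runtime_hours / inverter_efficiency

print(f"IT load: {load_watts} W")
print(f"Battery energy for {runtime_hours} h: {battery_wh / 1000:.1f} kWh")

# Cooling roughly doubles the requirement: a common rule of thumb is
# about 1 W of cooling for every 1 W of IT load.
print(f"With cooling: {2 * battery_wh / 1000:.1f} kWh")
```

Even with those charitable assumptions, you're looking at tens of kilowatt-hours of batteries, which is exactly how you end up with racks full of them.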

The idea was dropped only when I realized that a fully running server room without powered air-conditioning or ventilation is not a very good idea. To spare myself an urgent visit to Toni's View from the Cubicle blog for tips on getting a new job, I doubled the estimate to accommodate the air-conditioning, at which point we also ran out of space in the server room. Thankfully, the directive was scrapped after that.

If you have to ask: yes, a diesel generator was totally out of the question since the server room was located smack in the middle of an office complex.

Refusal to buy server racks (Penny wise, pound foolish)

Unless you've been working in MNCs all your life, you've probably encountered this one before: Management refusing to purchase proper server racks.

Now, a certain reluctance to splurge several grand on high-end kit is understandable. But the situation becomes a little intolerable when we're talking about just a couple of simple bare-bones 42-U racks costing less than a grand each, to house no fewer than a dozen and a half servers currently scattered all over.

Have you encountered a situation like this before?

In my case, I finally got my way. But I want to hear from more of you who serve on the "front line": how would you justify the value of proper server racking?

Splurging on the wrong things (Reacting from fear)

Just before I joined this particular company, one of the database servers suffered a serious hard disk error, resulting in a corrupted database. The near-line backup was no good because its mediocre hard disk had long since run out of capacity for even one full backup.

We recovered the data for the most part. Due to the resulting anxiety, management wanted to replace two of the database servers with brand spanking new ones.

I had just joined the company, and the exact instruction given by my boss was: "Just go for the best. I'll pay." Now, you must understand that these database servers, though critical, were used by no more than five users each; there were far cheaper ways to prevent a recurrence of the problem. I confess that I didn't follow the instructions in the end. I quietly told the vendor to just give me something mid-range, and we ended up spending about $16,000 on two HP servers.

Still, the money could have been better spent elsewhere, like replacing a couple of production servers that were more than eight years old and for which there was no functional hardware equivalent.

What tales of managements' server room shenanigans do you have to tell?

About

Paul Mah is a writer and blogger who lives in Singapore, where he has worked for a number of years in various capacities within the IT industry. Paul enjoys tinkering with tech gadgets, smartphones, and networking devices.

67 comments
Alpha_Dog

I worked for a company around Y2K that still used mainframes and magnetic tape for what was functionally data processing and data mining. Processing the nightly batches generally took about 20 hours, so when we were asked to take on an additional data mining project, it couldn't be done the existing way. Unfortunately, the execs of the company were not used to hearing "no," just speaking it when it came to our operations and upgrade budgets.

Frustrated, I took a DLT drive to my home office, crunched a boatload of data with a Linux box, and returned with the tape the next day. Done the old way, this would have been a week-long process tying up 3 DGMV mainframes and 2 9-track decks, killing our nightly batch runs. The boss was impressed and made a "command decision" to switch to Linux. Now, this was my goal, but the way in which it was attempted was certainly not what I had in mind.

The way the system was designed, one system would read the tape and spool the contents to a second mainframe, which would crunch the data and send it to a third to spool and write to the output tape. Being a methodical fellow, the boss chose to replace the input side first. The new system worked well, reading and moving the data faster than the SCSI-1 9-track could and sending it to the processing mainframe. The Linux system wasn't even breaking a sweat. Unfortunately, before long things slowed to a crawl as the buffers began to fill on the processing mainframe, since it was receiving data faster than it could process it. The system began to compensate by spooling the temporary output to tape and started to beat itself to death with read and write cycles on the old reel-to-reel 9-tracks, yet the data kept coming. The result was tape everywhere, the mainframe errored out, and a Linux box wondering where its friends went.

Needless to say, that migration plan was cancelled. I explained what had happened and why, recommending an alternate course of action which included replacing all 3 mainframes with the Linux box (CPU utilisation never went above 7%), but they chose to can the whole project. I left what we called Jurassic Park less than a year later, making good money as a contractor data mining the same data for my old employer. The company still exists and, from what I hear, still processes data with the same 16 DGMV mainframes that are older than their operators.

sgerper

Please help me, guys. We are moving to a new building, and top management wants to put the server room in a place where there is already an electrical transformer. This transformer is not for powering the servers; it is for other production purposes not related to IT. I think this is dumb and risky, but I would like to have more specifics so I can present a rebuttal of why we shouldn't do that, covering all possible reasons, health- and data-wise. This is kinda urgent.

maclovin

Worst decisions: Thinking the fire extinguisher down the hall will be adequate protection. Storing the tapes "off-site" at the bank next door. Hiring people who have a known history of not working and trying to circumvent the measures put in place, so that they can talk online and generally not work at all. Using passwords that suck. Thinking that I shouldn't be allowed to see something, and moving the keyboard just in case I might see what they're typing; which, as most know, doesn't really guarantee security. Using ROOT as the main account on a database server open to the Internet, because their programmers in India said it was fine. (What the hell do I care at this point if the information gets stolen!) Thinking my salary is enough. Telling me that one of the three companies I work for is more important than the ONE, 1, SINGLE company that pays the bills for the other two to even operate. Being generally retarded.

reisen55

True. Continuum Health Partners, a regional hospital network in New York City, has (or had) a legendary IBM mainframe in their Secaucus, NJ, location covered by a blue tarp because the ceiling had a water leak.

aadato

I have not read anyone else's comments, but I do have one to add. At a medical billing company I temped at, they decided to put us medical printing and approval techs in the same room as the servers. Not that that was the bad decision; it actually saved the company from losing a few servers five times while I was there over one summer. The bad decision was to keep a whole rack of servers under a failing A/C unit. To make matters even worse, when the A/C would fail, it would leak, right on top of the server rack! Don't ask me how those servers did not fail; they were possessed to work by some IT ghost, I guess. The only thing we could do, once we heard the A/C unit start to go, was to run for the large garbage bags and duct tape and try to manage the water flow into a bucket on the floor. By the time I left that company, management had made a decision: to keep the bags and duct tape in place.

amy.wanda

Been here, done that. I was in an almost identical situation where a production server died and backups were no good. It was already a brand spanking new server, but one of the brand new drives died and took a lot of data with it. This was one of those all-nighters, as I'm sure many reading this have experienced. We got as much data recovered as we could. The "big bosses" decided we needed a better solution. What a great idea!! Sadly, their solution was to purchase $35,000 in separate disk arrays that could be attached to a server. This is great, since it means that if something happens to the OS, the data is safe. The stupid part was they purchased these arrays knowing they couldn't be used with any of our current servers! Then they ran out of money to actually purchase the servers needed to use the arrays. So, we ended up with about a dozen servers out of warranty (with no hardware replacements) and no way to use the $35,000 worth of equipment.

bill

My hosting provider back in the nineties had about 10 SunOS servers running in literally a back office. They were stacked every which way, wires strewn about, over a desk, under a chair, no fire protection, and they would lose 1-2 motherboards a month due to heat problems! http://www.cube3design.com

lyallaust

Definitely the worst I still have to deal with is a server room that was set up with a wire mesh security door to separate it from users' prying hands. Why wire mesh? Because it was to save on putting in a separate air conditioner. Guess who lives in Australia, gets 40+ degree heat, and has a server room with no cooling on weekends because the building air-con gets turned off?

jruby

Our backup tapes were shipped off-site once per week, so I requested a waterproof/fireproof box to hold daily tapes until they were rotated out. Management thought that was a great idea, and promptly provided a very nice, very expensive keypad-controlled fire safe, with very large magnetic locks on the inside. I had to actually store a tape in it for one day and prove to them that the data was unrecoverable before they replaced it with a box that had mechanical locks. I guess they thought 'computer' tapes were made of hardier stuff than cassette or VHS tapes... Jim

CElliott316

Your job is to apply your knowledge, experience, and research skills to accomplishing what management wants done, not to second-guess them about the wisdom of doing it. Why could you not have found some way the filer could have gotten his job done on Sunday: a second source of power, an extension cord, or something?

reisen55

Hospital network I supported:
1. A Windows for Workgroups server (yeap, a server) with patient data on it was set alone in a closet that showed a flood line about mid-level on the server, and the server was set over a drain, so it would flood again.
2. Same client: servers racked so close together that they were overheating each other.
3. Same client: backup tapes kept in secure safes in case of disaster. These safes were right next to the servers. Brilliant.
4. Same client: server backup tapes were all over the floor, on counters, etc.
5. Insurance company: it is really nice when a door on a 42U rack is indeed hinged to the rack and not just leaned up against it. I opened the door and it almost took me out with it.
6. Insurance company: data tapes were not re-staged for an overnight backup in a sister division. Our tapes were re-staged, as there was a server room outage over the weekend. On Monday, our data tapes went out to Iron Mountain. The sister division did not have current tapes to send out. On Tuesday there was no building. 2 World Trade Center, South Tower, 103rd floor. The admin for the sister division went back up to get his group's data tapes. He did not make it out. Remember him well: Stephen Poulos.

LarryD4

I started my current job 5 years ago, the same day my current manager was elevated to his management position. Before my hire, this guy was part of the redevelopment of the whole building, basically a floor-by-floor gut and rebuild. Instead of bringing the ops/server room into a newly designed computer room in the basement, which would have been near the IT staff offices, he chose to create a server/ops room on the second floor between two offices. It's cramped and poorly managed.

dwain.erhart

Well, I've got some real ones, but I'll dangle one of the worst I have dealt with. When I first hired on with one organization, they had a server problem: the server kept crashing. Well, I decided to investigate. The first thing was that the server was in a break room and it had NO UPS. Every Tuesday morning the server would crash at 7:00 am sharp. So, I talked to the maintenance crew, who informed me that every Tuesday at 7:00 am, the generator ran a test. Well, a UPS would have solved that problem, except that they also experienced downtime in the middle of the day. So after investigating, I found that the server was on a 15 amp circuit, with three coffee pots and a microwave oven. Whenever someone used the microwave on high and the coffee pots were all on, the server would shut down. Yuck. Anyway, once that was fixed... well, let us say this was the tip of a very large iceberg.
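The arithmetic behind that breaker is worth spelling out. Here is a quick sketch; the appliance wattages are typical guesses on my part, not measurements from that break room:

```python
# Why a shared 15 A / 120 V circuit can't carry a break room plus a server.
# Appliance wattages are typical guesses, not measured values.

circuit_watts = 15 * 120                  # 1800 W capacity

loads_watts = {
    "three coffee pots": 3 * 900,         # ~900 W each while brewing
    "microwave on high": 1100,
    "server": 400,
}

total = sum(loads_watts.values())
print(f"Circuit capacity: {circuit_watts} W")
print(f"Worst-case draw:  {total} W")
print("Breaker trips." if total > circuit_watts else "Within capacity.")
```

Lunchtime alone more than doubles what the circuit can deliver, so the server never stood a chance.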

gmcatee

Actually, the worst data center error I ever saw was at a law firm on the 56th floor of a high rise. Their cooling system ran off of a chill water feed, and it drained into a stand-pipe. The union guys who maintained the chill water feed and the drainage system didn't want a fixed drain pipe, because they'd have to take it apart to clean the drain pipe. They asked for (in writing), permission to drain the cooling unit through a rubber hose which would be lowered some 2' into the standpipe. They received permission, in writing. When the pump in the cooling unit kicked in to pump out the heat exchanger reservoir, the hose would jump about and try to pop out of the standpipe. To address this, the contractors secured the hose into the pipe using bungee cables that hooked into flanges they welded onto the standpipe. Several months later, during maintenance on a Friday evening, one of the contractors forgot to put the bungee cords back when he got done, and by Monday morning the hose had popped out, and filled up the server room about 22" deep, that being the height of the head of the stand pipe. Oops! It turns out that the room was in fact more-or-less airtight to ensure that if the halon system went off it didn't affect the air supply for the IT staff, who worked in the office outside the server room door. This held water, too, at least well enough!

bernalillo

Install carpet in the server room of a high school library, and make that room the entryway to the only bathroom on the floor. Sheer genius.

jmarkovic32

Let's put all of our mission-critical servers in a small corner of the Finance Department. Let's allow users to put their personal space heaters and fans near the servers. How about a wiring closet? Nah! We'll just keep the switch in the ceiling. If it's not broke, don't fix it! One time a network port went dead and the contractor refused to fix the problem unless I identified where the cable was. Being young, dumb, naive, and eager to please management, I spent one evening popping up ceiling tiles across an entire half of the building trying to locate the problem cable, to no avail. Why did the contractor have me do this? Well, they were fed up with having to go into a certain woman's office, full of boxes and trash, just to get access to the main switch... just for a $100 cable repair job. The next time it happens, I'll just tell the contractor to add an "inconvenience fee" to the total. I want management to see just how much our IT problems really cost. Needless to say, I'm looking for another job.

Selltekk

This sounds like one of those "When I was your age, we were beaten to sleep every night, and we were thankful for it!" stories. However, I believe every one of them. How's this: the IT manager and network administrator think it's a good idea to move our CMDB application, which all of us use every day, all day long, off a new rack server onto an 8-year-old tower-style box that seems to be having hardware problems. Their reasoning: "The new server runs WSUS and ePO, so it must be overworked."

scalv

We had the dedicated AC in the server room die. To keep things from overheating, we had ceiling tiles popped out, and the door (which opens to the warehouse - not even air conditioned office space) open with a large fan in the doorway. We got quotes on a replacement system, and brought the final req to get signed off. It got turned down by senior management with the reasoning "We don't need it. We've been doing fine so far without it". No arguments about heat, dust, or security (open door) had any effect. A few weeks later we had the first warm spring day. It got hot enough in the server room that the alarm system went off (temp sensors hooked up to building alarm), and shortly after we had to shut most systems down to keep them from cooking themselves. The new AC system suddenly got approved as a mission critical item with immediate need.
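For anyone stuck waiting on an A/C purchase like that, a temperature watchdog that shuts machines down before they cook is cheap insurance. Below is a minimal sketch, assuming a Linux host that exposes a thermal zone under /sys; the sensor path, threshold, and polling interval are placeholder choices of mine, not details from scalv's setup:

```python
#!/usr/bin/env python3
"""Minimal temperature watchdog: shut the host down before it cooks."""
import subprocess
import time

SENSOR = "/sys/class/thermal/thermal_zone0/temp"  # millidegrees C; path varies by machine
SHUTDOWN_AT_C = 45.0                              # pick a threshold below the hardware limit
POLL_SECONDS = 60

def read_temp_c() -> float:
    with open(SENSOR) as f:
        return int(f.read().strip()) / 1000.0

def main() -> None:
    while True:
        temp = read_temp_c()
        if temp >= SHUTDOWN_AT_C:
            print(f"Temperature {temp:.1f} C >= {SHUTDOWN_AT_C} C; shutting down")
            # Requires root; gives logged-in users a one-minute warning.
            subprocess.run(["shutdown", "-h", "+1", "Server room overheating"])
            return
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()
```

In practice you'd point it at whatever ambient sensor feeds the building alarm and shut the least critical boxes down first.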

Jaqui

to even LOOK into the server room, when cleaning the cable nightmare from under ONE desk took 4 hours: 7 power strips, 6 extension cords, 13 unused device power supplies, 4 phone lines going nowhere, and 3 25-foot network cables doing nothing, with not one cable run through the desk's cable control system.

Mr_Sprouts

There are SO many, where do I start? A water sprinkler in the server room. No air conditioning in the server room, so the door must be left open. Routers and T-1 endpoints sitting on a shelf that is taped to the wall; the tape gave way, but the cables were short enough to keep the equipment from crashing to the floor. No UPS. No backups, because the tape backup unit is broken. More problems exist, and some of those listed I have since corrected, but these were all problems when I started.

jacobus57

I am an IBM SSR, an xSeries/System x server specialist, and I have seen it all. But I want to focus on "Why racks?" I am appalled when I see expensive boxes stacked like blocks on the floor, on a table, or on a groaning slab of particle board. The bottom line is, we, and I am sure most other legit service providers, are within our rights to refuse to provide service if a machine is not accessible. And guess what? That pizza box stuck under a pile of three or four in-production 2U servers ain't accessible! And that's not even accounting for the lack of airflow, non-existent cable management, etc. And then there are the NOCs with coils of power cords, fiber, and Ethernet ankle-deep on the floor, no cell reception and no land line, no crash cart... I could go on. Really, what are these boneheads thinking!?!

NickNielsen

We've got wire shelves with no standard layout; no site is the same as another. In most cases, there's not even a patch panel, just a series of mud boxes or terminal boxes mounted on (or hanging from) the wall. Adding insult to injury, somebody decided that cable management in these installations could be had by wrapping patch and power cables around the wires of the shelves; trying to trace cabling is impossible.

robo_dev

A power transformer is really not all that big of a deal. At another company, there was one of these in the tape storage vault. I was auditing their data center, and my first reaction was that this was insane. However, after doing some research, I determined that since these units are made to be used indoors, they are very well shielded and well protected from things like water leaks and so forth. I looked into how much EMF is radiated from one of these units, and you could set backup tapes on top of one without risk to the data, although obviously that's not recommended. The only minor issue is to make sure the heat generated by the transformer is factored into the room cooling requirement. There's more of a fear factor than anything to something like this, as a large UPS unit or PDU is likely to emit about the same amount of EMF. I would look at the specs of the transformer. As long as it's a fairly new shielded unit, it's really not an issue. It could be moved, but that would cost a lot... it would be cheaper to build a partition wall to hide it.
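If you do end up sharing the room, the one number worth pinning down is that last point: how much heat the transformer adds to the cooling load. A rough sketch follows; the kVA rating, loss fraction, and load factor are assumed for illustration and should really come from the nameplate and the actual load profile:

```python
# Rough extra cooling load from an indoor dry-type transformer.
# Rating, loss fraction, and load factor are assumptions for illustration.

transformer_kva = 75        # assumed nameplate rating
loss_fraction = 0.02        # dry-type units typically lose on the order of 1-3%
load_factor = 0.5           # assume it runs at half load on average

heat_watts = transformer_kva * 1000 * load_factor * loss_fraction
heat_btu_per_hr = heat_watts * 3.412    # 1 W is about 3.412 BTU/hr

print(f"Heat dissipated: {heat_watts:.0f} W ({heat_btu_per_hr:.0f} BTU/hr)")
```

In this example that's well under a kilowatt, roughly another server or two's worth of heat: noticeable, but hardly a deal-breaker if the cooling has headroom.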

JCitizen

Thanks for that one aadato!

DNSB

Perhaps a way for this person to do his work could have been found. I do wonder why the filing was left to the last minute? Was this not within management's control? You did read the original post's comment that the power outage was due to an ANNUAL electrical maintenance? That the office was dark? In all likelihood, the power was out for the entire building. This just might have made running extension cords a trifle difficult. In my not so humble opinion, management carried the can for the screwup. If nothing else, for not reading the posted notices and the emails.

paulmah

Hi Elliott, Thanks for your response. Extension cords are out of the question, as power is being shut down to the entire office block. And yes, the electronic filing must be done on a Sunday. I actually recommended various ways of mitigating such incidents in the future, once I became aware of the issue. This included cheaper alternatives such as purchasing a laptop with wireless (mobile) Internet access for just $1-$2k. All were rejected. I suppose I took issue with what a blatant waste it is to splurge $10-20k on a "problem" (a power shutdown) that happens only annually. Actually, even with the UPS upgrades, nobody could be sure the Internet connection would work, since we didn't know whether there was any active telecommunications equipment in the building itself, which I also mentioned. You can see why I gave this article the title it got. :) Regards, Paul Mah.

Alpha_Dog

Proper Planning Prevents Piss-Poor Performance. The 6 Ps have been a mantra for me ever since I was 14. Here, 35 years later, it's a different world, but the basic rules are the same. Sometimes I wonder what the server rooms look like behind the pearly gates... ...we know what they look like in hell.

kycpow

Do any of these idiots know how to say "electrician"? Do they know how to use or recognize a server? Doesn't anyone have to get an inspection? Even a safety inspection would have helped. I've seen photos that would curl a simple electrician's hair, much less an intelligent computer designer's. Do they think these things are toys? You guys are risking your lives around these things. Tsk, tsk, tsk. Get management (I hate to use the word) to come down and spend a day or two with you.

paulmah

Hey, I can certainly identify with multiple switches and the wiring nightmare. There is a company I was at with about 45 clients, and guess how many switches? ELEVEN. And that was all on a single level, not too big at that. The previous IT folks basically kept adding switches as the layout of the offices changed. (Actually, the management probably refused to spend on proper network wiring.) For the first two months, most wiring problems took hours to solve at a minimum, since you really didn't know where the wiring went, or whether it had been 'extended' by another switch that had fallen behind a cabinet. Regards, Paul Mah.

ARAHIGIHS

I know how you feel... The building we are in was built in the early 1900s, and A/C was added later, more as an afterthought than any real applicable utility. Our "server room" is the smallest office in the building, primarily because nobody else that "mattered" wanted it. It has sprinklers and a rather leaky A/C unit in the ceiling. There's no room for racks, and we have 3 tower-style servers plus an AS/400 with an accompanying storage cabinet. All of this connects to an HP ProCurve switch located in the ceiling 50 feet from the "server room," with multiple switches scattered throughout the building's 3 floors.

OntheEdge

We had a new facility built 4 years ago. I couldn't get the engineer to put in A/C because I couldn't provide him with "printed" statistics on the heat produced by servers. He thought an exhaust fan (a cheap consumer-grade bathroom unit) exhausting into the office space outside the server closet would be sufficient. This setup normally ran at 90 to 95 degrees in the closet until the A/C went off, which happened quite regularly because they also cut corners on the building A/C system. On top of that, it took 2 years of cooking our servers to get him to reconsider the matter, and only after they had tried several modifications to their system trying to make it work. They also put sprinklers in the closet. Oh yeah, one more: they wouldn't put the server closet on the generator because they didn't want to overload it. And I thought only mental midgets worked in education; thanks, everyone, for proving me wrong!

blackepyon01

And now the servers are heating my coffee cups cause you said we didn't need AC >)

dave.schutz

When I started my last job, our server room was part of a storage room, with no AC and not enough electrical power. I plugged in a new box and started having brownouts! They had fired the LAN admin just before they hired me, so I stepped into a mess. There was no backup to speak of, and the servers were NT4, each its own PDC. Eventually the servers started crashing and we built a new network in a nice room from scratch.

blackepyon01

I may have crawled out of a few less "idiot-filled" closets than some of you, so I may be biased on this, but I believe that management positions should have requirements such as "knowledge of the departments they manage," for instance. Some of these guys have absolutely no idea what they are managing and just go ahead and order what they want without a thought on how it is actually going to work, and of course, without consulting the computer department. And don't get me started on the server racks argument >_

seth

The boneheads are thinking that they are an executive and you are an overpaid geek. I have heard this exact conversation between two executives in a bathroom (try looking in the stalls next time guys!) regarding my salary and the cost of IT equipment and what a waste it was. So.... I unplugged the server switch from the main stacked switches and walked into their offices and gave them each a pencil and a pad of paper. Welcomed them back to 1980. Smiled all day long after that!!

CrankyOldBugger

One place I worked at decided to upgrade the fire extinguisher system, so they put water-based sprinklers in every room, including the AS/400's room. If some comedian had decided to pull the alarm, I would have been taking a shower with the AS/400 and several big line printers. A recent client of mine has his server under a staircase. The monitor was sitting on top of it. To do any work on the server, you had to hunch right over it, balance the keyboard on your lap, and run the mouse on your leg.

robo_dev

Perhaps the forum should have an animated 'Zombie Alert' or something

sjamieson

The roof gutter runs right over our server room. Normally this is OK, but on 3 occasions water has poured in over the switches due to heavy rain or leaves blocking the gutter. Our management's solution was to put plastic sheeting over the racks.

seth

BRAND NEW BUILDING, and the server room designated by executive management is a closet. We have one 2-post rack just for switches, routers, and wire management. Right next to it is a fully enclosed rack (full height) for our main servers, backup, and such. Next to that was an IBM half rack, enclosed, for the casino floor systems. I expressed to my director that we needed to move the racks to the opposite side of the room or have the air-conditioning unit in the ceiling moved, as well as the sprinkler (which was right over the main rack). Nope, too much money, only 7 days till opening. LOL!!!!

So, I get an urgent call: "GET TO THE SERVER ROOM NOW!!!" I run up there and walk into water POURING out of the ceiling onto the top of the IBM half rack. Everyone's standing around staring at the sight. I ran in, pulled the power, and turned off the APCs. Then some smartass walks in with a few rolls of paper towels and tells everyone to start wiping it up. Oh, I freaked out. I pulled the servers out of the rack, and as I pulled one out, water poured out of it. Oh my God!! Did we get new servers?? NOOOOOOO!!!! No one wanted to claim responsibility for making a stupid decision, so I was told to take the servers outside, lean them up against the wall in the sun, and place fans in front of them (this was during construction!!!!). OK... whatever.

Two brand new IBM servers, and they had so many problems for the two years I worked there, and they would NEVER allow me to replace them, yet if they went down, the whole gaming floor went with them, and all of the player data as well (backups were local every two hours, to tape every night). STUPID. IT guys need to start moving into executive management or these companies will continue to waste money, time, and great IT guys.

Neon Samurai

I don't know if it's promoting the manager to where they can do the least damage or what, but being promoted to the level of one's incompetence seems to be a regular corporate practice once you're into the management ranks.

blackepyon01

Aren't casinos supposed to make their money by taking people's money and asking them to try again? Looks like management is too tight with their hard-earned money to bother with the logical choice. No sir, please dry off the sensitive equipment and try again with the (proven faulty) inadequate setup so we can save even more money.

NickNielsen

H3ll, today I AM his handle! Hap py flip pin' Fri day! edti - splel

CrankyOldBugger

I worked long and hard at coming up with that one...
