
Video: Five dumb mistakes IT pros make that can mess up their networks

Every admin has at least one embarrassing story about something they did to hose their own network. In this IT Dojo video, Bill Detwiler discusses several dumb mistakes that can really mess up a network, so that we can avoid them in the future.

IT pros love to share stories about the dumb things that users do, but sometimes you have to admit your own mistakes. Every admin has at least one embarrassing story about something they did to mess up their network. After all, you're busy, networks are complicated, and sometimes it's easy to overlook the obvious. In this IT Dojo video, I discuss several dumb mistakes that can really mess up a network, so that we can avoid them in the future.

For those of you who prefer text to video, you can go to the video player page for this IT Dojo episode and click "See Full Transcript," or you can read Deb Shinder's original article, "10 dumb things IT pros do that can mess up their networks," on which this video is based.


About

Bill Detwiler is Managing Editor of TechRepublic and Tech Pro Research and the host of Cracking Open, CNET and TechRepublic's popular online show. Prior to joining TechRepublic in 2000, Bill was an IT manager, database administrator, and desktop supp...

46 comments
NickNielsen

Background: I was working with the client's help desk, troubleshooting a wireless access point failure in a grocery store. (For those who don't know, wireless access is absolutely necessary for proper inventory management.) It's Friday, December 24, and the lines are halfway back up the grocery aisles. We were following a network fault isolation tree, and the next step was to disable Spanning Tree Protocol. When the level 2 help desk tech killed STP, every status LED on every switch in the room went SOLID. My first job was to get the point-of-sale system back online; when I disconnected that segment from the rest of the network, those switches recovered, so I bypassed the core switch and connected them directly to the router. It took me 20 minutes to isolate the loops in the remaining switches. In the end, I found multiple cross-connects. One switch even had two sets of adjacent ports looped. Freed up a bunch of ports that day... I'm told that STP is no longer a consideration during wireless troubleshooting.

pworlton

I can't relate any "big mistake" stories, but I've found that some seemingly little mistakes can become much bigger than I expected. One example is missing an appointment - something I admit I have issues with. One appointment I missed lost me a client and friend, even though I remembered a few hours late and apologized immediately and profusely. Another one stems from how hard I am on myself when I miskey or misclick something. I would always say, "Oops" without even realizing I was doing it until someone pointed out to me that it doesn't exactly inspire confidence in the client that I know what I'm doing. Those minor typos can be easily corrected, but a first impression can't be so easily mended.

james.howel

Can I use my sword if I become an IT Ninja?

Dilberter

I applied for a job as a Novell network administrator and was almost hired. (The previous guy had set the network to Large Font [which at that time disabled many of the features/applications on it], something you shouldn't do.) When the manager handed me a piece of paper to read, I took out my bifocals [the kiss of death]. When the hiring manager realized I used bifocals, I lost the job and was out the door in 5 minutes, saying "Wha happened!?"

Falcon_IT

I have done a few that I thank GOD for saving me from... I logged into the corporate network through the VPN over my home wireless. At the same time I was configuring a router/firewall attached to the Ethernet port of my laptop (SAME MODEL/BRAND, etc.). I typed in an IP address (testing) of the locally attached router and it brought up the web interface. Remember, I was still logged into the VPN to corporate... I made changes to the router (thinking I was logged into the LOCAL router) BUT I WAS LOGGED IN TO THE CORPORATE ROUTER... (DON'T ASK). I think it was very late and I was VERY tired. Come morning, users were complaining about not being able to access this and that... it took me 1.5 seconds to realize what I had done, and thanks to a backed-up config, I restored it within minutes... Oh man! I have never forgiven myself for this. There were so many times I could have picked up on what I was doing but just was not paying attention.

sirloxelroy

Not standing up to your boss or the CEO at appropriate times. I have committed this one, and still have problems with it. You need to enforce the security policies on them too, and make sure they understand their importance. Just because they are the head honcho does not mean they can get by with the world's worst password, bypass the entire web filter system, or have insecure software installed. Educate them on the reasons, then politely enforce the policies. Also, educate them on purchases.

michaelaknight

Our CEO just recently wrecked the network and infected an application server, requiring a system state restore and about 4 hours of downtime. He hasn't said much lately about policy enforcement... I think it's a given now.

justin

Although those are good, they seem more like oversights than mistakes. Mistakes, in my opinion, are like the following: 1. Allowing drinks in the server room and then spilling said drink on a server. 2. Replacing the wrong hard drive in a bad RAID array. 3. Opening the wrong ports on the firewall. 4. Or my personal favorite, remoting into the server and, instead of restarting it, accidentally shutting it off, requiring a 45-minute drive to go turn it back on. Been there, done that. Those are mistakes.

michaelaknight

#1 Yep. #2 Or inadvertently breaking a healthy array. #3 Yep. #4 Or remotely choosing "install updates and shut down" by accident and walking into a blue screen because of a whacked-out hotfix... been there last week. #5 Finally rebuilding a server, promoting it, and getting it running squeaky clean, then never taking an image until you've put back all of the software whose perfect combination caused the crash in the first place. #6 Using an OEM Server 2003 disk that doesn't have a repair option, choosing to install Windows (as one should), and walking away, not realizing that you're going to get a nice clean, unattended installation of Windows in the C:\WINDOWS.000 folder on a system drive that doesn't have enough space... instead of the in-place repair option you would normally get.

ken.leitman

Nothing new here. Why waste time making a video?

XnavyDK

Ken, maybe you didn't learn anything from the video. I did, and probably quite a few others did too. Here's an idea: you are alone in the world. Since you are all alone, this link should keep you busy. http://www.dumb.com/badfortunes/

jmarkovic32

There are IT people at every different level watching these videos. I wish I knew about TR when I started back in 2003! I could have saved my company a lot of time and money!

JCitizen

I choose not to listen to the nattering nabobs of negativism like ken.

D Squared

Just because nothing was new to you or me doesn't mean these mistakes don't still happen to new IT people and in new businesses. It is probably new to somebody!

reisen55

Forget the network. I just had a colleague spend an entire day on-site with one of our BEST clients and at the end of it TELL the client that the entire visit was a WASTE OF TIME. Oh Beautiful. KMS: a good acronym - Keep Mouth Shut.

Mick_obrien685

Luckily, the complete and utter was 8,000 miles away when I PROVED that he should be working at McDonald's.

LRepko

Umm, well, where to begin... I started at this company in March of 2008, and it took me 5 months to realize that the file server that hosts our business-critical information was in a RAID 0 config. Yeah, I was working on implementing a good backup, so that means if one of those drives had crashed... there would have been no backup; that was what I was implementing. After the mild heart attack I had, I booted into the PERC 5 configuration and noticed that there was a spare SCSI drive available, so I could rebuild the array as RAID 5. I almost sh*t myself. That was an awesome day. I just remember taking a break in my chair at my desk and leaning back to look at the RAID rack. I studied the blinking pattern of the hard drive I/O LEDs, and my jaw dropped. I was like, whaaat?! Who did this?!

pgit

I installed a server at a medical facility and, thinking I knew all the addresses in use, assigned it an IP. The result was that a bazillion-dollar digital X-ray machine didn't work. I got the call and came over, looked at everything, and concluded the problem was the X-ray unit. They called the technician, who had to drive 80 miles to get to the place; the total cost for his call was several hundred dollars. But he scratched his head, too, and left with everything as it was: broken. So I came back in, and while trying to fathom what the heck could be wrong, I saw a scrap of paper next to the computer that runs the X-ray with an IP address scribbled on it, the same one I had assigned to the server. I called the X-ray tech and asked him where he got that number. Turns out the X-ray machine has its own IP. We never would have figured this out if the guy hadn't scribbled the number down. Talk about embarrassed. With the new server offline, I fired up my laptop and ran arpscan; sure enough, there was the IP. Gave the server a new one and they were back in business. A free tool that takes seconds to run prior to just randomly picking an IP, and all that would have been avoided. Yes, the X-ray tech got paid, and I didn't, per my own request. Thankfully those folks are pretty laid back.
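The lesson generalizes: before hand-assigning a static address, probe it first. Below is a minimal sketch of that pre-check, not the tool the commenter used; it assumes a Unix-like admin workstation with the standard `ping` and `arping` commands available, and the interface name and candidate address are placeholders.

```python
import subprocess

def ip_in_use(ip: str, iface: str = "eth0") -> bool:
    """Return True if anything on the LAN already answers for this IP.

    Tries an ICMP ping first, then an ARP probe (which catches hosts
    that silently drop ICMP). Assumes `ping` and `arping` are on PATH.
    """
    ping = subprocess.run(
        ["ping", "-c", "2", "-W", "1", ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    if ping.returncode == 0:
        return True
    arping = subprocess.run(
        ["arping", "-c", "2", "-w", "2", "-I", iface, ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return arping.returncode == 0

if __name__ == "__main__":
    candidate = "192.168.1.50"  # hypothetical address you plan to assign
    if ip_in_use(candidate):
        print(f"{candidate} is already taken - pick another address")
    else:
        print(f"{candidate} appears free (still confirm against your IP plan)")
```

A negative result isn't proof the address is unassigned (the device could be powered off, as an X-ray machine between shifts might be), so it complements, rather than replaces, a written IP plan.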

reisen55

I have done that with my clients on occasion, usually as a debt-of-service nod to whatever gods manage IT. Correct to request not to be paid. Your client will honor and respect your honesty. My issue with my colleague was that he spent an entire day working on a point-of-sale system that WE did not set up or build, only cabled in place. Third-party stuff. And he may be breaking rules and support agreements; I hope not. He did this to produce a report that, at the end of the day, the client told him they did not need. (Should have checked that out in the first place.) So, a waste, but then he told the client that singular fact and left. Oh, beautiful. Arrogance.

Bill Detwiler

I totally agree with you on requesting not to be paid. It may have cost you or the organization a single payment, but such actions have always earned me goodwill with my customers.

Bill Detwiler

If it was a waste of time, does the client still have to pay? :)

Mick_obrien685

Very Good! Only another 45 more to cover and that's the basic ones done. time lapsed. GUILTY. Thank you, 51.

DonG43

I worked with a guy who operated a data center in the Air Force. One Sunday he was driving by and noticed his deputy's car in the lot. He walked in to see what was happening. The deputy had been working and had heard the Halon warning alarm. Not seeing a fire, he hit the "Don't dump" switch. As long as he kept the switch depressed, it would not dump. But there was no phone to call anyone to stop the dump. He had been waiting for three hours for someone to come by to call for help.

NickNielsen

About halfway through that ordeal, I think I would have had a cigarette. Did he have a phone put in? Or a locking abort switch? :D

jck

Not plugging in all the jumpers to the switch :^0

michaelaknight

I have, on more occasions than I'd like to admit, patched the switch into itself... I mean, c'mon.

XnavyDK

I don't have a second domain controller, nor do I have a foolproof plan for business continuity. I do have excellent backups and a potential plan for a second domain controller, but it's not the best scenario no matter what I do.

reid.partlow

Set up the backup DC as a VM. Keep its size small and make backups of the entire VM setup and hard drives. This way you can move it around and bring it up on almost any piece of hardware temporarily. It may not be fast, but it will buy you some time. Reid

Bill Detwiler

Is your lack of a second DC a result of limited budget or some other situation?

XnavyDK

I have a plan now. Yes, it was lack of funds, but when our server STB we were down 24 hours on a business day. It turned out to be a RAID controller failure, so we in turn bought a second server and failover capabilities provided by DoubleTake. I now have a second DC on a VM running on another machine. I still do my daily backups, and you know, the Server 2008 backup utility is surprisingly nice. I also back up our various databases to an external USB drive, so if something weird happens I can take the back end and still continue business as usual on a workstation if the failover fails. And just for fun, I always post a link to a crazy web page I find: http://walkingdead.net/perl/euphemism

michaelaknight

I have a small network in the same situation... not a lack of money but a lack of trust: I'm afraid of wrecking the existing domain controller by allowing 2003 R2 to update the domain policy... maybe one day when I'm good and hungover, I'll do it.

Bill Detwiler

In the following IT Dojo video, I discuss dumb mistakes IT pros make that can mess up their networks.

Original blog post: http://blogs.techrepublic.com.com/itdojo/?p=221

These are only a few of the most common dumb things net admins can do to mess up a network. If you're in a confessing mood, tell us whether you've committed one of these mistakes or another major IT goof by taking our IT mistakes quiz.

Poll: http://blogs.techrepublic.com.com/itdojo/?p=222

benroberts

I was sysadmin of a medium-sized network spread over three floors of an office block. We had run out of connections on our old hubs, so the decision was made to replace them with nice new HP switches. I had an assistant with me, and we kicked off on the admin level after 5pm. Our plan was to power up the HP units and then, in a flurry of re-patching, switch all of the connections over. We got to work and completed the cut-over in less than a minute. Thinking we had just performed a miracle of efficiency, we went to the next level. As we were waiting for the lift, the calls started coming in: "We can't see the server!" "Internet is down!" etc. We were puzzled, as the switch looked fine when we left. We returned and decided to substitute the switch we'd just fitted with the one for level 2. Knowing the users were off the network anyway, we took our time. Everything worked fine. We thought we had a dud switch, so we called off the rest of the swap-outs. When we spoke to the supplier the following day, he was amused: "You guys worked too fast; the switch couldn't cope with all the connection requests." He was right. When we did the others, we did it more slowly and they all worked fine. Moral: don't work too fast.

NickNielsen

...*then* power up the switch. Let the switch sequence through the ports for the connection requests. Users are only down for a minute or two more and will usually put any problems down to network load. Learned that one the hard way too! ;)

tech

I will always be the first to admit that I don't always install service packs or updates straight away. Far too many times I have done the deed and then regretted it, especially when doing it on a remote server. Not anymore; I have to be there in front of it or it doesn't get updated. I also used to run Ghost remotely just to have an image of the system as a fallback before updates, until one Friday evening I set it off and it never came back up until Monday, when someone in the office called me. It seems Ghost didn't recognise one of the external hard drives and sat there waiting for user input before it would image. That reminded me of the time when I was in front of a client server one evening and thought I would run Ghost on their system for the first time, only it trashed the boot files and I had to do system repairs when I should have been in bed asleep. Oddly enough, I don't use Ghost anymore! But the weirdest 'accident' I ever had was when I plugged a memory stick into a server and it BSOD'ed and went off. Mission critical? Oh yes, very. I've not been too keen to insert USB pens anywhere since then.

pworlton

Most people aren't aware of the amount of current going through USB devices, but it is substantial enough that I've had some cables and flash drives start to smoke and melt. All it takes is one damaged pin on the USB plug and you could have a fried mobo or other melted item to deal with.

BookiesDad

Being a fairly new administrator of a small datacenter of 15 servers running through a 4500-series core switch, 7 other 3750 switches, and a host of perimeter firewalls, anti-spam appliances, RAS units, etc., I had been tasked with testing the software component of our UPS. PowerChute runs on select servers, and when main power is cut, the Symmetra tells PowerChute to perform graceful shutdowns on select systems to extend the Symmetra's battery life. To test this, I opened a PowerChute application that communicated with the Symmetra and ordered it to perform a 'self-test', believing that the Symmetra would simulate a power failure and order a graceful shutdown of the selected servers. Right when I was marvelling at how effectively the servers had been dynamically shut down, every, and I mean every, single little green light in the entire server room went out. Every light, every fan, every server, router, firewall, switch, and appliance... including the Xiotech 2TB SAN. Everything went out simultaneously, in an instant. What happened? Here's where it gets embarrassing. Not only did the Symmetra self-test initiate software shutdowns of the selected servers, it also performed a shutdown of the UPS itself, which ought not be done when such a UPS is in production with several hundred thousand dollars of assets running on it! The nature of the fiber-optic SAN and the Cisco environment meant that it would take about 2 to 3 hours to get everything up again. That's a lot of time for a lot of Homer Simpson "D'oh!" exclamations.

greg.hruby

Not establishing service agreements for data/application management. There is nothing worse than needing to perform maintenance work on a server system when some users have decided to put in unannounced overtime, cleared with their boss but not with IT. We finally set up policies and schedules for when maintenance work would be done, so it could actually be done.

darren.meyer

Being a security geek, I made the *opposite* mistake regarding password policies: I once insisted on fairly complex password rules and a 30-day password expiration. Anyone in security will immediately guess what happened: users simply wrote their passwords down and stuck them under keyboards, etc. Good password policies enforce some complexity and regular password changes (with a fair bit of reuse-prevention) -- but too complex, or too regular, can present problems as well.
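For anyone who wants to codify that balance, here is a minimal sketch of a "moderate" complexity check with reuse prevention. The thresholds and function names are illustrative assumptions, not the poster's actual policy, and real deployments would enforce this at the directory or PAM layer rather than in application code.

```python
import re

# Illustrative thresholds: strict enough to resist casual guessing,
# loose enough that users don't resort to sticky notes under the keyboard.
MIN_LENGTH = 10
MIN_CHARACTER_CLASSES = 3  # of: lowercase, uppercase, digits, symbols

def password_ok(password: str, previous: list[str]) -> tuple[bool, str]:
    """Check a candidate password against a moderate policy."""
    if len(password) < MIN_LENGTH:
        return False, f"use at least {MIN_LENGTH} characters"
    classes = sum(bool(re.search(pattern, password))
                  for pattern in (r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]"))
    if classes < MIN_CHARACTER_CLASSES:
        return False, "mix more character types (upper, lower, digit, symbol)"
    if password in previous:
        return False, "that password was used recently"
    return True, "ok"

if __name__ == "__main__":
    ok, reason = password_ok("Correct-Horse7", previous=["Winter2008!"])
    print(ok, reason)
```

The point of the example is the trade-off darren.meyer describes: every threshold you raise (length, classes, rotation frequency) should be weighed against what users will actually do to cope with it.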

jmarkovic32

My biggest repeated mistake has been skimping on quality just to get a purchase approved. In the end, each time, it's come back to bite me square in the ass. See, the root of the problem is that expensive purchases tend to sit on the CEO's desk for weeks while he contemplates signing them. To move the process along, and to make my purchases go "under the radar," so to speak, I've skimped on purchases. One purchase in particular was 25 computers and 25 monitors. The computers were generic whiteboxes from some vendor in Texas (I'm in SC) that I knew nothing about. Basically I had a middle man (my reseller) handle all correspondence. The 25 monitors were the cheapest brand out there. Well, needless to say, some of the computers started to fail due to shoddy workmanship, and all of the monitors started to fail. Yes... all 25! Fortunately everything had a 3-year warranty. However, we still had to pay for shipping. I spent the past year shipping out and replacing all 25 monitors (I still had to replace one yesterday). Because I was dealing with a middle man, it took almost TWO MONTHS to get a repaired computer back from the whitebox vendor! If you deal with the big PC vendors, you know that they'll ship parts out or send someone to fix it for free if it's under warranty. So I should have spent an extra $50 per machine for brand-name computers and monitors. In the end it would have saved me time and money. And since time = money, it would have saved me money and more money. The moral of the story is that spending a little more up front will probably save you money in the long run. Now I request the best of the best (or next to the best), and I don't care how long it takes to get signed, even in emergency situations. When the users scream, that usually gets his attention.

wolfshades

Back when our network was decentralized and we had small Banyan Vines servers in every site, we took delivery of six servers for our office and put them into production right away. It was only after we got them up and running that we realized we couldn't do backups - that the vendor who sold them to us had experienced a stock shortage with an important cable for the tape backup component and hadn't told us. We quickly spoke with them, and they promised to come with the parts and install them. We weren't too worried as the turnaround for this would be quick. There was only one server that we were kind of concerned about, as it housed the only copy of one particular business critical application. Murphy joined us that weekend. I happened to walk into the office and noticed there were no lights on that business-critical server. Sure enough it was down. I pushed the power button. Nothing. My team and I spent the next 18 hours troubleshooting this problem, trying to restore it to life again - before Monday morning. Vines was a finicky script-kiddy OS and so we weren't sure if the configuration was bad, or what. Finally, at around 4:00 a.m. we realized that one of the hard drives was toast, and that the server warning lights which should have alerted us to that weren't doing their job. We only found out what it was by swapping out various parts from other servers. Fortunately we had a RAID-5 configuration so when we realized which hard drive was bad, we took it out and the machine was able to recreate all of the data - but it took an incredibly long time for that to happen, and our users were without the application until shortly after noon that day.

tstemarie

Failing to keep it simple is probably the biggest mistake I see people committing: when things go wrong, they look for complex solutions when really all it takes is something minor to fix the issue.

yawningdogge

I tried to put two computers on the same network drop by running them through a home router with switched output. I forgot to shut off DHCP and crashed the DHCP service at the server. (Microsoft loses the fight by default.) The whole company went down (about 30 machines) except for one that had been configured with a static IP.
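Rogue DHCP servers like that home router are easy to spot once you go looking for them. Here is a rough sketch, not from the original comment, using the third-party scapy library: broadcast a DHCP DISCOVER and see how many distinct servers answer; more than one usually means somebody plugged in something they shouldn't have. It assumes root privileges, scapy installed, and an interface name you'd substitute for your own.

```python
from scapy.all import BOOTP, DHCP, IP, UDP, Ether, conf, get_if_hwaddr, srp

def find_dhcp_servers(iface: str, timeout: int = 5) -> set[str]:
    """Broadcast a DHCP DISCOVER and collect every server IP that answers."""
    conf.checkIPaddr = False  # offers come from the server's IP, not the one we sent to
    mac = get_if_hwaddr(iface)
    discover = (
        Ether(src=mac, dst="ff:ff:ff:ff:ff:ff")
        / IP(src="0.0.0.0", dst="255.255.255.255")
        / UDP(sport=68, dport=67)
        / BOOTP(chaddr=bytes.fromhex(mac.replace(":", "")), xid=0x1A2B3C4D)
        / DHCP(options=[("message-type", "discover"), "end"])
    )
    answered, _ = srp(discover, iface=iface, multi=True,
                      timeout=timeout, verbose=False)
    return {reply[IP].src for _, reply in answered}

if __name__ == "__main__":
    servers = find_dhcp_servers("eth0")  # interface name is an assumption
    print("DHCP servers seen:", servers or "none")
    if len(servers) > 1:
        print("More than one DHCP server is answering - check for a rogue device.")
```

On managed switches, DHCP snooping accomplishes the same thing preventively by dropping DHCP offers from untrusted ports.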

doughtymartin

I have a picture, somewhere, taken in the 1990s, of a PC from a UK school. The staff, to their credit, had reliably made backups on a daily/weekly basis to floppy discs. Their mistake was to store them all in a box on top of the PC. One day the PC caught fire. The picture shows the remains of the PC and the melted backups dripping down the frame. Make sure you store your backups somewhere safe!
