Two decades ago some people still used dial-up modems, and now the world is at our fingertips. Read on to get a sense of how much has changed in the IT office since 2000.
I previously wrote about the top data storage trends of the last 20 years. Now let's take a look at the top trends in enterprise IT overall over the same period.
As in the storage version of this topic, I'm reluctant to use the word "inventions" because the underlying technologies mostly evolved from computer science labs in the 1960s, 1970s, and 1980s. I present the list alphabetically—feel free to debate their order of importance among your colleagues, or add your comments below.
Big data

This is a concept, not a thing. In my definition, big data begins at the point when your organization's data grows faster than your IT department's ability to manage it. Computer department staffers used to leave work on time, except maybe when they were extinguishing a fire or writing code. Now, data management is a specialty field.
Browsers

We wrote client programs to interact with green screens, then we coded for DOS and Windows PCs, and now the user experience is all about responsive browsers (and mobile apps, too). That means more planning for varying user interfaces, along with deciding how to handle data when connectivity is interrupted. On the plus side, it's easier to learn than in the old days.
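Handling data when the connection drops usually comes down to queueing writes locally and replaying them, in order, once the link returns. Here is a minimal Python sketch of that idea — the `OfflineQueue` name and the `send` callable are illustrative, not from any particular framework:

```python
from collections import deque

class OfflineQueue:
    """Buffer outgoing updates while the connection is down."""

    def __init__(self, send):
        self.send = send          # callable that raises ConnectionError while offline
        self.pending = deque()

    def submit(self, update):
        self.pending.append(update)
        self.flush()

    def flush(self):
        # Replay buffered updates in order; stop at the first failure
        # so nothing is lost or reordered.
        while self.pending:
            try:
                self.send(self.pending[0])
            except ConnectionError:
                return            # still offline; keep the update queued
            self.pending.popleft()
```

A real client would also persist the queue to local storage and deduplicate on the server side, since an update can be delivered twice if the connection fails mid-flush.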
Cloud computing

There's a Facebook meme where a little boy asks, "Daddy, what are clouds made of?" and the father responds, "Linux servers, mostly." The moral: It's (still) just someone else's computer. There are public clouds, private clouds, hybrid clouds, and special-purpose clouds. You trust them with your applications and with your data. In theory, it's cheaper, easier, and more secure than doing that work internally. In reality? The jury is out. One major concern about cloud computing is vendor lock-in.
Commodity hardware

The major IT vendors wanted to sell you overpriced hardware in the 2000s. Then cheap hardware became good enough, so the top vendors changed their focus to software in the 2010s. From backup appliances to firewalls to PBXs, off-the-shelf became normal. You still might want something special for a transactional database or other high-end applications.
Consumerization

It was hard to predict in 2000 that by 2020 people would want everything to work like their phones. Users want to bring their own devices into work, and technical workers (developers, engineers, scientists, etc.) aren't happy when IT tells them "no" to spinning up their own server or establishing undocumented networks. We live in a user-centric world now.
SEE: BYOD (bring-your-own-device) policy (TechRepublic Premium)
Device management

All of those phones, tablets, laptops, hotspots, projectors, and charging stations have to be documented, secured, and maintained. The challenge: Apple and Samsung are interested in satisfying our gadget cravings, not in sticking to old-school corporate buying cycles. Device management is a big industry, and its practitioners keep coming up with new ways to track everything, back up data, roll out over-the-air updates, and keep gadgets safe in the wild.
DevOps

What if programmers and IT worked together instead of separately? That is DevOps. On paper, it leads to applications that function better, make users happier, and interact more smoothly with the rest of a company's computing systems, all while being built and updated faster. In reality, whether these separate and sometimes competitive breeds of Speciesus Nerdus can peacefully coexist remains to be seen.
Programming language proliferation
C, COBOL, and maybe (for the kids) some Java were good enough. Throw in SQL, and you had a functioning info-tech department. Fast-forward a couple of decades: C is still a stalwart, but Java isn't for kids anymore, Python is a thing, and now even Google has its own programming language. Also, what used to be files and libraries is now microservices and APIs tied into cloud services.
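That shift from linked libraries to services is easy to see in code. Here is a hedged Python sketch contrasting the two eras: the same lookup, first as a directly linked function, then wrapped behind the kind of small JSON contract a microservice exposes (the `lookup_price` function and its catalog are invented for illustration):

```python
import json

# Library era: callers linked this function directly and shared its process.
def lookup_price(sku: str) -> float:
    catalog = {"A100": 19.99, "B200": 4.50}  # stand-in for a real data store
    return catalog[sku]

# Microservice era: the same logic sits behind a JSON-over-HTTP contract.
# A framework (Flask, FastAPI, etc.) would normally do the routing and HTTP
# plumbing; this plain handler just shows the shape: path in, JSON text out.
def handle_request(path: str) -> str:
    _, _, sku = path.rpartition("/")
    try:
        return json.dumps({"sku": sku, "price": lookup_price(sku)})
    except KeyError:
        return json.dumps({"error": f"unknown sku {sku!r}"})
```

The practical difference is that callers now depend on a wire contract rather than on linking your code, which is what lets each service be deployed and scaled on its own.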
Security

Oh boy, this is a big one. Security used to be an afterthought. Maybe you'd put some copy protection onto your software and passwords onto your networks, but today one hardly needs to be an elite hacker to get through that stuff. Chief Security Officer is now a very serious job. As the CIO at one of the world's largest law firms told me a few years ago, it's no longer about preventing your company from being hacked; it's about keeping systems safe amid a 24/7/365 deluge of constant attacks.
Virtualization

The concept goes back many decades, and in the 2010s it became huge. Most non-critical servers these days run several operating systems and applications at once, while each application blissfully thinks it's the only one the server loves. Pros: Save a ton of money, save space, save energy, reduce noise, reduce hardware management hassles. Cons: Good luck to you if that server crashes.
A final note: This is my last article for TechRepublic, as I'm starting a new career chapter. Thank you for reading my work these last four years. I can be reached at firstname.lastname@example.org; however, please do NOT send press releases or story pitches.
- Launching a career in cybersecurity: An insider's guide (free PDF) (TechRepublic)
- Quick glossary: Cloud platform services (TechRepublic Premium)
- How to write a good security policy for BYOD or company-owned mobile devices (ZDNet)
- Best cloud services for small businesses (CNET)
- DevOps: More must-read coverage (TechRepublic on Flipboard)