The way we develop applications has to change at the architectural level. For too long, software developers have tried to separate logic and presentation in ways that simply do not work, and code reuse has been an elusive target at best. We have been focused on the wrong levels of separation and the wrong concepts of reuse. Let's explore how rethinking your architecture can extend the lifespan of your work and make it easier to maintain and extend.

The fundamental issue is that too many developers seem to think the ecosystem they develop in today will be the ecosystem they develop in tomorrow. As a result, they get hung up on whether a piece of logic belongs in the code handling the UI or in some other class of the system. Unfortunately, that presumes the code you write will be accessible to future projects. Guess what? It probably won't be.

As Web applications and mobile applications take the world by storm, we find ourselves constrained by decisions made ages ago (which could mean "six months ago" at the current pace of change) and unable to make the best decisions for our needs. All of the special care you put into the object-oriented class structure of your .NET class library isn't worth a hill of beans when you realize that your Web front end is best written in Ruby, or that you now have to write an iPhone application in Objective-C. While that may be great news for the purveyors of .NET and Java tools, it is bad for you.

One major reason code reuse has been so rare is that, with the exception of data-handling code for common enterprise data (such as the employee list) or UI widgets, almost all logic was application specific. If you can't reuse it in a totally different application, and you can't write a new front end to that logic in another system or language, then where would you ever reuse it?

The answer to this is Web services. Just as I was formerly down on Web applications until conditions changed to the point where I now see them as the preferred way of doing things, I used to be quite down on Web services, but things have changed enough that I no longer am. The big changes are bandwidth, tooling, standards, and CPU speeds. Previously, my objections to Web services were:

  • Bandwidth: XML has a lot of fluff in it that soaks up bandwidth. Now there is a lot more bandwidth available, and people have learned not to dump huge amounts of data on the wire for the client to process.
  • Tooling: The tools used to be poor; SOAP services written in Java tended to have trouble in .NET and vice versa, and there were no good alternatives to SOAP. Tools and libraries now create and consume Web services much better than they did in the past. WCF is especially awesome for this.
  • Standards: While I'm still not too keen on REST (mainly because I think it needs something like WSDL to make it easy to deal with in static, strongly typed languages), it has been a game changer in many ways. In particular, it lets JavaScript developers work well with Web services, and Ruby and Python developers have benefitted as well. In addition, lighter-weight formats like JSON have emerged, which cut down on bandwidth and CPU requirements compared to XML (a rough comparison appears after this list).
  • CPU speeds: Creating and parsing XML has always been a relatively CPU-intensive task, but CPU speeds have picked up enough that it isn't the slowdown it used to be.
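
To make the bandwidth point concrete, here is a minimal sketch (in Python, with a made-up record) that serializes the same data as XML and as JSON and compares the byte counts. The exact savings depend on your data, but XML's element overhead is easy to see.

    import json
    import xml.etree.ElementTree as ET

    # A made-up record of the sort a service might return.
    employee = {"id": 42, "firstName": "Jane", "lastName": "Doe", "department": "Sales"}

    # Serialize it as XML...
    root = ET.Element("employee")
    for key, value in employee.items():
        ET.SubElement(root, key).text = str(value)
    xml_payload = ET.tostring(root, encoding="utf-8")

    # ...and as JSON.
    json_payload = json.dumps(employee).encode("utf-8")

    print(len(xml_payload), "bytes as XML")
    print(len(json_payload), "bytes as JSON")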

As a result of these improvements, my recommendation is to stop separating concerns along the lines of "UI or not UI" and to stop viewing code reuse in object-oriented terms; instead, view every UI as a client for a Web service. In a nutshell, you can still do all of the OOP you want, just do it behind the scenes of the Web service or on the application side. There should be a clean break where the "application" ends and the "logic" begins, and that break is delineated by a Web service. By moving to this architecture, you will be able to reuse your code all you want as the need to support new operating systems, form factors, and so on emerges. You get the flexibility to write a desktop application today, thin mobile applications tomorrow, and Web applications down the road when you decide to move to a tablet system.
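
As a rough sketch of what that clean break can look like, here is a toy service built with nothing but Python's standard library; the /quote endpoint, the port, and the discount rule are all invented for illustration. The point is that the business logic lives in an ordinary function behind the service, and any UI talks to it over HTTP and JSON.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def quote_price(quantity, unit_price):
        """The business logic: it lives behind the service, not in any UI."""
        discount = 0.1 if quantity >= 100 else 0.0  # hypothetical pricing rule
        return round(quantity * unit_price * (1 - discount), 2)

    class QuoteHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/quote":
                self.send_error(404)
                return
            length = int(self.headers["Content-Length"])
            request = json.loads(self.rfile.read(length))
            body = json.dumps({"total": quote_price(request["quantity"],
                                                    request["unitPrice"])}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), QuoteHandler).serve_forever()

A desktop client, a phone application, or a JavaScript front end can all reuse that same endpoint; only the presentation code changes from platform to platform.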

The architecture of the future is going to look a lot like the old mainframe and green-screen combinations of the past, but with a bit more of the logic in the client. Your clients will process user input, pre-validate data (why round trip to the server if it isn't necessary?), and create a device-specific UI that presents the information appropriately. Meanwhile, the server should handle the brunt of the processing, logic, and storage.
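
To illustrate that split, here is the thin-client side of the same hypothetical /quote service: it pre-validates the input locally, so an obviously bad request never costs a round trip, and leaves the real work to the server.

    import json
    from urllib.request import Request, urlopen

    def pre_validate(quantity, unit_price):
        """Cheap, device-local checks; the server stays the authority."""
        errors = []
        if quantity <= 0:
            errors.append("Quantity must be positive.")
        if unit_price < 0:
            errors.append("Unit price cannot be negative.")
        return errors

    def get_quote(quantity, unit_price):
        errors = pre_validate(quantity, unit_price)
        if errors:
            raise ValueError("; ".join(errors))  # no round trip for bad input
        payload = json.dumps({"quantity": quantity, "unitPrice": unit_price}).encode("utf-8")
        request = Request("http://localhost:8080/quote", data=payload,
                          headers={"Content-Type": "application/json"})
        with urlopen(request) as response:
            return json.loads(response.read())["total"]

    if __name__ == "__main__":
        print(get_quote(120, 9.99))  # the UI only formats and displays the result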

Is this appropriate for all applications? No. With something like a photo-editing application, you may not want to go back to the server with every little change, since passing the graphic back and forth will take forever. There are still some applications where the client needs to do more of the work, but those are getting rarer. A few years ago, multimedia consumption was considered a sacred cow in terms of local storage requirements, but now plenty of people stream video and audio far more than they store it locally, and services like Apple's iCloud are moving toward a streaming model as well.

If you are still trying to separate concerns and achieve code reuse strictly within the OOP paradigm, it is time to reconsider, as you are probably painting yourself into a corner in a world that is changing too quickly to allow it.

J.Ja