Enterprise Software

Use Web services to separate concerns and for code reuse

Justin James says developers who are still trying to separate concerns and perform code reuse strictly with the OOP paradigm need to reconsider things.

The way we develop applications has to change at the architectural level. For too long, software developers have tried to separate logic and presentation in a completely ineffective manner, and code reuse has been an elusive target at best. We have been focused on the wrong levels of separation and the wrong concepts of reuse. Let's explore how to rethink your architecture to extend the lifespan of your work and make it easier to maintain and extend.

The fundamental issue is that too many developers assume the ecosystem they develop in today will be the ecosystem they develop in tomorrow. As a result, developers get hung up on whether a piece of logic belongs in the code handling the UI or in some other class of the system. Unfortunately, that presumes the code you write will be accessible to future projects. Guess what? It probably won't be.

As Web applications and mobile applications take the world by storm, we find ourselves constrained by decisions made ages ago (which could mean "six months ago" at the current pace of change) and unable to make the best decisions for our needs. All of the special care you put into the object-oriented class structure of your .NET class library isn't worth a hill of beans when you realize that your Web front end is best written in Ruby, or that you now have to write an iPhone application in Objective-C. While that may be great news for the purveyors of .NET and Java tools, it is bad for you.

One major reason code reuse has been so rare is that, with the exception of data-handling code for common enterprise data (such as an employee list) or UI widgets, almost all logic was application specific. If you can't reuse that logic in totally different applications, and you can't write a new front end to it in another system or language, where would you reuse it?

The answer to this is Web services. Just as I was formerly down on Web applications but conditions changed to the point where I now see them as the preferred way of doing things, I used to be quite down on Web services, but things have changed enough that I'm not anymore. The big changes are bandwidth, tooling, standards, and CPU speeds. Previously, my objections to Web services were:

  • Bandwidth: XML has a lot of fluff in it that soaks up bandwidth. Now, there is a lot more bandwidth available, and people have learned not to dump huge amounts of data on the wire for the client to process.
  • Tooling: The tools used to be poor; SOAP services written in Java tended to have trouble with .NET clients and vice versa, and there were no good alternatives to SOAP. Today's tools and libraries create and consume Web services much better than they did in the past. WCF is especially awesome for this.
  • Standards: While I'm still not completely sold on REST (mainly because I think it needs something like WSDL to make it easy to work with from static, strongly typed languages), it has been a game changer in many ways. In particular, it has let JavaScript developers work well with Web services, and Ruby and Python developers have benefitted as well. In addition, lighter-weight formats like JSON have emerged, which cut down on bandwidth and CPU requirements compared to XML.
  • CPU speeds: Creating and parsing XML has always been a relatively intense task, but CPU speeds have picked up enough where it's not the slowdown it used to be.
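The bandwidth point above is easy to see with a quick comparison. This sketch serializes the same small record both ways using only Python's standard library; the record and its field names are invented for illustration:

```python
import json
import xml.etree.ElementTree as ET

# A small, made-up record serialized both ways.
record = {"id": 42, "name": "Ada Lovelace", "department": "Engineering"}

# JSON: each field name appears once.
as_json = json.dumps(record)

# XML: each field name appears twice (open and close tags), plus a root element.
root = ET.Element("employee")
for key, value in record.items():
    ET.SubElement(root, key).text = str(value)
as_xml = ET.tostring(root, encoding="unicode")

print(len(as_json), len(as_xml))  # the JSON payload is noticeably smaller
```

The gap widens with real payloads, since XML repeats every tag name for every record in a list; XML parsing also tends to cost more CPU than JSON parsing, which is the CPU-speeds point below.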

As a result of these improvements to Web services, my recommendation is to stop trying to separate concerns along the lines of "UI or not UI" and to stop viewing code reuse in object-oriented terms; instead, view every UI as a client of a Web service. In a nutshell, you can still do all of the OOP you want, just do it behind the scenes of the Web service or on the application side. There should be a clean break where the "application" ends and the "logic" starts, and that break is delineated by a Web service. By moving to this architecture, you will be able to reuse your code all you want as the need to support new operating systems, form factors, and so on emerges. You get the flexibility to write a desktop application today, thin mobile applications tomorrow, and Web applications down the road when you decide to move to a tablet system.
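As a minimal sketch of that clean break, here is a hypothetical payroll rule exposed through a thin HTTP layer, using only Python's standard library (the function name, tax rate, and endpoint are all invented for illustration):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# --- The "logic" side: plain code with no UI assumptions, fully reusable ---
def gross_to_net(gross, tax_rate=0.25):
    """Hypothetical business rule; it knows nothing about HTTP or any UI."""
    return round(gross * (1 - tax_rate), 2)

# --- The clean break: a thin Web service any client can call ---
class PayrollHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        body = json.dumps({"net": gross_to_net(request["gross"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("localhost", 8080), PayrollHandler).serve_forever()
```

A desktop application, a mobile application, and a Web front end can all POST to the same endpoint; when a new platform appears, only a new client gets written, and the logic behind the break never changes.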

The architecture of the future is going to look a lot like the old mainframe and green-screen combinations of the past, but with a bit more of the logic in the client. Your clients are going to be processing user input, pre-validating data (why round trip to the server if it isn't necessary?), and creating a device-specific UI that presents the information appropriately. Meanwhile, the server should handle the brunt of the processing, logic, and storage.
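That division of labor on the client can be sketched in a few lines; the order fields and rules below are hypothetical:

```python
# Hypothetical client-side pre-validation: reject obviously bad input
# before paying for a round trip to the service.
def prevalidate_order(order):
    errors = []
    if not order.get("item"):
        errors.append("item is required")
    if order.get("quantity", 0) <= 0:
        errors.append("quantity must be positive")
    return errors

order = {"item": "", "quantity": -1}
problems = prevalidate_order(order)
if problems:
    # Show the errors in the device-specific UI; no network call was made.
    print(problems)
else:
    pass  # only now serialize the order and POST it to the service
```

The service still re-checks everything on its side, since it cannot trust any client, but the common case of a typo never touches the network.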

Is this appropriate for all applications? No. With something like a photo editing application, you may not want to go back to the server with every little change, since passing the graphic back and forth will take forever. There are still some applications where the client needs to do more of the work, but those are getting rarer. A few years ago, multimedia consumption was considered a sacred cow in terms of local storage requirements, but now, lots of people use streaming video and audio much more than local storage, and many services and systems like Apple's are moving to a streaming model as well (with iCloud).

If you are still trying to separate concerns and perform code reuse strictly with the OOP paradigm, it is time to reconsider things, as you are probably painting yourself into a corner in a world that is changing too quickly to allow it.

J.Ja

About

Justin James is the Lead Architect for Conigent.

Comments
TexasJetter

A couple of months ago I attended a user's group meeting where Marcus Egger w/EPS Software espoused many of the same views you do here. What I did not realize was that with WCF you could pass strongly typed custom data. By removing the XML serialization it becomes truly useful. I'll admit at this point I am new to WCF, so I'm still learning. I have come across a couple of issues which I know will require further understanding (on my part) before implementing.

One is security. Obviously if you expose your data via a web service of any kind it needs to be secured in some fashion (unless it is public data and exposed read-only). I'm not sure I understand at this point how the application can effectively pass authentication to a WCF service. Does it have to pass it with each call, or is there a "single sign on" that can be negotiated? Along with authentication there is the issue of authorization. Should the web service handle authorizing the user for specific endpoints, or is that the client's job?

The other quandary is validation. As you mentioned in the article, client-side validation is a smart thing to do; better to catch it before the round trip to the server is made. However, I would also think the web service, as the gatekeeper to the data, would be required to validate the data as well. This would ensure valid data before committing, even if the client did not. Does this now mean I have to duplicate my validation? I suppose this is not entirely a new issue; we have all been told for years to use client-side validation but not to rely on it (the user could have scripts disabled), so you end up validating on the service side as well. Am I missing something here?


Justin James

With WCF, authentication happens on every call. It's HTTP, after all, so there's no authenticated session to maintain. Authentication is typically done at the IIS level (I suppose you could pass in a username/password as a parameter if you really wanted, and authenticate like that), so if you want granular permissions you use impersonation and access the DB as that user. Validation... yes, ALWAYS validate server side, even when you validate client side as well. We've been used to having to do this anyway, so it's really nothing new. There are some better systems out there which automatically take the server-side rules and generate client scripts from them, though. J.Ja
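The "define the rules once" idea discussed here can be sketched like this: declare the rules as data, then have both the client and the service run the same table, so nothing is written twice. The field names and rules are invented for illustration:

```python
# A single rules table shared by client and server. Each rule maps a
# field name to a predicate that returns True for acceptable values.
RULES = {
    "email": lambda v: isinstance(v, str) and "@" in v,
    "age": lambda v: isinstance(v, int) and 0 < v < 150,
}

def validate(record, rules=RULES):
    """Run on the client to save a round trip, and again on the server
    as the gatekeeper; neither side trusts the other."""
    return [field for field, ok in rules.items()
            if not ok(record.get(field))]

print(validate({"email": "user@example.com", "age": 30}))  # []
print(validate({"email": "nope", "age": 30}))              # ['email']
```

Generating client-side scripts from server-side rules, as mentioned above, is the same idea taken one step further: the rules table stays the single source of truth and the client code is derived from it.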
