Earlier this summer, I wrote about a business process improvement project that we are beginning to undertake at Westminster College. Since then, I’ve held meetings with departments across campus and am beginning to see a number of ways in which we, as a campus, can be more efficient. I haven’t finished meeting with departments yet, and I have yet to compile a formal report for the rest of the executive team. However, over the course of the summer, I have learned a great deal that will help us along the way. Namely, we shouldn’t limit ourselves to simply making existing processes electronic. Instead, we must think much more holistically and include a lot more buzzwords!
Kidding aside, some other things that took place this summer made me realize that, in order to be successful, our business process improvement project must expand to include a service-oriented architecture element. Specifically, my database person left the organization. Bear in mind that my IT organization totals eight people, myself included. Over the years, we’ve accumulated a lot of responsibility for managing processes across campus. From loading incoming students into our dining and printing systems to running the batch programs that handle student billing, our application support/database person has a hand in many critical functions. Over the years, though, all we’ve really done is shift the data management burden from individual offices into the centralized IT function. In the past, these processes were performed manually in each office; with the batch programs in place, that burden is much smaller, but it now sits squarely with IT. Like other departments on campus, we’re struggling to keep up with our regular workload, and these manual processes don’t make it any easier for us to truly innovate and help people solve their problems.
Worse, even with this pseudo-automation in place, we eventually revert to manual processing. For example, once we’ve batch-loaded our dining system, any changes made after the initial load are handled manually, and two systems are involved. Our housing office makes a change in our primary student system and then notifies dining via email; dining then makes the matching change manually in its own system. In a number of ways, this is less than ideal. Besides being extra work, every time a person has to get involved in a process, the possibility for error increases.
To help solve the IT time-crunch problem as well as the problem above, we’re going to look at ways to convert these batch load processes into real-time processes. For example, instead of doing a batch load each summer, as soon as housing activates a student’s meal plan, that information should be created automatically in the dining system (a food service point-of-sale system). When a student changes a meal plan, that change should also flow automatically to the dining system. Middleware needs to be developed to facilitate this inter-database communication, and it must precisely follow the human logic that would be applied to each type of change.
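To make the idea concrete, here is a minimal sketch of what that middleware logic might look like. Everything here is hypothetical — the event shape, the field names, and the in-memory dictionary standing in for the dining database — but it shows the core pattern: one handler that mirrors each type of human-made change, event by event, instead of a once-a-summer batch load.

```python
def sync_meal_plan(event, dining_db):
    """Mirror one meal-plan change from the student system into the
    dining (point-of-sale) database. The event fields and actions are
    hypothetical; a real version would talk to both actual databases."""
    student_id = event["student_id"]
    action = event["action"]

    if action == "activate":
        # New or reactivated plan: create the dining record.
        dining_db[student_id] = {"plan": event["plan"]}
    elif action == "change":
        # Plan change: update the existing record in place.
        dining_db[student_id]["plan"] = event["plan"]
    elif action == "cancel":
        # Cancellation: remove the student from the dining system.
        dining_db.pop(student_id, None)
    else:
        raise ValueError(f"unknown action: {action}")
    return dining_db


# Simulated dining database and a short stream of housing-office events.
dining = {}
events = [
    {"student_id": "S100", "action": "activate", "plan": "19-meal"},
    {"student_id": "S100", "action": "change", "plan": "14-meal"},
    {"student_id": "S200", "action": "activate", "plan": "10-meal"},
    {"student_id": "S200", "action": "cancel"},
]
for event in events:
    sync_meal_plan(event, dining)

print(dining)  # {'S100': {'plan': '14-meal'}}
```

The point of structuring it this way is that each branch encodes exactly the step a person in the dining office performs by hand today, which is what “precisely follow the human logic” means in practice.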
We simply need to develop processes and procedures that keep repetitive manual labor to a minimum. For example, when we receive a donation, someone prints a letter, delivers it to the President to be signed, and then delivers it to the mail room. Why couldn’t an overnight process simply print those letters on the printer in the President’s office and take some of the labor out of the process?
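An overnight job like that could be quite small. The sketch below is purely illustrative — the donation records, the letter template, and the `send_to_printer()` stub are all made up; a real job would pull records from the advancement database and spool output to the networked printer in the President’s office.

```python
from datetime import date

# Hypothetical thank-you letter template.
TEMPLATE = """Dear {name},

Thank you for your generous gift of ${amount:,.2f} received on {gift_date}.

Sincerely,
The President"""


def render_letters(donations):
    """Produce one formatted thank-you letter per donation record."""
    return [
        TEMPLATE.format(name=d["name"], amount=d["amount"], gift_date=d["date"])
        for d in donations
    ]


def send_to_printer(letters, printer="presidents-office"):
    # Stub: a real version would hand these to the print spooler
    # (e.g., lp -d presidents-office on a CUPS-managed printer).
    for letter in letters:
        print(f"[{printer}] queued letter ({len(letter)} chars)")


# Simulated overnight run over the day's donations.
donations = [
    {"name": "Jane Alum", "amount": 500.0, "date": date(2008, 8, 1)},
    {"name": "John Donor", "amount": 1250.0, "date": date(2008, 8, 2)},
]
letters = render_letters(donations)
send_to_printer(letters)
```

Scheduled nightly (cron on our side, or whatever scheduler fits), this would leave the President’s staff with just the signing and mailing steps.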
I suppose this is a sort of service-oriented architecture project that will happen in conjunction with our business process review. Going hand in hand, these efforts are designed to drive inefficiency out of our systems and processes, allowing individual departments to focus more on providing services to our students and less on data management.
Next week, I’ll share with you some preliminary thoughts on how our business intelligence/dashboard efforts fit into this mix as well.