Builder AU recently caught up with Keith Short, one of the co-authors of the “software factories” methodology and manager of enterprise frameworks and tools architecture at Microsoft.

Builder AU: What sort of skills will traditional Visual Studio developers have to acquire to use the new “Whitehorse” modelling tools in the upcoming release of VS.NET 2005?

Keith Short: Most Visual Studio developers will be very comfortable with the Team Architect Edition tools. We’ve taken great pains to emphasise productivity. In much the same way that Visual Basic developers could drag and drop components onto the design surface and modify their properties, Visual Studio Team Architect Edition users will be able to drag and drop services, modify settings and properties, and visualise their entire service oriented architecture.

History tells us that CASE tools were never adopted en masse by developers. Why do you think they were not successful, and why is Microsoft going down a similar path in Visual Studio?

Customers have told us that many of the CASE tools they bought in the Eighties and early Nineties did not add sufficient value to the development process. Many of the benefits on which they were sold failed to materialise, and even good products were swamped by the over-hyping of the technology's promise. If the diagrams they supported were used to generate code, they typically fell out of sync once developers added other code around the generated code, and even products that did a good job of “round tripping” the generated code eventually overwhelmed developers with the complexity of that problem. If the diagrams did not reflect the code and other implementation artifacts, they quickly became out of date and irrelevant. Often these problems arose because the CASE tool tried to operate at too high a level of abstraction relative to the implementation platform beneath. This typically meant generating large amounts of code, which made the problems of mixing developer-added code with generated code even harder to solve. Many 4GL products of the era were more successful, but at the cost of confining developers to a very narrow domain.

Microsoft wants to learn from these experiences and avoid these pitfalls by following a pattern of model-driven development based on the following ideas:

  • A model should be a first-class artifact in a project – not just a piece of documentation waiting to become outdated. A model has a precise syntax, often graphical, and semantics defined by how concepts in the model map to implementation artifacts such as code, project structures and configuration files. In this way, a model is just like a source code file, and the mechanism that synchronises it with other artifacts, including generated code files, is much like a compiler.
  • A model should be a focused abstraction that adds value to a developer in a well-defined area of development. For example, given all the other tasks a developer must focus on to build a service-oriented application, there will be times when he or she can focus solely on the definition of Web service contracts and the messages they interchange – the connectivity of the overall application. The Visual Studio Team Architect Application Designer supports that aspect of development.
  • Models can be implemented by other well-defined models, but often it is necessary to manage how developer-added code and other artifacts are mingled with code and artifacts generated from the models. As mentioned before, this was one of the primary failures of early CASE products.

We call modeling languages defined in these ways Domain Specific Languages, or DSLs. Good ways to find candidate DSLs are to watch the patterns of code a developer uses and encapsulate these into a modeling language, or to surface concepts in a software framework as DSLs, which can then use minimal code generation to build code that extends the framework. Adopting this approach means we can control the amount and complexity of generated code, while offering real value to developers, with none of the hassles brought about by old-style CASE products.
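The ideas above can be sketched in code. The following is an illustrative toy only, not any Microsoft tooling: a small declarative model of a Web service contract is treated as a first-class source artifact, and a generator maps it to an implementation stub, with developer-added logic kept in a subclass so regeneration never clobbers hand-written code. All names here (`ServiceModel`, `Operation`, `OrderService`) are invented for this example.

```python
# A toy internal "DSL" in the spirit of the approach described above.
# The model, not the generated code, is the artifact developers edit.
from dataclasses import dataclass


@dataclass
class Operation:
    name: str
    request: str   # message type the operation accepts
    response: str  # message type it returns


@dataclass
class ServiceModel:
    service: str
    operations: list

    def generate_stub(self) -> str:
        """Map the model to an implementation artifact (here, Python code).

        Developer logic goes in a subclass of the generated base class,
        so regenerating from the model never overwrites hand-written
        code -- the separation early CASE tools failed to maintain.
        """
        lines = [f"class {self.service}Base:"]
        for op in self.operations:
            lines.append(
                f"    def {op.name}(self, request: '{op.request}') -> '{op.response}':"
            )
            lines.append("        raise NotImplementedError  # override in a subclass")
        return "\n".join(lines)


# A sample model of a service contract and its generated stub.
model = ServiceModel("OrderService", [
    Operation("PlaceOrder", "OrderRequest", "OrderConfirmation"),
    Operation("CancelOrder", "CancelRequest", "CancelAck"),
])
print(model.generate_stub())
```

Because the generator's output is a pure function of the model, the stub can be regenerated like a compiler rerun, while developer code lives safely outside it.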

Do you think Microsoft’s approach to modeling would be different if it had acquired Rational before IBM did last year?

We can’t speculate on things like this. I can say that our approach is to make modeling accessible, and to ensure that it isn’t just a by-product of the development process, as it is with most existing modeling tools, but a key contributor to the entire development process.

Some critics claim that Visual Studio, even the “Express” editions, is becoming “bloated” with new features in an effort to cover a wide range of functionality formerly provided by third-party tools. Are you worried that developers will start looking for alternative, less complex tools to get the job done?

We are actively looking to remove features from Express, rather than add them. Express will be lightweight and very much geared towards the novice audience. As for the rest of Visual Studio, application development is a complex task these days. Visual Studio contains the tools necessary to build modern SOAs. At the same time, we recognize that we need to show people not only how to use our tools, but why and when. This is the primary reason why we’ve included process guidance in the product.

I understand you contributed to the UML 1.0 spec. What do you think of UML 2.0?

I think the UML effort has been a great step forward for the industry, providing standard notations for object-oriented analysis and design and influencing the practice of informally sketching out software solutions. But the specification has a number of areas that are open to interpretation, and domains outside OOA/D must be expressed using the weak extensibility mechanism defined in the 1.x specifications. Together these lead to problems when UML is used to generate actual software artifacts, and to issues of interpretation when models are interchanged between different vendors’ tools. With others in the industry, we had hoped that UML 2.0 would address these issues.

Sadly this has not been the case: the UML 2.0 specification has grown in complexity yet still does not address critical areas of a modern development approach – database design, testing, deployment, and user interface – in a natural way.

What is the relationship between UML and Whitehorse? If Microsoft is going to move away from the open UML standard, why?

For us, this lack of coverage of the whole development process, coupled with the lack of precision in the specification, meant that we could not see how to implement the desirable qualities of model-driven development I mentioned above with the UML metamodel as a basis. Our DSL approach is instead based on a simpler fundamental model that supports precise definitions of a family of related and interrelated modeling languages, each with well-defined mappings to other DSLs or implementation artifacts. We have recently announced the DSL Definition tools, which will enable customers and partners to build DSLs using our underlying technology within Visual Studio 2005. Partners can implement various UML diagrams as DSLs if they choose, as evidenced by Borland’s announcement to do just that.

Is Whitehorse a step towards OMG’s MDA?

We regard the Whitehorse tools, and the infrastructure used to build them, as a first step towards useful model-driven development. MDA is a brand licensed to the OMG that is based on using UML for model-driven development, and it places a strong emphasis on platform-independent models and generative techniques. Tool vendors are still trying to determine what MDA actually means, given how little is described in the way of conformance. We think MDA addresses only part of the real issues in enabling effective model-driven development.