Programming news: WP7 'Mango' SDK, JetBrains dotPeek, CODESTRONG 2011

Read about Silverlight Integration Pack for Enterprise Library, JavaFX 2.0 Beta, m-Power, TeamCity 6.5 Professional, free Microsoft Press eBooks, and more.

Language/library updates

Silverlight Integration Pack for Enterprise Library

I know a lot of folks like the Microsoft Enterprise Library package, and now Microsoft has created the Silverlight Integration Pack so that you can use it within your Silverlight applications.

BlackBerry 7 Java SDK beta

RIM is letting folks download the beta of its BlackBerry 7 Java SDK.

JavaFX 2.0 Beta

Oracle has released the first beta of JavaFX 2.0. JavaFX is similar to Adobe Flex or Silverlight.

Windows Phone 7 "Mango" SDK

The Windows Phone 7 "Mango" update is coming in the near future, and Microsoft has made the beta of the SDK available. The forthcoming update looks great, and if it has the same fit and finish as the original Windows Phone 7 release, then Windows Phone 7 "Mango" will be as feature-rich as iOS or Android and have a few extra tricks up its sleeve, like SQL CE.

Tools and products

mrc's m-Power seamlessly crosses device types

mrc released an update to its m-Power development tool that allows applications to have a different presentation layer for phones, tablets, and full-sized computers. This way, applications look great on all three platforms with no compromises needed.

JetBrains offers free .NET decompiler

JetBrains is building a free .NET decompiler (hot on Telerik's heels) called dotPeek.

SOASTA cloud testing with two new editions

SOASTA created two new versions of its cloud testing product. The new Enterprise edition is designed for large teams, and the Standard edition is aimed at teams working on internal applications.

NephoScale now allows secure VNC access to cloud servers

NephoScale's cloud server offerings now allow SSH-secured VNC access to its Windows and Linux servers, giving developers a secure way of accessing their environments.

Standing Cloud and Nexaweb to provide cloud-based client/server apps

Nexaweb, which makes a platform to turn legacy client/server apps into Java Web apps, has partnered with Standing Cloud's PaaS offering to put these apps in the cloud.

New Relic adds user monitoring to SaaS performance management tool

New Relic has added user monitoring to its SaaS performance management tool, allowing developers and QA teams to get insight into how real users experience their applications.

Quova now returns JSON

Quova's location awareness APIs now provide results in JSON as well as XML.

Jaspersoft 4.1

Jaspersoft's flagship BI tool has been updated to version 4.1, with a new UI, support for more database backends, and native 64-bit architecture.

TeamCity 6.5 Professional now free

JetBrains is giving away TeamCity 6.5 Professional for free. The catch is that Professional is limited to 20 build configurations.

Editorial and commentary

Lodsys sues iOS devs over in-app payments

Lodsys is asserting that its agreement allowing Apple to use in-app payments does not extend to third-party developers, and it is now going bananas suing iOS developers. This is more proof of the need for patent reform in this industry.

Are "extensible, open, and standards-compliant" a red herring?

Embarcadero's Mike Rozlog, whom I respect an awful lot from our phone conversations, has a bombshell of an article on Dr. Dobb's asking whether trying to be "extensible, open, and standards-compliant" is a waste of time for most apps. I agree with him that it is, by the way.

The relationship between the 800 lb. gorillas and the cloud

SOASTA CEO Tom Lounibos wrote a thought-provoking post looking at how companies like Microsoft, Oracle, etc. with a deep investment in sales channels and license/maintenance revenue streams can react to the demand for the cloud.

Tips and tricks

Logon and account creation code for ASP.NET MVC

Joe Stagner wrote a tutorial on how to make a basic logon/registration system for ASP.NET MVC applications.

Twitter + ASP.NET MVC and Razor

Here's a quick look at how to integrate Twitter into an ASP.NET MVC application using the new Razor view engine.

Working with gestures on Windows Phone 7

Microsoft has a series of videos showing how to develop with gesture support in Windows Phone 7 apps.

How to bin deploy ASP.NET MVC 3 apps

Phil Haack wrote a good tutorial on how to do a bin deployment of ASP.NET MVC 3 applications.

Building cross-platform Web apps

Microsoft's "Project Silk" guidance from the patterns and practices team shows how to make cross-platform Web applications.

Free ASP.NET and ASP.NET MVC video training

Pluralsight has created some free video training for ASP.NET and ASP.NET MVC.

Free Microsoft Press eBooks

Microsoft Press has a number of free eBooks for download, some of which are useful to developers.


Appcelerator's CODESTRONG 2011

Appcelerator is holding its first developer conference called CODESTRONG 2011 in San Francisco on September 18 - 20. They have early bird pricing for those who sign up quickly.

PHP North West conference

The PHP North West conference will be held from October 7 - 9 in Manchester, UK.



Justin James is the Lead Architect for Conigent.

Mark Miller

It sounds to me like Rozlog is working in a large corporate environment, and I was surprised that companies were setting these ambitious goals. As I thought about what he said, I got the impression they were talking about a programming language, an "engine" of some sort, or a runtime, not an application, because I can only imagine that the goals they were speaking of would be accomplished that way, with the exception of standards compliance.

In years past, I thought of "open" in a different way. In all of the places I worked, in the IT services industry, we provided complete source code to the customer. Very rarely did they avail themselves of it, but sometimes they did. To those few, I think it was very helpful, because they could take control of a project if they felt they could get the work they wanted done at a lower cost. I've talked to a few people who've worked at companies that used a critical piece of software, written by an outside developer who refused to turn over source code, and the relationship never felt good to me. It was like the customer was being held for ransom.

In terms of standards compliance, I can see that up to a point, and I agree with Rozlog about it addressing a need. As for the application itself being standards-compliant, so that it could be used in the future by something else, that's a tough one. The only time I've seen a focus on standards apply in a constructive way is when it was anticipated that the project would need to be ported to another platform (a variant of the developer OS we were using), when it facilitated known network communication requirements, etc. He is right that standards change. Fifteen years ago the standard character format we all used was ASCII. Now it seems Unicode is universal.

This can get confusing as well. I remember shortly before I left one of my IT jobs, I worked with a consultant who was working for my employer. He was asked to come up with a design for a more adaptable server system, using Microsoft technologies. We had a Unix transaction server that we had written and used on multiple projects, but it only worked with a thick client that we had also written. The consultant came up with a plan so that the server would be able to transmit data in our old flat-file format, in XML (I think, or something compatible with COM), and in HTML. On its face, this looked reasonable, but then I realized that the HTML "channel" was something different from the other two "channels," because the HTML channel was intimately linked with client functionality. The other two were not. He had gotten confused, because each of the protocols was something that a server could transmit to a client. What he had neglected to realize was that in order to carry out the HTML channel correctly, he would have to bring the functionality of the thick client (sans GUI) to the server, and transmit the user interface that the server generated, not a record set.

As I read the article, it reminded me of a story one of my co-workers told me about working for U.S. West (now Qwest) years ago. She said they never completed their software projects. They'd work for months on something, and then drop it. She said, "It was fine working there. You learn a lot. You just never accomplish anything."

It was fun reading Rozlog's section on "What can be done?" I would've answered, "Must be maintainable." That was always a high priority for me, but I didn't always accomplish it, because sometimes the technology I worked with, or the schedule, didn't allow it. The requirement that "It must run" just goes without saying for me. That was drilled into me when I took CS. My professors cared about nicely written code, but above all, the program had to run. If it didn't, we'd (typically) get a "D" at best.

Justin James

Some people treat every single product as a massive item, and assume that at some point a public-facing API will be made or something. Of course, there is always that possibility on some products, so it's good to take it into account up front. What I learned a long time ago is that if you have a 10% chance of needing a feature, but it will increase your time to deliver by, say, 15% just to architect things so that the feature *is* possible, then you are wasting your time. There's just no ROI there.

And a lot of people just go bananas over design, especially people who just learned certain principles. You see this in OO designs too... it's the same mindset that has folks writing 100 lines of interfaces and abstract classes and factories for a simple class where you'll never see a sibling or child in the inheritance tree. I see this stuff all the time: people writing LINQ providers for their custom enterprise service bus/message queuing ball of wax, and stuff like that, all to gain some measure of redundancy/fault tolerance that their application either doesn't require or could get by simply using an ACID-compliant database in the first place... J.Ja
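The over-abstraction Justin describes can be made concrete with a small Java sketch. All of the names here (GreetingService, DefaultGreetingService, GreetingServiceFactory, Greeter) are invented for illustration; the point is only that both versions behave identically, so the interface and factory buy nothing until a second implementation actually exists.

```java
// Over-engineered: an interface plus a factory for a class that will
// never have a sibling or child in the inheritance tree.
interface GreetingService {
    String greet(String name);
}

class DefaultGreetingService implements GreetingService {
    public String greet(String name) { return "Hello, " + name; }
}

class GreetingServiceFactory {
    static GreetingService create() { return new DefaultGreetingService(); }
}

// The simple alternative: just the class. If a second implementation
// ever appears, extracting an interface then is a cheap refactoring.
class Greeter {
    String greet(String name) { return "Hello, " + name; }
}

public class OverdesignDemo {
    public static void main(String[] args) {
        // Both produce identical behavior; the abstraction adds only ceremony.
        System.out.println(GreetingServiceFactory.create().greet("world"));
        System.out.println(new Greeter().greet("world"));
    }
}
```

The 10%/15% trade-off above is exactly this: the factory version costs more code and indirection up front against a feature (a second implementation) that may never arrive.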

Mark Miller

One thing I'm learning as I go along is that architecture has a purpose. It's not just something you use because you know it exists and how it works, or because it's "the hot new thing" we can all get excited about. One of my first conversations with Alan Kay was about OO, and what he said, basically, was that he explored it as a way to achieve a high level of scalability in software. It's a way to decentralize, to have more autonomous units, so that whatever you're building can expand and become much larger without becoming unwieldy and unstable. The idea of it is to grow.

It seems like the concept used in the Smalltalk world, at least, is more of a peer-to-peer way of doing things. People mistake OO for being hierarchical because of inheritance. Like I've said before, the popular languages that call themselves OO don't allow you to do this all that well. They've missed the boat on that.

In one of the talks Kay gave back in 1997, he talked about architecture (you can see it here), and he said that you don't need a sophisticated architecture to build a doghouse. You can build one using a simple architecture, and it'll work. The problem comes when you try to use the same architecture to build a house that people would want to live in, or a skyscraper. You could build a house using the same simple architecture, but it wouldn't be as stable as a house built with modern techniques. You could try to build a skyscraper with it, and you'd most likely fail. The caveat I added to this concept is that you might be able to do it with a lot of patching and ad hoc internal reinforcement, but I still don't think most people would want to use it once it was built, because it would be readily apparent that the thing was tipsy and creaky, and it would look and sound like it was about ready to fall over. This isn't so much the case with software, unfortunately. It's very easy to hide architectural flaws so that users don't see them (though programmers might, if they could evaluate the system with a critical eye).

Kay advocated thinking big even when your programs are small in terms of what they're spec'd out to do. He said, "When you think your programs are small, that's why they're so big!" What he advocated most of all was looking at architecture first and foremost. He told me, "Most of programming is really architectural design," and he didn't mean the kinds of architectural plans that a system architect/analyst would come up with. He meant the way that information and logic/processing are structured in the runtime or development environment. I've heard a few others elaborate on what he's talking about by saying the ideal is that you create the architecture and "you're done." You end up having to do very little work to finish the solution, because the architecture already embodies 95-98% of what the solution needs. To make an analogy, the feel of this is probably like what you experience with OutSystems Agile, where the development system takes care of the details and allows you to focus on the business problem at hand. To contrast with this, Kay said in the same conversation, "[M]ost of what is wrong with programming today is that it is not at all about architectural design but just about tinkering a few effectors to add on to an already disastrous mess." I'm not sure what he meant by "effector," though there is a CS definition: "effector: A device used to produce a desired change in an object in response to input."

An example I can think of along the lines of what Kay considered ideal, which would meet the requirement for "openness" (at least one form of it), comes from an idea he put out in the talk I link to, assuming an OO architecture would be appropriate for the system/application. He said that every object should have its own URL. Secondly, he said it should be possible for objects to internalize their own pointers to other objects, whether they be local or on the internet. In other words, when a method refers to someObject, the programmer should just be able to refer to that object without having to do much to qualify whether the object is local or on another server. What he meant is that this should live in the runtime, or in the development environment. It's not something you should build a class library in C# or the like to do; it should be inherent in the nature of the objects you're using. This instantly enables a web-based API whenever you want it.

In terms of security, you could have as the default that all objects, except for a few that you select, don't respond to HTTP requests at all, and you could have a registry of external entities that are approved for communication, so that when a method wants to send a message, the programmer doesn't accidentally (or purposely... that sometimes has to be considered...) contact an unapproved entity. This would be a way to achieve "openness" within your application without the application programmer having to do much of anything. You let the runtime/dev environment deal with that. This is what I was alluding to earlier.

I know it sounds like a tall order, but it seems to me this would be a better approach to doing what these corporations *say* they want. The reason I talked about "incompatible ambitions" is that I doubt any corporate software dev operation would approve something like this, because they'd think of it as too costly and too risky. The exception would be if there was something like OutSystems that actually did this in .NET, Java, what have you.
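Kay's idea of URL-addressed objects with location-transparent references, plus the approved-host registry described above, could be sketched very loosely in Java. Everything here (ObjectRuntime, Remotable, the registry) is a hypothetical name invented for illustration; no real runtime works this way, and the actual HTTP dispatch is elided.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// A message-receiving object; any object the runtime can address by URL.
interface Remotable {
    String receive(String message);
}

// A toy runtime where every object has a URL. Callers use send() the same
// way for local and remote objects; the runtime decides how to dispatch,
// and a registry of approved hosts gates any outbound message.
class ObjectRuntime {
    private final Map<String, Remotable> localObjects = new HashMap<>();
    private final Set<String> approvedHosts = new HashSet<>();

    void register(String url, Remotable obj) { localObjects.put(url, obj); }
    void approveHost(String host) { approvedHosts.add(host); }

    String send(String url, String message) {
        Remotable local = localObjects.get(url);
        if (local != null) {
            // Local object: dispatch directly, no network involved.
            return local.receive(message);
        }
        // Remote object: only approved hosts may be contacted.
        String host = url.replaceFirst("https?://", "").split("/")[0];
        if (!approvedHosts.contains(host)) {
            throw new IllegalStateException("unapproved host: " + host);
        }
        return "(would dispatch over HTTP to " + url + ")";
    }
}

public class UrlObjectsDemo {
    public static void main(String[] args) {
        ObjectRuntime rt = new ObjectRuntime();
        rt.register("http://localhost/counter", msg -> "got: " + msg);
        System.out.println(rt.send("http://localhost/counter", "ping"));
        rt.approveHost("example.com");
        System.out.println(rt.send("http://example.com/obj", "ping"));
    }
}
```

The point of the sketch is that the application programmer writes one call, rt.send(url, message), and never qualifies whether the target is local or on another server; the runtime handles location and the security policy, which is what distinguishes this from building the same thing as an ordinary class library.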
