Yesterday, Brad Chen announced on the Google Code Blog the open source release of Native Client: A Technology for Running Native Code on the Web. The idea is to create a new, portable, cross-platform technology for running code on Web clients (i.e., Web browsers) that uses “native code” rather than a combination of client-side JavaScript and server-side PHP, Perl, Ruby, Java, et cetera.

One of the benefits he cites for client-side natively compiled code is reduced latency. If Web-delivered applications can run native code using local system resources on the client, the latency incurred by shuttling data back and forth between the client and the server can often be avoided.

On the subject of security, he had this to say:

To help protect users from malware and to maintain portability, we have defined strict rules for valid modules. At a high level, these rules specify 1) that all modules meet a set of structural criteria that make it possible to reliably disassemble them into instructions and 2) that modules may not contain certain instruction sequences. This framework aims to enable our runtime to detect and prevent potentially dangerous code from running and spreading. We realize that making this technology safe is a considerable challenge. That’s why we are open sourcing it at an early stage: we believe that peer review, community feedback, and public scrutiny greatly improve the quality of security technologies like this one.

The second of those rules concerns me. It says “modules may not contain certain instruction sequences,” which seems to indicate a default-permit approach to malware protection: everything is allowed unless it is specifically prohibited. When it comes to protection from malicious behavior, pretty much every security professional will agree that the safer policy is default deny, where everything is prohibited unless specifically, individually allowed. My consolation is that Chen’s statement is ambiguous and short on detail, so what it seems to suggest may not be the intended meaning.
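The difference between the two policies is easy to see in a few lines of code. This is a purely hypothetical sketch (the instruction names and validator functions here are invented for illustration, and have nothing to do with Native Client’s actual validator):

```python
# Hypothetical validator policies. Instruction names are invented examples.
FORBIDDEN = {"syscall", "int"}           # default permit: block only known-bad
ALLOWED = {"mov", "add", "jmp", "ret"}   # default deny: allow only known-good

def default_permit(instructions):
    """Accept the module unless an instruction is on the blacklist."""
    return all(insn not in FORBIDDEN for insn in instructions)

def default_deny(instructions):
    """Reject the module unless every instruction is on the whitelist."""
    return all(insn in ALLOWED for insn in instructions)

# A novel instruction that nobody has classified yet slips past the
# default-permit check but is stopped by the default-deny check:
novel_module = ["mov", "mystery_op"]
print(default_permit(novel_module))  # True  -- unknown code is allowed to run
print(default_deny(novel_module))    # False -- unknown code is rejected
```

The point of the sketch is that a blacklist can only stop what its authors have already anticipated, while a whitelist fails safe when something unanticipated shows up.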

Google has a strong recent track record in software security development. Among the examples of its innovative approach to advancing software security technology are its deliberate choice of licensing models that maximize security benefits, the release of RatProxy and Keyczar, and the much-celebrated case of pushing the envelope of Web browser security with its Chromium project. I’m hopeful that Google’s Native Client project will go in the win column as well.

At least superficially, however, the Native Client project sounds suspiciously like ActiveX: The Sequel. There are obvious differences, such as the fact that ActiveX was never intended to be anything approaching cross-platform compatible, while Chen specifically mentions Firefox, Safari, Opera, and Google Chrome as target browsers, and MS Windows, MacOS X, and Linux-based systems as target platforms. Considering Chen’s comments and Google’s evident concern with matters of security in recent months (verifiable security, given the openness of the code licensing used for its software offerings), there’s reason to hope that the Native Client project will not follow in ActiveX’s bloody footsteps.

The Native Client project is open source software, released under a copyfree license as indicated on the developer site. This means that, assuming the project is well managed, chances are good that it will benefit from a broad range of attention from developers who have an interest in, and expertise with, secure software development. Judging by the Native Client announcement in the Google Online Security Blog, this is a primary motivation for providing the project code under a copyfree license at this stage of development. There’s a lot of reason to have high hopes.

My main concern is with verifiable protection of security and privacy (and, of course, privacy is security) in the native code that Native Client would allow to run on your system. How much benefit can something like this really provide over current, JavaScript-based client-side scripting if it must be restrictively sandboxed for security reasons (which appears to be Google’s main approach to securing it)? And how can we be reasonably certain of the security of native code that isn’t sandboxed? The problem is that we can’t blindly trust developers if we want real security and privacy for our computers and data.

Completely aside from the security concerns, however, Native Client may create new business models for software previously restricted to other distribution models, new opportunities for high-performance Web applications, and otherwise change the way we do a lot of things with our computers. It may also turn out to be, in some ways, just another failed attempt to Change The Game, along with Java applets and ActiveX (and, in time, probably Silverlight). The focus on portability and platform agnosticism differentiates it from ActiveX, however; the use of native code delivered to the client differentiates it from efforts like Silverlight (or even AJAX); and the approach it takes to portability seems destined to differ significantly, and positively, from that of the “write once, run nowhere” Java Virtual Machine.

Google was instrumental in making JavaScript for Web applications cool again. Maybe it can do the same for native system code delivered over the Web. Hopefully, it can do so securely as well — and maybe even help improve the security of Web applications in general. The introduction of new technologies, designed from the ground up with the intent of advancing the state of the art, provides an excellent opportunity to get security right where it was broken before.

(Thanks to Sterling Camden of TechRepublic’s IT Consultant Weblog for pointing out the Native Client project announcement.)