Software Development

Poll: Does C++ 2011 renew your interest in native apps?

The next version of C++, which is currently called C++ 2011, will likely be out by the end of the year. Developer Justin James asks: Do you think it's time to give native apps another shot?

It seems like fewer and fewer developers write applications in native code, for a variety of reasons. The next version of C++ looks to be finalized before the end of this year and to be called C++ 2011. With a new version of C++ coming out, do you feel like maybe it is time to give native apps another shot? Take the poll, and then discuss your answer.

J.Ja

About

Justin James is the Lead Architect for Conigent.

47 comments
Mike Page

C++, C#, Java, Ruby, etc. are all tools. Pick the tool that best solves your problem. C++ is great for the high-performance scientific desktop applications I write, because you can do anything with it, and it produces fast code. But I have not chosen it to write web applications, since it has no built-in functionality for that realm. C# has features that make it easier to write Windows applications, and if performance is not a primary consideration then it is a good choice. One common theme I see in this discussion is that C++ is too hard. It makes me want to scream, "Quit the whining, put on your big boy pants, and learn to deal with things that are hard." In computer science and other technical disciplines, many concepts are difficult to grasp. But there is a great reward in understanding and then using these things.

Mark Miller

IMO, there comes a point where the attempt to add features to a language becomes absurd. By this I mean the designers are attempting to get beyond the essential tools of the language, while still having to acknowledge their existence every step of the way. In that case I'd say they need to come up with another language. As Sterling and Justin were suggesting earlier, Go is an example of that. My memory is that Go is more like C, but it removes some of the complexity you would otherwise have to deal with in C to do what it does.

Jaqui

for those who have never really worked in C/C++ then. give them an idea of the capabilities of the languages without the risks. I myself like the challenge in C: the direct allocation and freeing of memory, the games you can play with pointer arithmetic that risk wiping your program out. ;) Though I still get caught out by the smallest, most impactful change to C: void main(int argc, char *argv[]) is dead; int main(int argc, char *argv[]) is required. [ ok, the arguments don't need to be specified ;) ] But the latest C standard specifies that main must be of type int and return a value. Still gets me; too used to using void main.

Sterling chip Camden

You can certainly trigger a panic at runtime if you don't know what you're doing, but it removes some of the more frequent abuses like pointer arithmetic and casting, providing in their place some unique abstractions that don't burden performance. My only annoyance with Go at this point is that it seems like they've already decided not to add many more features. While I understand the aversion to feature-creep, it seems a bit young for that yet.

Tony Hopkinson

Had to go back to C++ recently, for some code that checked things like whether .NET is installed; haven't done any for years. Pain in the arse, that was. I mean, I did that stuff years ago, but I've been spoiled by powerful abstractions since the mid-nineties. Short of it: it's definitely the wrong tool for the job, and I have no desire to go back to the good old days at all. :p You take a modern-day programmer, give them Pascal or C, or C++, and they'd be f'ed...

Sterling chip Camden

I was helping a client of mine over Gotomeeting. We had to make some changes to an old DLL they had, which was written in C. We had the code up on the screen, and I outlined the string manipulation steps that we'd need to add. The client jumped in and said, "OK, I'll drive" and started trying to code it up with a String object a la .NET. I started laughing. "Better let me take the wheel," says I. I declared my buffers and pointers, and coded the whole thing up in about ten lines with some double indirection and pointer arithmetic. Meanwhile I explained the code as I went along. The amazed sounds from the other end of the line made me chuckle. The client had never coded in C before, and he just assumed it would be similar to C#. When I was all done I said, "Now you've had a proper introduction to the C Programming Language."
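A hypothetical sketch (not the client's actual DLL code) of the kind of C string work being described: hand-managed buffers, pointer arithmetic, and double indirection, with no String object in sight. The function names and the split into two helpers are my invention for illustration.

```cpp
#include <cassert>
#include <cstring>

// Copy src into dst up to (not including) the stop character.
// Returns a pointer to the new terminator, classic C style.
static char *copy_until(char *dst, const char *src, char stop) {
    while (*src != '\0' && *src != stop)
        *dst++ = *src++;        // pointer arithmetic: advance both as we copy
    *dst = '\0';
    return dst;
}

// Double indirection: the callee advances the caller's cursor,
// so successive calls append to the same buffer.
static void append(char **cursor, const char *src) {
    while ((**cursor = *src++) != '\0')
        (*cursor)++;            // leave the cursor parked on the terminator
}
```

Ten-odd lines like these are easy to get wrong if you arrive expecting C# semantics; note that nothing here bounds-checks the destination buffer, which is exactly the kind of danger the thread keeps circling back to.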

Sterling chip Camden

Decades ago, when I directed software development for an accounting software firm, we looked at alternatives (as we always did). I liked the power of C to build abstractions, but there was one thing that I was never able to figure out how to abstract away in C: the pointer. And I knew that if I let our crop of accounting application developers anywhere near a pointer, our applications would crash in the field. That sealed it.

Mark Miller

C is returning to its roots as a low-level system programming language, which is fine with me. C++? I don't know where it's going. BTW, I have faced the fury of fanboys for my criticisms of C++. I am quite clear in what I see, and I don't have to defend it as a "preference." I think there are a few related problems in that we are "held hostage" to certain hardware and OS models, and we think we *have* to use them. We don't have a choice. This has really held back the computer industry. It would be a lot better if we could think of systems in terms of effective models for accomplishing what we want, as opposed to "this is how you can *get this* to do X." In the early and mid-1990s, C was treated as both a system programming language, and an application development language. I was using C at that time to program full-fledged applications (even in Windows). Later I got into doing the same in C++. At some point after using them enough, I figured out they were ill-suited for application development, from the standpoint of reducing complexity and making code comprehensible for the task at hand. If you think about it, C was originally created to write Unix. Think about its language features. It suits system-level programming. As Jaqui pointed out, you get direct access to memory addressing, and memory management features. It simplifies stack management, from the standpoint of assembly language. It enables you to think of memory in structured terms (but it does not restrict you to that), as opposed to *having* to organize addresses, set aside memory for specific uses, and worry about what memory is used for what (though it does not prevent you from dealing with these issues if you need to). From an assembly programmer's perspective, I think that's a big deal. Another big thing about it, I think, is it standardizes some structured processing techniques which you would otherwise have to code yourself over and over again in assembly (or create macros in a macro assembler). 
One of the big revelations to me about C came in one of my jobs just out of college. I was writing database transaction servers on Unix. For a couple projects I had been storing records in char arrays, and using macros to calculate the field offsets into them. I used the same technique for reading flat files. Then a project came along that was designed by the customer before we got it (excellent!), and they used the following technique for reading records in flat files. They defined structs, containing char array fields. I forget if we figured it out, or they specified it, but we'd read the data thusly: fread(&[struct instance], sizeof([struct type]), 1, filePtr); This was thinking in terms of memory imaging, not strictly structured information, but in C you got the benefit of both! I could think of the struct as a blob of memory, and at the same time, I didn't have to think of calculating offsets into records anymore. The struct did that for me. I thought later about unions, that they allow you to interpret binary data through different "masks." I figured this must have really helped with developing OSes at some level, because you could think of binary data in these terms. You didn't have to think of it as just a stream of bytes. You got the ability to think of it that way, *and* some advantages from structured high level programming, where you could see the data in terms of some abstract structure. Unix has a similar philosophy about it. Everything is oriented around manipulating/processing streams of data, and it's possible to reinterpret data from one form into another. So C really fits its purpose in that sense. I don't see C++ as having the same clear purpose. From what I've seen of it, there isn't this seamlessness between structured data and memory imaging, or data streaming. Dealing with the memory model in C++ is a lot more complex. The thing about C is it's had a system model that supports the way it works quite elegantly. 
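The memory-imaging trick described above can be sketched as follows. This is a hedged illustration, not the original project's code: the struct and field names are invented, and it works cleanly here only because every member is a char array, so there are no padding surprises.

```cpp
#include <cstdio>
#include <cstring>

// One fixed-width record: a struct of char-array fields. The struct
// does the offset arithmetic that would otherwise be macros by hand.
struct Record {
    char name[16];
    char balance[10];
};

// Write a record as a raw memory image, then read it back the same way:
// fread(&rec, sizeof rec, 1, fp) in place of hand-computed field offsets.
static bool roundtrip(void) {
    Record out = {};
    std::memcpy(out.name, "ACME CORP", 9);
    std::memcpy(out.balance, "000123.45", 9);

    std::FILE *fp = std::tmpfile();
    if (!fp) return false;
    std::fwrite(&out, sizeof(Record), 1, fp);
    std::rewind(fp);

    Record in = {};
    std::fread(&in, sizeof(Record), 1, fp);   // the memory-imaging read
    std::fclose(fp);

    return std::memcmp(in.name, "ACME CORP", 9) == 0 &&
           std::memcmp(in.balance, "000123.45", 9) == 0;
}
```

The same blob of memory is both a byte stream on disk and a structured record in the program, which is the "benefit of both" the comment describes.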
That's the reason it's been nice to use for some purposes on the computer system models that we've had. C++ has not had that same level of support. It's been fancied as an application development language that can handle greater modeling complexity, with operational efficiency. While it has efficiency in its favor, I think it has added complexity to the task of programming, with no clear purpose as to why it was added, which cuts into any advantages you'd gain from the purported ability to handle greater model complexity. There isn't a nice clean connection between it and a system model. Its use of internal C-style pointers in objects makes it really difficult, if not impossible, to create a good operating system that would support the way it works, because it mixes memory mapping (explicit hard addresses to access sub-structures within itself) in with the data. C++ objects have no utility, that I can see, outside of the program in which they're created. So the power of the language has been limited by its design, and by a limited vision about its role in computing.

Jaqui

still waiting for them to make a tarball available of the sources.

Jaqui

especially compared to the current crop of languages. The fact that they are just a small step away from assembly/machine code is what makes them so incredibly powerful, versatile, and dangerous. I would hate to let a Java/.NET/VB-only developer touch any C/C++ project; they would hoop the entire system more often than not, simply because they don't grasp the inherent danger of languages that can control [ DESTROY ] hardware directly. But the reverse can also be true: just because someone is good with the low-level languages does not mean they can work with the higher-level languages as well as they do the lower-level ones. And people forget: the CLOSER you are to direct control of the hardware, the lower the level you are working at, and the HIGHER the skill required. Though we do tend to think of it inverted, and closer to direct interaction with the hardware is seen as a much higher level of operating. Probably because of the far higher level of skill and knowledge required to do so without screwing the system over.

Tony Hopkinson

I once posted that C/C++ were low-level languages; some proponents got well bent out of shape about that. I put the struggle people have with operating at that level of abstraction down to them never having worked at a lower one. I don't like some of C++, but not because it's too close to the machine; I started with machine code, so it is a higher level of abstraction to me. It's people going backwards who struggle; they never learned what their environment is an abstraction of...

Mark Miller

I heard these metaphors many years ago. One of them was that C gives you enough rope to hang yourself with, and something about how C++ gives you more than enough... What they both speak to is the fact that a lot of developers even back then got confused by C and C++. They both looked like high level languages, but they didn't act like other HLLs all the time. I made the realization in college that C was just a high-level assembly language. That's really the right way to look at it. Another way to look at it is as a sophisticated macro assembler. C++...it's hard to say. It carries the baggage of the assembly nature of C, but it provides some ways to hide it, while still exposing you to the advantages and pitfalls of that heritage. It seems to me the first step they took down the path to getting away from it being a high-level assembler was the referencing feature (&), and run-time type checking. That's one feature I discovered later would've been a nice addition to C: run-time type checking. Without it, variable-argument functions are error-prone. Anyway, once you realize that C, and to some extent C++, is just a high-level assembly language, and that the whole language is made up of "statements with slots in them," and expressions with varying levels of precedence, then you can see what you're really doing with it. You can still mess up in mystifying ways, because the complexity of programs can make some issues opaque (particularly with pointers), but I think C/C++ programmers become less error-prone with this understanding.
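The point about variable-argument functions being error-prone without run-time type checking can be made concrete with a small sketch (my example, not Mark's). C's stdarg machinery simply trusts the contract between caller and callee; nothing verifies the count or the types.

```cpp
#include <cstdarg>

// Sum `count` ints passed as variadic arguments. The function has no way
// to check that the caller really passed `count` ints: a wrong count or a
// wrong type at the call site is undefined behaviour, silently.
static int sum_ints(int count, ...) {
    va_list ap;
    va_start(ap, count);
    int total = 0;
    for (int i = 0; i < count; ++i)
        total += va_arg(ap, int);   // trusts the promised type and count
    va_end(ap);
    return total;
}
```

A call like `sum_ints(3, 1, 2, 3)` works; `sum_ints(3, 1.0, 2, 3)` compiles just as happily and is broken, which is exactly the pitfall that run-time type information would catch.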

Mark Miller

a fellow developer told me many years ago. He was working for a rather seedy outfit (he speculated it was connected with the mob) developing a call center system in C. It was a similar situation. He was coding, and there was this other guy he was working with who wanted to participate, but...didn't quite get it. The other guy said, "I know C. I don't know about pointers, but I know C." The developer said (to me, maybe not to the guy's face), "That's like saying you know French. You don't know how to conjugate a verb, but you know French." :)

Sterling chip Camden

It manages to give you almost all the power of C, while minimizing its dangers.

Jaqui

you can shoot yourself in the foot with programming, c gives you the gun? :D in many ways c is the most elegant language to work with, because it is such a small language. The direct access to hardware control makes it the most dangerous as well. c++ is only marginally safer than c. both are definitely not for the developer that only knows the framework type development.

Jaqui

the better performance of c/c++ really benefits system level programming. yet most newer systems level code is the app framework abstracted level coding. the cause of massive increase in hardware requirements just to be able to turn the computer on. edit: danged typo

Tony Hopkinson

you missed lesson one on optimisation. They are either messing about, or they are incompetent. I was thinking more: use of arrays/lists instead of switch or if; Dictionary or sorted list instead of Find on a List; caching; the StringBuilder class... Lots of things you can do in frameworks if you learn them, instead of trying to revert to machine code in a desperate attempt to save one clock cycle per hour...

Justin James

All complex compilers have built-in optimizations, and knowing what the optimizations of your particular compiler are is a quickly dying art form. For example, way back when, this loop in VB.NET: For i = 0 To array.Length - 1 ... Next would perform a LOT worse than this one: Dim max As Integer = array.Length - 1 : For i = 0 To max ... Next Because on each iteration of the first, it would re-evaluate "array.Length - 1". At the time, C# had a compiler optimization for this, where it wouldn't recalculate on each loop. Fact is, both compilers were right. The VB.NET version was a literal interpretation of the code; the C# version was a more useful but less direct interpretation. The difference is, the VB.NET developer had to be smart enough to figure out how to optimize his code; the C# developer didn't... that being said, there is always the possibility that the C# code will run *slower* in the future due to a change in the compiler's optimizations, and there is a chance that in the future the VB.NET optimization will no longer be needed. And as a result... when you say: "I can't even describe the dread I feel when I consider programming in a language where there are discussions about how best to assign constant integer values to a variable." ... you are basically ruling out every language out there with the exception of Assembly, C, and C++... maybe FORTRAN and COBOL too (where there is only one way of doing things). As soon as you get into OO languages, this is mandatory. Never mind the fact that in nearly EVERY language with strings, there will be performance issues around assignment and concatenation. :) J.Ja
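The same loop-bound issue has a classic C-flavoured analogue (an illustration of the general point, not the VB.NET/C# case itself): calling strlen in a loop condition re-scans the whole string on every iteration, while hoisting the length into a local does the work once. Some compilers hoist it for you; knowing whether yours does is exactly the compiler-specific art being described.

```cpp
#include <cstring>

// Count vowels in a C string, with the loop bound hoisted out of the
// condition. Writing `i < std::strlen(s)` instead would re-scan the
// string on every pass unless the compiler happens to optimize it away.
static int count_vowels(const char *s) {
    int n = 0;
    const std::size_t len = std::strlen(s);   // hoisted: computed once
    for (std::size_t i = 0; i < len; ++i)
        if (std::strchr("aeiou", s[i]))       // s[i] != '\0' since i < len
            ++n;
    return n;
}
```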

nwallette

Assuming you're talking about optimizing code built on frameworks... I don't know if there's a human being alive with enough of a grasp on all the underlying code for a typical Java or .NET application to really optimize it. I sat through an argument on TheDailyWTF between several Java programmers trying to decide the most optimized way of assigning a constant integer to a variable. You see, creating an integer object (WTF?) and assigning a constant (like 10) is waaaay less efficient than using a pre-generated integer object. So, in fact, it may be best to build an integer table of -500 to 500 and use indices instead. But wait! Java already pre-generates a table similar to this, someone says. So, in fact, some_integer = 10 is already plenty efficient, since it will be replaced with some_integer = [index_to_pre-generated_integer_lookup_table]. Meanwhile, I was thinking: /* C-style optimized method of assigning the value '10' to a new integer */ int some_integer = 10; Done. I can't even describe the dread I feel when I consider programming in a language where there are discussions about how best to assign constant integer values to a variable.

Jaqui

The idiocy of the "hardware is cheap" development model can readily be shot down if the company wants the "green" market share. More hardware = bad for the environment. [ higher power draw for more hardware = more pollution; more hardware means more hardware to dispose of when the time comes... ] It just takes people to promote that awareness, which here in North America is a hot subject.

Tony Hopkinson

problem's not a technical one. The abundance of people who can muddle through the frameworks, coupled with the productivity improvements those of us who can do a bit better get, plus an extra machine or two, is way cheaper than having to pay very expensive talent for a long time to achieve the 'same' product on a now out-of-date piece of hardware... Financially efficient as opposed to technically, at least in the short term. On those rare occasions where you've done as much as you can at a high abstraction level and it's still not acceptable, then, as someone posted earlier, "you get to put the big boy trousers on". One of the things that really troubles me is how few know how to optimise within the framework; if they can't understand it at that level, they are f'ed closer to the machine.

nwallette

(Hacker in the white-hat tech enthusiast sense.) People who enjoy programming, and are passionate about doing it "right" will use native code when it makes sense, interpreted code when the complexity outweighs the performance penalty, scripted languages when something just needs to get done despite the lack of eloquence, and mod_perl or PHP for dynamic websites. :-D OK, OK... maaaaybe also Python. If you're not into whitespace.

Justin James

The overall, fundamental problem here, is that maybe 5% of the actual developer workforce has a true passion for their jobs. The rest are there for a paycheck. Maybe they are good, maybe they stink, maybe they enjoy it, maybe they don't. But you aren't going to have success with stuff like C++ when most of the folks in the labor pool don't want to work that hard, or lack the talent. It's easy to say, "oh, but this is what we get paid to do." Good luck hiring them. Heck, you can barely hire qualified .NET, Java, and PHP devs, let alone someone you'd want handling memory and pointers! J.Ja

Tony Hopkinson

Actually it would be a good test. If you didn't roll about the floor laughing after reading it, you are not a suitable candidate. :D

Jaqui

but then, I never lost my interest in native apps, so it can't renew it. :p I'm not a big fan of the interpreted languages for apps, simply because they don't perform as well. [ any Java app is a perfect example of poor performance ] I'm also not a fan of the "app framework" development model. These tend to be a major source of bloat in application hardware requirements. Though you can lay the blame for this concept right at the feet of the ANSI/ISO team that set the C++ standard in the first place: the STL of C++ was the first attempt at an application framework. It looks a lot like a big part of the design in Java was to do an interpreted-language implementation of the C++ standard; they pushed the app framework concept further along in it right from the start. Remember when Java was introduced? The sales reps for the language coined a phrase to help sell it to the developers who were complaining about hardware requirements. That phrase [ from a HARDWARE MANUFACTURER ]: hardware is cheap. I'm still all for native apps, but the newest version of the C++ standard is probably not anything I'll really use.

terjeb

>> any java app is a perfect example of poor performance Really? Seriously? You think this is still the case? Time to get back into the time-machine and come back to 2011. 1999 is a long time ago.

Tony Hopkinson

compared to native code written by someone who knows what they are doing, and quite possibly without the overhead of OO, Java, .NET, and straight interpreted code can be outperformed. How valuable that is, is up for debate; whether it can be, is not. Byte code to native, or text/token to native, guarantees that...

Mark Miller

I looked at the Wikipedia article on C++0x. It brought to mind what I remember reading a couple years ago that they were adding lambdas to it. This was a clue to me where they were going. It strikes me that the new version of C++ is basically the community saying, "Java, C#, and (maybe even) Ruby have some good ideas," but they're trying to give each of the good ideas a C++ flavor. I found myself not wanting to finish reading the description of all its features, because it felt tedious and boring. I felt like, "Why would I go through the trouble to do all this crap just to do what they're talking about?" It reminded me of a complaint I had about Java a few years ago, that you had to go through gymnastics just to do "type calculus," rather than focusing on what the core task was. ANSI C++ was much the same way. I think the new version, like the old ANSI standard, makes things too complicated. I could see right off the bat that there were going to be new pitfalls to be avoided. One example was with lambdas: If a closure object containing references to local variables is invoked after the innermost block scope of its creation, the behaviour is undefined. Ah yes... The old scoping rules still apply. You have to know when to copy by value, and when to copy by reference... Edit: Decided to delete code sample, since it was probably bad code.

Sterling chip Camden

Unless I misread that, they shouldn't be called "closures" then.

Mark Miller

This gets to what Alan Kay was talking about in the quote I cited earlier: "it is disastrous to incorporate optimizations into the main body of code." But this is par for the course with C++. The designers incorporated the idea of copy by reference and copy by value into their idea of closures, just as for a function/method in C++ (though fortunately, at least in this draft of the spec., they don't incorporate pointers!). The way they do it is like this: [environment-capture-spec](inputs) -> return-type { block-code } So what that quote I was talking about from the Wikipedia article said was, if you do this: // auto deduces the type in C++0x auto someFunc() { int a = 5; // [&] captures all variables in scope by reference. // Could've also done [&a] to be specific. return [&](int b)->int {return b + a;}; } int main() { auto cl = someFunc(); // I'm making some assumptions on how this would work cout << cl(1) << endl; } you get undefined behaviour, because "a" lived in someFunc()'s scope and is gone by the time the closure is invoked. But if you instead write the lambda as [=](int b)->int {return b + a;}; then it's okay, because you're telling the compiler to copy variables in scope by value (could've also done "[a]" to specifically capture "a" by value). What they were trying to allow for was the situation where you have a large structure, or array, defined inside a class or function, and you don't want to have to capture it by value, because then that would be inefficient. This is where languages that have garbage-collected memory typically have a leg up, because all variables are bound by reference, rather than being associated with hard addresses, as in C/C++. In the languages that use GC memory, you don't have to worry about where a value is located, or if it's going to disappear when you exit a function, or get rid of a class instance (because, for one thing, everything is copied by value when going in and out of a function, except for class objects), because the memory system will keep track of who is referencing what.
This is a double-edged sword, though, because it's quite easy to get memory leaks using GC memory (memory that is never freed, because something is referring to memory unexpectedly). I've heard others declare that you can get the benefit of GC memory in C++ using smart pointers (included with STL), but from what I've seen, that's only part right. I remember trying to use STL smart pointers 10 years ago, and they were pretty simplistic. From what I remember, if I copied from one smart pointer to a second one, the first smart pointer lost the reference to the value! Basically all they were designed for was so that you didn't have to delete something after you had allocated it. It didn't do reference counting.
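The ownership-transfer behavior described above matches what std::auto_ptr did: copying one smart pointer left the source empty. A hedged sketch (my example, not Mark's code) of the reference counting he found missing, using std::shared_ptr as it was later standardized:

```cpp
#include <memory>

// Copying a shared_ptr shares ownership instead of stealing it: both
// copies refer to the same object, a reference count tracks them, and
// the object is freed only when the last copy goes away.
static long copies_share_ownership() {
    std::shared_ptr<int> a = std::make_shared<int>(42);
    std::shared_ptr<int> b = a;   // a keeps its value; b is a second owner
    return a.use_count();         // reference counting in action
}
```

With the older ownership-transfer design, the equivalent copy would have left `a` null, which is exactly the surprise described in the comment.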

Tony Hopkinson

Well, maybe I am, seeing as you're a young 'un. :D Now we are paid to produce things quick, not produce quick things; for that, in general, business will simply throw more machine at the problem. Besides, there are all sorts of things you can do to your design/implementation to make the few clock cycles you could save writing native code irrelevant, and compiler design is what the really clever boys have been concentrating on since the early days. A well-written piece of, say, .NET or Java, once compiled from byte code, will get damn close, and it will still be much easier to manage. Not to mention that if it became a required skill, most of the people who are currently qualified for our role would be out on the street, because they never learned how the machine works in the first place....

Tony Hopkinson

need C++, like an extra hole in the arse though.

Slayer_

I'd be happy with a new native language, assuming it has good performance. Right now, it seems all languages aim to make programming easier, rather than making the program itself fast and efficient. This seems backwards to me. Programmers are paid to understand complicated programming; why are we spending so many CPU cycles trying to dumb down programming?

Justin James

"Programmers are paid to understand complicated programming, why are we spending so many CPU cycles trying to dumb down programming?" Because the average application is too complicated for the average developer to understand. The pool of qualified people is far too small. The *only* solution is for the really smart people to spend their time making systems which even fools can make good applications with, and let everyone else focus on using those tools. We're not there yet, and unfortunately, too many modern tools dumb down and increase complexity at the same time. :( J.Ja

Mark Miller

The reason is that the version installed with the OS may not match the version the app was developed with. So I anticipate the same is going to be true of .NET installs. An installer is going to need to check the version of the VM and the framework (since they don't always correlate), and either include the necessary versions of each in the install package, or go out to the internet, grab the .NET distribution installer, and run it during the process.

Sterling chip Camden

Even experts in language design often confuse the discussion of semantics with implementation and optimization details.

Sterling chip Camden

You need languages and frameworks that abstract certain solutions for you, otherwise why not write everything in machine code? OTOH, the .NET Framework is a good example of how not to design a proper framework. Everything in it is way too heavy but somehow manages to omit the one thing you need.

nwallette

I see why it's necessary, too. You have to understand, .NET is relatively new. Native code requires frameworks, too; it's called "The Standard C Library". :-) Try running a program without msvcrt.dll or glibc installed. The difference is, the C runtimes are expected to exist already. There's no need to include them in distributed form because we can reasonably assume you already have them. Once Win 7 is considered the baseline OS, well... we can assume you have some version of .NET as well. (Although, if they keep releasing new layers of it, there will be a continual need to ensure you have that particular runtime.)

Slayer_

Shouldn't need to install them. Any code used by the programmer should be compiled into the EXE, including required frameworks. So, a basic example: if the programmer uses the standard messagebox command, the compiler should compile the needed code for the messagebox creation into the EXE, rather than relying on the framework (or at least have the option to do this). The frameworks should still be an option, though; an organization that primarily uses .NET would benefit from the single location for support files. But it gets stupid when game launchers, for example, which generally serve no purpose but to edit INI files for game settings and check a system for requirements, require .NET while the rest of the game does not. This pisses me off.

Justin James

Your understanding of .NET is a bit off. It does not require a "couple of gigs of support files". The DOWNLOAD is big because it's a complete package that covers a few different OSes (such as 9X, XP, 7, etc.). The *reality* is, what actually gets installed on any given system is roughly 35 MB. And yes, it requires registry changes, because it is a virtual machine. It needs to do a lot of work to interface between the OS and the app, and to allow the app to run without caring what the underlying machine is. You'd expect, say, VMware to make a lot of registry changes too, right? Especially if you expected it to magically start a VM, load stuff within it on demand, and have it seamlessly function as part of the OS? Since .NET code is dynamically, not statically, linked, a DLL or library that isn't needed doesn't get compiled into the EXE regardless of whether or not it was referenced. That being said, the default "includes" tend to be a touch heavier than needed, but a good tool like ReSharper will alert you to unused references and includes and let you strip them out. It's easy to blame the frameworks for "bloat", but go ahead and try writing a modern app that does modern things without them. An app that would take a few minutes or hours in .NET or Java would take a week in straight C++ with minimal libs, and it would be riddled with security bugs and prone to crashing unless it was written by a top programmer, and even they make mistakes from time to time. The frameworks take so much of the guesswork out. It's easy for folks to sit on the sidelines and say, "oh, but look at the bloat". Yes, there is some bloat. But it's that bloat ("building on the shoulders of giants") that allows developers to get apps written in a reasonable time with reduced numbers of bugs.
If you were to look under the hood and see the number of LOC needed to do even the smallest thing in C++ or other native-code choices (for example, putting a window on the screen, which is 1 LOC in .NET and a zillion in C++ if you interface with the Win API directly), you would wonder how apps got written *without* these frameworks in the first place. If you REALLY want a return to the Windows 3.1 and early 9X days, that's what a world without .NET and Java looks like: every app blows up constantly, because they're all written in C++ by people who aren't top-5% developers. J.Ja

Mark Miller

I've heard Alan Kay talk about "separating meaning from implementation," and that it was found a few decades ago that this could be done. Here's a quote from an article where he talks about this:

"Another key idea here is to separate meaning from tactics. E.g. the meaning of sorting is much simpler and more compact than the dozens of most useful sorting algorithms, each one of which uses different strategies and tactics to achieve the same goal. If the "meanings" of a program could be given in a way that the system could run them as programs, then a very large part of the difficulties of program design would be solved in a very compact fashion. The resulting "meaning code" would constitute a debuggable, runnable specification that allows practical testing. If we can then annotate the meanings with optimizations and keep them separate, then we have also created a much more controllable practical system. This is critical since many optimizations are accomplished by violating (hopefully safely) module boundaries; it is disastrous to incorporate optimizations into the main body of code. The separation allows the optimizations to be checked against the meanings."

Edit: Didn't realize when I posted the excerpt that the double-quotes from the article were translated to "???" here. I fixed that.
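[Editor's note] Kay's sorting example can be made concrete. A rough sketch in Go (the function names are mine, not Kay's): the "meaning" of sorting is a small, obviously correct, runnable specification -- the output is ordered and is a permutation of the input -- and any tactical implementation can then be checked against it.

```go
package main

import "fmt"

// The "meaning" of sorting, as a runnable specification.
func isSorted(xs []int) bool {
	for i := 1; i < len(xs); i++ {
		if xs[i-1] > xs[i] {
			return false
		}
	}
	return true
}

func isPermutation(a, b []int) bool {
	if len(a) != len(b) {
		return false
	}
	counts := map[int]int{}
	for _, v := range a {
		counts[v]++
	}
	for _, v := range b {
		counts[v]--
	}
	for _, c := range counts {
		if c != 0 {
			return false
		}
	}
	return true
}

// One "tactic": insertion sort, standing in for any of the dozens
// of optimized algorithms Kay mentions. It never touches the spec.
func insertionSort(in []int) []int {
	out := append([]int(nil), in...)
	for i := 1; i < len(out); i++ {
		for j := i; j > 0 && out[j-1] > out[j]; j-- {
			out[j-1], out[j] = out[j], out[j-1]
		}
	}
	return out
}

func main() {
	in := []int{5, 3, 8, 1, 3}
	out := insertionSort(in)
	// The tactic is checked against the meaning, not the other way around.
	fmt.Println(out, isSorted(out) && isPermutation(in, out))
}
```

The two spec predicates total a dozen lines and are much easier to trust than any optimized sort; keeping them separate is exactly the "optimizations checked against the meanings" idea in the quote.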

Slayer_

Everything requires a large framework now. I'll just pick on .Net for a moment: why, to run a 20 KB EXE, do you require a couple of gigs' worth of support files and untold numbers of registry changes? Would it not make more sense for the compiler to compile in only the code modules that are actually used and leave out the rest? If a function is never called, a class never used, a constant never read, why does it all need to be available? Just compile in what you need. This will mean some duplicate code in each program, but if that bloats the 20 KB EXE into a 40 KB EXE because it needed to include support modules, that's a very good trade-off. A .Net EXE you could send off to run on any Windows computer -- what a dream that would be...
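[Editor's note] Statically linked toolchains already work roughly the way Slayer_ describes, and Go is a current example: `go build` emits one self-contained executable with the runtime and every used package linked in, while the linker drops unreachable code. A minimal sketch, with the caveat that the binary is only portable across machines of the same OS and architecture:

```go
// hello.go -- `go build hello.go` produces a single executable that
// needs no framework install on the target machine: exactly the
// trade-off described above (a bigger EXE, but nothing else to ship).
package main

import "fmt"

// Unreferenced code like this is eligible for dead-code
// elimination by the linker, so it need not bloat the binary.
func neverCalled() string {
	return "dead code"
}

func main() {
	fmt.Println("hello from a self-contained binary")
}
```

The resulting file is megabytes rather than 20 KB, but it runs on a bare machine, which is the spirit of "send an EXE that can run on any Windows computer".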

Sterling chip Camden

I'm going with Go instead.

Justin James

Once I can find a really good use for Go, I want to learn it. Chip, if you can suggest to me some good uses for it (you know my interests and needs pretty well), please let me know. J.Ja

Sterling chip Camden

Go's design goals are performance, concurrency, and simplicity. Therefore, some target applications that spring to mind include high-scalability web sites and services (I presume that's what drove its development at Google), system programming, and intensive but parallelizable algorithms.
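[Editor's note] A toy illustration of the concurrency point (my own example, not a Google use case): goroutines and channels make a parallelizable algorithm -- here, summing the two halves of a slice concurrently -- nearly as short as the sequential version.

```go
package main

import "fmt"

// sum adds up a slice and sends the total on a channel.
func sum(xs []int, out chan<- int) {
	total := 0
	for _, v := range xs {
		total += v
	}
	out <- total
}

func main() {
	xs := make([]int, 0, 100)
	for i := 1; i <= 100; i++ {
		xs = append(xs, i)
	}
	out := make(chan int)
	mid := len(xs) / 2
	go sum(xs[:mid], out) // each half runs in its own goroutine
	go sum(xs[mid:], out)
	fmt.Println(<-out + <-out) // prints 5050
}
```

The channel doubles as both the synchronization point and the result path, which is a large part of the "simplicity" claim.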
