Software Development

My Language Choice Conundrum


A recent TechRepublic article has sparked a fairly fierce discussion about the appropriateness of particular languages to particular tasks. What I find interesting about this is the attitude of, “well, if this is the language that I know, why shouldn’t I use it?”

About

Justin James is the Lead Architect for Conigent.

159 comments
metilley

In APL2 (IBM's original dialect of the APL language) there are no "libraries"; response time is instantaneous; and because APL is interpreted, there are no linking or compilation steps. The benefits go on and on. APL used to run only on IBM mainframe computers, but it now runs on all platforms: mainframes, RS/6000 RISC workstations, Microsoft Windows PCs. I don't think it runs on Apples yet, but I am not sure. I have been using APL since 1971 and I won't use anything else. I have looked at COBOL, C, C++, Java, Perl, et al. They are all crap! And if you can't get past the uppercase "Greek symbols" in APL (these are the operators and powerful functions), you can always use the J language. J is just as powerful as APL, but without the "funny Greek symbols". Look, I taught myself APL while I was a junior in high school, so the complaint that it's too difficult to learn is bogus. Give it a try. You can learn more about APL2 at www.ibm.com

jslarochelle

APL is the first language I learned (around 1978). It is an interesting language and very different from other languages. In APL you express the solution to your problem as a series of matrix operations. Of course, the available matrix operations are very powerful (extended to apply boolean operators, work with characters, etc.). I don't remember ever using FOR or WHILE loops, but my memory could be at fault here (1978, remember). I would worry about support and the fact that it is very marginal. That being said, I did have a lot of fun working in APL because I am a programming language lover (inner product, outer product, interesting...) and the environment was exotic and exciting for a newcomer (mainframes, high-speed printers, etc.). What is the company that makes the software for PCs, and how much money are we talking about? Am I correct that I saw J# mentioned in one of the comments on this thread, and if so, is that the J for APL?

Justin James

I have been dreaming for a while now of having something that would let me do matrix logic, to avoid those n-by-n nested if/thens. J.Ja
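
For what it's worth, the usual way to fake this in a language without APL-style operators is a lookup table indexed by the conditions. A minimal C# sketch of the idea (the enums and actions are hypothetical):

[code]
using System;

class MatrixLogicDemo
{
    enum Size { Small, Medium, Large }
    enum Priority { Low, High }

    // The "matrix": one cell per (Size, Priority) combination stands in
    // for the equivalent n-by-n nested if/then ladder.
    static readonly string[,] Action =
    {
        { "queue", "run"   },  // Small
        { "queue", "run"   },  // Medium
        { "defer", "split" }   // Large
    };

    static void Main()
    {
        Size s = Size.Large;
        Priority p = Priority.High;
        // One indexed lookup replaces the nested branching.
        Console.WriteLine(Action[(int)s, (int)p]);  // prints "split"
    }
}
[/code]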

Justin James

I definitely am intrigued by it! I am already thinking of combining it with some vector search algorithms and WordNet, using one of the .Net implementations, to do my "writing habits analyser" that I have been thinking of forever now. J.Ja

metilley

You will love the APL2 language. The possibilities are endless and it is a fun language to use. It is an array-processing language (APL arrays generalize vectors and matrices to any number of dimensions). For more information, try to find a book called "APL2 at a Glance" by James Brown and others. Wikipedia offers a nice overview too, but here are a couple of links for additional information: http://www-306.ibm.com/software/awdtools/apl/ http://www-306.ibm.com/software/awdtools/apl/success.html The J language is also worth looking at. It is NOT the same as J# and has nothing to do with Java. Here are two links for more information: http://en.wikipedia.org/wiki/J_programming_language http://www.jsoftware.com/ Enjoy!

Mark Miller

J#, as the name is commonly used today, is a Java dialect published by Microsoft that runs on .Net. They can't call it "Java.Net" because it doesn't adhere strictly to the Java spec.

jslarochelle

... surprising. A Programming Language that you can use on the Web! I thought that this language was just too exotic to survive. It goes to show that there is room out there for different languages. JS

jslarochelle

That would have been surprising. I don't think you will ever get an APL#. JS

Justin James

I think you are the first person I know who has really mentioned APL at all. I may check that out, just to try something different. If it has been around that long, there's a good chance that it has a good amount of tested, reliable libraries, too. Thanks! J.Ja

d_baron

Like .Net? Like Linux? Well, you probably know about Mono. Not nucleosis, but Novell's implementation of .Net. Very extensive and growing. Not tied to (or excluded from) Redmond's tools.

Justin James

To be honest, I am not a big fan of VB.Net or C#; to me, much of the value in .Net is the Framework itself, and far too much of the .Net Framework is Windows-specific. While I applaud the idea of the CLR on multiple platforms, and I like the idea of writing in multiple languages targeted to one machine specification, Mono is quite a bit behind where the .Net Framework and CLR themselves are. I would love to see Mono catch up with the .Net Framework and CLR. Another thing that would be really helpful is a translation layer to transform some of the Windows-specific functionality into something that works outside of Windows: translating Windows Forms to X, some of the Active Directory stuff to LDAP, etc. J.Ja

techrepublic

If ignorance is bliss, then Justin James must be the happiest man in the world.

apotheon

From what I've seen, Justin certainly has some blind spots, but he's not just universally ignorant. Perhaps you could elaborate on how you came to that conclusion for me.

Riverwind

People have different points of view on "right programming". Yes, the .Net Framework is a great idea for sharing 'components' of code in the same environment. It is an ideal, though. Does everyone favour using such a layer? My answer is no. There are lots of reasons: cost, compliance, setup, developer habits, learning issues and so on. Developers prefer keeping their existing, hands-on skill sets valuable for repeated use, rather than learning new areas of coding.

Tony Hopkinson

I personally like learning new languages; I like the opportunity they give to implement a solution in a different way. In my experience it's companies that are more in favour of a single-language environment, or that suffer from inertia on the language front. In fact, any developer who says they don't want to learn another language, or who tries to justify a poor choice in terms of comfort value as opposed to technical ones, I view with extreme suspicion. I'm familiar with many more languages than I've used in anger, and until you've done the latter, you can't really claim to be able to make a useful judgement on one. I'm a programmer; language is a tool, not the discipline. The tools you use do force particular ways to express a solution to a problem, and sometimes a language choice is an unnecessary technical constraint.

ProblemSolverSolutionSeeker

I wonder what language is used to do the printout of this article? The overlays make the article hard to read. I need to see more of these kinds of articles on TechRep, as I have decisions to make on career changes. Good stuff, guys and gals!

sMoRTy71

The page prints out much cleaner if you use our own "Print this thread" page rather than your browser's Print function. Here is a link to the printable view of this thread: http://techrepublic.com.com/5206-6230-0.html;jsessionid=LkulKHz2BO_lhlxS7q?forumID=102&threadID=204117&start=0 The "Print this thread" link is located at the top of the thread tree (right next to the "Subscribe to this thread" link).

LocoLobo

I use the printable view sometimes just to read or browse the thread. It lets the thread use the whole page and cleans up a lot of the extraneous stuff. Paul

Justin James

Thanks for the compliments! TechRepublic's HTML code is generated by a combination of WordPress (for the blogs, written in PHP) and another system that I am not familiar with, written in JSP (Java). I know that the forums use an application called "Jive", but I am not quite sure if Jive is written in Java (pretty sure it is). I have passed your mention of the readability issue along to management. Please feel free to get in touch with them about it as well; they are quite responsive to reader feedback. Thanks again! J.Ja

Mark Miller

You got to the crux of the matter when you said that what goes into a language choice has a lot to do with other factors, and less to do with the merits of the language itself. That is so true. A programming language needs lots of support in order to get adopted. IMO the fact that the other, more powerful languages stay in minority status illustrates this. I can't explain exactly why. From what I've read, it sounds like the reason is that they are maintained by communities that are "stuck on principles" and do not look at the practical realities that business enterprises deal with. It may be that they believe they are above all of that, and may have a superiority complex going on. Whatever. It keeps the number of people who use these languages small. I can kind of relate to wanting to stick to principles. The vision that went into creating these languages is fascinating, so as I've looked at them, even I feel a certain sense of not wanting them to deviate from it.

I'm sure you've read Paul Murphy's posts over at ZDNet. I don't agree with everything he says, but one thing he's said that rings true is that the "data processing" mindset in IT, in business and in academia, predetermines a certain outcome in the computing world. The more powerful languages were not written strictly with data processing in mind. You can do it with them, but they're capable of doing more, and doing it well. As I've read more about them I've discovered that it really does take a different mindset to use them. They're not only different in how you approach solving problems; they also require a different emphasis in the other aspects of development.

I wrote about this before in the comments section of your blog. IMO documenting your variables, particularly the types that are assigned to them, and when, is essential. Otherwise you'll probably go bonkers in a large project trying to figure out what type a variable is. In languages like Lisp, where you can create macros, it's essential to document what they're doing and probably when they get invoked. Otherwise, again, you could go bonkers trying to figure out what the program is doing. Something I heard from a professional developer who's used Ruby is that TDD is absolutely essential; if you're not using it, you're just asking for trouble. These are all methodologies that you can skip in the other languages, like C++, Java, and .Net, and still make a workable, fairly maintainable product, because you can write the code such that it's fairly self-documenting. Future developers will hate you for it, but you can skate by without those practices. The consequences of not using them in the dynamic languages are more dire.

When you say that trying to read functional language programs is like an English person trying to read German, I assume that's because you're just not familiar with the languages and how they operate. Functional languages, and dynamic languages generally, operate by different rules than imperative, strongly typed languages like Pascal, C, C++, Java, C#, and VB.Net. It's difficult to wrap your head around at first. If you're interested in pursuing it further, you may need to start with some instructional material that explains the semantic rules. I know when I first started trying to understand them years ago I got a headache. That got better when I found some information sources that clearly explained what these rules were and how I could use them.

I'll close with the fact that some of the more powerful languages do have IDEs that allow fullscreen, step-by-step debugging. Squeak (Smalltalk) is one of them. I believe there's a Lisp IDE that does this as well, called "Lisp In A Box". I've heard of it but haven't tried it yet.

john.madden

I agree with some of your comments, from my experience with .Net and C++: 1. While .Net really does cut development time, there are still some holes in the framework that need attention. I do not like the idea of multiple languages on the framework; give us one great language we can all program with instead of several incomplete adaptations. The development world does not need the "Balkanization" of .Net. Many will not agree with this, but the idea of a project written in many languages portends huge problems with maintenance down the line... 2. When push comes to shove, I still use the Win API and C++. Call me a dinosaur, but it does everything very well, just not quickly. Our society is one where management wants complex problems solved in easily understood safety languages, something that is just not going to be possible. Train intelligent people well, and then pay them what they are worth. I'd rather pay a few great coders well than an army of average minds collectively typing away with mediocre results...

Justin James

"When push comes to shove, I still use the Win API and C++. Call me a dinosaur, but it does everything very well, just not quickly. Our society is one where management wants complex problems solved in easily understood safety languages, something that is just not going to be possible." This is why I am stuck using .Net and VBA. "RIGHT NOW!" as opposed to "RIGHT!" are the buzzwords. And then they wonder why dumb, obvious bugs get into the code. Good code takes time. Bad, mostly working code doesn't. "Train intelligent people well, and then pay them what they are worth. I'd rather pay a fewer number of great coders well then an army of average minds collectively typing away with mediocre results..." Jakob Neilsen measured this, a great programmer (top 1%) is literally 20 times more productive than an average programmer. This means that you could pay 1 great programmer $1.5 million a year and still save buckets of cash ($75k salary, $25k taxes, benefits, etc. to an average programmer). J.Ja

M_a_r_k

You said you've never used C#, but you called it a dog of a language. Why? How can you make that statement if you've never used it? I have used C++ and C# extensively, and some Java. C# is by far my preferred language. Yes, it is specifically for a Windows environment, and more specifically .NET. In writing Windows or web services code with a Windows server, I have found that I can crank out a lot more code that is a lot more reliable with C# compared to C++. The learning curve is easier than with Java because documentation for the API is readily available with MSDN and the development tools are built in via Visual Studio. With Java, you have to go searching all over the Internet and bookstores for API documentation, and you have to find your own development environment. Of course, if you are writing code for a proprietary system or embedded code, then neither Java nor C# will work. In that case I recommend C++ or good old C. C# and Java are basically improved versions of C++. For you C# and Java haters, I ask you: what advantages does C++ have over either C# or Java when developing Windows software or web services? You don't have to worry about pointers, memory leaks, and destructors with either C# or Java. An added advantage of C# over Java is that pointers are turned off by default, but you can use them if you need to, using a C# unsafe code block. I've found that I have only rarely had to fall back on pointers. I could have gotten by without them, and it is recommended you not use them (a lot of today's novice programmers have never used them), but there have been a few times where some things have been easier with pointers, though I'm careful to limit their use and visibility.
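
For readers who haven't seen it, here is roughly what the unsafe block mentioned above looks like; a minimal sketch, and it requires compiling with the /unsafe switch:

[code]
using System;

class PointerDemo
{
    // "fixed" pins the array so the GC cannot move it while we hold a raw
    // pointer to it; the pointer is only valid inside the fixed block.
    static unsafe int Sum(int[] data)
    {
        int total = 0;
        fixed (int* p = data)
        {
            for (int i = 0; i < data.Length; i++)
                total += *(p + i);
        }
        return total;
    }

    static void Main()
    {
        Console.WriteLine(Sum(new int[] { 1, 2, 3, 4 }));  // prints 10
    }
}
[/code]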

Justin James

"How can you make that statment if you've never used it." I have read plenty of C# code over the years; I know what goes into a language that I call "good" and what doesn't, and C# is too into the OOPy style of verbosity. "Yes, it is specifically for a Windows environment, and more specifcally .NET. In writing Windows or web services code with a Windows server," That does not bother me too much, I spend a lot of time using VB.Net (which I dislike a lot too, I may add). "I have found that I can crank out a lot more code that is a lot more reliable with C# compared to C++." I would not doubt that; it takes a truly exceptional (like top 1%) programmer to be able to write C++ quicker than C#, Java, VB.Net, Perl, or most other languages. C++, while terse, is not a fast language to write. C is even slower, I may add (nothing like doing malloc for everything...). "For you C# and Java haters, I ask you what advantages does C++ have over either C# or Java when developing Windows software or web services?" Speed of execution, plain and simple. When you are writing something used by a lot of people, the cost of execution speed becomes a concern. Let's take a small application used by 10,000 people at a company, for example. This app is written by one programmer in 1 month in C# or Java, and 2 months in C++, that's roughly $4k in developer time vs. $8k. Now, let's say that the C++ app saves 1 minute per employee per day over the C# or Java app. That is 10,000 man minutes per month, which is 166.6 hours. In other words, the extra month of development is paid back within the first month of use! It gets even worse when you are talking about a large Web app, where a 10% improvement in performance scales to 10% less hardware costs, which scales to a 10% reduction in maintenance, software liscense fees, etc. (assuming that we are talking about a big application farm, not something small that takes up 2% CPU on 1 server somewhere). If you are writing "enterprise apps", you really should be using an "enterprise language", and that means C++. "The learning curve is easier than with Java because documentation for the API is readily available with MSDN and the development tools are built in via Visual Studio. With Java, you have to go searching all over the Internet and bookstores for API documentation and you have to find your own development environment." I will probably always favor VB.Net and C# over Java for these reasons. Java is sort of cross platform compatable, when the Iris of Osiris is in alignment with five of the Seven Sisters of Thanatos and your JVM of Smiting +3 has been properly polished with the Wax of Compatability and stored in a bag of holding... that is the one thing Java has over .Net. "a lot of today's novice programmers have never used them" While I agree with this statement in fact (can't dispute it), it is amazing and saddening to me. As far as I am concerned, a "programmer" who is not capable of understanding pointers is not a programmer. They are a professional libary gluer. Do I use pointers? I use references in Perl. I pass by reference frequently, for performance reasons. Programmers who don't understand pointers also seem to make a lot of other rookie mistakes too, like passing an entire dataset by value, not by reference (oh, my aching heap!), poor object lifetime management, not freeing resources (I love it when people manage to make memory leaks in a GC'ed language, or do not explicitly close a file handle, so the file is "in use" until the user closes the program) and so on. 
It isn't even that *using* pointers makes you a better programmer, it is that *knowing how to use* pointers makes you a better programmer. Learning EdScheme was useless from a practicial standpoint, but having that experience made me a better programmer. Learning data structures in C is not knowledge I use to today, but it made me a better programmer. Taking Calculus has not helped me one whit at a practical level, but it taught me how to think about problems a bit better. Reading lots of stuff, very rarely even computer related does not come up in my job, but it makes me a better programmer. And so on and so on. I really think that what people used to say about BASIC (check the Jargon File) is already applicable to the programmers weaned on Java, C#, etc. J.Ja

fredmoscicki

You rag on, don't you? You specifically said in your initial article that you don't like Java because it is more time-consuming to program in (if I got your take correctly). Now that someone is confronting you on that, rather than defending your initial statement, you change course and start talking about speed of execution. What exactly is your beef, anyway? Did a Java programmer get under your skin at some point? With regards to Java documentation availability and tool plug-in-ability, I would only say research and get the facts before you start your rant. Have you used Eclipse or NetBeans before? No? I didn't think so.

Justin James

I used Java for a full year, a few years back. Maybe things have changed, but this is what my experience at that time was: * I do not like the Java language. It is insanely verbose. * It executes slowly, even more slowly than .Net does, and that is quite an accomplishment. * Sun: a great technology company in terms of producing good-quality stuff, but they still have not figured out how to actually help their customers or users get their job done. Take a look at Solaris's language selection process during an install if you don't believe me. * Broken promises of cross-platform compatibility; between different JVMs, conflicting versions of the Java spec, and different app servers, Java is not nearly as cross-platform as it claims to be. * Documentation shoddy at best: incorrect syntax listings, actual operation not as described in the documentation, confusing behavior for certain things. * Making primitives and objects not equivalent. Again, that was *my experience*. I have been meaning to give Java another go-around, but let's face it: if every Chevy you ever bought broke down on you, would you keep buying Chevys, hoping that they had improved? J.Ja

M_a_r_k

And your statement comparing knowledge of calculus to knowledge of pointers is very accurate. I have taken four calculus courses but haven't ever used calculus for work. Just knowing the theory has helped me, in my opinion, be a better engineer and software developer. Same with pointers (and assembly languages too, for that matter). Knowing what's going on behind the scenes has helped me write better code; it is especially useful when debugging. It is very irritating to work with some so-called software developers who have no concept of what the computer is really doing when it executes a program.

grant.volker

Test for constantly increasing memory use

Hi, I've been tasked with monitoring a Windows process developed on the .Net Framework. I'm looking to monitor the memory for any consistent increase, and also any degradation in CPU performance. I have some ideas below, but would really appreciate your insights before I go ahead! Ideas from: http://msdn.microsoft.com/msdnmag/issues/07/01/ManagedLeaks/#void

Is it a memory leak?
--------------------
Note that constantly increasing memory usage is not necessarily evidence of a memory leak. Some applications will store ever-increasing amounts of information in memory (e.g. as a cache). If the cache can grow so large as to cause problems, this may be a programming or design error, but it is not a memory leak, as the information remains nominally in use. In other cases, programs may require an unreasonably large amount of memory because the programmer has assumed memory is always sufficient for a particular task; for example, a graphics file processor might start by reading the entire contents of an image file and storing it all in memory, something that is not viable when a very large image exceeds available memory. To put it another way, a memory leak arises from a particular kind of programming error, and without access to the program code, someone seeing symptoms can only guess that there might be a memory leak. It would be better to use terms such as "constantly increasing memory use" where no such inside knowledge exists. The term "memory leak" is evocative, and non-programmers especially can become so attached to the term as to use it for completely unrelated memory issues such as buffer overrun.

Checking for Leaks
------------------
There are a number of telltale signs that an application is leaking memory: maybe it's throwing an OutOfMemoryException; maybe its responsiveness is growing very sluggish because it started swapping virtual memory to disk; maybe memory use is gradually (or not so gradually) increasing in Task Manager.

When a memory leak is suspected, you must first determine what kind of memory is leaking, as that will allow you to focus your debugging efforts in the correct area. Use PerfMon to examine the following performance counters for the application:

Process/Private Bytes: reports all memory that is exclusively allocated for a process and can't be shared with other processes on the system.
.NET CLR Memory/# Bytes in All Heaps: reports the combined total size of the Gen0, Gen1, Gen2, and large object heaps.
.NET CLR LocksAndThreads/# of current logical Threads: reports the number of logical threads in an AppDomain.

Test: If Process/Private Bytes is increasing but # Bytes in All Heaps remains stable, unmanaged memory is leaking.
Test: If both Private Bytes and # Bytes in All Heaps are increasing, memory in the managed heaps is building up.
Test: If an application's logical thread count is increasing unexpectedly, thread stacks are leaking. By default, the stack size on modern desktop and server versions of Windows is 1MB, so if an application's Private Bytes is periodically jumping in 1MB increments with a corresponding increase in # of current logical Threads, a thread stack leak is very likely the culprit.
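
If it helps, those counters can also be sampled from code via System.Diagnostics, so the test can run unattended and log for days. A sketch, assuming a single process instance named "MyService" (adjust the instance name to whatever PerfMon shows for your process):

[code]
using System;
using System.Diagnostics;
using System.Threading;

class LeakWatch
{
    static void Main()
    {
        const string instance = "MyService";  // hypothetical process instance name

        PerformanceCounter privateBytes =
            new PerformanceCounter("Process", "Private Bytes", instance);
        PerformanceCounter heapBytes =
            new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance);
        PerformanceCounter logicalThreads =
            new PerformanceCounter(".NET CLR LocksAndThreads", "# of current logical Threads", instance);

        while (true)
        {
            // Log the three counters that the tests above compare against each other.
            Console.WriteLine("{0}\tPrivate Bytes={1:N0}\tAll Heaps={2:N0}\tThreads={3}",
                DateTime.Now, privateBytes.NextValue(),
                heapBytes.NextValue(), logicalThreads.NextValue());
            Thread.Sleep(60000);  // sample once a minute
        }
    }
}
[/code]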

M_a_r_k

Thanks for the explanation, db. It sounds like it may have been coded right, but the design needed to be tweaked to delete jobs under every condition. C++, C, etc. have a lot of coding idiosyncrasies, as well as these design issues, that you have to consider. A great book that discusses these types of problems with C++, and that helped me very much, is [i]Effective C++[/i].

Mark Miller

I read up on them a little bit. I can't remember exactly how you do it, but you assign your object to a WeakReference (fudging a bit here), and from that point forward the object can be garbage collected. The danger is that it can be garbage collected before you'd expect it to be (like before it goes out of scope), so I assume there's a way to check first before you access it. That's the downside of it. Just like you don't know when the GC is going to collect objects after all references to them are gone, you don't know when the GC is going to collect a weakly referenced object either. That's the breaks. It's conceivable you could run into a situation where you need to reinitialize a new instance of the same object, because you're not ready to give it up yet. Doing my Office automation project, which I described in another post, I was looking into this, because I needed to release all references to the COM proxy objects I was using in order for Excel to shut down. I ultimately decided to nil them out myself so I could determine when the GC would have access to them. It was easier to deal with that way.
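
A minimal C# sketch of the pattern described here; the safe way to "check first" is to copy Target into a strong reference and null-check it, rather than testing IsAlive and then fetching (the GC could run in between):

[code]
using System;

class WeakRefDemo
{
    static void Main()
    {
        byte[] cache = new byte[1024 * 1024];      // expensive-to-rebuild data
        WeakReference weak = new WeakReference(cache);
        cache = null;                              // drop the strong reference

        // From here on, the GC may collect the buffer at any time.
        byte[] data = weak.Target as byte[];       // re-acquire a strong reference
        if (data != null)
            Console.WriteLine("Still alive: {0} bytes", data.Length);
        else
            Console.WriteLine("Collected; rebuild the data");  // the reinitialize case
    }
}
[/code]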

Mark Miller

I read an article on The Code Project that promoted the idea that something like what you describe be done for values/objects held in session state that expire between pages, in an ASP.Net project. I looked at it and it just looked like trouble to me. The scheme that was suggested was to use some function that generated a unique ID. This would be the ID in all session entries made for that page, during the page session. I think the author suggested concatenating a mnemonic onto the ID to uniquely identify values that were used between page round trips. When you left the page (went to a second page) and came back to it, you would not use the same ID as before; you would get a new ID for the same page, thereby abandoning any data stored for the previous page session on that page. That was the idea: this way you don't have to worry about the codebehinds for different pages accidentally clobbering each other's session values, or a page reading a value from session state that doesn't mean anything to it. The problem as I saw it was that these discarded session entries were never nulled out. They would just stay in session for the entire run of the app. True, session state eventually expires, but that's when ALL of session state is freed up after a period of inactivity from the user. For short-run applications like e-commerce sites you could get away with this, but not with a line-of-business app, where it may be used by a single user, or at a single workstation by multiple users, for hours at a time.
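
As best I can reconstruct the scheme (all names here are hypothetical), it looks something like this in an ASP.Net codebehind; note that nothing ever removes the orphaned entries, which is the flaw being described:

[code]
using System;
using System.Web.UI;

public partial class OrderPage : Page  // hypothetical page
{
    // The per-visit ID lives in ViewState, so postbacks to this page reuse it,
    // but navigating away and back generates a new one, orphaning old entries.
    private string PageSessionId
    {
        get
        {
            if (ViewState["PageSessionId"] == null)
                ViewState["PageSessionId"] = Guid.NewGuid().ToString("N");
            return (string)ViewState["PageSessionId"];
        }
    }

    private void StoreValue(string mnemonic, object value)
    {
        // e.g. key "3f2a9c...:CartTotal"; no other page can clobber it
        Session[PageSessionId + ":" + mnemonic] = value;
    }

    private object LoadValue(string mnemonic)
    {
        return Session[PageSessionId + ":" + mnemonic];  // null once orphaned
    }
}
[/code]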

jslarochelle

You should be careful about setting references to null all over the place, because this can screw up the memory caching mechanism; you could end up keeping the wrong memory block in the cache at the expense of more useful data. The example given in this branch is a valid application of it, though. JS

jslarochelle

You can still run out of memory in a GC'ed language. It is not because you don't allocate memory explicitly that you can't run out of it. With simple applications that are fairly static in nature this is easy to take care of, but with more complex applications you have to plan your design to handle running out of memory. I still think that for what we do, Java is great. However, I think that languages like C/C++, or even assembly language, are very good at building programming discipline. To build anything useful in assembly language you really had to think before you started coding. Assembly language was also a really good way to learn about the hardware. In school, I think they should have kept assembly language on the curriculum (or should bring it back).

fredmoscicki

I don't think there is any language that would handle that, and if there were, I certainly would not use it. It would resemble some of the so-called smart features in MS Word: things happen in a 'smart' kind of way based upon someone else's interpretation of what I am probably trying to do. Why would you want the language to take care of those never-run jobs? How is it supposed to know you wanted the object turfed? You would have to run some cleanup code, which is probably what you ended up doing in the end. That was just plainly a programming error. The fact is, we can all make silly programming mistakes in any language. The only thing is, there are fewer to make in a garbage-collected language. The only downfall of garbage collection is increased resource requirements and the increased possibility of indeterminate behavior.

Justin James

That explanation of a memory leak in a GC'ed language was spot on. It is actually quite frequent. A good coding practice, I have found, is to set all objects to Nothing right before they leave scope, both to help the GC know it can clean them up and to make sure that I get an error if I am still referencing something that I should not be. J.Ja

jslarochelle

Sometimes this includes using classes like WeakReference or SoftReference. This is Java, but C# must have similar classes or an equivalent mechanism.

dbmercer

Garbage collection only deallocates memory that cannot be accessed from the stack (directly or indirectly), not memory that *will* not be accessed. Therefore, if you inadvertently keep references around (e.g., in some sort of collection object) to objects that you will never reuse, you have effected a memory leak.

Here's how it happened in my case. My instance of making the mistake was in managing communications with multiple jobs. To implement this, I created a job object that encapsulated the data and state information for a job, and I passed the job object (or, rather, a reference to the same) to various functions and objects for action. In addition, some communication was done extra-process, so a unique ID was assigned to each job, and references to the jobs were used in interprocess communication. The jobs were then stored in a collection keyed by ID, so when a message was received referencing the job ID, the appropriate job could be pulled up and updated as appropriate.

One way to cause the memory leak would have been simply to neglect to delete the job from the collection when the job completes. In our case, this was being done most of the time, but there were some unusual/unexpected ways for a job to terminate that were not being handled properly. In fact, they were coded correctly to perform the appropriate externally observable actions (sending the appropriate messages and signals to announce the event), but they were not removing themselves from the job collection, so the only effect of this was an over-enlarged heap after several days. The garbage collector would not deallocate the objects because they were still referenced by the collection, which was referenceable from the stack. The garbage collector had no way of knowing that no message would ever come in with that unique job ID again, so the object remained in memory, just in case. (In fact, in the spirit of true defensive programming, we even had handling for this remote possibility, which was to delete the job from the collection, but since no message would, in fact, come in for a defunct job, this code was never -- and rightly should never have been -- invoked.)
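
Condensed into a sketch (the names are hypothetical), the failure mode looks like this; the dictionary roots every job, so any termination path that forgets to remove its entry leaks the whole job object:

[code]
using System;
using System.Collections.Generic;

class Job
{
    public Guid Id = Guid.NewGuid();
    public byte[] State = new byte[64 * 1024];  // per-job data that piles up
}

class JobManager
{
    // Keyed by ID so interprocess messages can find their job. Every entry
    // here is reachable from a root, so the GC can never reclaim it.
    private readonly Dictionary<Guid, Job> jobs = new Dictionary<Guid, Job>();

    public void Start(Job job)       { jobs[job.Id] = job; }
    public void OnCompleted(Job job) { jobs.Remove(job.Id); }  // the usual path cleans up

    public void OnAborted(Job job)
    {
        // If an unusual termination path omits this line, the job stays in
        // the heap for the life of the process: a leak in a GC'ed language.
        jobs.Remove(job.Id);
    }
}
[/code]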

Justin James

"Memory leaks are less of a problem in garbage-collected languages, but the problem has not gone away, and I am confident that as the new crop of software engineers comes of age, never having been taught about memory deallocation, they will be able to increase the incidence of memory leaks back up to their previous levels." I love this quote. And has anyone else ever wondered why Excel insists on locking files so hard, you cannot even copy them while they are open? It is not like Excel is streaming data to the file constantly... J.Ja

M_a_r_k

How did this memory leak occur? I have heard occasional stories about memory leaks in Java and .NET, but I haven't ever had them occur in my own code. I have always assumed the garbage collector would eventually get around to cleaning things up. I know that some resources, such as database connections, do need to be explicitly closed, but memory deallocation should be automatic. A memory leak is one of the last things I am going to suspect when/if my .NET program randomly crashes. Can you give me some things to watch out for that might cause them?

dbmercer

"Knowing what's going on behind the scenes has helped me write better code; it is especially useful when debugging. It is very irritating to work with some so-called software developers who have no concept of how what the computer is really doing when it executes a program." Hear, hear! "I love it when people manage to make memory leaks in a GC'ed language." Yes, not only have I seen this, I have done it myself. However, with my knowledge of how memory is allocated and deallocated, I was at least able to recognize the problem, track it down and fix it. Memory leaks are less of a problem in garbage-collected languages, but the problem has not gone away, and I am confident that as the new crop of software engineers comes of age, never having been taught about memory deallocation, they will be able to increase the incidence of memory leaks back up to their previous levels. :-)

john.madden

Justin, much of what you said I have also reiterated before. While I love the .Net Framework, there are still some holes where you have to use P/Invoke. I am also seriously concerned with everyone's enthusiasm for multiple languages on the platform: good for now, but what about 5 years down the road, when maintenance has to be performed on a project written in several languages? As for the venerable C/C++, much has been written to detract from them. However, I think education is the key to proper use, and let's face it: not everyone is cut out to be a developer.

Justin James

begin/end were from Pascal, not VB, as far as I know. But yes, VB has always been awkward for me as well, and I am grateful that VB.Net has improved that a lot ("elsif" being replaced by "else if", for example). But it is still a goofy language. The size - 1 thing I see in lots of languages, particularly the OO ones; it *does* make sense. "Count" indicates the number of elements, not the index of the last element. Any language that uses zero-indexed arrays combined with a "count" or "size of" operator/method/property will have this goofiness. Indeed, if you remember from a while ago, putting that value (size - 1) into a separate variable to use for looping slashed execution time dramatically! J.Ja
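
The hoisting trick mentioned here, in a C# sketch; how much it actually saves depends on the collection and on what the JIT can prove, so it's worth measuring rather than assuming:

[code]
using System;
using System.Collections.Generic;

class LoopBoundDemo
{
    static void Main()
    {
        List<int> items = new List<int>(new int[1000000]);
        long sum = 0;

        // Evaluates the Count property on every iteration.
        for (int i = 0; i < items.Count; i++)
            sum += items[i];

        // Hoists the bound into a local once, before the loop.
        int n = items.Count;
        for (int i = 0; i < n; i++)
            sum += items[i];

        Console.WriteLine(sum);
    }
}
[/code]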

Mark Miller

I've probably mentioned this before, but I chose C# as my "standard language" just because it was similar to the languages I had been using for years, C and C++. VB.Net took a while to get used to. It was more verbose, from where I sat. I like curly braces as opposed to "begin" and "end" for everything. I like expressions like: for (int i = 0; i < size; i++) {...} To me the terseness has some elegance to it.

From what I hear, VB.Net has some features, particularly in .Net 2.0, that C# doesn't have that make it more productive for some things. The "My" namespace comes to mind. The one nicety that VB.Net had that C# didn't in the project I worked on was that the VB.Net code editor would do a static check of my code as I was writing it. That kind of impressed me. I don't know if this was an illusion or not, but my VB code always seemed to compile faster than my C# code for some reason. Maybe it was doing some pre-compilation during the static checking?

The toughest thing for me to get used to was the way it handled line breaks and end-of-lines. In C# you use a ';' at the end of each statement (statements with curly braces completing them are the exception). In VB.Net you end a line by hitting Enter. If you want to continue a line in C# you just hit Enter and keep typing; in VB.Net you have to type ' _' (space, underscore), then Enter. I kind of got how variables were allocated in VB, but I was never quite sure I was doing it right. In C#, if you're declaring an array you type: int[] c = new int[20]; You have to do it this way. All arrays are on the heap. But in VB.Net I can do: Dim c(19) As Integer and that's it. It looks as though it puts the array on the stack, though that could be an illusion. One thing that showed that VB was retrofitted to .Net were lines I had to type like this: For i = 0 To size - 1 ... Next i BASIC was designed for 1-based arrays, but that isn't supported in .Net.

Anyway, working in VB was a bit of a trip down memory lane. When I was in public school I programmed in old-style BASIC (with the line numbers) for years. No way would I work in an environment like that now. At least BASIC has grown up since then. I didn't mind working in VB.Net too much, but if I had to make a choice between them I'd use C#, if it was practical.

Justin James

Thanks for mentioning that about the VB.Net/Office connection. I will be shifting from VBA within a "template" to VB.Net modifying the document very soon, too. Right now, VB.Net is my "standard language", simply because I have not yet found a reason to be using C# for anything. Also in the back of my head is the fantasy that other people in my company will be debugging/QCing/assisting my code, so that really makes VB.Net the choice for that. J.Ja

Mark Miller

In general I agree with the notion that consistent language use is good for optimizing a software development team. You don't have to worry about hiring someone who knows an obscure language. I have run into the situation, though, where it was just better to use VB.Net (the main language I use with .Net is C#), and that was when I did some Office automation. VB.Net just understands better how to communicate with Office COM. You can do it in C#, but it's not as nice. Before I made my decision I looked at some sample Office automation code, both in VB.Net and C#, and it was no contest for me. VB.Net understands optional parameters and named parameters in COM components, and Office COM uses these characteristics a lot. You can deal with optional COM parameters in C# by filling the parameters you're not using with some enumerator value (I can't remember what it is at the moment). The thing that struck me is you have to fill in ALL of the parameters you're not using, even if there are 20 in all. Aaagh! VB.Net handles this for you. I don't think C# understands named parameters. My guess is you just get around it by filling in unused parameters with the enumerator value.

I worked on a project where they wanted Excel worksheet functionality in an ASP.Net app. (I'm sure I've talked to you about this before.) So I did the web app code in C#, and I did the Excel processing using VB.Net. I made this decision because it made me more effective at the job. I could've done it in C# to be consistent, but I was worried I'd run into some nasty situation where C# just wasn't well suited to dealing with Office COM.

I think you can justify using some "one-off" languages if you feel that doing something in the "standard" language you use would be more trouble than it's worth. I don't mean in the sense of having to write more lines of code, but rather "how many hoops do I have to jump through to get this done in my standard language?" An example you cited in an earlier post, on Perl being your secret weapon, I think was a case in point. Doing what you talked about in .Net was just way too inefficient. Doing it in Perl saved you a ton of time. It wasn't because of the lines of code involved; the process just ran a lot faster. You have to make a value judgement about it. IMO, you could probably write most of the business logic you need for an app in your "standard" language. If it looks like a part of the business logic is going to get really hairy, then consider other language options that will solve that particular piece more effectively. That's how I'd look at it.
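
For the record, the placeholder value being described is Type.Missing (a.k.a. System.Reflection.Missing.Value). A sketch of what the C# side looks like; the file path is hypothetical, and Workbooks.Open really does demand every optional parameter be supplied:

[code]
using System;
using Excel = Microsoft.Office.Interop.Excel;

class ExcelInteropDemo
{
    static void Main()
    {
        Excel.Application app = new Excel.Application();

        // C# (as of this writing) has no optional or named parameters, so
        // every unused COM parameter gets Type.Missing; VB.Net fills these
        // in for you.
        Excel.Workbook book = app.Workbooks.Open(
            @"C:\data\report.xls",  // hypothetical file
            Type.Missing, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing);

        Console.WriteLine(app.Version);

        book.Close(false, Type.Missing, Type.Missing);  // don't save changes
        app.Quit();  // releasing the COM proxies is omitted for brevity
    }
}
[/code]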

Justin James

I am now officially reconsidering my stance on multiple languages for .Net because of what you're saying. I never really looked at it like that. While multiple languages in .Net is a dream for me ("yeah, I'll write this little analysis piece in F#, the heavy string handling in Perl, the event handling code in VB.Net, and the bulk of the business rules in Ruby"), there are tons of languages out there which may not be around in 5 years, and tons of programmers who don't learn languages as quickly as I do. Maintenance is indeed a problem in that situation. Thanks for bringing this up. J.Ja
