Software Development

Three things C#-only developers might not know

Justin James highlights the important things developers might be missing out on if the only programming language they know is C#.

During a recent discussion with a friend, I realized there is an entire generation of developers who have been raised exclusively on C#, with perhaps some Java or PHP in college as well. I like C#, and for the work I do it's my only real option other than VB.NET, but the idea of never having been exposed to any other programming language is a little disturbing. Here's a list of three things you might not know anything about if you've been raised on C# (or .NET in general). I also explain why you might want to learn these three things, and I offer advice on how to learn them.

1: Macro/scripting style

Writing code for macros or scripts is an entirely different world from application programming. You can use C# for this task (particularly via PowerShell), but it is a pretty uncommon thing for .NET developers to do. This kind of programming is useful for automating repetitive tasks, especially those that are error-prone. While it is usually associated with our systems administration friends, it also comes up when working with a variety of applications that support macros, such as Office and various graphics applications.

Learning to write scripts can save you a lot of time in the long run and make you more productive in a number of scenarios. It can also help you lend a hand to the administration staff. For a .NET developer, learning PowerShell is a great first step toward getting a handle on this kind of development.
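To make that concrete, here is a minimal sketch in C# of the kind of chore scripting handles: prefixing every .log file in a folder with the date. The file pattern and naming scheme are made up for illustration, and in PowerShell the same task is a one-liner built from Get-ChildItem and Rename-Item.

    using System;
    using System.IO;

    class RenameLogs
    {
        static void Main()
        {
            // Hypothetical chore: prefix every .log file in the current
            // directory with today's date, the sort of repetitive,
            // error-prone task scripting is made for.
            foreach (string path in Directory.GetFiles(".", "*.log"))
            {
                string dir = Path.GetDirectoryName(path);
                string newName = DateTime.Now.ToString("yyyy-MM-dd") + "-" + Path.GetFileName(path);
                File.Move(path, Path.Combine(dir, newName));
            }
        }
    }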

Related resources: 10 fundamental concepts for PowerShell scripting and Basic Windows PowerShell commands you should already know.

2: Resource management

In the world of .NET, automatic memory allocation and the garbage collector do a really good job of making sure that you don't need to think too hard about memory. Along the same lines, things like file handles and database connections often get closed down when they are garbage collected. However, "not needing to think too hard" and "not needing to think about it at all" are two different things.

Many .NET developers pick up bad habits that are free of consequence in low-usage scenarios but deadly to performance and system stability under load. For example, a lot of developers don't bother closing open resources (file handles, database connections, etc.) and just let those objects fall out of scope. This will work after a fashion (the garbage collector eventually clears them out, and their finalizers tend to close them), but it is a really bad idea. Instead, become more conscious of this. If a class makes use of a resource, look for a "Close" method and make sure to call it when you are done with that resource. Also look for a "Dispose" method, which is used to discard resources safely; in C#, wrapping the object in a "using" block calls Dispose for you. Beyond that, ensure that objects live at the most restricted scope that still makes sense, so they don't linger in memory any longer than necessary.
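As a minimal sketch of deterministic cleanup, here is the "using" pattern with a StreamReader standing in for the resource (the file name is hypothetical):

    using System;
    using System.IO;

    class ResourceDemo
    {
        static void Main()
        {
            // StreamReader implements IDisposable. The using block guarantees
            // the underlying file handle is released when the block exits,
            // even on an exception, rather than whenever the garbage
            // collector gets around to it. "orders.csv" is a made-up path.
            using (StreamReader reader = new StreamReader("orders.csv"))
            {
                long lines = 0;
                while (reader.ReadLine() != null)
                {
                    lines++;
                }
                Console.WriteLine("Read {0} lines.", lines);
            }
        }
    }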

3: Interface with the Win32 API (and COM)

The .NET Framework does a really great job of taking the overwhelming majority of the Win32 API that we need to work with on a regular basis and presenting it to us in a tidy package. But once in a blue moon, you discover there's something in that API, or perhaps a COM component, that you absolutely need to get to. This is a great time to learn about P/Invoke. It takes only a few minutes to learn enough to be able to use it whenever the need arises.
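Here is a minimal sketch of a P/Invoke declaration, calling the Win32 MessageBox function from user32.dll:

    using System;
    using System.Runtime.InteropServices;

    class NativeDemo
    {
        // MessageBox lives in user32.dll; DllImport tells the runtime to
        // marshal the call out to the native function at run time.
        [DllImport("user32.dll", CharSet = CharSet.Unicode)]
        static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

        static void Main()
        {
            // 0 = MB_OK; the return value indicates which button was pressed.
            MessageBox(IntPtr.Zero, "Hello from the Win32 API", "P/Invoke demo", 0);
        }
    }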

J.Ja

About

Justin James is the Lead Architect for Conigent.

18 comments
rhyous

I just haven't seen this among C#-only guys.

#1 doesn't matter at all for application development. Though I have plenty of scripting experience (probably way more than most developers, due to working for a desktop management software company), I rarely find a good use for "scripting/macros" in application development.

#2 - Resource management - this I partially agree with. If you are good, you know all about how to be efficient even with C#. There are ways to make sure your objects are cleaned up by the Garbage Collector (GC) and disposed of properly. However, even with this, the "need" is just not there that often in application development, and sometimes when it is there, you realize your design sucks; once you fix your design, you don't need it anymore.

#3 - I have never met C#-only guys who don't have P/Invoke experience, because there are always one or two features that aren't in C# yet and you have to load a Win32 API.

This post, to me, is just not accurate. It is like the old generation ripping on the new generation just because it is a different generation. I have had a different experience. I have experienced the horrid result that comes when old C++ developers jump into C#: next thing you know, the code is C# but they are trying to write it like they are using C++, and no, it doesn't integrate or work with standard C# libraries like XML serialization, WPF binding, etc., because they don't know what they are doing in C#. For modern apps, I would hire a C# guy before a C++ guy any day. However, I would hire a guy that is just good no matter the language, the kind of developer that learns the language before they use it.

Another difference is that C# is much newer, so when you talk of C++ guys you are talking about guys with likely 20+ years of experience versus guys with less than 10 years of experience... yeah, the C++ guys sucked their first few years too, and posts like this would have existed from C and Pascal developers had there been a popular internet in 1985.

Jared http://www.rhyous.com

Tony Hopkinson

Try VB as far as resource management goes. The worrying thing for me is I went on a C# course and was told you don't need to manage your memory anymore. Fortunately, I'd been around long enough to know that statement is somewhat misleading...

gishac

I've worked with C# (and I've also used Java, Go, and C++), and in the C# projects I've worked on it has been necessary to be careful with resources (mainly memory) because of the quantity of data/objects being instantiated, etc... On macros/scripting it's true, but I think that using a Linux-based operating system also increases the likelihood of a developer reaching for a shell to handle some processes... in a Windows environment I think these kinds of tasks fall more to the Windows administrator than to the development team.

seanferd

I feel it is a lot disturbing if these folks are coding publicly available software. :0

Sterling chip Camden

4. Thinking functionally instead of imperatively

5. Dynamic typing (despite the coming of the 'dynamic' keyword)

Regarding resources: even in C# you might find yourself needing to fight that battle, but you'll have one arm tied behind your back when you do.

apotheon

I had a look at your site, and noticed a small problem: http://sob.apotheon.org/img/tmp/rhyous.png This, obviously, makes reaching some of your menus a little difficult. I wonder if it might have something to do with the fact that I'm using a custom default font size setting in my browser. I haven't tried looking at your CSS to figure out the exact problem, though (I don't generally do that kind of work for free). I hope this helps. By the way, I'm a bit of a fan of FreeBSD myself, so I sympathize with your apparent appreciation for it.

Tony Hopkinson

You just backed up the point JJ was making... Scripting can come from other places: providing automation interfaces, abstracting functionality for dynamic execution, building installers, automated testing, database DDL stuff. The weakness is more when people from either side of the divide reach for their familiar tool as opposed to the correct one. That's the part of his statement we should focus on, not the C# bit.

Justin James

... I've worked on apps in highly scaled environments where we discovered that devs were NOT closing DB connections, and every few hours the connection pool would run out and shut the app down in the middle of the day... it took MONTHS to find all of the connection leaks. Of course, *any* kind of developer can make this kind of mistake, but someone who has been spoiled regarding resources by a garbage collector will be a lot less in tune with the issues than someone who has spent time with a language like C or Pascal. J.Ja
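As a minimal sketch of the fix for that kind of leak, "using" blocks return the connection to the pool deterministically instead of leaving it checked out until the garbage collector runs. The connection string and Orders table below are hypothetical.

    using System;
    using System.Data.SqlClient;

    class ConnectionDemo
    {
        static int CountOrders(string connectionString)
        {
            // Both SqlConnection and SqlCommand implement IDisposable, so
            // the using blocks close the connection (returning it to the
            // pool) even if ExecuteScalar throws. "Orders" is a made-up table.
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
            {
                conn.Open();
                return (int)cmd.ExecuteScalar();
            }
        }

        static void Main()
        {
            // Placeholder connection string -- supply a real one.
            Console.WriteLine(CountOrders("Server=.;Database=Shop;Integrated Security=true"));
        }
    }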

Justin James

Schools are teaching either Java, VB.NET, or C# right out of the box. Java shares the exact same problems as .NET, of course. Students graduate and move directly into a Java or .NET position. And there you go... a developer that has only been exposed to one language. To compound this, many schools switched to Java-only in the '90s, and by the mid '00s most schools went Java or .NET only. This means that many of these people have been in the job market long enough to be considered "intermediate" or even "senior" level developers by now. I know a number of "architects" in this boat (they also know SQL, of course)! While they may be experts within the .NET or Java realm, the lack of outside experience leaves them with very strong blinders on. And as these languages absorb ideas from other languages, people without those experiences are unable to use them to maximum advantage. J.Ja

Tony Hopkinson

I've seen developers from "unmanaged" environments fight the garbage collector to a standstill. Basically, a lot of them are the people the concept of a garbage collector was invented for, i.e., those who worked in an unmanaged environment but didn't always bother to manage stuff. You get an immediate clue when you see something like this:

// Free up memory
GC.Collect();

Of course, that sort of developer was very popular at Redmond at one point, as they patented the memory leak and called it Windows. Managed and unmanaged environments are implementations; what people need to learn is lifetime management.

seanferd

So, they aren't even doing basics or theory first? Does this include Comp Sci degree programs and the like? Wow.

Tony Hopkinson

I've been in the process of educating myself for decades. For instance, I know the definition of slander; I read it in a dictionary once. I also read C programming for dummies. On the page describing malloc, it mentioned free, and vice versa. This was a clue to read both; not sure why I picked that up, innate genius perhaps... Everybody makes mistakes, even me. I learn from mine, though, whereas way too many quite obviously did not. Level of understanding? How in Cthulhu's name can you understand malloc without understanding free? It's one concept, for ffs. I never claimed to do everything better, I just claim not to be in the habit of doing such a basic fundamental worse, again and again and again. Windows was as leaky as a sieve for near a decade. Not going through the heartache of fixing it might have been a choice; repeatedly making the same crass error is incompetence. Even if the big boys weren't behind the effort, simple self-defense as a coder would have them doing it right in new code, yet they did not. I remember all too well.

koab2020

It's easy to slander other programmers when you think you can do everything better. Tell me you haven't made poor choices in any given development environment that you learned later on could have been improved. This is life management - you have to do the best you can with the options in front of you and the level of understanding you're at. Sound familiar?

Mark Miller

I agree that the language should not be the focus, but sadly a lot of CS programs think it is the focus, because their goal is to "get students ready for the workplace." Really. I've heard CS professors say this. This is 160 degrees away from where CS was when I took it 20 years ago (I hedge a bit, because there was some focus on "what industry uses" when I took it, but theory was still emphasized more).

I got into CS in the late 80s at Colorado State U. I got exposed to stuff like FSMs, automata, etc. in my junior year. Before that we had our introductory programming course in Pascal. This was the standard language used for the first 3 years of the program. In the senior level courses, the professor would pick whatever language they wanted to use, and the students were expected to learn it during the course. When I first entered, for example, I saw students in Operating Systems (a 4th year course then) using a language called "Turing," which resembles Pascal, from what I've seen. In my last year I saw other students using Modula-2 and C++. I was using K&R C in most of my senior courses (I only used ANSI C in my Compilers course).

I remember getting into an argument with one of my professors about the need to teach languages according to what industry was using at the time. I was in support of the idea. He told me, "We don't want to become a vocational school." He said the goal was to make us flexible enough that we could learn whatever skills we needed on our own. They weren't going to train us for whatever industry wanted at the time, because he knew from experience that by the time students graduate, things will have changed. He was right.

The common refrain I heard from my CS professors was, "Language choice is really a matter of taste. It doesn't really matter." I've found this to be somewhat constructive, and also somewhat misleading. Part of what they meant to communicate was, though I didn't get this at the time, and I doubt too many others did either, "Don't let the language dictate your design." They'd say things like, "You can do object-oriented programming in Fortran, if you want to, or even assembly language. You don't need an OO language to do it." The problem was none of our courses got us focused on developing our sense of architecture. All we got were rules of thumb, like, "Make your functions do only one thing." We never discussed what OOP really meant.

Unfortunately the computer industry, and the school system I had been in before going to college, had so molded the minds of students such as myself that we didn't bother to research this stuff on our own. As far as I could tell, everyone I knew, including myself, was letting whatever language we used dictate to a large extent the design of our programs. If we were using a functional language, we'd try to write all of the logic of our programs in terms of functions. In some cases this was mandated. If we were using a procedural language, we'd try to write it all in terms of procedures. Likewise in an OO language.

The way the above message was misleading is we took it to mean (I still hear this from people who've gone through CS programs recently) that, "It doesn't matter what language you use, because they're all Turing Complete." So we have people writing systems in C, C++, Java, etc. that fall into Greenspun's Tenth Rule, because, "It doesn't matter." They liked one language or another for their own reasons, so that's what they chose, but they didn't take architecture into account at all when designing their system.

Re. the top tier schools

All I've heard about MIT is that they recently moved what had been their introductory CS course, SICP with Scheme, to a junior level course (though unchanged in content), and they've made the introductory course deal with robotics, using Python. They've changed the orientation of the introductory course as well. Where SICP focused on building a "software computer" out of simpler computational components, the new course has students studying existing code libraries from a scientific standpoint, trying to figure out how they work, and then using them to program their own robots.

An anecdote I've seen out of CMU was not encouraging to me. I talked about this in another discussion thread on here. A CS professor there wrote a blog posting recently, advocating the use of FP in introductory courses, because of the need for students to get into parallel computation in the future. FP is the new hot thing among CS academics now, because it features the ability to write code without side-effects, at least if you stick to pure FP. This is supposed to make asynchronous code execution easier to implement. Why is this important? Because industry is struggling to try to utilize multi-core processors effectively, using industry standard runtimes. This professor said that OOP was inadequate, because, "It lacks parallelism and modularity." I was shocked at his ignorance. I've heard of some great stuff in CS being accomplished at CMU, so I was really surprised to see this.

Re. learning computer languages

The way industry has gone with computer languages has been towards clumsy languages that are themselves easy to learn, but have huge libraries. So the challenge is not so much learning the language. That's the easy part. The challenge is in learning the standard library that comes with it, and any other libraries one will need to "glue" to a project to get something working. I relate it to the kind of learning that system administrators need to do when becoming familiar with a new system. It's a very different approach. I know. I tried being a Unix system administrator as part of a team that managed servers for a university department, for a couple weeks when I was in college. I realized very quickly I was not cut out for it. The mindset that's needed for that kind of work is more like that of a power user, not a programmer, though I did some amateur level Unix sysadmin work in one of my first jobs out of college. It was a small business where each of us had to wear many hats.

Hazydave

The first fundamental flaw in many of these discussions is the presumption that it's the job of a college or university to teach a language. It really isn't... that may be the premise of a two-year technical school, but a proper college education is different... not to suggest all schools do it right.

I went to CMU in the early 80s, but I hope things are not substantially different now, there or at Stanford, MIT, or other top tier schools. Every science or engineering student took at least one course to get a working knowledge of the basics of programming. Back then, you had your choice of FORTRAN or Pascal; certainly neither would be a top choice today. For those going on in computer science, the next class (15-211, I'm sure many CMUers out there have those numbers etched in their grey matter; later split into two courses covering the same material) was pure computer science: FSMs, lambda calculus, no actual program writing until the final program. But after that class, you could pretty much get working with a new language in a day or so.

The other thing about this program, at least back then: the language used was just a tool, and the right one. I took 5 courses in psychology and AI, using mostly LISP. We used assembly in some of my hardware classes. My course on CAD used Pascal; compiler design let us use LISP or Pascal (sadly, no C available)... but the language didn't matter. And it never has... I have written in well over 40 languages.

If you have a CS degree, that should be a guarantee that a "computer language" is just a few days' exercise, not a semester's course, much less the focus of a degree. And particularly a proprietary language like C#... unless that's Microsoft U you're attending.

Mark Miller

There's been a considerable narrowing of the technical skills that CS programs focus on. A lot of schools teach only C++ and Java, and it seems that the C++ courses, from what I've read from professors who teach it, take the "objects first" approach, which means that they try to avoid the C aspects and pointers as much as possible. They use references instead. Some schools have taken to only teaching Java through the whole program. When I took CS we got exposed to a bunch of different programming models. We had assembly, we had imperative languages (Pascal and C), and we got exposure to a bunch of "research languages," like Lisp, Smalltalk, and SML. That's all gone, though there are some holdouts who teach Scheme, and there's some receptivity to Python, depending on what major you go into. CS has split up somewhat into different disciplines. For example, at my alma mater you can major in what's called "HCI," which stands for "Human-Computer Interaction." At another school I've been reading about they have a Media Computation major, which focuses on programming to manipulate audio, video, graphics, etc. CS has branched out as well. Natural Science departments started teaching their students programming, primarily in Python, a few years ago. And schools of engineering have been teaching programming for as long as I can remember.

Tony Hopkinson

In the courses I did, and they were in Pascal... That's academia for you. I mean, why bother with such esoteric considerations for a 10-line function that the examiner is only going to run once? Such issues go in the same bucket as decent names and such. You shouldn't be allowed to teach it if you haven't done 5-10 years at the sharp end, which would give out-of-work IT people a job to go to, as most of the academics couldn't even start to meet such a requirement.