Security

Make your apps more secure with these tips from Microsoft's Bret Arsenault

Microsoft's Chief Security Officer Bret Arsenault talks shop with Justin James. Hear what he says developers need to do better to make their applications more secure.


A simple fact of life in the IT industry is that, even if you do not use Microsoft products, the security of the company's products will most likely end up affecting your work one way or another.

A few years ago, Microsoft began releasing its Security Intelligence Report (SIR) in order to provide an accurate assessment of the latest threats to its products. Each report covers a six-month period. In early December 2008, I had the chance to speak with Bret Arsenault, Microsoft's Chief Security Officer, about the SIR: Volume 5 (January 2008 - June 2008).

I find this issue of the SIR interesting for two reasons. First, the number of reported High-severity vulnerabilities has decreased for a full year's worth of reporting periods. Second, more than 90% of HTML-borne threats affecting Windows Vista actually target third-party products -- not Microsoft products. Bret said that this shift makes a lot of sense, and I tend to agree with him. Windows Vista's security is not perfect, but it is now hardened to the point that the OS is no longer the lowest-hanging fruit on the tree. In addition, as he pointed out, the data that the bad guys really want tends to be locked up inside the application now, not the OS.

Bret and I talked in-depth about what developers need to do better to make their applications more secure. He said the security holes developers are seeing are the same ones that we have been seeing for years: buffer overruns, data hardcoded into the applications, and many other bad practices.
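To make the hardcoded-data problem concrete, here is a minimal sketch (in Python, purely for illustration; the APP_DB_PASSWORD environment variable name is hypothetical) contrasting a credential baked into the source with one read from the environment at run time:

    import os

    # Risky: a secret baked into the source ships with every copy of the
    # application, lands in version control, and cannot be rotated without
    # a new release.
    DB_PASSWORD = "s3cret"  # the hardcoded-data bad practice

    # Safer: read the secret from the environment (or a protected
    # configuration store) at run time, and fail closed if it is missing.
    db_password = os.environ.get("APP_DB_PASSWORD")
    if db_password is None:
        raise RuntimeError("APP_DB_PASSWORD is not set")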

At a technical level, applications are still not modular enough; in addition, many applications do not perform automatic updates. I asked Bret about the possibility of allowing third-party developers to participate in the Microsoft Update program, and he said it is not currently being discussed as an option.

What developers need now are the same remedies that have been recommended for quite some time. It is a matter of educating developers and helping them become more rigorous in their practices. He said that developers need to be retrained and suggested that they should all learn about the Security Development Lifecycle (SDL) and secure coding, preferably as part of the training program for new developers (in other words, baked into a Computer Science or IS/IT degree program). He and I agree that it takes weeks, if not months, to give developers a good background in secure development techniques, and that a couple of lunch 'n learn training sessions or a few hours with a consultant is not sufficient.

Another large part of the problem is that developers are extremely pressed for time. They often learn new things in the trenches and, as a result, do not realize the security implications of the way they are writing code. On that note, he pointed me to Microsoft's new site for providing security information to developers: HelloSecureWorld. He also mentioned that users are still on the hook; there is nothing any developer can do in the face of a user who clicks "Yes" to everything. In addition, he reminded me about the Microsoft Security Assessment Tool (MSAT) and the User Awareness and Education Toolkit, which systems administrators can use to evaluate their security situation and teach users about safe computing.

I know the situation that Microsoft faces is pretty challenging. The company has many conflicting requirements, such as maintaining backwards compatibility while tightening security. At the same time, it is good to see Microsoft taking the situation seriously and finally seeing some positive results -- even if it has taken this long to get some relief.

Thanks again to Bret for speaking with me. I really enjoyed our conversation.

Also, be sure to read Chad Perrin's post about the 25 most dangerous programming errors, a list compiled by security experts from around the world.

J.Ja

Disclosure of Justin's industry affiliations: Justin James has a working arrangement with Microsoft to write an article for MSDN Magazine. He also has a contract with Spiceworks to write product buying guides.


About

Justin James is the Lead Architect for Conigent.

39 comments
chris

did I miss the tips?

Jaqui

After all, Microsoft has a 30+ year record of knowing nothing about security; it's going to take at least that long with minimal exploits to prove they have learned something about it.

oz penguin

Do Microsoft understand security? I am not talking about the content, but the fact that they combine 6 months of data into one report and do not even release it for more than another 6 months (8th Jan 2009). Security assessments need to be current, not something you learn about in history.

Justin James

Do you feel that you have adequate information to write secure code? Why do you think so many developers are writing insecure code, despite the large amount of information on the topic? Does it take too long to really learn everything? Is it something that schools are not spending enough time on? And do you think the current trend of managed code environments (the .NET CLR, the JVM) makes the problem worse, or better? J.Ja

robo_dev

The only tip I heard was to keep current with Microsoft vulns, which anybody who knows anything already knows to do. But that is like telling pilots to check the weather before they take off...

Justin James

Just not as an explicit, bullet-point list. But a summary may be helpful:

* Learn about the most common coding errors (SQL injection, buffer overruns, etc.) and stop writing code with them. Here's a list of the top 25 offenders: http://blogs.techrepublic.com.com/security/?p=733

* Writing secure code is about process, too. You need to make sure that your development process allows time to perform security assessments, testing, automated scanning, code reviews, etc.

* Microsoft provides a few different resources to assist with writing secure code, improving your environment, and educating users:
http://www.microsoft.com/click/hellosecureworld/default.mspx
http://technet.microsoft.com/en-us/security/cc185712.aspx
http://download.microsoft.com/download/1/9/9/1990AA19-2C4F-42D0-9A22-1E158EF0ABBC/Security%20Awareness%20Content.zip

* Hope that various CS courses, programmer training courses, etc. spend more time discussing security and training people to write secure code.

J.Ja
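To illustrate the first bullet, here is a minimal sketch of the SQL injection case (Python and its built-in sqlite3 module, with made-up table and column names, purely for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_input = "alice' OR '1'='1"  # a classic injection attempt

    # Vulnerable pattern: concatenating user input into the SQL text lets the
    # quote characters rewrite the query.
    # query = "SELECT email FROM users WHERE name = '" + user_input + "'"

    # Safe pattern: a parameterized query treats the input strictly as data.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)  # [] -- the injection attempt matches no actual user

The same principle applies whatever the language or framework: keep the query text fixed and hand user input to the driver as parameters.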

Justin James

Honestly, every time I talk to folks at Microsoft, the security angle comes up one way or the other. And everyone I have talked to (maybe 10 people out of a 70,000-person company) really "gets" security. They don't just parrot the company line, either. They understand what the real problems are and how to fix them, and, I might add, Microsoft's official policies and processes are just about the most rigorous in the business at this point when it comes to writing secure code. From what I hear, they even go as far as tracking a breach down to the developer who wrote the precise lines of code responsible for the hole (via the version control system), finding out why that code was written, and finding out why the QA folks and automated scanners never discovered the breach.

Keep in mind, for about 15 - 20 years of Microsoft's history, security was irrelevant, because those DOS boxes had no way of having anything bad happen to them, except for the rare user who actually had a modem AND used it to call up BBSs AND had a habit of downloading things, or who accepted a floppy disk from someone in that scenario. Most users went years without seeing a virus, let alone actually catching one. Not because Microsoft products were secure, but because keeping a box safe when it has no connectivity and can only run one application at a time is fairly easy to do, just through dumb luck. That being said, Microsoft's history of making products that get deployed in "dangerous" environments is short... only since around 1995 - 1998. Considering that in the last few years there have been SIGNIFICANT improvements (can anyone remember the last time there was a security breach in IIS or SQL Server, which both used to be "Swiss cheese" products?), I would say that, at the very least, Microsoft is clearly on the right path.

Also keep in mind the legacy situation. Microsoft can't just "make the OS secure," because then half of users' apps won't work. Look at Vista. It had a ton of stuff under the hood (as well as UAC, of course) to ban applications from doing things that were not secure. Microsoft had been saying since NT 4, "stop writing your apps like this!" Developers kept doing it. So Microsoft cut it off, the apps stopped working, and people threw temper tantrums. They added UAC, a mechanism no more obtrusive than the way KDE and GNOME seem to handle things, and people whine about how horrible it is. Yet they want a "secure" system. When your history is that the OS is wide open and developers can write code that does anything regardless of who the user is, yeah, it can be difficult (at the level of "will anyone use this?") to roll out a much more secure OS, regardless of your technical ability to develop it!

So yes, my (purely personal) opinion is that Microsoft actually does know a lot about security. Whether or not the upper-level folks let something ship before it is fully baked is another matter, of course. Not making excuses for Microsoft in any way. They dug this hole, and it's going to take them a long time to extract themselves from it! J.Ja

Justin James

I agree that it would probably be more timely if the report were done on a quarterly basis and only took, say, 2 weeks to produce. At the same time, this is not a "security assessment" in the sense of "let's check our code for bugs," where they would be letting bugs sit for up to a year. This is a long-term trend report, designed not to uncover new problems but to take an honest look at past problems, to see what kinds of problems come up and what needs to be done to mitigate them. Considering that the report is more than just reading over 6 months' worth of reports from Secunia, or some other similar exercise, I think the timeline isn't too bad.

Also keep in mind, this report was out MONTHS ago. I want to say November or December 2008. I talked to Bret in early December, and I had already read the report when he and I talked. My post was written then too. It just wasn't *published* until now. I am on a one-post-per-week schedule, and I had built up an incredible abundance of posts in advance; I got married and went on my honeymoon in December, and I did not want to be writing in December, so I had everything submitted through the beginning of January by the end of November. As a result, when this came along, my post on it was pushed back to now. J.Ja

Saurondor

There is no clear cost attached to security in "consumer" products, and it can be easier (cheaper) to cover it with an EULA. If you charge for a product and don't deliver features, that gets you in trouble. Security, on the other hand, is in the EULA: "cannot be held responsible for damage done to your equipment, bla bla bla". If you work for a firm that develops software, the lack of security is clearly the company's fault. Let's take this paragraph: "Another large part of the problem is that developers are extremely pressed for time. They often learn new things in the trenches and, as a result, do not realize the security implications of the way they are writing code." Why are they so pressed for time? Haven't the new tools improved development? Why do we spend hundreds if not thousands of dollars on new tools? If these new tools are not giving us a development edge, why are we buying them? If they are giving us an edge, why isn't it seen? Programmers should have more free time to invest in these security issues. After all, they did all the other work 20, 50, or 100% faster, right? But management is pressing for more products in less time instead of pressing for better, more secure products in the same time. Not to mention other cost-cutting methods like not giving training. So in a lot of cases, features are implemented by programmers in the code and security is implemented by lawyers in the EULA.

Tony Hopkinson

Because it's cheaper. To teach, to learn, and to do.... Any other reason would be silly, wouldn't it?

apotheon

My article for today, [url=http://blogs.techrepublic.com.com/security/?p=741][i]How should you handle software updates?[/i][/url], addresses that very topic. I disagree somewhat with Bret Arsenault's advice. Of course, it should be no surprise at all that a Microsoft employee basically advocates applying any and all patches uncritically, without even knowing what's in the patches.

Jaqui

In what way? Neither GNOME nor KDE throws a dialog box confirming you want to open the default web browser for the desktop. Neither throws a dialog box up to confirm you want to see the contents of your documents folder. As far as I can see, from my very limited exposure to it, MS's UAC is the "You are an idiot, confirm this" model of security. But yup, they've got a lot of work to do to fill the hole they dug themselves.

oz penguin

Well, according to the published date on the Microsoft site, it was January 2009, as per the link you provided.

Tony Hopkinson

help you achieve more in the same time, or of course the same with fewer people... Quality additions such as security, unless there's a real bottom-line benefit, hardly ever. That said, there are a lot of 'cheap', simple things developers can do to make things better. Unfortunately, we are now at Softwaregate. No one suggested there wouldn't be a bodge...

apotheon

"[i]features are implemented by programmers in the code and security is implemented by lawyers in the EULA[/i]" That's an excellent way to sum up (a big part of) the problem. I'm going to end up quoting that at some point. How should I attribute it?

Tony Hopkinson

to enter negative numbers in boxes that are obviously only ever positive. See what happens when name is blank, or quantity per batch is zero. The worst ones test functions that have 'nothing' to do with your change, which have somehow magically broken... Boogers they are. :D

oz penguin

I am a big fan of the concepts of peer reviews and of developers testing each other's work (thoroughly); it brings two developers up to speed with the change, which is better knowledge for when the next requirement comes along. It also lets you learn new principles from your team (and vice versa), even when people are not willing to mentor their team. This does NOT negate the need for testing by a proper QA team. Those test nazis will pick up what we developers consider 'illogical' uses of an application.

Tony Hopkinson

but there are a fair few management types who think that cutting QA and having your code monkeys 'test' their own work is a good idea. Never seen it work myself; very different mindset.

apotheon

I was talking about testing by the entire development "lifecycle" team -- not necessarily by code monkeys.

Tony Hopkinson

Come on, we are crap at it. There's a lot we can do, from sensible coding standards to automation testing, but there's no substitute for people who not only know how to test, but do. Open source wins out on the fresh-set-of-eyes principle. Peer review works on closed source as well, given you can get the resources for it. At one place I worked, we used to work in pairs: both develop their bit, then swap and test. It works quite well, and the small added cost in time is more than recouped. Just knowing Fred is going to be looking at your stuff does some good. Hands up, developers: how often has your code come back with a basic fault, and you have this memory of it occurring while you were 'testing' and of having fixed it? :(

apotheon

"[i]The best I've seen, is the last place I worked, they had "Staging" servers that 100% mimicked the production servers, with the exception of the amount of RAM and CPU on them. Any patches (other than "OMG! 0 day exploit that affects things that we publically expose on the Internet!" stuff, like IIS exploits) were put on 1 box just for patch testing for a week, then deployed to the servers in Staging for a month, and then deployed to Production.[/i]" That's how most patch testing is accomplished outside of the developers' circles, in the real world -- and more than that, for the people actually using the software in production, is often not reasonable at all. It helps when the developers (and their immediate support network) actually test stuff well, which tends to happen a lot more often with the Linux kernel team than with the MS Windows developers, as one example, from what I've seen. Open source software favoring more thorough testing than closed source software in terms of thorough testing before release is a [b]tendency[/b], of course, and not an ironclad law of nature. WordPress comes to mind as a counterexample -- its update testing tends to be only about [b]as good/bad[/b] as Microsoft's. There are worse examples, but they tend to be very few and far between, and none come to mind right now. I've drifted off-topic. Anyway, my point was this: 1. What you describe is about as good as it gets at the public deployment level. 2. It helps a lot when the developers properly test patches before they ever get to the public -- so choosing your software options carefully is important to ensuring your protection against bad patches.

Justin James

... it's hard to know precisely what is actually in any given patch. Sure, the vendor might tell you what files are affected and what the patch supposedly does. And in many cases (open source, some proprietary systems) you may even have the opportunity to look at the source code. I am fairly certain that even the minority of admins who are curious about what is in the source code lack the time or the knowledge to really do it anyway. I am sure that some folks actually inspect patch contents before applying them. But in all honesty, I don't think many admins have the combination of knowledge, time, and desire to do so. For those who do, I'm a bit envious. :) But I think that the typical admin doesn't do this.

The best I've seen is at the last place I worked: they had "Staging" servers that 100% mimicked the production servers, with the exception of the amount of RAM and CPU on them. Any patches (other than "OMG! 0-day exploit that affects things that we publicly expose on the Internet!" stuff, like IIS exploits) were put on 1 box just for patch testing for a week, then deployed to the servers in Staging for a month, and then deployed to Production. So Production was usually 6 - 8 weeks behind, but there was plenty of time to find out if the patches blew things up. J.Ja

Justin James

Linking to you a few times is a lot easier than me writing the same article, and more timely, too. I had my blogs written well in advance, getting ready for the wedding/honeymoon, so it wasn't possible for me to cover this in a timely fashion. At the same time, I think it is insanely important that programmers get this stuff right. The fact is, most of those vulnerabilities fall into a few categories:

* People writing in C or a similar language that lets you do really dangerous things, who clearly should not be using those languages (buffer overruns fall under this category).

* People making mistakes out of ignorance or laziness. SQL injection attacks, cross-site scripting, and other forms of "failure to validate input" attacks all fall under this heading. Sadly, these attacks are easily mitigated by using components and libraries that the major application development tools/frameworks all seem to provide now... but they are also circumvented by using those same systems. The developer who directly connects an input box to the database with a databound control is a great example. OK, the databinding eliminates the SQL injection (good thing, too!), but it leaves the XSS injection wide open.

The solution is to do one (or more) of the following:

* Scale back deadlines, so people have more time to build secure systems.

* More training (there's a dead horse still being whipped).

* Cut back the number of development projects on a global scale by about 80%, forcing unskilled programmers out of work and into burger flipping, where they belong.

* Make these frameworks/languages/libraries/etc. EVEN SMARTER (somehow...) to take over even more of this stuff automatically. For example, that databound text box could automatically determine what domain name it "belongs" to and neuter any scripting calls to sites outside of that domain (not a 100% perfect solution even for that limited use case, but an improvement). Good programmers will hate these systems even more; poor programmers will think that now they are 100% safe, even if they are really only 50% safer.

It's a mess. :) J.Ja
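To illustrate the point about the databound input box, here is a minimal sketch (Python again, purely for illustration; a real web framework would wrap this differently) of why storing input safely is not the same as displaying it safely:

    import html

    # Input that a databound control would store without any SQL injection,
    # but that is still dangerous the moment it is echoed back into a page.
    user_comment = '<script>alert("xss")</script>'

    # Unsafe: dropping the stored value straight into markup runs it as script.
    unsafe_fragment = "<p>" + user_comment + "</p>"

    # Safer: encode the value on output so the browser renders it as text.
    safe_fragment = "<p>" + html.escape(user_comment) + "</p>"
    print(safe_fragment)
    # <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>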

apotheon

It's much easier to remember what [b]does[/b] happen than what [b]doesn't[/b] -- so I'm not surprised that you forget about the times UAC fails to confirm whether a given operation should take place.

Justin James

... that certain items do indeed bypass UAC all the time. You just don't notice it. So yeah, if someone can mimic those privileged apps, they're golden. Makes me curious how those apps are "guaranteed" to be who they claim to be! J.Ja

Tony Hopkinson

in one place means there has got to be a way round it, unless every piece of code that UAC might apply to checks to see if it does, which is damned unlikely. Another good hint that UAC might not be the best-designed piece of software ever: why is Protected Mode in IE7 no longer an option if you have UAC off? That said, I like UAC, and I think it was probably the best starter for 10 MS could achieve from a business point of view. It does, however, suck big time compared to the proper way of doing it.

apotheon

"[i]I was under the impression that it also had hard and fast rules, such as attempts to write to certain areas of the file system, registry, etc. would immediately be flagged for approval.[/i]" Oh, it does -- with two caveats: 1. UAC can only stop unauthorized writes to the file system and/or registry if it recognizes them as such. Because it's basically an application behavior filter, however, it only monitors the behavior of applications for what it might recognize as dangerous activity that hasn't already been approved somehow. Anything that sufficiently well disguises itself as system-level access (for instance), or otherwise bypasses that, is free and clear. There [b]are[/b] other protections against certain activities like that (such as MS Windows Vista's non-executable memory addressing ranges), but we don't know how effective that actually is, and it's distinct from (and more limited -- though more strict, as well -- in scope) than UAC. 2. There's also the simple fact that UAC is designed to ignore such activity from certain software that Microsoft wants to allow to do whatever it is designed to do without any UAC interference. The fact that UAC manages access should be a hint. True privilege separation would not allow passwordless execution with administrative capabilities [b]no matter what[/b], unless altered to execute with administrative privilege -- as in the case of suid binaries on systems that support it. UAC itself manages the execution privileges of various applications, however. Here's another way of looking at it: Doing it right would mean that turning off the privilege escalation authorization tool would prevent you from escalating privilege without signing in as an administrative user. Doing it wrong would mean that turning off the privilege escalation authorization tool would allow you to escalate privilege without having to take any specific authorization action at all. Which way does UAC work?

Justin James

Put like that, I can definitely see why you'd consider UAC less than bulletproof. I was under the impression that it also had hard and fast rules, such as attempts to write to certain areas of the file system, registry, etc. would immediately be flagged for approval. J.Ja

apotheon

"[i]I know a lot of technically competant people who turned off UAC in that first 2 week period, and now they are running without one of the more important security items that were added to Vista.[/i]" I don't think I can agree with that characterization of UAC. It really only seems that important if you assume it really does what it purports to do -- protect administrative access to your computer from malicious code. Unless and until Microsoft actually implements architectural privilege separation, UAC is mostly smoke and mirrors. The most important security feature for a Vista user is an active mind. UAC can, by way of a false sense of security, actually dull the user's senses as regards recognition and/or avoidance of obvious, amateurish threats (basically, the kind of threats that wouldn't just bypass UAC entirely). If this was not the case, every time UAC has "saved" some technically competent user's bacon would be equivalent to a compromise to a technically competent user's XP system. Since the frequency of compromise of such a person's XP system -- or Vista system with UAC turned off -- doesn't begin to compare to the frequency of "Ooh, good thing UAC caught that!" one can only reasonably assume that less care is taken in ensuring the security of their systems because they assume on some level that UAC will take up the slack. Even if it does so [b]mostly[/b], the reduced care taken -- even though it may only be a subconscious reaction to greater reliance on UAC -- can increase vulnerability to exactly the sort of thing UAC is supposed to protect you from, in cases where UAC isn't triggered because of lack of true, architectural privilege separation. UAC is great for people who don't know how to -- or otherwise simply won't -- cover for the lack of a tool like that in the first place. It saves such people from a lot of malware issues for probably a whole two or three months longer than they'd otherwise be "safe". I wouldn't call a delay of the inevitable for the technically incompetent the most important security feature added to an OS, though. In fact, whether it's a net gain or a net loss seems to be in question. "[i]I would be interested in knowing more about how malware can work around UAC. From what I've seen in day-to-day usage, it is extremely strict.[/i]" In essence, UAC is Vista's answer to something like gnome-sudo -- except that, instead of a front end to a tool used to "manually" elevate privileges, it's a heuristic, exception-based mechanism for achieving something equivalent. In case you're not aware, "heuristic" is just fancy terminology for guessing. It's the middle ground between "default allow" and "default deny". Write a program that doesn't match UAC's fuzzy definition of "dangerous", and you're golden. That includes anything that UAC can't recognize as external to the system itself, which is one reason [b]architectural[/b] privilege separation is so important. It's a superficial feature -- not an architectural characteristic of the system -- and as such, the more it gets in your face, the more it seems to be working. Okay, I'll try a different approach. Are you familiar with the difference between a packet inspection firewall and an application layer firewall? I'm talking about the difference between, say, iptables and ZoneAlarm. Given the choice -- would you use an application layer firewall and throw away a packet filtering firewall? I wouldn't. 
I [b]especially[/b] wouldn't without architectural privilege separation to cover what the application firewall missed. Well . . . an application firewall is to a packet filter firewall as UAC is to architectural privilege separation. In fact, as you get further and further up (where "up" means "more visible") the threat-type stack, your security measures become more like features and less like architectural characteristics, and they all start working more and more similarly under the hood. By the time you get to the level of UAC, it's almost identical to a typical application layer firewall -- except that it doesn't really listen to the network at all like ZoneAlarm. Haven't you noticed the similarity in how ZoneAlarm and UAC behave?

Justin James

Luckily (or unluckily), it is not obvious how to turn UAC off, especially for a less technically savvy user... but it is a good point. I know a lot of technically competent people who turned off UAC in that first 2 week period, and now they are running without one of the more important security items that were added to Vista. I would be interested in knowing more about how malware can work around UAC. From what I've seen in day-to-day usage, it is extremely strict. J.Ja

apotheon

"[i]the only times I see UAC on a system after the first 2 weeks or so[/i]" What about [b]during[/b] those initial two weeks or so -- the period during which many end-users who don't understand security [b]at all[/b] are most likely to reach a saturation point and, in frustration, just turn the damned thing off? . . . to say nothing of the fact that no self-respecting malware even needs to get approved by UAC to execute, since architectural privilege separation still eludes the Microsoft developers.

Justin James

Both ask the end user for permission to do something sensitive. While the underlying mechanism may be different, to the end user they are identical. Furthermore, your characterization of UAC is extremely inaccurate. Was it in jest? If not, you are getting your information from an unreliable and uninformed source. UAC does *not* come up to open a Web browser or to open a documents list. In fact, the only times I see UAC on a system after the first 2 weeks or so (when you are constantly tweaking system settings) are to start various tools that must run with full admin privileges (various WMI tools like DNS Manager, Active Directory management, etc.) and to install/uninstall some apps. On my work PC, therefore, I see it frequently (and appreciate it... I would hate to think that something was accessing Active Directory as administrator without my knowledge!), and on my personal PC, I see it once every few weeks. J.Ja

Justin James

That's a really good question. I *suspect* that they don't start writing it until the period is complete. If they do start early, there's only so much they can do before the end of the period, since it is based so much around aggregate data, not individual data points. But yeah, even then, 4 - 5 months IS a bit long to produce it, unless the fundamental data itself is not available, which I know is not the case. J.Ja

oz penguin

I understand, but even November is a long time to wait. Do they not start writing it until after the Jan-Jun period is complete? You also can't rely on the copyright date; Win2000 was copyrighted 1999 but not released until 17 February 2000.

Justin James

I see the confusion there, yeah. Chances are, they updated that page, which changes the "published" date. I saw this report in November or early December, and it wasn't a pre-release or embargoed copy; it was up on a public Web site. Too bad they don't have the date in the document itself (nice to have a timestamp, in case there are corrections later on), other than the 2008 copyright. J.Ja

apotheon

"[i]I'd appreciate it if you attribute it like that.[/i]" I'll do so. It's going in my random email signature rotation, and I may well end up quoting it in an article here at TR in the near-ish future.

Saurondor

Thanks for the nice feedback. I really appreciate it. Name is Gerardo Tasistro. I'd appreciate it if you attribute it like that. Cheers, Jerry

Justin James

The entire post, top to bottom was excellent, but that quote was the cherry on top. I've already forwarded it to the folks who do the Community Central newsletter. :) J.Ja
