General discussion

  • Creator
    Topic
  • #2191146

    In my own words…

    Locked

    by justin fielding ·

    blog root

All Comments

  • Author
    Replies
    • #3060249

      Managing IM compliance challenges

      by justin fielding ·

      In reply to In my own words…

      Instant messaging (IM) increases productivity by offering
      instant feedback and reduced communication costs. However, the
      problems IM creates in a business environment are those of liability and
      responsible use. This is particularly true in the Financial Services sector and
      other regulated environments. The Financial Services Authority and similar
      regulatory bodies take the view that IM is no different from e-mail;
      therefore, the same record-keeping requirements apply. I think many users of
      IM (including myself) see the conversations as ephemeral, vanishing into the
      ether of time and space. Not so. All IM clients have logging facilities, and
      there are also packages available for network administrators that enable
      logging of all conversations passing through the network. If any type of
      dispute arises involving information given via an IM conversation, your
      organization needs to know exactly what has been said. The same can apply
      internally if disputes arise within the company due to improper use. Another
      issue which must be addressed is that of increased vulnerability to security
      threats. Virus writers and scammers are turning their attention to users of IM;
      new Trojans, worms, and phishing attacks are discovered daily.

      For a business, and more specifically its IT department,
      controlling IM poses a problem. Firstly, do you have a policy on the use of IM?
      If so, how do you enforce that policy, and how do you ensure that you are
      fulfilling the regulatory obligations imposed? If you do not have processes
      in place to log and archive records in a tamper-proof, accessible format,
      messaging should be stopped. But how?

      Two methods that I come across frequently are port and IP
      blocking.

      Port blocking: This is very simple; find out what ports are
      used by the IM clients and then block them on your firewall. The problem with
      this is that the clients try to use many different ranges of ports, and these
      tend to increase with every new release. “Well,” you say, “my
      firewall blocks all outgoing connections unless I specifically ask it to pass
      them.” That’s good, but do you allow port 80 (HTTP)? Most of today’s IM
      clients can now work perfectly well through port 80, and if you allow your
      users to surf the net, then they can probably IM too. I recently read that the
      latest clients embed traffic data within an HTTP request, meaning that even
      with advanced protocol analysis, they will be very difficult to stop! Smells
      like guerrilla tactics…
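      To make port blocking concrete, here is a small sketch that turns a
      table of port ranges into firewall rules. The port numbers and client
      names below are illustrative assumptions, not an authoritative list,
      and the iptables syntax is just one common firewall's; check which
      ports your clients actually use.

```python
# Sketch: emit one firewall rule per TCP port range an IM client is
# believed to use. Port numbers here are illustrative examples only.

IM_PORTS = {
    "msn": [(1863, 1863)],
    "aim": [(5190, 5193)],
    "yahoo": [(5050, 5050)],
}

def block_rules(ports):
    """Return iptables-style rule strings rejecting each port range."""
    rules = []
    for client, ranges in sorted(ports.items()):
        for lo, hi in ranges:
            span = str(lo) if lo == hi else f"{lo}:{hi}"
            rules.append(
                f"iptables -A OUTPUT -p tcp --dport {span} -j REJECT  # {client}")
    return rules

for rule in block_rules(IM_PORTS):
    print(rule)
```

      Keeping the ranges in one table at least makes the inevitable
      per-release updates a one-line change rather than a firewall audit.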

      IP blocking: Again, very simple in theory, but with its own
      problems. Using netstat along with googled resources, you can find the IP
      addresses which an IM client will connect to. Block all outgoing connections to
      this IP (or subnet) and eventually, once all servers have been found, the IM
      client will be unable to go online. This looks like a better method than
      blocking individual port ranges, as it still leaves HTTP open for web access. This
      method still has limitations: every time the software company in question
      makes a DNS or client update, a new server appears which
      needs to be blocked. Because this can be hard to keep on top of, I keep the
      major IM clients running on my desktop. In the rare event that one actually
      manages to go online, a simple netstat can track down the new server, which can
      then be added to the block list. One of the major IM providers has started to have
      its client program connect to IPs which also host its web services. This
      means that if you block its IM client, you also lose access to those services.
      Not much of a loss, until you realize that you can’t run Windows Update! Looks
      like we’re heading for an all-out guerrilla war!
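      The netstat step above can be semi-automated. Here is a sketch that
      extracts foreign IP addresses from netstat-style text so they can be
      fed to a block list; the sample output and addresses are fabricated
      for illustration.

```python
import re

# Fabricated `netstat -n` style output for demonstration.
SAMPLE = """\
Proto  Local Address       Foreign Address     State
TCP    192.168.0.10:3421   207.46.110.12:1863  ESTABLISHED
TCP    192.168.0.10:3425   207.46.110.91:80    ESTABLISHED
TCP    192.168.0.10:3430   192.168.0.1:445     ESTABLISHED
"""

def foreign_ips(netstat_output, ignore_prefix="192.168."):
    """Return unique remote IPv4 addresses, skipping the local subnet."""
    ips = set()
    for line in netstat_output.splitlines():
        m = re.match(r"\s*TCP\s+\S+\s+(\d+\.\d+\.\d+\.\d+):\d+", line)
        if m and not m.group(1).startswith(ignore_prefix):
            ips.add(m.group(1))
    return sorted(ips)

print(foreign_ips(SAMPLE))  # → ['207.46.110.12', '207.46.110.91']
```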

      Let’s assume that we have blocked IM (don’t forget to hunt
      down and block web-based services like http://www.e-messenger.net; if
      you don’t, then your ‘users’ will use them).

      The powers that be have decided that they now want to allow one
      of the messenger services. Reversing the measures taken to stop that particular
      IM service won’t be difficult, but you have to find a way to log and archive
      communications. There are many companies offering software to log IM traffic;
      examples would be IMlogic
      and Akonix.
      One of the previously mentioned products runs on a Windows platform; the
      other is a plug-and-play appliance. If you run on Unix/Linux/BSD, then take a look into
      the dsniff set of tools.
      This set of tools can log IM conversations, unencrypted password transactions,
      URL requests, and more.

      Our company has decided that public IM services should not
      be used; however for internal communication, IM can be a very useful tool,
      especially when you have offices spread across a large geographical area. Running
      your own internal IM server is easier than you might think. Jabberd is a
      server implementation of the open messaging protocol formerly called Jabber, now
      known as XMPP (Extensible Messaging and Presence Protocol). There are server
      implementations available
      for various platforms including Windows and Linux; this is also true of client
      applications (I personally use PSI).
      The best thing about this implementation of an internal IM server is that it’s
      free, and there are no complex licensing schemes regardless of how many users you
      want to have on the system. Logs can be archived in files or SQL format, whichever
      you feel more comfortable with. If the configuration seems too complex or you
      would rather use a commercial solution, take a look at Jabber Inc.
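      To illustrate the SQL-archive option, here is a minimal sketch using
      SQLite. The table layout and field names are hypothetical, not
      jabberd's actual schema; it just shows how archived conversations
      could be stored and later pulled up during a dispute.

```python
import sqlite3

# Hypothetical archive table for IM conversations.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE im_log (
    ts        TEXT,
    sender    TEXT,
    recipient TEXT,
    body      TEXT)""")

def archive(ts, sender, recipient, body):
    """Append one message to the archive."""
    conn.execute("INSERT INTO im_log VALUES (?, ?, ?, ?)",
                 (ts, sender, recipient, body))
    conn.commit()

archive("2005-10-01 09:15", "alice@corp", "bob@corp", "Meeting at 10?")
archive("2005-10-01 09:16", "bob@corp", "alice@corp", "Yes, room 3.")

# Retrieval for a dispute: everything a given user sent, in order.
rows = conn.execute(
    "SELECT ts, body FROM im_log WHERE sender = ? ORDER BY ts",
    ("alice@corp",)).fetchall()
print(rows)  # → [('2005-10-01 09:15', 'Meeting at 10?')]
```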

      IM in the financial services sector is a very hot topic at
      the moment and promises to continue as new regulations put IT policies to the
      test. Google is just starting to release its IM service, while Microsoft is
      trying to push the use of MSN Messenger within a business environment; the
      firewall-bypassing efforts made by its application back up this strategy.
      New problems for IT departments to solve will follow…

      I would be interested to hear readers’ comments on this
      topic; do you know of other methods for blocking or logging IM conversations? Let
      me know!

      • #3068895

        Managing IM compliance challenges

        by john.francis ·

        In reply to Managing IM compliance challenges

        We are investigating using our virus protection software to block IM by identifying the various IM executables (at least for the most popular IM software) as “unwanted programs”, thereby removing the executables from the PC.  If they don’t have it, they can’t run it.

      • #3060106

        Managing IM compliance challenges

        by justin fielding ·

        In reply to Managing IM compliance challenges

        Sounds like a good approach.  How do you plan to stop smarter
        users?  Those who will use a third-party application like Trillian
        (there are loads out there), or even an e-messenger.net type of website?

    • #3068758

      Need some free storage?

      by justin fielding ·

      In reply to In my own words…

      While surfing the net I found an interesting little program
      called ‘GMail Drive’.  This nifty little Windows shell extension
      allows you to use your 2GB Gmail account as storage.  Files are added to your account in
      the form of an e-mail with the file attached; you can access the files as you
      would with any other network drive, as it integrates the storage with Windows
      Explorer.  The web address is http://www.viksoe.dk/code/gmail.htm

      It reminded me of a similar project I saw a while back
      called GMailFS, which allows you to mount your Gmail account as a Linux
      filesystem.  After a quick Google search I found the
      project’s homepage,
      and it seems that it is still being actively maintained.

      I’m sure neither is very secure, reliable, or practical;
      still, they are a novel use of the free storage being offered to users.

      • #3060866

        Need some free storage?

        by wdmilner ·

        In reply to Need some free storage?

        On the matter of security in reference to GMailFS, you might look at another file system by Valient Gough called EncFS at http://pobox.com/~vgough/encfs.html. It can be used in conjunction with GMailFS to create a remote encrypted file system.

        In visiting the GMailFS project page I noticed an advert for Streamload. It’s a commercial offering that has a free option of 10GB storage (contrary to the advertised “free unlimited”) with 100MB download limit a month. This might be just the ticket for some “quick and dirty” file storage when on the road. Probably not the most robust solution compared to things like strongspace.com but certainly adequate for low volume/security files.

    • #3071536

      Linux in the news & 64-bit

      by justin fielding ·

      In reply to In my own words…

      Reading through this week’s edition of Computer
      Weekly, I
      notice frequent references to the use of Linux in the
      enterprise. Kevin Hughes, a nautical equipment
      manufacturer, has moved over to 64-bit Linux running Oracle on HP
      Itanium machines. I think Linux offers a considerable advantage
      over Windows when it comes to running on a 64-bit base (the 64-bit
      versions of Windows
      that I have tried didn’t seem too stable). I’m not quite sure why the
      article said that they are leading the way;
      our company (of a similar size) has been running Oracle-based
      applications on a
      64-bit hardware and software base for quite some time. I guess it just
      fills some space in the
      magazine! Other mentions included Stirling University, which has improved
      performance three-fold and lowered costs. How? Oh, that’s interesting: by
      moving over to HP Itanium-based servers running Linux. I’m not making
      accusations here, but it’s interesting that both of these articles were
      written by the same author. I wonder if he also works for HP… no, surely
      not. Last but not least, I see that Dell are now offering a desktop system
      with NO Windows pre-installation. This
      saves money for enterprises using Linux on the desktop, as they would usually
      have to buy a PC pre-loaded with Windows and then wipe it, which is a bit
      of a waste of a licence.

      As our company runs all of our core services and systems on
      a Linux base, I find it very interesting to see how other companies are
      approaching its use. It seems that even for companies
      which would traditionally run Windows, Linux is offering a real
      alternative
      when it comes to high-performance 64-bit processing. I’m rather
      surprised that people are choosing
      to go with the Itanium-based systems; we have both Itanium and AMD
      Opteron
      64-bit systems, both running SUSE Enterprise
      Server with Oracle 10g. The Opteron
      systems have outperformed the Itaniums by such large margins that the
      Itaniums
      have actually been put on the shelf, as it were. The Opteron-based
      systems were also much cheaper than the Itaniums (truly a fraction of
      the price). I guess the big three providers (IBM, HP,
      SGI) are hoping that people don’t notice the reduced cost and increased
      performance of the Opteron systems, so that they can continue to sell
      their
      overpriced, underperforming Itanium-based hardware!

    • #3058043

      Is your Wireless infrastructure properly protected?

      by justin fielding ·

      In reply to In my own words…

      Wireless networking is fast becoming a service expected by
      most enterprises. Being able to undock a laptop, walk around the office (for
      meetings, impromptu brainstorms, etc.) and still have instant access to files
      and data sources is seen as vital. For an IT department, the implementation of
      wireless networking is relatively simple and inexpensive. The transfer speed
      offered by wireless hardware is constantly increasing, making it a viable alternative
      to wired LAN in some situations such as small offices with solid floors and
      ceilings.

      This all sounds great, but there has to be a catch, doesn’t
      there? Well yes, the catch is security.

      We go to great lengths to protect our wired networks from the
      outside world. What would you think if you started working for a company and
      found that they had no firewall protecting their internet facing services? Well,
      the same should apply to wireless services as these face the outside world and
      are more vulnerable than you may think. A report put together by RSA Security in 2004 gives some horrific figures on the use of unencrypted wireless networks;
      this was as high as 72% in Milan! A newer report
      shows that the situation is still not under control–26% of access points in London were found to have
      the factory default settings. I decided to take my own survey and drove around
      a local town for 20 minutes. I picked up 372 individual access points! A
      massive 39% of these were open, 47% WEP encrypted, and 14% WPA protected. Most
      of these were obviously home broadband networks, however a notable number were
      clearly advertising their location, including some businesses.

      There are, of course, simple measures which can be taken to
      protect your network. 128-bit WEP encryption is available on almost all wifi
      equipment. This is as simple as generating a suitable encryption key (there are
      many utilities on the internet like this one)
      and then entering it in your AP’s web interface. This will be enough to stop
      the guy in the coffee shop next door from connecting to the Internet via your
      network (rather than paying for the local hotspot access; who doesn’t like a
      free lunch?). That’s all very well, but will it protect your network from more
      shady characters? No is the simple answer; WEP encryption is easily crackable for
      those in the know. Hackers may be less interested in simply gaining free Internet
      access; they could have much more sinister intentions. First, don’t advertise
      your network to the world. Hide your network SSID (some hardware offers this
      feature), and failing that, you should at least use a random SSID rather than
      “MyCompany.” I know that sounds silly, but you would be amazed how often this
      is the case.
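      As a quick illustration of generating such a key yourself, here is a
      sketch. It assumes the common convention that a “128-bit” WEP key is
      entered as 26 hex digits, since 128-bit WEP is really a 104-bit
      secret plus a 24-bit IV.

```python
import os

def wep_key() -> str:
    """Generate a random 104-bit WEP secret as 26 hex digits."""
    return os.urandom(13).hex()  # 13 random bytes -> 26 hex characters

key = wep_key()
print(key)       # 26 random hex digits
print(len(key))  # → 26
```

      Enter the same key on the access point and on each client; a randomly
      generated key is far better than a dictionary word, even though WEP
      itself remains weak.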

      If possible, use WPA encryption. While this is still not
      impregnable, it is a vast improvement over WEP, and most new equipment will
      allow the use of WPA. Another precaution you can take is to separate your
      wireless and wired networks on to different subnets, placing a firewall between
      them (much as you would with your Internet connection).
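      The subnet separation idea can be sanity-checked with a few lines
      using Python's ipaddress module; the address ranges below are
      arbitrary examples of private subnets, not a recommendation.

```python
import ipaddress

# Example private subnets: one for the wired LAN, one for wireless.
wired    = ipaddress.ip_network("10.0.1.0/24")
wireless = ipaddress.ip_network("10.0.2.0/24")

# Non-overlapping subnets mean all cross-subnet traffic must pass
# through the firewall placed between them.
print(wired.overlaps(wireless))           # → False

host = ipaddress.ip_address("10.0.2.57")  # a wireless client
print(host in wireless)                   # → True
print(host in wired)                      # → False
```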

      One thing which all network administrators should do on a
      regular basis is check the strength of their own networks. Scan your firewall
      and systems for the latest vulnerabilities or exploits, because you can be sure
      that someone else is doing this for you! The same applies for your wireless
      network–do you know how easy or difficult someone would find it to penetrate? A
      set of tools I have found very useful are those put together by the security group
      remote-exploit.org.
      You can boot from the Auditor LiveCD without the need for installation. You
      don’t need a dedicated notebook; just pick one with compatible hardware, pop in
      the CD, and you’re off (I use my IBM ThinkPad). It seems this set of tools is
      so complete that even the FBI uses it!

      Explaining the theory behind testing WEP encryption is
      beyond the scope of this blog; however, here are several references which will
      explain things. Note that most of them refer to the Auditor LiveCD previously
      mentioned:

      If you don’t already follow security-related press and learn
      about so called ‘underground’ techniques being used by hackers today, I can
      only urge you to do so. The only way to keep your networks secure is to fully
      understand the threats being faced and techniques used.

      • #3046057

        Is your Wireless infrastructure properly protected?

        by conceptual ·

        In reply to Is your Wireless infrastructure properly protected?

        This is solid information, but the vulnerabilities of WEP have been known for months. The FBI demo was news, but it should have awakened everyone months ago. It also turns out that WPA has its problems; if possible, go with WPA2.

    • #3071118

      WPA explored

      by justin fielding ·

      In reply to In my own words…

      I have been doing a little research on the subject of WPA
      wireless protection. A good description
      of WPA, along with its relative pros and cons, can be found here, courtesy of Netgear.

      As confirmed in this article on Wi-Fi Net News,
      WPA has been broken. This refers to the pre-shared key
      implementation of WPA (WPA-PSK); however, it doesn’t seem that certificate-based
      WPA (802.1x RADIUS authentication) is vulnerable to this attack. Having not used RADIUS authentication
      previously, I was interested to see how much work would be required to get this
      up and running. I stumbled upon a great
      three-part article from Linux Journal. The link provided is for the third instalment;
      I would suggest reading this part first, just to get an idea of what is
      involved and how it all works.

      RADIUS authentication is an interesting prospect, with the
      possibility of using it for other purposes such as PPTP (VPN) connection auth
      and Windows login. The
      server can be set
      up so that it logs to an SQL database, making reporting quite
      simple. It would be interesting to hear from anyone who has this
      type of system in place.

      • #3071098

        WPA explored

        by jmgarvin ·

        In reply to WPA explored

        Awesome!  Thanks a bunch!

    • #3045210

      HSDPA is go…

      by justin fielding ·

      In reply to In my own words…

      High-Speed Downlink Packet Access (HSDPA) is approaching launch
      in Europe. 
      HSDPA is a third-generation (3G) high-speed data service, with a maximum
      bandwidth of 14Mbps!  O2,
      the UK-based telecoms company,
      has announced that it will be launching the service for the Isle of Man on the 1st of November.  Another high-speed technology which is just
      starting to emerge is WiMax.  This wireless broadband service is now being
      offered in the South East of England, currently with services up to 10Mbps.  One potential problem for WiMax is the limited
      amount of licensed radio bandwidth available. 
      It is possible for the service to run on unlicensed public frequencies,
      but this raises quality-of-service issues. 
      HSDPA runs on current cellular infrastructure, which bypasses this issue
      and should make rollout faster and less expensive.  There is much debate over which of these two
      technologies will gain the upper hand; I guess only time can really tell.  Still, both technologies offer
      exciting new prospects in all areas, from mobile business solutions to in-car
      entertainment.

    • #3043592

      Get the basics of a secure VPN

      by justin fielding ·

      In reply to In my own words…

      Virtual Private Networks (VPNs) seem to be
      a hot topic lately–a week doesn’t go by without a new article or white paper being
      released on the subject. For many business users, having instant access to data
      while on the move is now seen as a necessity rather than a luxury. Gone are the
      days of slow and troublesome dial-up connections; we’re now in the age of
      broadband! These days high-speed internet access is cheap and offers speeds
      which could only have been dreamt of ten years ago; Wi-Fi hotspots offer access
      from most coffee shops, city centers and airports! Another form of VPN is that
      which connects two private networks, using public networks as a bridge. A
      gateway on each of the private networks faces the Internet, data is then transferred
      between the two gateways via this low-cost public infrastructure. This allows
      branch offices to effectively share data and work together without the
      horrendous costs involved in hiring private lines.

      The advantages of allowing data access via
      public networks are clear: high-speed and low cost. Where’s the catch? Well, as
      per usual the issue is that of security. It’s all very well utilizing public
      networks, but they are just that–public, and since anyone could be viewing the
      data you transmit, we have to assume that they are. Let’s take a look at two
      protocols developed to address security in this area.

      Point-to-Point Tunnelling
      Protocol (PPTP) was developed by Microsoft to enable remote users to securely
      access corporate networks. It was first introduced in Windows NT 4 and the
      source code was made available so that other third parties could develop
      compatible software. Here’s a full description of PPTP,
      the key point being “PPTP encapsulates the
      encrypted and compressed PPP packets into IP datagrams for transmission over
      the Internet.” One thing
      which makes PPTP a good choice for remote or roaming users is that all versions
      of Windows (NT4 to XP) have an inbuilt client program, meaning there is no need
      for additional software installation. Windows Server can be used; however, for those
      not already using Windows Server, a better solution may be Poptop. Poptop is an open source PPTP server which
      can be hosted on a Linux platform, Cyberguard
      even use Poptop in their embedded VPN solutions.

      IP Security (IPSec)
      is a standard for authenticating and encrypting IP packets; working on the
      network layer to create a secure tunnel between two nodes, via a public
      network. The two main parts of the IPSec standard are the Encapsulating
      Security Payload (ESP) protocol and Internet Key
      Exchange (IKE) protocol. ESP takes care of data encryption and integrity, while
      IKE uses public key or pre-shared secret techniques to authenticate each host
      and set up a secure session. Giants such as Microsoft
      and Cisco
      have adopted IPSec and support its use.
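      To give a feel for the integrity half of ESP, here is a conceptual
      sketch using an HMAC. Real ESP defines its own packet format and also
      encrypts the payload; this only demonstrates the authenticate-and-verify
      idea with a shared key, which in IPSec would be negotiated by IKE.

```python
import hashlib
import hmac
import os

# Shared key; in IPSec this would come from IKE negotiation.
key = os.urandom(32)

def protect(payload: bytes) -> bytes:
    """Append a MAC over the payload, as ESP does for integrity."""
    mac = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + mac

def verify(packet: bytes) -> bytes:
    """Recompute and check the MAC before trusting the payload."""
    payload, mac = packet[:-32], packet[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("integrity check failed")
    return payload

pkt = protect(b"confidential data")
print(verify(pkt))  # → b'confidential data'

tampered = b"X" + pkt[1:]  # flip the first payload byte in transit
try:
    verify(tampered)
except ValueError as e:
    print(e)  # → integrity check failed
```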

      In a small to medium enterprise, cost is
      normally a big consideration. I personally use OpenBSD to provide for our
      company’s VPN needs–OpenBSD takes a paranoid approach with proactive security
      and integrated cryptography, and best of all, it’s free! All of the tools
      needed for creating point-to-point VPN connections are included by default; Poptop
      is also available in the OpenBSD ports (a collection of Linux/Unix packages ported
      to the BSD platform). The system manual pages give full
      instructions
      on setting up your VPN; this can look a little in-depth and
      overly complex, but once you have an understanding of what’s happening, it’s
      really quite simple. The inbuilt firewall, Packet Filter, is simple but
      powerful, making OpenBSD a good multipurpose platform; DHCP, DNS, FTP, VPN, and
      PPTP can all be run with proven reliability and security. The configuration of
      Poptop is a little more difficult; it took me 3-4 days of reading mailing list
      archives and manuals to actually get it working, but now that I know how, it
      doesn’t take long to set up a new server.

      As you can see from a quick Google search,
      there are many companies offering different VPN solutions, all based around the
      same underlying technology. Most of them don’t come cheap and that’s not even
      taking in to account consultancy fees, etc. I hope I’ve shown here that
      implementing a secure VPN solution (whether it be for remote/roaming users or
      interoffice communication) doesn’t have to be expensive or particularly
      difficult.

      If there’s any interest in the topic, I
      would consider writing a tutorial on setting up an OpenBSD network gateway with
      VPN and PPTP. Please post your comments and let me know!

      • #3045708

        Get the basics of a secure VPN

        by akash ·

        In reply to Get the basics of a secure VPN

        Hi..

        I would really be interested in a tutorial on how to get one started. I am a small business in South Africa, and the creation of a VPN using existing providers is extremely expensive. With the security precautions mentioned, I’m sure the value of such a tool to many small businesses locally (in SA) will be tremendous.

        Akash

      • #3045689

        Get the basics of a secure VPN

        by tommy.shepherd.ctr@maxwll ·

        In reply to Get the basics of a secure VPN

        I have enjoyed reading this article.  It was straightforward and to the point.  I am interested in learning VPN but have been too intimidated by some of the other articles I have read.  Can someone suggest an article or tutorial on setting up a VPN and troubleshooting one that is easy reading like this article?

      • #3045423

        Get the basics of a secure VPN

        by oprig_hr ·

        In reply to Get the basics of a secure VPN

        For those looking for open source vpn, I recommend openvpn: http://openvpn.net

        Very easy to setup and a nice Windows installer for clients.

      • #3115200

        Get the basics of a secure VPN

        by lukcad ·

        In reply to Get the basics of a secure VPN

        Hi!

        I just started using it. It is so simple, really. I’m amazed myself; your article is well timed.

        Sincerely, LukCAD

      • #3135949

        Get the basics of a secure VPN

        by blinkr ·

        In reply to Get the basics of a secure VPN

        I, also, would be interested in a tutorial.

        If possible, I would like to see someone create a good tutorial on
        subnetting. I have found a lot of them by googling, but they still leave some
        holes that need to be cleared up.

        Just my $.000000000000000002 worth!!!

      • #3136400

        Get the basics of a secure VPN

        by jamilkeg ·

        In reply to Get the basics of a secure VPN

        Thanks, yours was a concise yet interesting article. However, a (brief)
        comparison between PPTP and IPSec would have helped more.

        Hopefully you will consider writing the tutorial on OpenBSD
        network gateway with
        VPN and PPTP (and maybe IPSec?), as I am much more familiar with Linux
        (perhaps it would also be a good variant of your article?)

        jamg

      • #3136748

        Get the basics of a secure VPN

        by gario ·

        In reply to Get the basics of a secure VPN

        Blinkr………….

        I think I may have just the presentation on Subnetting…………………..

      • #3137534

        Get the basics of a secure VPN

        by jhoffman ·

        In reply to Get the basics of a secure VPN

        I am a network admin for a small company that provides computer and
        network support to approx. 200 customers, and I have found that it is
        definitely the case that more and more companies want remotely
        accessible networks.  I have tried a number of different solutions,
        all with their own pros and cons, and I would be very interested in a
        tutorial on using OpenBSD as a VPN solution.  

    • #3044837

      Essential tools of the trade

      by justin fielding ·

      In reply to In my own words…

      I’m sure every systems administrator has their own set of
      tools which they use daily and just couldn’t get by without.  I thought I would spend a little time letting
      people know what I keep in my toolbox: the programs and utilities which I guess
      most administrators in a Unix/Linux environment would find useful.

      #1 PuTTY
      – The single most useful and most used tool in my box.  If I want to do anything on one of our
      servers then I need PuTTY, due to the lack of an SSH client in Windows, of
      course (and running a Linux desktop just isn’t practical).  It’s free and easy to use; what more can you
      ask for!

      #2 WinSCP
      – Again, this one is a must; it makes transferring files to and from a Unix/Linux
      server a doddle.  The clutter is minimal:
      you want to get the files, edit them, and put them back.  You can choose between
      Norton Commander and Explorer-like interfaces, and drag and drop really saves some time.

      #3 UltraEdit-32
      – The best text editor I have used to date. 
      Throw away Notepad; this will allow you to work with Unix line
      terminators and highlight code syntax, and it automatically creates a .bak file
      when you save changes (which can be handy).

      These are my top three tools, all in constant use!

      • #3114321

        Essential tools of the trade

        by dmarston ·

        In reply to Essential tools of the trade

        You can keep UltraEdit, I use Crimson Editor.. all the way.

        It can fit on a floppy disk, do much of the same as UE and…..

        it’s FREEEEEEEEE !!!!!!

        http://www.crimsoneditor.com/

         

      • #3116332

        Essential tools of the trade

        by apotheon ·

        In reply to Essential tools of the trade

        PuTTY? I don’t bother. I use this nifty thing called “OpenSSH”,
        which runs on my work laptop natively. See, I’m running Linux for my
        “desktop” system (Thinkpad + LCD monitor + spring switch keyboard +
        optical mouse with scroll wheel + speakers + docking station makes me
        happy), and it’s entirely practical. In fact, I get a helluva lot more
        done with my Linux system than I ever did with a Windows desktop
        system. I guess if you’re “forced” to use Windows, though, PuTTY is a
        tolerable way to pretend you’re using a unixy OS.

        Here’s my list:

        1. aterm: It’s hard to beat a terminal emulator
          lighter-weight than xterm with scrollwheel-compatible scrollbars
          (unlike xterm’s) and pseudo-transparency.
        2. bash: Who needs graphical file browsers? They’re not even (fully) scriptable!
        3. apt: This is a serious system administrator’s dream. Thank goodness for fast and easy software management.
        4. vim: Little makes me happier than a lightweight
          console-based text editor that is so friggin’ powerful. You can have
          your clicky crap. I’ll take real power and flexibility any day.
        5. ssh: I can manage most of the network from my desk.
        6. Perl: It is [b]the[/b] sysadmin scripting language to beat all others. ’nuff said.
        7. scp: It’s technically part of the OpenSSH toolset, but it deserves its own mention.
        8. Ruby: Whereas Perl is the best sysadmin scripting
          language, Ruby is the best object-oriented scripting language. Just try
          to convince me otherwise.
        9. alias: It’s just a bash builtin, but it’s so darned useful for commonly-used, long command line strings.
        10. ln: Symlinks are my friend. All my admin scripts go in
          ~/src and I symlink to them from ~/bin so I can run them anywhere. This
          also helps with simplifying command names without losing the ease of
          reference to script types by file extension. For instance, I symlink
          ~/bin/fsckoff to ~/src/fsckoff.pl, ~/bin/sfn to ~/src/sfn.pl, and
          ~/bin/jarh to ~/src/japh/hack01.rb.
        11. rdesktop: This is the unix client for Windows Terminal
          Services, and allows access to the Windows application server from a
          Linux system. Once in a great while, someone needs to use PowerPoint,
          after all.
        12. svn: Version control might save your ass some day.
        13. Firefox: Even I need to use a graphical browser from time to time.
        14. Abiword: Once in a while, I need to (shudder) translate to or from MS Word .doc format. Half the fat, twice the performance.

        I’m sure there are about a thousand more such things, but that’s what comes to mind immediately.
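        The ~/src → ~/bin symlink scheme from item 10 can be sketched
        roughly as follows (the script name and contents here are made up
        for illustration; apotheon's fsckoff.pl and friends follow the same
        pattern):

```shell
#!/bin/sh
# Sketch of the src/bin symlink layout described in item 10.
# "hello" is a hypothetical script name used only for illustration.
mkdir -p "$HOME/src" "$HOME/bin"

# The script keeps its extension in ~/src, so its type stays obvious...
printf '#!/bin/sh\necho hello from src\n' > "$HOME/src/hello.sh"
chmod +x "$HOME/src/hello.sh"

# ...while the symlink in ~/bin drops the extension, giving a clean
# command name that works from anywhere on $PATH.
ln -sf "$HOME/src/hello.sh" "$HOME/bin/hello"

"$HOME/bin/hello"   # prints "hello from src"
```

        With ~/bin on $PATH, every script gets a short command name while
        the source tree still records what language each one is written in.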

      • #3114688

        Essential tools of the trade

        by justin fielding ·

        In reply to Essential tools of the trade

        I disagree; Linux is great for servers, but I dislike it on the
        desktop (that's just not what it's good at, IMO).  PuTTY is not a
        way to pretend you are on a Linux desktop; it's simply a tool to
        connect to servers via SSH.

        I will say, though, that I will shortly be moving to a PowerBook:
        the power of BSD with a nice windowing system on top.  How about the
        best of both worlds? 🙂

      • #3136109

        Essential tools of the trade

        by apotheon ·

        In reply to Essential tools of the trade

        Disagree all you like. I’m working in a place where the Linux desktops are the norm, and the Windows desktops are looked upon with distaste because of their limitations. The “best of both worlds” would be all the functionality of a purely unixy system with all the market penetration of Windows. MacOS X has neither.

        That’s not to say that MacOS X isn’t a good OS. It’s just more the “toy” OS, while systems like Linux and OpenBSD are for people that actually have serious work to do.

        Ultimately, a workstation should be a server under the hood, anyway. This artificial separation of systems into server and client as some sort of inherent set of characteristics is doing people a disservice. The terms “server” and “client” describe transaction relationships, not capability. Any workstation computer that isn’t just a thin client should be a server as well: an application server with local client interface software, a network service server with network client software as well, and so on. Luckily, the various unices actually provide exactly that sort of architecture. Without sshd running on the workstations in this company, my job would be about 80% walking around to other people’s desks, and where the few Windows systems are concerned I have to do just that. Without X Server running on the workstations in this company, I wouldn’t be able to test new installs of GUI apps remotely on them without interfering with the work of the people actually using those workstations.

        If the computer in question is just a game machine sitting around at home without any need to get any substantive work done in an enterprise network, you can get by with the severe limitations of a “client-only” OS. Otherwise, you need a server with client software for local use. The alternative is increased administrative overhead, decreased productivity, and nonexistent security architecture.

      • #3137251

        Essential tools of the trade

        by broper ·

        In reply to Essential tools of the trade

        Windows vs. Linux? Come along, fellas. Any good tech can make both work… together.

        Justin, you should investigate either Cygwin, which runs a *nix
        emulation in Windows, offers a pretty full CLI suite (including
        ssh!), and even includes a good X server; or Windows Services for
        UNIX (also free), which hooks into the kernel and offers even more
        genuine *nix functionality within Windows.

        I’ve had a problem with WSU causing a weird problem with VMWare (screws
        up USB detection somehow…), so I use Cygwin. It’s really great.

    • #3115340

      It’s virtual reality!

      by justin fielding ·

      In reply to In my own words…

      A tool which I find absolutely vital when setting up or
      testing any new system is VMware.  For those of you who have not used
      VMware for systems testing or development, I suggest that you give it
      a go.  While the GSX and ESX application server platforms are powerful
      tools, we are interested in VMware Workstation.  In a nutshell, VMware
      Workstation allows you to run multiple virtual PCs on your desktop.
      Full networking support is included, allowing you to play through test
      scenarios quickly, taking snapshots before making major changes so
      that you can roll back without re-installation.  You can also clone
      machines: if you need to test interaction between two Windows XP
      machines, simply install once, clone, and then change the machine
      name / IP details.  Many more features exist, and the best way to
      discover them is to have a play with the 30-day evaluation.  You will
      probably notice later on that I test most of my projects and new
      implementations in VMware; realistically you need to allow 1GB of RAM
      and 30GB+ of disk space, but with hardware being cheap these days it's
      well worth the expense.

      • #3115284

        It

        by lukcad ·

        In reply to It’s virtual reality!

        Windows 2003 (Windows XP) and Visual Studio 2005 (IIS, MSSQL and
        ASP.NET inside). This is a full kit if you would like to bring
        virtual reality to your computer. Why do some people love to create
        virtual problems for themselves with other products? I hate it when
        a computer is overloaded with a lot of IDEs and we spend a lot of
        time finding the right version of a project, the right IDE, and the
        right way to transfer one IDE's format to another.

        Better to look at her shoulders and listen to her song from ABBA's
        days: really, the song and she are great.

    • #3115279

      It’s virtual reality!!!

      by justin fielding ·

      In reply to In my own words…

      A tool which I find absolutely vital when setting up or
      testing any new system is VMware.  For those of you who have not used
      VMware for systems testing or development, I suggest that you give it
      a go.  While the GSX and ESX application server platforms are powerful
      tools, we are interested in VMware Workstation.  In a nutshell, VMware
      Workstation allows you to run multiple virtual PCs on your desktop.
      Full networking support is included, allowing you to play through test
      scenarios quickly, taking snapshots before making major changes so
      that you can roll back without re-installation.  You can also clone
      machines: if you need to test interaction between two Windows XP
      machines, simply install once, clone, and then change the machine
      name / IP details.  Many more features exist, and the best way to
      discover them is to have a play with the 30-day evaluation.  You will
      probably notice later on that I test most of my projects and new
      implementations in VMware; realistically you need to allow 1GB of RAM
      and around 30GB of disk space, but with hardware being cheap these
      days it's well worth the expense.

      • #3115265

        It

        by lukcad ·

        In reply to It’s virtual reality!!!

        Why so much memory? That's a real server! I tested the virtual reality by VS on a computer with 128MB of memory running WXP one year ago. 😉

        Look at the power of IIS:

        power of iis

      • #3114547

        It’s virtual reality!!!

        by mrostanski ·

        In reply to It’s virtual reality!!!

        Hi!
        I've been wondering – you mention XP… how about Win2003? Have you
        run tests with Microsoft's Virtual PC also? We are using VPCs for
        computer labs (student practice) at our Academy, and we have
        _serious_ problems with Win2003 Server performance; I wonder if
        VMware results will differ.

        Maciej

      • #3117022

        It’s virtual reality!!!

        by justin fielding ·

        In reply to It’s virtual reality!!!

        I can’t say I have had any problems with Windows Server 2003 in a
        VMware VM.  I guess it depends what hardware resources you have on
        the host machine, and what resources the guest OS requires.

    • #3114185

      Video Conferencing woes?

      by justin fielding ·

      In reply to In my own words…

      I have been experiencing some problems with our Video Conferencing
      (VC) equipment. Making calls from unit to unit within the organisation is no
      problem–our sites are all inter-linked by IPSec tunnels so all traffic is
      internal. The problem arises when trying to make a call from one site to
      another, via external IP addresses. The call is made on unit A, and unit B
      rings. When the call is picked up on unit B, the session is not created, no
      audio/video link is created, and unit A does not recognise that the call has
      been accepted. I looked into my firewall configuration, and I had allowed all
      of the port ranges mentioned in the manuals. I also redirected those ports to
      the conferencing unit so that incoming attempts would be delivered directly.

      This all seemed ok, so I also checked the firewall logs while
      attempting to make a call; nothing was being blocked. While this wouldn’t have
      been a problem 3 weeks ago (because all of the VC traffic was internal), people
      now want to receive VC calls from external companies. Doh!

      After a lot of googling and reading mailing list archives, I
      found references to issues with the H.323 set of protocols and firewalls. It
      seems this is also the protocol set used by Microsoft Netmeeting, so there was
      quite a bit of information on the subject. I checked the manuals of our VC
      units and, sure enough, they do use H.323–great :/  There is a silver lining; as I mentioned
      before, this is a very popular standard. It actually means that VC systems made
      by different manufacturers can still speak to each other. I called one of our
      Sony units from Netmeeting on my laptop and it worked quite well (type conf.exe
      in your ‘Run’ box).

      A bit more googling and I found two open source solutions;
      both are basically proxies which sit between the two endpoints and
      deal with incoming / outgoing connectivity. OpenGatekeeper H.323 Proxy
      is one of these; NMproxy is the other. Because of its simplicity and
      BSD compatibility, I have chosen to take a closer look at nmproxy.

      I don’t want to modify our existing gateway/firewall
      machines with un-tested software, so I’ll create a lab environment using VMware Workstation as I mentioned in my
      previous blog.

      The Plan:

       Set up a test environment to emulate a Video Conference call
      between two firewalled networks.

      • Build
        two firewalls (OpenBSD)
      • Compile
        and configure nmproxy
      • Build
        two internal clients (Windows XP)
      • Netmeeting
        + Webcams (one on each client) can be used for testing the H.323 proxy

      Thanks to the ‘clone’ feature on VMware, I don’t actually
      need to build 4 separate machines from scratch. I’ll install OpenBSD (to show
      people how simple this OS is to install and configure), compile nmproxy, then
      clone it to create a replica and simply edit the machine configuration (IP
      details, hostname etc). As I use VMware frequently for testing, I already have
      a ‘virgin’ Windows XP image ready to use/clone; I assume everyone reading knows
      how to install Windows.

      I know this may sound a bit drawn out, but if you want to be
      serious about security then you need to test new configurations in a secure
      environment, not just give it a go on live systems and hope for the best!

      Tune in again on Wednesday and we’ll install OpenBSD…

      • #3116374

        Video Conferencing woes?

        by lon jones ·

        In reply to Video Conferencing woes?

        On our Video Conferencing (VC) equipment, in the H.323 advanced settings there should be a setting for NAT (Network Address Translation). Turning this on and listing the external IP address for the unit should resolve the problem. If there is an auto setting you can try that, but I have found it works best with the actual IP as seen from outside the firewall.

    • #3116393

      Video Conferencing woes? Installing OpenBSD

      by justin fielding ·

      In reply to In my own words…

      Let’s do it:
      Building an OpenBSD firewall

      Ok, first step is to build one OpenBSD firewall. I promised
      before that I would write a tutorial on creating an OpenBSD gateway/VPN server
      if there was any interest. Since there were a few people interested in the
      idea, this can be counted as the initial instalment. While the purpose of this
      article is not to set up a VPN gateway, it will show you how to install OpenBSD
      and therefore, this can be considered a general reference for initial OpenBSD
      installation. I'll give as much detail as I think is needed; if you
      haven't installed this before, it can be quite daunting. If there's
      anything which is unclear and isn't mentioned in the official FAQ, let
      me know and I'll cover the area again later on.

      I’m installing from a CD of version 3.7; 3.8 will be out on
      the 1st November but the install procedure won’t change.

      Setting up the VMware virtual machine:

      In VMware Workstation, start the new machine wizard with File > New > Virtual Machine. Select
      the typical configuration, Guest Operating System is Other and version is also Other.
      Give the machine a name–‘OpenBSD A’ in my case–then set the location for
      storing the virtual machine files (any place you have space). For network type
      I’m selecting ‘Do not use a network connection’; I’ll explain why later. The
      default disk size of 4GB will be ok; tick the box ‘Split disk into 2GB files’ as this will stop
      any problems with large files on a FAT filesystem (in case you want to copy the
      image to a FAT formatted disk at some point). As I noted previously, VMware
      will require a lot of disk space and quite a bit of RAM; this test lab will use
      about 16GB of disk space and 320MB of RAM while running, but with a 250GB SATA
      hard disk costing me £65 (approx. $115) and 1GB of RAM £50 (approx. $90), this
      doesn’t really bother me. Click finish and you will be presented with your VM
      overview.


      As you can see, this defaults to allocating 256MB of RAM, and that's
      way too much. We can run OpenBSD on 32MB of RAM without problems. If you click
      on ‘Edit virtual machine settings’ then you can change the memory allocation to
      32MB. We can also now add our network support. The reason I didn't
      set this up earlier is that we want two network adaptors on different
      physical networks (for all intents and purposes these represent the
      'Internet' and 'Internal' networks). Still in the virtual machine
      settings, click on 'Add' and the add hardware wizard will start.
      Select Ethernet Adaptor, then Custom: VMnet5.

      Do this again to add the second adaptor, but this time
      select VMnet6.

      Your virtual machine will now look like this:

      Pop in your CD, power on the virtual machine, and we’re
      ready to go.

      At the boot> prompt
      just hit enter.

      When prompted, just type
      I
      for Install, accept the default terminal type (just hit enter). Select
      your keyboard map, or stick with the default, then type yes when asked if you want to proceed with the install. We now come
      to setting up the hard disk, not as straightforward as a Windows installation,
      but easy once you know how. The default disk will be shown as wd0; accept this
      as the root disk. When asked if you want to use the whole disk for OpenBSD, say
      yes. We will now be dropped in to
      the partition editor where we can decide how to allocate the disk space.

      Simple commands:

      • p – display or 'print' the current partition setup
      • d 'x' – delete partition 'x'
      • a 'x' – add partition 'x'

      Take a look at the current partitions:

      > p

      You will see two partitions, a and c. Partition c always
      stays, it simply shows the physical disk. Remove partition a and then print to
      check that it’s gone:

      > d a

      > p

      Now we need to plan our partitions. There is a 4GB disk and we don't
      plan on installing much more than the base install, so I would suggest
      something like:

      /        250MB
      swap     64MB (twice the RAM)
      /tmp     1000MB
      /usr     1500MB (allow for source and user-installed programs)
      /var     1250MB (logs etc.)

      So, to create the root partition:

      > a a

      offset: [63]

      size: [8385867] 250M

      Rounding to nearest
      cylinder: 512001

      FS type: [4.2BSD]

      mount point: [none] /

      The offset and FS type should be left as default (just hit enter).
      Next is the swap partition (swap is always b); don't worry about the
      FS type, as it will always offer swap as the default for partition b.
      You can't use c, as this represents the whole disk, so from b move on
      to d. Once you have made all of your partitions, view them ( > p )
      and they should look like this:

       

      Confirm by typing:

      > q

      Write new label?: [y]
      yes

      When prompted to confirm the mount points, simply type done and you
      will pass to the next stage. OpenBSD will show you the partitions
      which you have chosen to create and ask you whether you want to
      proceed; of course, the answer is yes. You will now see the partitions
      being created and formatted.

      When asked for the system hostname, I have chosen to call this
      GatewayA; accept the default of configuring the network now (this
      gets it out of the way). We have adaptors le1 and le2; let's go with
      the default and configure le1 first:


      As you can see, I have set le1 to be our virtual internet network and
      le2 will represent our internal network. The nameserver and default
      route would normally be those provided by your ISP or those of your
      internet router. Don't edit hosts with ed and don't do any manual
      configuration. Set the root password and you will be asked where to
      install from; simply type c for (c)d-rom and then keep the default
      options for the device name and file path.

      The package selection screen is shown next. By default, all of the
      essential package groups are selected; all those with 'x' at the
      beginning relate to X Windows, and as we don't want these installed,
      we simply type done to continue. You will confirm that you are ready
      to install, and then the packages will be copied from the disk. A
      second chance to install sets will be given; simply hit enter to
      accept the default (done). Do the same for any following questions,
      except whether you expect to run X Windows–the answer to that one is
      no.

      Set your time zone (in my case Europe/London).

      That's it, done. You now just have to remove the CD and reboot, and a
      fresh OpenBSD installation is complete! That wasn't too bad, was it?

      In next week’s instalment we will finish the gateway
      configuration, compile / install nmproxy and then clone the gateway to create GatewayB.

    • #3136391

      The office is open?

      by justin fielding ·

      In reply to In my own words…

      An article in Computer Weekly mentioned that Bristol council will be
      migrating 5500 desktops to Sun's StarOffice suite.  Even taking into
      account the cost of converting existing documents and two half-day
      courses for users, they stand to make massive savings over Microsoft
      Office. 

      StarOffice is developed by Sun; here is their sales description:
      "Enhanced usability, compatibility, interoperability, new XML File
      Format, and more developer features and tools all combine to make
      StarOffice 8 the best office suite value by far."  The cost is
      $69.99, a heck of a lot cheaper than Microsoft, but how about
      something that's free?  OpenOffice is the open source suite which
      StarOffice is based on; take a look here to see what you don't get
      with OpenOffice (nothing that I would miss).

      OpenOffice2 is now available for download! 

      Give it a go!

    • #3137288

      VC project continued: gateway configuration/installing nmproxy

      by justin fielding ·

      In reply to In my own words…

      Ok, so we now have an OpenBSD gateway, let’s make it useful.

      I took the nmproxy source code and created an ISO with it (I used UltraISO; you may have your own preferred method). It’s then possible to mount the ISO, making it appear as a drive in your VM.

      You can then mount the drive in OpenBSD:


      Create a directory and copy the source code:

      # mkdir /usr/src/nmproxy
      # cp -R /mnt/cd/nmproxy /usr/src/

      Now compile:

      # make -f ./Makefile.OpenBSD

      If you don’t get any nasty errors, install:

      # ./nmproxy_install

      There you go, now we just need to set up Packet Filter, edit a few configuration files, and we are ready to clone. I will assume from now on that you are familiar with Linux and the vi editor. If not, then look here.

      First let's allow IP forwarding by editing /etc/sysctl.conf and removing the # comment in front of net.inet.ip.forwarding=1. Save the file, then open up /etc/rc.conf, search for pf=NO, and change it to pf=YES.
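      After the edits, the relevant lines in each file should read as below (a sketch; only these lines change, and the rest of each file is left as-is):

```
# /etc/sysctl.conf -- uncommented to enable IP forwarding
net.inet.ip.forwarding=1

# /etc/rc.conf -- start Packet Filter at boot
pf=YES
```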

      You can pretty much follow the default setup for PF; the following lines need to be added for nmproxy:

      # Redirect port 1720
      rdr proto tcp from any to any port 1720 -> 127.0.0.1 port 1720

      # Nmproxy specific rules. Note that the port number ranges look strange
      # because of the way ranges are specified.
      pass in proto tcp from any to 127.0.0.1 port 1720 flags S/SA keep state
      pass in proto tcp from any to any port 10199><10210 flags S/SA keep state
      pass in proto udp from any to any port 10199><10260

      Nothing too taxing there. Give the VM a reboot, and the changes made should take effect. We can now check that nmproxy is running and the firewall is letting connections through:

      # telnet 168.1.1.1 1720

      All is well, and the connection succeeded.

      Now we need to clone the machine. Shut down and we will start.

      Select the VM menu and then Clone… to start the cloning wizard.

      Most options can be left as default; when you get to the following screen, you must select Create a Full Clone:


      The new clone can be called OpenBSD B; locate it wherever you like. You should now have something like this:


      Start up the new VM and we will change the configuration to make this system ready. I have decided to call my second firewall GatewayB.testdomain.com, the internal network address is 10.2.1.1, and the external one is 168.1.1.2. Ideally, we would re-generate the ssh keys, but I don’t think this is necessary for a test system.

      Files which need to be edited are:

      /etc/hosts              Hostnames
      /etc/hostname.le1       IP configuration of internal interface
      /etc/hostname.le2       IP configuration of external interface
      /etc/nmproxy.conf       nmproxy configuration
      /etc/pf.conf            Firewall configuration (change IP details of networks)
      /etc/myname             The system hostname
      /etc/mygate             Default route/gateway
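      As a sketch, with the GatewayB addresses chosen above (and the interface roles as listed), the key files would contain something like the following; treat the exact values as examples and adjust them to your own lab:

```
# /etc/myname -- fully qualified hostname
GatewayB.testdomain.com

# /etc/hostname.le1 -- internal interface
inet 10.2.1.1 255.255.255.0 NONE

# /etc/hostname.le2 -- external interface
inet 168.1.1.2 255.255.255.0 NONE
```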

      All of these files are self-explanatory–nothing complex at all. After we have edited these files, a quick reboot will put everything into action.


      Check that the interfaces have taken the new IP details:

      # ifconfig -a

      If your changes don’t seem to have taken effect, check that you saved the files after editing!

      If we start up the original VM, we should now be able to telnet into port 1720 of that machine to verify that we have communication between the two:


      That’s all for now, next week we will finish this off by creating a team consisting of our two firewalls and two Windows XP VM’s. We will also look at some of VMware’s more advanced networking features and finally test nmproxy!

    • #3118941

      A fresh look at Linux

      by justin fielding ·

      In reply to In my own words…

      Over the last few years I have used Linux as my desktop operating
      system on and off.  This has normally been in very short spells; I
      never found a distribution which I liked enough to keep me from going
      back to Windows.  I tried SuSE (their Enterprise Linux offering is
      great), but I always seemed to get hung up with library files not
      being found, etc.  Fedora (previously Red Hat) was pretty good back
      at version 2; I recently tried release 4 and had no end of problems
      trying to configure my WiFi card–not so good.  A few days ago a
      colleague of mine showed me Ubuntu; he couldn't say a bad word about
      it, so I have installed it myself.  Installation was fast and clean:
      the base system (which still includes major packages like OpenOffice,
      the Gimp, etc.) installs from one CD, and additional software can be
      downloaded (including dependencies) using the included package
      manager.  Despite the fact that my WiFi card is not natively
      supported by Linux, an included tool allowed me to easily install the
      Windows driver (with ndiswrapper).

      So far I haven't got a bad word to say about it; let's see how long
      that lasts!

      • #3118899

        A fresh look at Linux

        by jmgarvin ·

        In reply to A fresh look at Linux

        What didn’t work in Fedora with your wifi card that worked in
        Ubuntu?  I’m just curious, because I like to track down Fedora
        bugs for my students.

      • #3118494

        A fresh look at Linux

        by justin fielding ·

        In reply to A fresh look at Linux

        I tried both the Netgear WG511T and the Proxim Orinoco Gold, both
        of which use the MadWiFi drivers.  This was the 64-bit FC4; I tried
        rpms and compiled from source, but neither worked.  The cards
        showed up as wifi0, but iwconfig could not configure them.  After a
        few hours I gave up and rebooted into Windows (say what you like,
        but at least it works!) 🙂

      • #3118080

        A fresh look at Linux

        by jmgarvin ·

        In reply to A fresh look at Linux

        ndiswrapper is far superior to Madwifi and I think that is why it failed.

      • #3117937

        A fresh look at Linux

        by justin fielding ·

        In reply to A fresh look at Linux

        I think in saying that you show a complete lack of understanding.
        Please tell me how to put the card into monitor mode while using
        ndiswrapper; then maybe your comment will be valid.

      • #3131173

        A fresh look at Linux

        by jmgarvin ·

        In reply to A fresh look at Linux

        Hummana?  You never said that the card needed to be in "monitor
        mode" (by which I assume you mean non-promiscuous mode).  You just
        said it didn't work.  To get a wireless card to work and connect,
        it makes FAR more sense to use ndiswrapper.  If you check my blog
        you'll see I have a step-by-step guide to setting up ndiswrapper.
        I would also guess that you had a deprecated version of
        wireless-tools installed, since you said it was Fedora Core 2.  Try
        Fedora Core 3 or 4 for better support and more up-to-date libs and
        tools.

        Madwifi has its uses, but it is buggy and not as robust in card
        support as ndiswrapper.  While I understand you like Madwifi, it
        doesn't seem to be supported very well anymore and seems to have
        somewhat died.

        If you explain what you need, I’d be more than happy to help…but you need to take a different tone. 

         

      • #3130710

        A fresh look at Linux

        by justin fielding ·

        In reply to A fresh look at Linux

        As I said previously, I was trying FC4.  I have used ndiswrapper in
        the past; it's very straightforward.  However, I don't consider
        this to give fully operational WiFi due to the lack of monitor mode
        options (which are needed for security testing, network monitoring,
        etc.).  There is no issue currently, as I am using Ubuntu, which
        has proven to be very satisfactory.

      • #3131686

        A fresh look at Linux

        by stress junkie ·

        In reply to A fresh look at Linux

        Welcome aboard the Linux happiness train. It doesn’t matter which
        distro you use. Whatever works is the right choice. If you need to know
        what applications are used for various tasks just start a discussion
        about Linux apps. I don’t remember anyone ever starting such a
        discussion. It would be a nice change from Windows vs. Linux or Bush:
        Good or Evil.

      • #3149939

        A fresh look at Linux

        by apche2004 ·

        In reply to A fresh look at Linux

        This article shows a lack of understanding of license issues and
        how they affect linux. The blame for various hardware (in my case
        the Broadcom 43xx wireless chipset) not working in linux lies with
        the manufacturers.

        In Broadcom's case, they refused to build a linux driver and
        refused to release the specs so the open source community could
        write one. What was found was a router manufacturer using linux as
        the OS in their routers that used the Broadcom chip. They had
        BROKEN the GPL license by selling their product with linux and a
        proprietary driver. They eventually had to release the code or face
        legal action. GPL 1, Broadcom 0 🙂

        So, the fact that you keep selling out and going back to windows
        actually serves to hinder the development of linux drivers and
        applications. If more people insisted on using linux, this would
        force the hardware and software manufacturers to get off their
        asses and release code.

        Using ndiswrapper with the windows wireless driver, or for example
        the nvidia driver, has tainted your Ubuntu distribution. This is in
        fact a direct contradiction of the Ubuntu philosophy.

    • #3118766

      Happy Birthday!

      by justin fielding ·

      In reply to In my own words…

      Firefox is celebrating its first birthday! That's right, on the 9th
      of November the groundbreaking Firefox web browser was one year old,
      with over 106 million downloads. It's estimated that Firefox
      currently serves 9% of surfers, with 85% being held by Internet
      Explorer. That sounds small, but when you consider that all other
      browsers combined make up the remaining 6% (which would include
      Netscape, etc.), plus the fact that Internet Explorer comes
      pre-installed with Windows, it's not a bad feat in 12 months. I have
      used Firefox since its release and could never go back to Internet
      Explorer. Having been a developer in the past, I would always prefer
      Firefox for testing as it followed W3C standards much more closely
      than Microsoft's offering. If you haven't tried it yet then give it a
      go – it's free, so there's nothing to lose!

    • #3131278

      Setting up a test environment for videoconferencing, part 4

      by justin fielding ·

      In reply to In my own words…

      Continuing from last week, we now have two OpenBSD gateways
      which are able to talk to each other. In this final instalment, we will use the
      team feature of VMware to group each Gateway with a Windows XP machine, and then
      try to initiate a call from one network to the other.

      I am assuming that you have already installed your Windows
      XP machines and have given them IP addresses 10.2.1.2 and 10.1.1.2 with the
      gateway/DNS addresses as 10.2.1.1 and 10.1.1.1 respectively.

      In VMware, start the New Team wizard: File > New >
      Team…


      Add the Virtual Machines as above, then move on and create three LAN segments:

      Now configure the Virtual Machines to connect to those LAN
      segments as follows:

      Finish the wizard, and we will now look at a more interesting
      feature. Open the team Settings and continue to the LAN Segments tab. Here we
      can see the three LAN segments defined earlier: LAN1 is our 'Internet' segment,
      so we can rename it. We can also have this segment emulate a 1.5-Mb leased line
      with 1% packet loss (now you can see what a great tool this is for testing
      under various emulated network conditions). The other two segments can be
      renamed Internal LAN1 and 2, or left as they are.


      The team can now be started. This is where you will notice
      a slowdown if your host machine does not have a good helping of RAM. Our setup
      here uses 384 MB of RAM (64×2 + 128×2); you will probably have noticed in
      the preferences that this does not all need to sit in physical RAM. Some can
      be swapped to disk; however, this is very slow, so I would recommend keeping
      all of it in physical RAM.

      The first thing to do once all machines are up is to ping
      the opposite counterpart: from Windows XP A, ping Windows XP B, and
      vice versa. This should work without problems (turn off the Windows firewall;
      the BSD gateway provides the firewall), but if you do have trouble, check the
      firewall rules in /etc/pf.conf on
      each gateway.
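      To give an idea of what to look for, here is a minimal, hypothetical
      pf.conf sketch for one of the gateways. It is not the ruleset from this
      setup: the interface names (ne3/ne4) are assumptions, and a real install
      will need additional rules for the H.323 traffic between the gateways.

```
# /etc/pf.conf sketch for the 10.1.1.1 gateway (interface names assumed)
ext_if = "ne3"   # 'Internet' LAN segment
int_if = "ne4"   # internal LAN segment
set skip on lo

# log what gets dropped, so it shows up on the pflog0 interface
block log all
# let internal hosts out, keeping state so replies are allowed back in
pass in  on $int_if from 10.1.1.0/24 to any keep state
pass out on $ext_if keep state
```

      With `block log all` in place, dropped packets can be watched live on
      either gateway with tcpdump -n -e -ttt -i pflog0.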

      The nmproxy configuration file is /etc/nmproxy.conf. Configuration
      is not difficult: we simply need to specify the local network and map the
      default forward rule.

      I used two USB webcams, one connected to each XP machine. Connecting
      a camera to your VM is simple: give the desired machine focus, then plug the
      USB camera into your host. VMware should connect automatically to the USB
      device and bridge it to the VM. If not, you can go to VM > Removable
      Devices > USB Devices > (your webcam)
      to connect it manually.

      Within Windows XP we now need to start Netmeeting. Most
      people don’t realise that this is still installed by default, as there are no
      links created. Type conf.exe in your Run dialogue and
      there it is!


      Once this is running, we can try to initiate a call from one
      machine to the other. Type the IP address of the second machine in your call
      box and hit the telephone button. The counterpart Netmeeting instance on the
      second machine should ring!

      This is where things became a little disappointing: the
      second machine did receive the call, but when pickup was selected, the
      call did not connect. The first machine didn't even recognise that the call
      had been accepted. I checked all of the settings in both pf.conf and nmproxy.conf
      with no obvious errors. I then used the pflog device and tcpdump
      to see if any connections were being blocked by the firewall, but unfortunately
      this was not the case. It seems that nmproxy accepts the connections but does
      not then initiate the audio/video streams. I emailed the author, who was not
      helpful in the slightest. After posting questions on a few popular networking
      forums, it seemed that nobody had successfully managed to get this working.
      After that, I decided it wasn't worth wasting more time trying to flog a
      dead donkey, so I'm looking at alternative gatekeeper solutions.

      I hope this series of posts has been useful to people. We
      haven’t found a great H.323 video proxy, but we have gone over basic
      installation and configuration of OpenBSD as well as introducing the use of
      VMware to speed up testing and troubleshooting. In total, the practical
      implementation of this test environment took me about 3-4 hours. This would
      have been a little less if I had used a pre-prepared generic install of OpenBSD
      to start from (as I did with the XP workstation VMs). I personally think this
      is much more sensible than trying out untested code in a live environment; it’s
      also much faster than building a physical test environment due to the cloning
      feature.

      If any readers manage to get nmproxy running, or have done
      so in the past, please let me know!

    • #3132419

      Who's at the door?

      by justin fielding ·

      In reply to In my own words…

      Intrusion detection is a vital part of any firewall; it's
      all very well to block traffic, but how do you know what is being blocked and
      what is coming through on your open service ports (SMTP, IMAP, DNS, etc.)?  Snort is
      the most popular intrusion detection system around.  It's offered as an open source project, with
      a subscription available offering enhanced rules libraries.  Snort is highly configurable, with various
      plug-ins available for download.  Quite a few
      commercial firewalls run Snort under a custom web interface!  The documentation is good, and
      I have compiled it on both OpenBSD and SuSE Enterprise 9 without problems.  Is anyone else using Snort?  How are you
      finding it?

    • #3117402

      Full Speed Ahead!

      by justin fielding ·

      In reply to In my own words…

      I received
      my Dell M170 today; this is a replacement for the Acer Ferrari 3400, which,
      despite being very nicely designed, was nothing but trouble.  Dell laptops come highly recommended, and this
      being their top-spec machine, it should make multi-tasking much faster.  With VMware in mind, 2 GB of DDR2 RAM and a
      100 GB/7200 rpm hard disk were chosen. I have yet to test this (I'm still going
      through the slow process of installing applications and transferring data), but
      I am expecting a considerable improvement over my current setup.  My only gripe so far is the 'tacky' design,
      but then machines with this power are aimed at gamers.  Luckily, the coloured LEDs
      in the speakers and fan ports can be turned off!  Is anyone else running VMware on a
      notebook?  What kind of setup are you
      using?

    • #3122277

      IBM Director install hits snags: Is it worth it?

      by justin fielding ·

      In reply to In my own words…

      With
      increasing numbers of servers in various locations around the world, monitoring
      things such as disk space, load, and network status can be a bit of a headache.
      This is especially true in a non-Windows server environment. There are quite a
      few offerings which will (or claim to) solve the problems of control faced by
      administrators. The obvious options come licensed with your servers: either IBM
      Director or HP Insight Manager. As most of our servers are from IBM, I'll take
      a brief look at what Director offers:

      • An easy-to-use, integrated suite of tools with a consistent look and feel
      and a single point of management simplifies IT tasks

      • Automated, proactive capabilities that help reduce IT costs and maximize
      system availability

      • Streamlined, intuitive user interface to get started faster and accomplish
      more in a shorter period of time

      • Open, standards-based design and broad platform and operating system support
      enable customers to manage heterogeneous environments from a central point

      • Can be extended to provide more choice of tools from the same user interface

      That’s the marketing blurb, but what does it mean in
      English? Well I installed IBM
      Director
      , which was provided on CD’s with some of our servers. Here, I came
      across the first problem, and I was quite surprised by it! The agent (this goes
      on the servers to be monitored) is included on the CD in a variety of formats
      for different types of servers. I tried to use the provided installation
      scripts to install the agent packages (system monitoring, service monitoring,
      RAID configuration, etc.)–shock, horror–they failed!

      The problem was quite obvious: in the installation scripts,
      package files were referenced with capitals, for example, DirectorAgentPackage.rpm, but the actual name of that file was directoragentpackage.rpm. Linux/Unix
      file systems are case sensitive; therefore, the files could not be found. I
      thought it quite surprising that a large corporation like IBM would make such
      an obvious error, which would have been caught if they had tested the
      scripts even once (which I would do, personally, if I were planning to make
      thousands of CDs to include with my products). Anyway, to overcome this, I
      simply copied all of the files to the hard disk and then edited the installation
      scripts to use the correct file names. It installed without any problems after
      that. I came across the same problem with the server package installation; the
      same fix was required.
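      The underlying gotcha is easy to demonstrate. The sketch below (the .rpm
      name is from the post; the demo directory is made up) shows why the
      scripts failed on a case-sensitive filesystem:

```shell
# Linux filesystems are case sensitive, so these are two different names.
mkdir -p /tmp/director-demo && cd /tmp/director-demo
touch directoragentpackage.rpm                   # what the CD actually contains
test -f directoragentpackage.rpm && echo "lowercase name: found"
test -f DirectorAgentPackage.rpm || echo "CamelCase name: not found"
```

      An alternative to editing the scripts would have been to symlink the
      expected names onto the real files, e.g.
      ln -s directoragentpackage.rpm DirectorAgentPackage.rpm.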

      My second issue came while installing the client application
      on my computer: after installation, on my first attempt to log in to the
      server, I was informed that my client version did not match the server version.
      I had to download the correct client version from their website, which again
      left me wondering who on earth would distribute incompatible software versions
      together!

      With these problems overcome, I finally got to start looking
      at the software. The RAID manager worked very well, but then the IBM RAID
      manager can be used standalone, without IBM Director. To be honest, I wasn't
      very impressed with anything else; it was a pretty drawn-out and slow process
      just to set up monitoring of disk space. I didn't find the interface very
      intuitive, and I very quickly felt that I didn't really want to deal with this
      on a day-to-day basis.

      Next week I’ll take a look at Nagios; this is an open-source solution
      offering server monitoring and alerting.

      • #3122913

        IBM Director install hits snags: Is it worth it?

        by ideallypc ·

        In reply to IBM Director install hits snags: Is it worth it?

        What does IBM have to say for itself?  Surely they must have some response to such a lowly, uncomplimentary review of their product.  Is this "IBM" the same famed "International Business Machines"?  If so, I would think twice about running my business off this stuff!  The only thing you forgot to include in your review was the numeric score, but you were clear, and unless I am reading it all wrong, the score would be either a zero or a one on a scale of 1-5.  If you hadn't mentioned the product or the company, I would expect a review of this level for a knock-off brand, not IBM.  Seriously, how are we to take anything they sell in the PC workstation/server market seriously if there was zero quality control on the reviewed product line?


    • #3122852

      Honeystick

      by justin fielding ·

      In reply to In my own words…

      I came across a great project by the UK Honeynet Project.  For those who have not heard of 'honeynets' yet: put simply, they are
      small, often virtual networks which are created purely for the purpose of diverting
      attention from real networks and learning about attack techniques/exploits in the process.  A prime concern is making sure that once an
      intruder has penetrated your honeynet, they are not able to get back out and
      use your resources for activities such as denial of service and spamming.  The Honeynet
      Project provides Honeywall, a suite of tools which provides a solid basis for
      any honeynet project.  Getting back to the point, I found an
      interesting guide which describes the creation of a fully operational honeynet
      on a 2GB USB key!  Here's the link; I'm working on one at
      the moment!

    • #3043975

      Need VOIP?

      by justin fielding ·

      In reply to In my own words…

      We are currently looking at setting up a VOIP PBX to allow IP dial-in conference calls. One solution looks to be Asterisk, an open source PBX offering just about every feature you will find in commercial solutions. Asterisk@home is a project which aims to make a basic install easier; it provides a CD which will deploy a fully functional Asterisk PBX, complete with the underlying OS and a fully operational web interface to ease configuration and administration.

      You may wonder why we would go to this effort rather than use a simple program like Skype? Unfortunately, it's a case of compliance, followed by security. Add to this the fact that call quality with Skype degrades quickly when there are more than three callers, and it's not much good for commercial use.

    • #3122119

      Network Monitoring: Round two!

      by justin fielding ·

      In reply to In my own words…

      Right, last week we looked at IBM Director, and I wasn’t very impressed. Let’s now take a look at Nagios and see if it’s any better. Nagios is an open source host, network, and service-monitoring system, used by many large corporations and even some government agencies! The very honestly named propaganda page will fill you in on where and how Nagios is being used.

      Nagios is a very flexible system with masses of potential; here is the official word on what it can do for you:

      • Monitoring of network services (SMTP, POP3, HTTP, NNTP, PING, etc.)
      • Monitoring of host resources (processor load, disk and memory usage, running processes, log files, etc.)
      • Monitoring of environmental factors such as temperature
      • Simple plugin design that allows users to easily develop their own host and service checks
      • Ability to define network host hierarchy, allowing detection of and distinction between hosts that are down and those that are unreachable
      • Contact notifications when service or host problems occur and get resolved (via email, pager, or other user-defined method)
      • Optional escalation of host and service notifications to different contact groups
      • Ability to define event handlers to be run during service or host events for proactive problem resolution
      • Support for implementing redundant and distributed monitoring servers
      • External command interface that allows on-the-fly modifications to be made to the monitoring and notification behaviour through the use of event handlers, the web interface, and third-party applications
      • Retention of host and service status across program restarts
      • Scheduled downtime for suppressing host and service notifications during periods of planned outages
      • Ability to acknowledge problems via the web interface
      • Web interface for viewing current network status, notification and problem history, log file, etc.
      • Simple authorization scheme that allows you to restrict what users can see and do from the web interface

      Monitoring of host resources and network services is exactly what I’m looking for–add instant notification, and we are really cooking. Knowing that a problem exists ‘before’ the helpdesk phones go crazy with users complaining is a major benefit. Most of the time, issues can be resolved without many users noticing at all!

      The support is good with Online Documentation, FAQ’s, Mailing Lists, and Forums.

      So, installation and configuration: how did it go? I currently have one server with Nagios installed, which is going to be the main monitoring station, but it's currently only monitoring itself. We use SuSE Linux Enterprise Server, and Nagios is actually in the applications repository, so it can be installed with the YaST management tool. However, I preferred to compile and install Nagios from source, meaning I have a better understanding of what's actually going on. The install was uneventful, everything working as described in the installation documents. Configuring the service monitoring wasn't too difficult; I went for disk space, CPU load, and memory usage. These checks come in the form of plug-ins, which are basically small scripts executed on the host that feed information back. Nagios can also use SNMP to gather information; this will be very useful for monitoring server health, especially combined with HP's Integrated Lights-Out (iLO) and various other devices that offer SNMP availability.
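      Those plug-ins are worth demystifying: a Nagios plugin is just a script or
      binary that prints a one-line status and exits 0 for OK, 1 for WARNING, or
      2 for CRITICAL. Below is a rough sketch of that contract for a disk check;
      the thresholds are my own illustrative values, and this is not one of the
      official Nagios plugins:

```shell
# Minimal sketch of a Nagios-style disk plugin (illustrative thresholds).
check_disk() {
  # percentage used on the root filesystem
  usage=$(df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
  if [ "$usage" -ge 90 ]; then
    echo "DISK CRITICAL - ${usage}% used"; return 2
  elif [ "$usage" -ge 80 ]; then
    echo "DISK WARNING - ${usage}% used"; return 1
  else
    echo "DISK OK - ${usage}% used"; return 0
  fi
}
check_disk
echo "plugin exit code: $?"
```

      Nagios records the status line and maps the exit code to the service
      state, which is why writing custom checks is so easy.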

      All things considered, I am very happy with Nagios. It's free, fully featured, and well supported. The installation went without a hitch, which is more than can be said for IBM Director. Setup and configuration will be time-consuming, with a steep learning curve (my SNMP skills need development); however, it is well worth the effort and won't take long to repay the investment of time.

      I hope this proved useful; I would love to hear from anyone else using Nagios.

      • #3124480

        Network Monitoring: Round two!

        by wblundell ·

        In reply to Network Monitoring: Round two!

        It sure sounds like you are an open source biased kind of guy.  I have been working with IBM Director since it was Netfinity Director, many years ago.  I find it very powerful, and it provides many features that Nagios cannot.  One of the specific features that we like is the ability to alert on all types of hardware failures, or even PFAs.  When was the last time that Nagios alerted you with a predictive failure of a hard drive, and then supplied you with the FRU number to replace it with?

        We have over 100 IBM servers monitored and it works very well for us.

        WB

      • #3214303

        Network Monitoring: Round two!

        by jeff ·

        In reply to Network Monitoring: Round two!

        I’ve heard some good things about Nagios, but if you are really an open-source oriented guy, you should check out the new tool released by Zenoss. This company was started by an associate of mine several months ago, and their product is designed to compete directly with Nagios, while providing features that Nagios may have overlooked (and, of course, it’s free).  I’d be interested to hear what you think about their product.

    • #3129301

      Asterisk Update

      by justin fielding ·

      In reply to In my own words…

      Just an update on the Asterisk PBX project.  I now have a working PBX, configured only for internal calls (at the moment we only need it for conference calls).  The installation is pretty fast using the Asterisk@Home install CD; CentOS is the underlying distribution installed (basically RHEL with the Red Hat branding removed), which allows easy updates via YUM.  I initially had some issues: although the system seemed to be functioning properly, no sound output was generated (music on hold, menu speech, etc.).  After a lot of hunting around and a lot of trial and error, this was tracked down to the installed Zaptel PCI card.  We installed the card with a view to looking into PSTN integration later; however, it seems Asterisk was a little unhappy about having the card installed but not used.  Removal of the card solved all issues!  The call quality between two softphones is pretty good; today I will test the conferencing facility with 6+ callers.
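      For anyone curious what the conference side involves, a bare-bones MeetMe
      setup in Asterisk of that era took only a few lines of configuration. The
      room number and PIN below are made-up examples, and Asterisk@Home's web
      interface generates equivalent entries for you:

```
; /etc/asterisk/meetme.conf - define a conference room (number/PIN assumed)
[rooms]
conf => 8000,1234

; /etc/asterisk/extensions.conf - route an internal extension into the room
[internal]
exten => 8000,1,MeetMe(8000)
```

      Callers dialling extension 8000 from a registered softphone are then
      dropped into the shared conference.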

      • #3127129

        Asterisk Update

        by jc2it ·

        In reply to Asterisk Update

        I have been looking at Asterisk PBX for my parents' and my in-laws'
        businesses. It looks promising. How much will you have put into the entire
        project when you are finished, $ & hours?

      • #3126831

        Asterisk Update

        by justin fielding ·

        In reply to Asterisk Update

        In $, nothing; we had a spare server hanging around, and we don't yet need a PSTN card as we are simply using the system for incoming IP conference calls.  In terms of time, it took me about 3 hours to install and configure the basic system.  I'm sure that to use this as a fully fledged PBX would take a few days to configure/test.  There is lots of documentation; I would recommend reading the guide here to see what's involved.

        Hope that helps!  Good luck and let us know how you get on if you do decide to go for Asterisk.

    • #3126834

      Microsoft Anti-Virus?

      by justin fielding ·

      In reply to In my own words…

      While browsing, I came across Windows Live Safety Center
      (yes, I know 'centre' is spelt wrong; American, of course!).  This is in beta and claims the following:

      • Check for and remove
        viruses
      • Learn about threats
      • Improve your PC’s
        performance
      • Get rid of junk on your
        hard disk

      I would be interested in hearing from someone who has tried
      this; I don't really trust Microsoft to keep my computer virus free (I'll stick
      with Norton Anti-Virus).  I do, however,
      use the Microsoft Anti-Spyware beta application, which I cannot fault; it
      works much better than any other anti-spyware/malware programs I have used.

      • #3128039

        Microsoft Anti-Virus?

        by geraldkiii ·

        In reply to Microsoft Anti-Virus?

        After reading your article I tried out Windows Live Safety Center. After about an hour, Windows Live came back and told me my computer was virus free, which it is, because I have three different AV protections on my computer and this is a fresh image.  It also told me that I had no open ports on my network, which is true except for port 80.  It told me my hard disk was fragmented and began to defragment my machine.  I do not dislike the service, nor do I feel compelled to go to the Windows Live Safety Center website every day to scan my computer; I'd rather leave that to my ZoneAlarm security suite.

    • #3129757

      Helpdesk solutions

      by justin fielding ·

      In reply to In my own words…

      There comes a time within a growing company when the IT
      support operation goes from a few 'tech' guys keeping users happy to a
      reasonably sized department with ever more demanding requests piled on by
      users. Efficiency can be increased by having certain support staff cover key
      areas of the company's needs (network/server support, desktop support, software
      support, etc.), but without the proper underlying structure in place, making
      this work can seem like fighting a losing battle.

      An essential tool in creating a basis for this structure is,
      of course, a helpdesk/call-logging system. I'm sure everyone who has worked in a
      medium to large corporation has used at least one helpdesk solution, and while
      that means staff need to log how and where their time is used (which may be
      unpopular with less productive members of a team), the statistics provide a way
      to justify budgetary requirements and departmental expansion.

      Our company is currently looking at some of the helpdesk
      solutions on the market; I thought I would comment on our findings so far. We
      are, of course, challenged by the fact that we run a Linux/Unix-based network, and
      most of the systems out there are geared towards Windows domains and require a
      Windows server to operate on.


      Three popular systems that we have considered implementing
      are Touchpaper,
      SupportWorks and Topdesk Enterprise. In this
      installment, I’ll tell you about the first one, Touchpaper.


      One of our team members has previously used Touchpaper,
      hence our investigation into what it could do for our organisation. The
      Helpdesk system has many optional plug-in applications (most of which can also
      be used without the helpdesk suite), the most interesting of these being NMS
      (network monitoring) and Autoserve (service/process monitoring). These modules
      are well thought out and, when used with Helpdesk, they will automatically log
      a call and assign it to the correct operator if a router or particular service
      were to go down.

      This all looked good, and according to their sales staff and
      technical advisor, it would all run on a Linux base. Unfortunately, this has proven
      not to be the case. It's a shame, because the power of Touchpaper seemed to be
      its compatibility with, and integration of, other modules. LANDesk integration,
      fully automated asset discovery, and the modules previously mentioned would have
      been a great help. We also felt they were trying to take us for a ride with their
      quote. The software cost was a tad under five figures, but the consultancy and
      setup charges took the total price to five times the software cost! (All in GBP,
      too, not USD.)

    • #3124700

      Linux on the desktop

      by justin fielding ·

      In reply to In my own words…

      A recent article in
      Computer Weekly
      stated that the quality of application support and browser
      plug-in support is hampering users' enthusiasm for Linux as a desktop operating
      system.  Office productivity tools were
      deemed the most critical component by 51% of those surveyed; 38% said that
      browser plug-in support needs improvement, with 10% of those saying it is a
      severely inhibiting factor.

      I find I can get by with Linux on the desktop; OpenOffice,
      Firefox, and Thunderbird provide all of the vital functions.  I can even play World of Warcraft using
      Transgaming's brilliant Cedega!  I will agree with the 38% who claimed browser
      plug-in support needs work; it's a very annoying issue when it appears (grrrr,
      I don't want to have to reboot into another OS just so I can read a certain
      webpage!).  Application support isn't too
      bad. I guess for the enterprise this is a bigger issue, but many commercially
      supported products are available, StarOffice for example. Yes, you have to pay
      for it, but then businesses shouldn't expect to get something for
      nothing!  It is still a cheaper option
      than an MS Office + Windows licence.

      Any opinions on this topic? 
      What are your pet hates with Linux on the desktop?  Do you use Linux desktops in an enterprise
      environment?  Let us know!

    • #3121101

      Nagios configuration made easier

      by justin fielding ·

      In reply to In my own words…

      While starting to configure Nagios with multiple remote
      hosts, I came to appreciate just how complex the setup can be, with a multitude
      of files.  Luckily, I have found a web-based
      administration tool for Nagios which can be used to add/modify hosts and
      back up configuration.  NagiosQL is a PHP/MySQL-based
      system; the project is open source, and a good deal of online support is
      available in the form of FAQs, forums, and wiki pages.  I'm going to install NagiosQL this week, which
      should hopefully speed up my deployment of Nagios in our company.

    • #3197015

      Helpdesk solutions continued

      by justin fielding ·

      In reply to In my own words…

      Last time, I was talking about the various helpdesk
      solutions that we are considering for our company, and I gave you an overview of
      our findings on Touchpaper. Today, I'll tell you about the other two. I have
      used Hornbill Supportworks in a
      previous company; the interface is nice, and the process of logging and updating
      calls is very intuitive. Asset discovery is not included in the helpdesk
      package (it's a separate plug-in/product); however, manual input of asset data
      is allowed. We ran a two-week trial of Supportworks, and I had problems
      getting the shared mailbox function to work with our mail server. Everything
      worked with POP3; however, IMAP did not. I'm sure this could have been resolved
      with Hornbill over time, but we only had a two-week evaluation, so we stuck
      with the POP3 config and got on with testing.

      We were quite happy overall with Supportworks. The only
      massive negative was that this system again needs a Windows Server base. It's
      quite disappointing that Hornbill don't make it available for a Linux-based
      system, considering that its major components include Apache and MySQL!

      TOPdesk
      is a lesser-known helpdesk environment, and none of our team had used it
      before. We found it in a Google search! We had our sights on TOPdesk
      Enterprise, which is completely platform-independent, the core services being
      coded in Java, with database options of MSSQL or Oracle. The user interface is
      completely web-based. At first this didn't seem very attractive, because web-based
      interfaces are normally quite cumbersome, slow, and awkward to use (form
      submissions, drop-down boxes, and text fields). I must now say that the
      interface of TOPdesk is the most intuitive web interface I have used. Cutting-edge
      programming techniques and lateral thinking have clearly been put into action,
      with a pleasing result.

      The pricing structure of TOPdesk is pretty fair; most of the
      system's features are broken down into modules, meaning you only pay for what
      you want. One suggestion from their sales rep was that we could purchase the
      basic system and then take additional modules later on; this would mean
      deployment and costs could be staggered over time. TOPdesk also does not base
      its prices on the number of operators and the number of end users; the total
      number of users, be they end users or operators, is all that needs to be
      considered. LANDesk integration is, of course, available; reporting is also
      included and very simple to work with using the report generator. One gripe I
      have about TOPdesk is that it requires Internet Explorer; Firefox compatibility
      is apparently coming soon.

      All things considered, helpdesk systems are much of a
      muchness. The features offered are quite similar, and there doesn't seem to be
      one package which shines above all others. Taking this into account, our
      decision will probably be based on user interface, system compatibility, and, of
      course, price. TOPdesk looks to be doing well (seeing as it is the only
      platform-independent option so far). Any comments from current TOPdesk users?
      Or do you have any suggestions on other systems we should be looking at?

      • #3197737

        Helpdesk solutions continued

        by mktg78 ·

        In reply to Helpdesk solutions continued

        Have you looked into Parature yet? It sounds very similar to what you are finding with TOPdesk, but with a few more modules that we needed like realtime Chat. We just signed up with them, and their UI is extremely easy to use! Take a look at http://www.parature.com

        – Thomas

      • #3127356

        Helpdesk solutions continued

        by justin fielding ·

        In reply to Helpdesk solutions continued

        Thanks, I’ll take a look 🙂

      • #3088053

        Helpdesk solutions continued

        by a.cugun ·

        In reply to Helpdesk solutions continued

        Hey I just found this post through a search and I would like to tell you that we appreciate your kind comments about the TOPdesk Enterprise GUI. We put a lot of effort into making it a usable web application and we’re quite proud of it.

        Firefox support is done and should be hitting the next release some time soon. We like to use open technologies where we can but our priorities are determined by customer demand. The reality is that some 95% of our customers demand IE so we made that work first.

        Anyway, if you have any questions feel free to contact us.

        Alper Çugun
        TOPdesk Systems Integration


    • #3121289

      Nessus 3 released

      by justin fielding ·

      In reply to In my own words…

      Version 3 of Nessus has now been released. For those
      unfamiliar with Nessus, this tool, used by security professionals and hackers
      alike, is one of the best vulnerability scanners out there. It is estimated
      that Nessus is used by over 75,000 organisations around the world, auditing
      the security of business-critical devices and applications. New plugins are
      constantly being released, meaning Nessus stays up to date with current
      trends and newly discovered issues. For a modest fee, a direct-feed plugin
      subscription is also offered, ensuring that your scanner is as up to date as
      possible; email support is included for direct-feed customers.

      Nessus 3 is currently only available for Linux; however, Windows and
      Mac OS X platforms should be supported in early 2006.

    • #3124135

      Switzerland opens up!

      by justin fielding ·

      In reply to In my own words…

      A big story of the week has been the Swiss government's
      decision to opt for Novell's Suse Linux as its standard server operating
      system. It has been reported that over 300 public-sector servers will be
      rolled over, replacing both Microsoft Windows and Unix platforms. This move
      highlights that governments and local authorities have become much more
      willing than the private sector to adopt open-source platforms, the total
      cost of ownership over time outweighing concerns over potential patent
      issues. Novell also recently closed a deal with the UK's National Health
      Service worth over $39m.

    • #3197365

      Setting up VPN tunnels with OpenBSD: Tutorial 1

      by justin fielding ·

      In reply to In my own words…

      As previously promised, I'll now cover the basic setup of
      VPN tunnels between two OpenBSD gateways. To avoid unnecessary repetition,
      the basic installation of OpenBSD is covered both in my previous blog and in
      the OpenBSD FAQ pages.

      I taught myself how to configure this system by reading the
      vpn man page, plus some trial and error. I hope to simplify this process for
      readers, giving clear examples through every step. Another very useful VPN
      implementation is PPTP (VPN dial-in). I’ll cover the installation and
      configuration of PPTP later, which proves more troublesome than a basic IPSEC
      tunnel. I’m starting with two clean OpenBSD installations; each has two network
      interfaces installed and the following configuration:

      Gateway A:

      Hostname – vpnA
      Domain – test.com
      Interface1 – 10.1.1.1/255.255.255.0
      Interface2 – 20.1.1.1/255.255.255.0

      Gateway B:

      Hostname – vpnB
      Domain – test.com
      Interface1 – 10.2.1.1/255.255.255.0
      Interface2 – 20.1.1.2/255.255.255.0

      As you can guess, Interface1 is to be connected to the
      internal network, while Interface2 simulates our Internet connection.
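      A common pitfall with site-to-site tunnels is giving the two ends
      overlapping internal subnets, which makes routing between them ambiguous.
      As a purely illustrative sanity check (Python here, not part of the OpenBSD
      setup itself), the addressing plan above is safe:

```python
# Sanity-check the tutorial's addressing plan: the two internal
# subnets must be distinct, or traffic cannot be routed through
# the tunnel unambiguously.
import ipaddress

net_a = ipaddress.ip_network("10.1.1.0/24")  # behind vpnA (Interface1)
net_b = ipaddress.ip_network("10.2.1.0/24")  # behind vpnB (Interface1)

assert not net_a.overlaps(net_b)
print("OK:", net_a, "and", net_b, "do not overlap")
```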

      Once the basic setup of the gateway machines is done
      (referring to the install guides and the guidelines above), there is
      surprisingly little which needs to be done to get a VPN tunnel up and
      running. The following files will need to be edited; I'll explain how and
      why as we go along:

      /etc/sysctl.conf
      /etc/rc.conf
      /etc/rc.conf.local
      /etc/rc.local
      /etc/pf.conf
      /etc/isakmpd/isakmpd.conf
      /etc/isakmpd/isakmpd.policy

      There are two methods of authenticating the two gateways in
      order to setup the VPN tunnel: manual or automatic. Manual keying requires that
      you manually generate keys, security associations and then configure the IPSec
      flows. Automatic keying does all of this for you (you never would have
      guessed that!). I have only used automatic keying; it's easier to configure
      than manual keying and has worked flawlessly, so I see no reason to switch
      to manual keying. I guess it comes down to a matter of preference.
      Instructions and explanations of manual keying can be found on the vpn man
      page.

      So, to get started, we must first make sure that IP
      forwarding is allowed (any gateway machine will require this, whether it runs
      VPN tunnels or not). This option is found inside the file /etc/sysctl.conf
      along with activation of the Authentication Header (AH) and Encapsulating
      Security Payload (ESP) protocols. The AH protocol provides replay
      protection, integrity, and authentication. ESP provides the same functions,
      with the addition of confidentiality, securing everything in the packet
      which follows the IP header. For a more detailed explanation, take a look at
      the ipsec man page.

      Next, edit /etc/sysctl.conf. The following lines need to be
      modified/added:

      net.inet.esp.enable=1
      net.inet.ah.enable=1
      net.inet.ip.forwarding=1

      > vi /etc/sysctl.conf

      As discussed in my previous blog, Packet Filter is the
      built-in firewall of OpenBSD; we need to make sure this is enabled at boot.
      In the same configuration file, we also need to enable a daemon called
      ISAKMPD. ISAKMPD is the automatic keying daemon which handles the creation
      of our IPSec tunnels, authentication between the hosts, and so on. Full
      details can, unsurprisingly, be found on the isakmpd man page.

      The file in question is /etc/rc.conf.local, which may not exist; if not,
      then create it. The following content needs to be entered:

      pf=YES
      isakmpd=YES

      > vi /etc/rc.conf.local

      We also need to edit /etc/rc.conf to allow ISAKMPD to start
      on bootup:

      isakmpd_flags=""

      That was pretty easy! Next time, we’ll configure ISAKMPD to
      set up our VPN tunnels.

    • #3083042

      New open source standards defined

      by justin fielding ·

      In reply to In my own words…

      Berkeley, Carnegie Mellon,
      the Georgia Institute of Technology, Illinois,
      the Rensselaer Polytechnic Institute, Stanford, and Texas universities have
      united with four major IT suppliers to create a new set of common standards
      for the development of open-source software. It's hoped that these standards
      will serve to protect open-source software projects from third parties
      looking to enforce patents. IBM's senior vice-president of technology and
      intellectual property, John Kelly, hopes that this will lead to greater
      commercial pickup of open-source software.

      • #3082874

        New open source standards defined

        by alcaptony ·

        In reply to New open source standards defined

        Example?

      • #3082861

        New open source standards defined

        by jaqui ·

        In reply to New open source standards defined

        Great!!
        Just what we need: a third set of standards for developing open source software.
        Or is it the fourth set now?

        Why can't all these groups get together and come up with one standard for this instead of creating competing standards?

      • #3082857

        New open source standards defined

        by justin fielding ·

        In reply to New open source standards defined

        I guess it always comes down to some kind of power struggle :-/

    • #3081679

      A tough year for security

      by justin fielding ·

      In reply to In my own words…

      2005 is almost over, and what a tough year it was for both
      network administrators and end users in terms of security. Leaks of
      financial data left over 50 million accounts open to exploitation, phishing
      attacks have been steadily increasing, and botnets have continued to grow.

      While worm attacks have subsided, we are seeing increasing Trojan
      infections taking hold with the help of client-side exploits and
      vulnerabilities (mostly in Internet Explorer). Once infected, most of these
      machines become part of a botnet; these are used for various shady
      activities, including the stealing of sensitive data (credit card details,
      etc.), spamming, and distributed denial-of-service attacks.

      It's not just the Internet underworld who have caused a stir
      this year: Sony BMG came under heavy pressure when it was discovered that
      their DRM (Digital Rights Management) software acted exactly as a malicious
      rootkit would, installing itself without giving any notification to the user
      and leaving no evidence of its activities. Sony BMG has been forced to
      withdraw CDs incorporating the software in question and has proposed a
      settlement for six cases filed against it in the Southern District of New
      York.

      What will 2006 bring us? I won't make any wild predictions; we'll just
      have to wait and see!

      Wishing everyone a happy new year.

    • #3080953

      Setting up VPN tunnels with OpenBSD: Tutorial 2

      by justin fielding ·

      In reply to In my own words…

      In part
      1 of this tutorial on setting up VPN tunnels with OpenBSD, I went over
      authenticating the gateways with automatic (or manual) keying, began editing
      some of the files that need to be modified, and enabled the ISAKMPD daemon.
      Now, let's configure ISAKMPD to set up our VPN tunnels. This is a little
      more complex than the previous steps, so it's quite important that you
      understand what's going on at this point. Two files need to be created or
      modified: /etc/isakmpd/isakmpd.conf and /etc/isakmpd/isakmpd.policy (see the
      man pages for full details on each). Basically, isakmpd.conf is the general
      configuration file for the ISAKMPD daemon, while isakmpd.policy sets the
      acceptable security policy for key exchange.

      Let’s first look at /etc/isakmpd/isakmpd.policy.
      The contents of this file will be the same on both gateway machines. Here are
      the contents of this file:

      Keynote-version: 2
      Authorizer: "POLICY"
      Conditions: app_domain == "IPsec policy" &&
                  esp_present == "yes" &&
                  esp_enc_alg != "null" -> "true";

      Now we can consider /etc/isakmpd/isakmpd.conf; this one will
      differ between the two gateway machines. This file contains instructions on
      which IP address the daemon should listen on, the VPN end points, internal
      network details for each end, encryption types, and of course a pre-shared
      key which will be used to authorize the remote gateway.

      Here are the contents of
      /etc/isakmpd/isakmpd.conf for vpnA:

      # Filter incoming phase 1 negotiations so they are only
      # valid if negotiating with this local address.
      [General]
      Listen-On=              20.1.1.1
      # Incoming phase 1 negotiations are multiplexed on the
      # source IP address. Phase 1 is used to set up a protected
      # channel just between the two gateway machines.
      # This channel is then used for the phase 2 negotiation
      # traffic (i.e. encrypted & authenticated).
      [Phase 1]
      20.1.1.2=           vpnB
      # 'Phase 2' defines which connections the daemon
      # should establish. These connections contain the actual
      # "IPsec VPN" information.
      [Phase 2]
      Connections=            VPN-A-B
      # ISAKMP phase 1 peers (from [Phase 1])
      [vpnB]
      Phase=                  1
      Transport= udp
      Address= 20.1.1.2
      Configuration= Default-main-mode
      Authentication= aaf5dc2122288ff01485329b2f51902d63874a9c
      # IPSEC phase 2 connections (from [Phase 2])
      [VPN-A-B]
      Phase=                  2
      ISAKMP-peer= vpnB
      Configuration= Default-quick-mode
      Local-ID= vpnA-internal-network
      Remote-ID= vpnB-internal-network
      # ID sections (as used in [VPN-A-B])
      [vpnA-internal-network]
      ID-type=                IPV4_ADDR_SUBNET
      Network= 10.1.1.0
      Netmask= 255.255.255.0
      [vpnB-internal-network]
      ID-type=                IPV4_ADDR_SUBNET
      Network= 10.2.1.0
      Netmask= 255.255.255.0
      # Main and Quick Mode descriptions
      # (as used by peers and connections).
      [Default-main-mode]
      DOI=                    IPSEC
      EXCHANGE_TYPE= ID_PROT
      Transforms= 3DES-SHA,BLF-SHA
      [Default-quick-mode]
      DOI=                    IPSEC
      EXCHANGE_TYPE= QUICK_MODE
      Suites= QM-ESP-3DES-SHA-SUITE

      And for vpnB:

      # Filter incoming phase 1 negotiations so they are only
      # valid if negotiating with this local address.
      [General]
      Listen-On=              20.1.1.2
      # Incoming phase 1 negotiations are multiplexed on the
      # source IP address. Phase 1 is used to set up a protected
      # channel just between the two gateway machines.
      # This channel is then used for the phase 2 negotiation
      # traffic (i.e. encrypted & authenticated).
      [Phase 1]
      20.1.1.1=           vpnA
      
      
      # 'Phase 2' defines which connections the daemon
      # should establish. These connections contain the actual
      # "IPsec VPN" information.
      [Phase 2]
      Connections=            VPN-B-A
      # ISAKMP phase 1 peers (from [Phase 1])
      [vpnA]
      Phase= 1
      Transport= udp
      Address= 20.1.1.1
      Configuration= Default-main-mode
      Authentication= aaf5dc2122288ff01485329b2f51902d63874a9c
      # IPSEC phase 2 connections (from [Phase 2])
      [VPN-B-A]
      Phase=                  2
      ISAKMP-peer= vpnA
      Configuration= Default-quick-mode
      Local-ID= vpnB-internal-network
      Remote-ID= vpnA-internal-network
      # ID sections (as used in [VPN-B-A])
      [vpnA-internal-network]
      ID-type=                IPV4_ADDR_SUBNET
      Network= 10.1.1.0
      Netmask= 255.255.255.0
      [vpnB-internal-network]
      ID-type=                IPV4_ADDR_SUBNET
      Network= 10.2.1.0
      Netmask= 255.255.255.0
      # Main and Quick Mode descriptions
      # (as used by peers and connections).
      [Default-main-mode]
      DOI=                    IPSEC
      EXCHANGE_TYPE= ID_PROT
      Transforms= 3DES-SHA,BLF-SHA
      [Default-quick-mode]
      DOI=                    IPSEC
      EXCHANGE_TYPE= QUICK_MODE
      Suites= QM-ESP-3DES-SHA-SUITE
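      A subtle point in the ID sections above: the Network value must be the
      subnet's network address (10.2.1.0), not a host address such as 10.2.1.1,
      or the phase 2 IDs will not match between the gateways. As a purely
      illustrative check (Python, not part of the OpenBSD setup), the ipaddress
      module shows the normalization:

```python
# Given a host address plus netmask, strict=False masks off the
# host bits and yields the network address expected in the ID
# sections of isakmpd.conf.
import ipaddress

net = ipaddress.ip_network("10.2.1.1/255.255.255.0", strict=False)
print(net.network_address)  # -> 10.2.1.0
```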

      Now that these are in place, we need to change the permissions
      and owner/group. If we don't do this, the daemon will refuse to read the
      configuration files. The following commands will do the trick:

      > chown root:wheel /etc/isakmpd/isakmpd.conf
      > chmod 0600 /etc/isakmpd/isakmpd.conf
      > chown root:wheel /etc/isakmpd/isakmpd.policy
      > chmod 0600 /etc/isakmpd/isakmpd.policy
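      A note on the Authentication line in the configurations above: it is a
      pre-shared secret which must be identical on both gateways, and you should
      generate your own rather than reuse the example string. As an illustrative
      sketch (Python; any source of sufficiently long random data will do), a
      40-character hex key of the same shape can be produced like this:

```python
# Generate 20 random bytes and render them as 40 lowercase hex
# characters, the same shape as the example Authentication value.
import secrets

psk = secrets.token_hex(20)
print(psk)
```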

      In the
      next and final instalment, I’ll go over editing the Packet Filter rules.

      • #3080413

        Setting up VPN tunnels with OpenBSD: Tutorial 2

        by gkhael ·

        In reply to Setting up VPN tunnels with OpenBSD: Tutorial 2

        Could there be a possibility to create separate VPNs that come from the same network?
        One for administrative use and the other for user access; the setup would generate two different tunnels.

      • #3078279

        Setting up VPN tunnels with OpenBSD: Tutorial 2

        by justin fielding ·

        In reply to Setting up VPN tunnels with OpenBSD: Tutorial 2

        I think this 'could' be possible; however, I can see
        numerous problems occurring if you tried to implement it with the ISAKMPD
        automatic tunnelling method (two tunnels with the same addressing on each
        side? Routing issues and so on). More to the point, I can't really see why
        you would want to do this or what advantage you would gain. Maybe it would
        be more useful to have PPTP availability on the remote gateway, allowing
        an administrator to gain remote access from any location, while having the
        usual office traffic running through the IPSec tunnel?

    • #3095634

      Secure or insecure, that is the question?

      by justin fielding ·

      In reply to In my own words…

      There seems to have been quite a lot of debate in the security
      arena over the past few days on the topic of Windows vs. Linux as a secure
      platform. As you can guess, it's a hot topic with many heated opinions on
      either side.

      ComputerWeekly states that Linux/Unix bugs outnumbered Windows flaws
      three-to-one, with 45% of all vulnerabilities. These figures were provided
      by CERT; from browsing their website, it isn't clear whether they are funded
      by any party which might sway their independence.

      An article on Slashdot mentions a story from the Globe and Mail, citing a
      security research group. This states that “During August, 67 per cent of
      all successful and verifiable digital attacks against on-line servers
      targeted Linux, followed by Microsoft Windows at 23.2 per cent. A total of
      12,892 Linux on-line servers running e-business and information sites were
      successfully breached in that month, followed by 4,626 Windows servers.”
      The reactions to this have been ongoing and passionate. The biggest
      concerns over this article were, firstly, that the research was funded by
      Microsoft and, secondly, that the context in which these breaches are used
      or recorded can pervert the reality. The number of Linux servers running
      e-business and information sites is much greater than that of Windows;
      also, taken in a broader context, each Windows machine (be that a server or
      home user) which is infected by an Internet worm like Slammer, Nimda, Code
      Red, Bugbear, or Blaster is actually fully breached and potentially much
      more of a concern than a defaced or broken website.

      I think, as with any statistics, these will always show what the
      organisation behind their collection and processing wants to show. I don't
      think there will ever be universal agreement on which system is better,
      which is more secure, etc. Having worked with both, I would personally go
      for Linux/BSD; however, I'm sure many readers would go the other way.

      • #3096406

        Secure or insecure, that is the question?

        by davemori ·

        In reply to Secure or insecure, that is the question?

        I take them with a large dose of skepticism.  They are interesting figures, but nobody really knows for sure, and as you pointed out, research funded by specific vendors tends to come to conclusions that are acceptable to the vendor.

        I remember the 2003 Total Cost of Ownership for Windows 2000 Servers versus LinUX study funded by Microsoft and executed by Gartner Group.  Windows server won (surprise) but only when you factored in keeping a Windows 2000 server on line as-is for five straight years without upgrades. 

        A truly successful attack might not be found for some time.  It depends upon how they attacked (DoS and alterations of the web site content tend to be discovered rapidly, copying of confidential data may not get discovered for a long while) 

        Not all attacks get reported, therefore, even organizations like CERT don’t have complete stats.

        Enterprise IT is often reluctant to report, because it announces to the world that they have vulnerabilities, and this can lead to follow up attacks.  Announcement of attacks is also politically embarrassing within a corporation or a hosting company.

        The size of a company also influences whether or not they report.  Individual user PCs and small to medium businesses tend to dramatically underreport, often because they don’t know how to engage CERT, etc.  What is reported is apt to just be the tip of a proverbial iceberg.

        Limiting the analysis and reporting to just intrusion and just servers and ecommerce systems is also not a representative analysis of how secure or how vulnerable a system is.  The number of desktop PCs that get infected by viruses and worms each week has to definitely be greater for Windows than LinUX — if for no other reason, just because there are more desktop/laptop Windows PCs and more Windows viruses and malware than there are for LinUX.  Also, no one can convince me that Internet Explorer is secure and that the Microsoft Java VM is without issues, even if the operating system can have vulnerabilities patched.

        The other issue is that hacking into a system is not the only way to compromise the data.  People steal backup tapes, etc. all the time.

        To me, secure means the server, the desktop, the SAN, backup and archives, the whole darn thing.  Despite the bragging and posturing on all sides, the truth is that both Windows and LinUX are not as secure as they should be if you continually have to patch and buy products and services to make that system secure.

        I sincerely doubt that there is any such thing as a completely secure system.  Even a 100% standalone system with ridiculously sophisticated physical, hardware and software based security can still have security breaches.  It happens all the time.  In the end, you can never adequately protect against deliberate sabotage by an authorized employee or authorized user.

        Hackers tend to go after (1) what they know, (2) what they like to go after, (3) systems that are likely to have something interesting on it.   The Windows hackers probably vastly outnumber the LinUX hackers, but there were an awful lot of third world countries with very knowledgeable AT&T and Sun UNIX hackers during the days when UNIX was only on large iron.   That knowledge would have been very portable to LinUX.


    • #3080438

      Mobile Computing

      by justin fielding ·

      In reply to In my own words…

      While browsing the net, I came across some great mini-PCs made
      by a company called SD-Omega. Various flavours of microscopic PCs are
      offered, from Celeron-based ‘book size’ systems to much more powerful
      Pentium 4-based systems running at over 3GHz! While I doubt these would make
      for very practical desktop systems, they could make great media PCs;
      combined with a touch-screen monitor and software like GpsDrive, a great
      in-car navigation and entertainment system could be knocked together!

    • #3078282

      Setting up VPN tunnels with OpenBSD: Tutorial 3

      by justin fielding ·

      In reply to In my own words…

      In Part 2 of the tutorial, I took you through the configuration of files for the ISAKMPD daemon. Now, all that’s left to do is edit our Packet Filter rules so that the VPN traffic can pass through. I’ll assume we are starting from a blank rule set. We only allow traffic to pass from one gateway to the other, not to any hosts on the internet. All traffic between the two private networks and the two gateways needs to be passed; this will be on device enc0 (IPsec tunnel). UDP traffic between the two gateways will be allowed on port 500 (the key exchange) and encrypted data will need to be passed between the two gateways (ESP protocol).

      pf.conf for vpnA:

      GATEWAY_A = "20.1.1.1"
      GATEWAY_B = "20.1.1.2"
      NETWORK_A = "10.1.1.0/24"
      NETWORK_B = "10.2.1.0/24"

      int_if="le1"
      ext_if="le2"

      # default deny
      # $ext_if is the only interface going to the outside.
      block log on { enc0, $ext_if } all

      # Pass encrypted traffic to/from security gateways
      pass in proto esp from $GATEWAY_B to $GATEWAY_A
      pass out proto esp from $GATEWAY_A to $GATEWAY_B

      # Need to allow ipencap traffic on enc0.
      pass in on enc0 proto ipencap from $GATEWAY_B to $GATEWAY_A

      # Pass traffic to/from the designated subnets.
      pass in on enc0 from $NETWORK_B to $NETWORK_A
      pass out on enc0 from $NETWORK_A to $NETWORK_B

      # Pass isakmpd(8) traffic to/from the security gateways
      pass in on $ext_if proto udp from $GATEWAY_B port = 500 to $GATEWAY_A port = 500
      pass out on $ext_if proto udp from $GATEWAY_A port = 500 to $GATEWAY_B port = 500

      pf.conf for vpnB:

      GATEWAY_A = "20.1.1.1"
      GATEWAY_B = "20.1.1.2"
      NETWORK_A = "10.1.1.0/24"
      NETWORK_B = "10.2.1.0/24"

      int_if="le1"
      ext_if="le2"

      # default deny
      # $ext_if is the only interface going to the outside.
      block log on { enc0, $ext_if } all

      # Pass encrypted traffic to/from security gateways
      pass in proto esp from $GATEWAY_A to $GATEWAY_B
      pass out proto esp from $GATEWAY_B to $GATEWAY_A

      # Need to allow ipencap traffic on enc0.
      pass in on enc0 proto ipencap from $GATEWAY_A to $GATEWAY_B

      # Pass traffic to/from the designated subnets.
      pass in on enc0 from $NETWORK_A to $NETWORK_B
      pass out on enc0 from $NETWORK_B to $NETWORK_A

      # Pass isakmpd(8) traffic to/from the security gateways
      pass in on $ext_if proto udp from $GATEWAY_A port = 500 to $GATEWAY_B port = 500
      pass out on $ext_if proto udp from $GATEWAY_B port = 500 to $GATEWAY_A port = 500

      Ok that’s the firewall sorted out. One more edit, and we should be done. We want to add a route from network 10.1.1.x to 10.2.1.x and vice-versa, and we want this to happen automatically each time the host is rebooted. To make sure this happens, we need to add the command to /etc/rc.local.

      For vpnA:

      route add -net 10.2.1 10.1.1.1

      For vpnB:

      route add -net 10.1.1 10.2.1.1
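      Put together, vpnA's /etc/rc.local would simply end up containing
      something like the following (a sketch; adjust the network and gateway
      address to your own setup):

```shell
# /etc/rc.local on vpnA (sketch): route traffic for the remote
# internal network via the local internal address so that it
# enters the IPsec flow
route add -net 10.2.1 10.1.1.1
```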

      Right, that should cover everything. Reboot both gateways, and you should be able to ping the opposing network.

      Don’t forget, if you don’t want to use OpenBSD as your main gateway, but you would like to use it for VPN tunnelling, this should not pose a problem. Simply make sure the correct port redirections are made on your firewall and put a route for the remote network on your current gateway which points to the OpenBSD host.

      It would be great to hear whether anyone found this guide useful. If so, I will also write a guide to getting other essential services like PPTP, DNS, and DHCP running on the OpenBSD platform.

      • #3099212

        Setting up VPN tunnels with OpenBSD: Tutorial 3

        by eclypse ·

        In reply to Setting up VPN tunnels with OpenBSD: Tutorial 3

        We’ve been using OpenBSD to do this for several years now and it works very well. I like the way they focus on security and that you have everything you need for a firewall/DNS/DHCP/VPN/www server that is secure right out of the box. The only problems we used to have were that the routing or arp tables would get hosed (it would just be unable to add a new arp entry or a new route) and you’d have to reboot the box every few weeks, but this seems to have been corrected in 3.7., so we have a pretty stable environment. Plus they support hardware crypto cards that are pretty cheap to obtain, so this makes OpenBSD a great fit for the essential network services that need to just run and not give you any hassle. =)

    • #3079387

      IBM alliance expands

      by justin fielding ·

      In reply to In my own words…

      Novell and Red Hat have joined IBM's strategic alliance, IBM's
      highest-tier partner status. IBM has said that the strengthened alliance
      will allow users to obtain standards-based Linux hardware, software, and
      services through integrated channels. Customers will be able to purchase
      one- and three-year support subscriptions for the Linux operating system on
      both IBM and non-IBM hardware. Suse Linux Enterprise Server will certify
      IBM's version of Apache Geronimo, an open-source J2EE application server,
      distributed as WebSphere Community Edition.

    • #3077353

      Fedora Core 5 will include Mono

      by justin fielding ·

      In reply to In my own words…

      Despite opposition from Red Hat, it has been announced that
      Fedora Core 5 will include both the Mono runtime and Mono applications such as F-Spot and Beagle.  Mono provides a base for developing and running .NET
      applications on Linux and other Unix-based operating systems (Mac OS X, Solaris,
      etc.) and is fast becoming a leading choice for application developers.  With the upcoming Windows Vista making
      increasing use of the .NET framework, running Windows applications on a
      Linux platform could become a realistic option in the future.

      • #3109971

        Fedora Core 5 will include Mono

        by bwogi ·

        In reply to Fedora Core 5 will include Mono

        It had better be true. In fact, it's amazing to see how beautifully fast the technology is moving. Forget about the wars; they're very healthy.

      • #3109950

        Fedora Core 5 will include Mono

        by justin fielding ·

        In reply to Fedora Core 5 will include Mono

        Yes, it's a good thing; I was really impressed with some of the Mono applications I've tried.

    • #3079079

      Remote Access: PPTP VPN with OpenBSD Tutorial, part 1

      by justin fielding ·

      In reply to In my own words…

      Following up on my previous series on implementing VPN
      tunnels with OpenBSD, I thought I should cover the configuration of another VPN
      implementation, PPTP. PPTP stands for ‘Point to Point Tunnelling Protocol.’ This
      allows users to ‘dial-in’ to access files or services on the internal corporate
      network, from any Internet connection. The great thing about PPTP versus other
      remote ‘dial-in’ types of VPN is that Microsoft Windows
      (95/98/Me/NT/2000/XP/Vista) has a PPTP client built in, which means
      administrators don’t have to deal with any additional client software and the
      problems that normally accompany it.

      By far the most popular Open-Source PPTP server offering is Poptop. Poptop has the following features:

      • Microsoft compatible
        authentication and encryption (MSCHAPv2, MPPE 40 – 128 bit RC4 encryption)
      • Support for multiple client
        connections
      • Seamless integration into a
        Microsoft network environment (LDAP, SAMBA) using RADIUS plugin
      • Works with Windows
        95/98/Me/NT/2000/XP PPTP clients
      • Works with Linux PPTP client
      • Poptop is, and will remain,
        totally free under the GNU General Public License

      While there isn't an OpenBSD download on the Poptop project
      page, a port of Poptop is made available in the OpenBSD
      packages archive. I'm going to run through installing and configuring Poptop on
      an almost clean OpenBSD 3.7 installation; in fact, it's the exact same system
      which I have just used in the IPSec tutorials.

      I found the Poptop package here.
      While I should use the UK mirror, it's slow and often incomplete;
      the German mirror sites are usually fast and exact! Note that this is
      the package for OpenBSD 3.7. If you're using another release of OpenBSD,
      be sure to get the package from the correct branch. I don't think there
      would be a problem, but the packaging system may have been modified
      between releases.

      Getting Poptop running is not as simple as it initially
      sounds. We have to go through the following process:

      1. Recompile
        BSD Kernel for GRE support and additional tun devices.
      2. Create
        additional tun devices.
      3. Install
        package.
      4. Configure
        Poptop to run with full strength encryption.
      5. Allow
        Poptop traffic through the firewall.

      I know recompiling the kernel can sound quite scary to
      someone who hasn't done it before; it did to me. This was required when I
      first performed a Poptop installation on OpenBSD 3.6. I don't know if it's
      still required, but as far as I can tell it is (if anyone knows otherwise,
      please let me know!). You don't need to do this for every system built: I did
      it the first time and then kept a copy of the new kernel to use on later
      installs.

      The following process is just one way in which Poptop can be
      configured; I'm sure there are others. I found this quite difficult the
      first time, with various mailing lists and forum posts giving conflicting
      information. Hopefully, this guide brings all of the correct information
      together in one place.

      First of all, copy and unzip the system source files to your
      /usr/src directory. I won't go into too much detail explaining simple
      actions like this; I'm assuming by now most people following these tutorials
      are pretty comfortable performing basic operations in BSD. The source is
      in the files src.tar.gz and sys.tar.gz, either located on your
      installation CD or downloaded from the OpenBSD FTP servers.

      # tar -xzf src.tar.gz -C /usr/src/
      # tar -xzf sys.tar.gz -C /usr/src/

      Move to the platform independent config directory and create
      a copy of the GENERIC config file:

      # cd /usr/src/sys/conf
      # cp ./GENERIC ./Custom-Poptop-build

      Now we need to edit the config:

      # vi ./Custom-Poptop-build

      First comment out the inbuilt GRE support:

      #pseudo-device  gre            # GRE encapsulation interface

      Secondly, increase the number of tun devices to match the
      maximum number of concurrent users you expect to have connected. I have set
      this to 50, which is far more than I will ever need (I would say 10 is enough
      for my needs):

      pseudo-device   tun     50       # network tunneling over tty
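      If you'd rather script these two edits than make them by hand in vi, a sed invocation along the following lines should do it. This is only a sketch: it runs against a stub copy holding just the two relevant lines, standing in for the real /usr/src/sys/conf/Custom-Poptop-build:

      ```shell
      # Stub standing in for /usr/src/sys/conf/Custom-Poptop-build
      cat > /tmp/Custom-Poptop-build <<'EOF'
      pseudo-device   gre             # GRE encapsulation interface
      pseudo-device   tun             # network tunneling over tty
      EOF

      # Comment out the in-kernel GRE device and raise the tun device count to 50
      sed -e 's/^pseudo-device[[:space:]]*gre/#&/' \
          -e 's/^\(pseudo-device[[:space:]]*tun\)[[:space:]]/\1     50  /' \
          /tmp/Custom-Poptop-build > /tmp/Custom-Poptop-build.new

      cat /tmp/Custom-Poptop-build.new
      ```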

      Now let's rebuild the kernel. We need to create a copy of the
      platform-dependent configuration file:

      # cd /usr/src/sys/arch/i386/conf
      # cp ./GENERIC ./Custom-Poptop-build

      Edit this config file to point to the previously modified platform-independent
      config:

      # vi ./Custom-Poptop-build

      Replace:

      include "../../../conf/GENERIC"

      With:

      include "../../../conf/Custom-Poptop-build"

      Now start the building process:

      # config ./Custom-Poptop-build
      # cd ../compile/Custom-Poptop-build
      # make depend && make

      Hopefully you won't have any nasty errors thrown up. Once
      the build process has completed, you should find the new kernel (the file is
      simply named 'bsd'), roughly 4.9MB in size. Let's now replace the default kernel:

      # cp /bsd /bsd.old
      # cp ./bsd /bsd

      Now a reboot will verify that all is working okay. After
      logon, you should see the name of your new kernel (Custom-Poptop-build) to the
      right of the timestamp. Well, that's the kernel recompiled; it wasn't as tricky
      as it sounds, was it? That's enough for one installment. In the next one, we'll
      continue with creating the additional tun devices that you'll need, and then
      actually installing and configuring the Poptop package.

    • #3099555

      Life with Nagios

      by justin fielding ·

      In reply to In my own words…

      Just an update on my experience so far with Nagios.  After playing around with various front-ends
      for easy configuration (which I previously mentioned), I wasn't happy with the
      setup they produced and found I was often forced to set up functions which I
      didn't want, just to stop the configuration tool from throwing up errors and
      let me proceed.  Back to square one, I
      had to bite the bullet and set aside quite a bit of time to properly read the
      manuals and fully understand the configuration process. Lots of time and boring
      manuals, but in the end well worth it.  I
      now have Nagios configured to monitor quite a few of our servers and alert when
      various system variables reach warning or critical levels.  This has proven very useful for pinpointing
      intermittent problems with our mail server, with alerts coming in on
      mail queue size, number of processes, memory usage, and so on; it really helps
      give a wider view of the big picture and lets me know exactly what to look for
      in the system logs.  I have even found that I
      can fix some problems pre-emptively, before users notice; examples include
      low disk space and a jammed mail queue due to spam attacks.
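      As an illustration of what the configuration boils down to once understood, a minimal service definition might look like the following. The host name, thresholds, and contact group here are hypothetical examples rather than my actual setup; check_mailq is one of the standard Nagios plugins:

      ```
      define service {
              host_name               mailserver1
              service_description     Mail Queue Size
              check_command           check_mailq!50!100
              max_check_attempts      3
              check_period            24x7
              notification_period     24x7
              contact_groups          admins
              }
      ```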

      Overall, I would say don't be put off by what at first looks to
      be a complex configuration; once understood, it's pretty simple, and the rewards
      are well worth the initial effort.  Has
      anyone else had success with Nagios?

      • #3099401

        Life with Nagios

        by jdpadro ·

        In reply to Life with Nagios

        Question, how long did it actually take to complete the installation? And which distribution did you use for the implementation?

      • #3099933

        Life with Nagios

        by justin fielding ·

        In reply to Life with Nagios

        I decided to go with Ubuntu for the base system; using apt makes for easy installation of apache/php plus updates. I have really started to like this distribution.

        The actual Nagios install (i.e. installed and set up to the point of it monitoring itself and one other host with the nrpe plugin) took me roughly one working day, maybe 6-7 hours; however, that was starting with the example config files, then breaking everything down and creating my own config structure with host and service directories, which means adding new hosts etc. is much easier.  If I did it again it would probably take a couple of hours; starting with my current layout as a base, even faster than that.

      • #3094230

        Life with Nagios

        by bobby1041 ·

        In reply to Life with Nagios

        I’ve had a great experience with Nagios.  The only issue that I have with Nagios is adding comments or scheduling downtime…I cannot fully get those features to work…I get the following error “Sorry, but you are not authorized to commit the
        specified command. Read the section of the documentation that deals with authentication and
        authorization in the CGIs for more information.”
         

        But hey, other than that I have it running on a Suse Linux 9.3 Pro, PIII 700mhz, 256 mb ram.  It is monitoring 11 hosts and 35 services.  It is a great monitoring tool, but you just have to put a good day or more into getting it up and running and configure hosts and services.  

        The most critical service that I check is both of our company websites, the check_http command does this and what I do is have it alert me on my Verizon cell text email address.  This gives you the ability to be advised and fix issues before your boss notices them.

      • #3094161

        Life with Nagios

        by justin fielding ·

        In reply to Life with Nagios

        Exactly, our smtp server is being a bit temperamental at the
        moment, mainly due to frequent attempts by spammers to throw junk at it.
         Nagios has meant that we can be alerted and then deal with the problem
        before any of our end users notice.  We are of course working on a
        replacement smtp implementation with greater protection from such attacks,
        however in the mean time at least our users have no perceived downtime 🙂

    • #3259696

      WiFi speeds up

      by justin fielding ·

      In reply to In my own words…

      Although many hardware manufacturers are offering 108Mbps
      products, these in practice rarely operate above 50Mbps (TomsNetworking
      tests).  High speeds are currently
      offered using MIMO (Multiple Input Multiple Output) or similar technologies,
      which basically means the devices use multiple channels to send and receive
      data.  This is pretty controversial, as it means more traffic in busy areas,
      and is largely seen as marketing spin to keep people upgrading while the real
      benefits are questionable, especially as the various manufacturers' standards
      differ and are incompatible with each other.

      802.11n is now on its way; this will offer a true standard
      for high-speed WiFi, with Broadcom announcing that its Intensi-fi chips are now
      available for sampling. These have a maximum throughput of 300Mbps!  Marvell, another manufacturer of WiFi chips,
      has announced that it also has 802.11n chips ready for testing; these can
      potentially work at up to 600Mbps!

      Speeds like this will truly open up WiFi as a realistic
      alternative to wired networks for high-bandwidth applications, Voice over IP,
      and even video conferencing.

    • #3257600

      Remote Access: PPTP VPN with OpenBSD, part 2

      by justin fielding ·

      In reply to In my own words…

      In last week’s tutorial installment on PPTP VPN, we recompiled the kernel. The next step is to create the additional tun devices and finish installing and configuring Poptop.

      Let’s get started: tun0 through tun3 exist by default, so create additional devices with the following:

      # cd /dev
      # sh ./MAKEDEV tun?

      Where ? is the device number; I need to go through from tun4 to tun49 to create the 50 concurrent devices I enabled in the kernel.
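      Typing the MAKEDEV command 46 times is tedious, so a small loop can generate the commands instead. This is only a sketch; it prints the commands, which you would then run (or pipe to sh) as root from within /dev:

      ```shell
      # Emit the MAKEDEV invocations for tun4 through tun49.
      # Pipe the output to sh from within /dev to actually create the nodes.
      i=4
      while [ "$i" -le 49 ]; do
          echo "sh ./MAKEDEV tun$i"
          i=$((i + 1))
      done
      ```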

      Flying along now, we can get down to installing the Poptop package. Download the package from the repository of your choice and install with:

      # pkg_add poptop-1.1.4.b4p1.tgz

      A few errors are shown but they aren’t anything to worry about. Let’s get down to the Poptop configuration. The first file to edit is /etc/pptpd.conf:

      option /etc/ppp/ppp.conf
      # IP address of your server-side PPP endpoint:
      # (An unused IP address on your internal LAN)
      localip 20.1.1.2
      # IP address range to use for your PPTP clients:
      # (Unused IP addresses on your internal LAN)
      remoteip 20.1.1.200-250
      # IP address of external LAN interface:
      # (The IP which a remote user's client will connect to)
      listen 10.21.7.63
      pidfile /var/run/pptpd.pid

      Now /etc/ppp/ppp.conf needs to be configured to handle encryption via a loopback:

      loop:
            set timeout 0
            set log phase chat connect lcp ipcp command
            set device localhost:pptp
            set dial
            set login
            set mppe * stateful
            # Server (local) IP address, Range for Clients, and Netmask
            # Use the same IP addresses you specified in /etc/pptpd.conf :
            set ifaddr 20.1.1.2 20.1.1.200-20.1.1.250 255.255.255.255
            set server /tmp/loop "" 0177
      loop-in:
           set timeout 0
           set log phase lcp ipcp command
           allow mode direct
      pptp:
           load loop
           # Disable unsecured auth
           disable pap
           disable chap
           enable mschapv2
           disable deflate pred1
           deny deflate pred1
           disable ipv6
           accept mppe
           enable proxy
           accept dns
           # DNS Servers to assign client
           # Use your own DNS server IP address :
           set dns 20.1.1.100
           # NetBIOS/WINS Servers to assign client
           # Use your own WINS server IP address :
           set nbns 20.1.1.100
           set device !/etc/ppp/secure

      We need to create the file /etc/ppp/secure and add the following content:

      #!/bin/sh
      exec /usr/sbin/ppp -direct loop-in

      Make the file executable after creation:

      # chmod u+x /etc/ppp/secure

      The file /etc/ppp/ppp.secret holds usernames and passwords for your dial-in users. The format is quite simple:

      username       password       *
      username       password       staticip
      username       password       *

      This file needs to have chmod 0400 performed on it after editing. The * denotes that this user will be automatically allocated a free IP address; you can alternatively specify a static address for this user.

      It's nice to have PPP log messages sent to their own log file, as this makes debugging easier and keeps things tidy. Add the following lines to /etc/syslog.conf :

      !ppp
      *.*                    /var/log/ppp.log

      Remember to create ppp.log and reload syslogd:

      # touch /var/log/ppp.log
      # kill -HUP (syslogd PID)

      Just as a hint, find the syslogd process ID with ps aux. There will be two syslogd processes running, so you need to use the one running as root.

      Poptop can be launched manually; the -d switch enables debug output.

      # /usr/local/sbin/pptpd -d 

      To start Poptop automatically during boot, the following lines should be added to /etc/rc.local:

      if [ -x /usr/local/sbin/pptpd ]; then
          echo -n " pptpd";    /usr/local/sbin/pptpd -d
      fi

      I would recommend doing this, as it would be easy to forget to start the daemon after rebooting, and it takes no effort to set up.

      Our last consideration is the firewall (Packet Filter). We need to allow inbound tcp connections on port 1723 on the external IP, inbound and outbound connections of type gre on the external IP, and also all traffic to tun* devices:

      # PPTP Rules (VPN Dial in)

      pass in quick on $ext_if proto tcp from any to $ext_if port = 1723 modulate state
      pass in quick on $ext_if proto gre from any to $ext_if keep state
      pass out quick on $ext_if proto gre from $ext_if to any keep state
      pass in quick log on tun0 all
      pass out quick log on tun0 all
      pass in quick log on tun1 all
      pass out quick log on tun1 all

      Now all that’s left is to test it. Reboot the machine to make sure that everything is started cleanly. Now, we just need to create a PPTP client connection and make sure it actually connects.

      I’m using Windows XP as an example. Start the New Connection Wizard, and select the option ‘Connect to the network at my workplace’. The next option to select is ‘Virtual Private Network connection’ rather than Dial-up connection. Enter any name for the connection; the suggestion is ‘Company Name’. There is an option at this stage to have an initial connection dialed before making the VPN connection. I prefer to disable this option, but the choice is yours. At the next step, enter the IP address or hostname of your Gateway machine; this is the address seen by the outside world. In our example, this is 10.21.7.63, the IP specified in /etc/pptpd.conf with the listen directive.

      That’s the final step. Initiate the connection and enter a username/password from the ppp.secret file.

      Once the connection is made you should be able to find your locally allocated IP in the VPN Status window, and you should also be able to ping an internal address (in my example 20.1.1.1 responds just fine).

      I hope this has been an easy to follow guide to configuring PPTP access using OpenBSD and Poptop. If you have any problems following this guide then let me know.

      • #3271483

        Remote Access: PPTP VPN with OpenBSD, part 2

        by cruel_symbol ·

        In reply to Remote Access: PPTP VPN with OpenBSD, part 2

        How do I set up Poptop so that only a single instance of a username/password can exist at a given time? What I mean to say is: let's say in the ppp.secret file I have username JOHN with password PASSWORD. While a client is connected using the JOHN/PASSWORD combination, another client cannot use that username/password combination.

      • #3228984

        Remote Access: PPTP VPN with OpenBSD, part 2

        by jbordeau ·

        In reply to Remote Access: PPTP VPN with OpenBSD, part 2

        Just to add some information about configuring OpenBSD 3.9:

        Add to /etc/sysctl.conf : net.inet.gre.allow=1

        Then reboot, or run "sudo sysctl -w net.inet.gre.allow=1"

        You will probably see some warning messages about IPv6 ... don't worry.

      • #3227138

        Remote Access: PPTP VPN with OpenBSD, part 2

        by justin fielding ·

        In reply to Remote Access: PPTP VPN with OpenBSD, part 2

        Yes that’s right, and the kernel only needs to be recompiled to increase the number of tun devices– the GRE modifications are no longer needed.

      • #3227122

        Remote Access: PPTP VPN with OpenBSD, part 2

        by jbordeau ·

        In reply to Remote Access: PPTP VPN with OpenBSD, part 2

        Thanks Justin,

        With your sample, it's all right! Thank you again!

        Also, if you add a PF configuration, you need to add a rule to permit the PPTP IP range to reach the internal network.

        To complete this sample, try FWBUILDER! http://www.fwbuilder.org

        Very nice interface, and it runs from a Windows GUI with Putty, much like the FW1 GUI.

    • #3257551

      Technical hitch!

      by justin fielding ·

      In reply to In my own words…

      As you have probably already guessed, we are having a few
      technical hitches with the blog software.
      The first part of the PPTP tutorial seems to have vanished; I'll shortly
      post a link to the entire article, published on TechRepublic.com.

    • #3094158

      Keep an eye on things?

      by justin fielding ·

      In reply to In my own words…

      I came across a very useful little set of perl scripts
      packaged together as 'Logwatch' (http://www2.logwatch.org:81/index.html).
       This wonderful set of tools will parse
      your logs and then email a daily report with all of the important statistics
      and data in one place.  The package is
      easily configured and can be used to parse archived logs too, meaning even high-throughput
      systems (e.g. email) can be monitored and reported on.  I found this while snooping around for an
      application or script to extract useful information and statistics from our
      postfix logs, due to some stability issues we have been experiencing.

      There are loads of similar log-analysing programs out there;
      I would be interested to hear people's recommendations.
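      To give a flavour of the kind of summarising these tools automate, here is a minimal hand-rolled sketch: an awk tally of Postfix delivery statuses. The log lines below are fabricated samples; a real log would live somewhere like /var/log/maillog:

      ```shell
      # Fabricated sample of Postfix smtp log lines
      cat > /tmp/maillog.sample <<'EOF'
      Feb  1 10:00:01 mail postfix/smtp[123]: 0A1B2: to=<a@example.com>, status=sent (250 ok)
      Feb  1 10:00:05 mail postfix/smtp[124]: 0C3D4: to=<b@example.com>, status=deferred (connection timed out)
      Feb  1 10:00:09 mail postfix/smtp[125]: 0E5F6: to=<c@example.com>, status=sent (250 ok)
      EOF

      # Count how many times each delivery status appears
      awk 'match($0, /status=[a-z]+/) {
              s = substr($0, RSTART + 7, RLENGTH - 7)
              count[s]++
          }
          END { for (s in count) print s, count[s] }' /tmp/maillog.sample
      ```

      Logwatch's Postfix service produces a far richer version of the same idea, already formatted for a daily e-mail.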

      • #3109052

        Keep an eye on things?

        by carol ·

        In reply to Keep an eye on things?

        This link does not work; the page does not come up, server cannot be found. I would like to see this article and PERL code and resulting report.

      • #3108821

        Keep an eye on things?

        by justin fielding ·

        In reply to Keep an eye on things?

        Link works for me!

    • #3094152

      PPTP Update

      by justin fielding ·

      In reply to In my own words…

      After those technical hiccups you can now find the entire
      PPTP series in one place:

      Learn to install and configure a PPTP VPN connection with open source Poptop

    • #3109479

      Unison

      by justin fielding ·

      In reply to In my own words…

      While reading through one of my regular industry papers, I
      came across the file synchronisation tool Unison. Unison, created by Benjamin
      Pierce of the University of Pennsylvania, is a great tool for synchronising
      two copies of a directory structure in different locations. It holds a massive
      advantage over simple tools like rsync: the ability to propagate
      changes made to both structures (so long as there is no conflict).

      We have an ongoing CRM project which requires extensive data
      synchronisation across many sites; this has proven to be a bit of a nightmare,
      with complex scripting based around rsync.  It
      will be interesting to see if Unison can simplify the process.
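      For reference, pointing Unison at two replicas is usually just a matter of a small profile; a sketch, with hypothetical paths, host name, and profile name:

      ```
      # ~/.unison/crm.prf: two-way sync of a CRM share between two sites
      root = /srv/crm-data
      root = ssh://site2.example.com//srv/crm-data
      # run unattended, accepting all non-conflicting changes
      batch = true
      ```

      Running 'unison crm' would then propagate changes in both directions, stopping only on genuine conflicts.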

    • #3109188

      Public sector overtakes Financial services?

      by justin fielding ·

      In reply to In my own words…

      In 2004, the IT contractor market grew by an estimated 30%,
      and 2005 was predicted to follow suit, with high demand in financial
      services, telecoms, and, most notably, the public sector
      (government). IT contractors used to have a reputation as loner
      geeks-for-hire, whose only business value was their ability to keep the
      complex systems running for the "real" business users. Today, that
      perception has changed. Businesses now see IT contractors as much more
      essential to the organization as a whole. They are valued as workers
      with specialized skills that complement in-house resources and have a
      significant impact on the organization's strategy and commercial
      goals.

      Traditionally, the public sector has not been able to match the
      private sector (particularly financial services) in terms of salary;
      recently, however, this has changed. Due to increased pressure to
      complete high-profile projects, such as the UK's National Programme for IT,
      the public sector has upped the stakes and closely matched the
      offerings of private sector employers. Combine this with the
      opportunities available in the public sector (the high-profile projects
      which would look good on anyone's CV, increased flexibility, and
      greater diversity) and the public sector looks more and more inviting! Giant Group
      has specialised in limited company development and umbrella payroll
      services for the contractor market since 1992. Managing director
      Matthew Brown comments: "The financial services sector has
      traditionally been the principal growth driver for the industry. When
      things start moving there, times are generally good, but what is
      particularly encouraging about the current market is that the public
      sector, which tends to be more insulated from cyclical trends, is
      providing additional ballast."

      Research carried out by Giant suggests that the public sector has
      now overtaken even the financial services sector, retaining 23% of IT
      contractors compared to 21%. This may not be a lasting trend, however,
      with 28% of contractors identifying the financial services industry as
      a likely mover, compared to 19% favouring the public sector. This could
      be a telling sign of things to come, with Basel II predicted to
      increase demand in the financial sector for 2006. As previously
      mentioned, the telecoms sector remains a key player, with 15% of
      contractors speculating that it will generate the biggest number of
      new contracts in the coming year. Consumers' appetite for high-speed
      internet and mobile services is still increasing, and 3G is becoming
      more widely accepted, with a much better range of handsets filtering
      into the market.

      All in all, things are looking interesting, and the competition
      between these major sectors has driven wages upwards. Let's see what
      surprises 2006 has in store!

    • #3135221

      All talk?

      by justin fielding ·

      In reply to In my own words…

      Most of you will have noticed in the last few days that the Kama Sutra
      worm (AKA Nyxem-E or Blackmal) has been hyped as the first big
      security threat of 2006.  The worm, like most others, tries to spread
      itself by scanning an infected user's contacts and mailing itself out
      to them; it also spreads across unsecured shares and tries to disable
      firewall/anti-virus products.  Pretty standard stuff. The worm
      destroys DOC, XLS, MDB, MDE, PPT, PPS, ZIP, RAR, PDF, PSD and DMP
      files on the 3rd of each month by replacing any data with the string
      “DATA Error [47 0F 94 93 F4 K5]”.  Because we have quite a few roaming
      users, moving between multiple international offices, hotels and so
      on, I was expecting at least one or two infected users.

      We actually had no reported infections or any strange behaviour; is
      all of this talk simply scaremongering on the part of anti-virus
      firms?  Has anyone had problems with this worm, or any similarly hyped
      “outbreaks”?

      • #3135190

        All talk?

        by master3bs ·

        In reply to All talk?

        I haven’t seen any infections here.  I keep our systems up to date, but there are a handful of laptops that could have been affected.  Nothing so far.

      • #3133017

        All talk?

        by apotheon ·

        In reply to All talk?

        I certainly haven’t seen any infections at home. Then again, I don’t have any Windows systems running at home.

    • #3096804

      Firefox has problems?

      by justin fielding ·

      In reply to In my own words…

      Firefox 1.5 has been officially released; my older version of Firefox
      notified me and downloaded the update by itself.  To be honest there
      isn't much difference from previous versions in day-to-day use;
      however, there are some problems.  Since the update my notebook has
      inexplicably crashed, and quite often it slows down considerably.  It
      took me a little while to attribute these problems to Firefox; it was
      only while trying to track back and see what might have changed that I
      realised it could be the browser.  Reading the support forums and
      newsgroups, it seems many others are having similar problems.  If you
      are using the previous version without problems, then I wouldn't
      bother upgrading just yet!

      • #3096756

        Firefox has problems?

        by kevinmsmith22 ·

        In reply to Firefox has problems?

        My only issue with Firefox 1.0.x is that sometimes when you shut it
        down (“X” it out) it leaves a copy active; this happens mostly
        when I use Java apps. When you restart the browser you are asked to
        choose a profile, since the default one is in use. The only fix is to
        stop it via Task Manager. I've found this in all versions from 1.0.4.
        Annoying? Yes. Make me stop using it? No. I still prefer it to MSIE.

        Kev

      • #3096716

        Firefox has problems?

        by plynn ·

        In reply to Firefox has problems?

        For Ffox, they advised people to make a backup of their bookmarks and to remove the older version before installing the newest one. The main problem people had was that the “add-ons” were not all compatible.

        I’ve been using Firefox for a few years now & prefer it to IE, as well. I’ve been interested in Ff & the Mozilla platforms, too, so have been “lurking” in their forums. I’ve joined several & can post questions, as well. There are always a few people online who are willing to help with answers. Each time I’ve posted, I’ve had a reply within an hour!

        I’ve made a habit of not upgrading anything when a new version first comes out, as there will always be some kind of problem that will show up, no matter who makes the product! If I’m interested in something, I try to research it first & then make the decision according to the reports I’ve read about it.

        Another habit of mine that hasn’t failed me yet … use an upgrade disk as a fresh install instead of adding it on top of the older program. It will normally ask for the original disk for confirmation, but that will be all it wants & will install a healthy, clean version with no problems.

      • #3092758

        Firefox has problems?

        by baketown83 ·

        In reply to Firefox has problems?

        I have not had any problems with the newest version yet.  But I sure would like to know if there are any confirmed problems.

      • #3093507

        Firefox has problems?

        by dl9 ·

        In reply to Firefox has problems?

        I seem to recall reading somewhere that you should uninstall version 1 before installing version 1.5 (for FireFox and Thunderbird).

        I've installed FireFox 1.5 (and now 1.5.0.1) on about a dozen computers without any problems. Based on earlier versions, I think you would be prudent to uninstall FireFox, reboot, and install the latest version from a fresh download. I would sometimes run into problems upgrading earlier versions of FireFox – but I learned my lesson and almost always go through this uninstall-install routine (although since we started using version 1.5 the automatic upgrades have gone very smoothly – just a problem with some extensions, and within a few days there are updates available).

        But when going from version 1 to 1.5, I suspect you'll be better off with a fresh installation. Also be sure to clean out your temp files and run a fix on your registry (as in Norton Utilities or System Mechanic, or any of a dozen freeware programs that fix registry errors – if you've never done this before, I'll bet there are hundreds of errors in the registry) after uninstalling the older version and before rebooting and installing the new version of FireFox.

        You might also want to consider deleting all of the extension files too and then installing them in the newest FireFox.

        Best of luck.

      • #3093345

        Firefox has problems?

        by apotheon ·

        In reply to Firefox has problems?

        One of the benefits of moving primary OS activities to Debian, for me, has been software upgrades. When Debian rolls out a new version of something, it’s thoroughly tested and stable, and the upgrade goes smoothly and without incident. I had a couple of minor issues with a Firefox upgrade once on Windows. I’ve never had any such problems on Debian GNU/Linux. In fact, it’s so smooth that I don’t even notice what version I’m using any longer. I had to click Help > About Mozilla Firefox to check the version number (I’m currently using 1.0.7 on this system: apparently 1.5 is too unstable currently for the Debian Etch/Testing team, and for that protection I’m grateful, though if I really wanted to I could always install 1.5 by some other means).

      • #3093267

        Firefox has problems?

        by hrmelendez ·

        In reply to Firefox has problems?

        On the other side, I must say that I have had no problems with the new version of Firefox, neither on Windows XP nor Mac OS X.  In fact, it seems to run a bit faster on both machines.  On Windows, however, I have experienced the same kind of problem after installing other software such as Real Player…. Maybe this is an OS characteristic….

      • #3133018

        Firefox has problems?

        by apotheon ·

        In reply to Firefox has problems?

        Good point . . . an OS characteristic plays a part in this. Specifically, the use of a monolithic flat file database in RAM (the form the Windows Registry takes while the system is running) for systemwide configuration, with shared configuration entries for multiple applications, leads to conflicts and instability. The more installing and uninstalling you do, the more corrupted the Registry gets over time. This can lead to system crashes, et cetera.

        Still, that doesn’t change the fact that some applications (or, more to the point, some combinations of applications) can cause more problems than others, and it’s possible that the Firefox 1.5 upgrade falls into that category of applications on Windows. I wouldn’t know — I’ve never used a Firefox version greater than 1.0.x on a Windows system, thus far.

      • #3092403

        Firefox has problems?

        by dmiller7 ·

        In reply to Firefox has problems?

        I have had redirect problems on secure pages that I did not have before the upgrade.  When using IE, I have no such problems.

      • #3133721

        Firefox has problems?

        by malcolm.smith ·

        In reply to Firefox has problems?

        I’ve noticed that if you do a lot of downloading, Firefox starts to grind to a halt and my system would “hang”.  I traced the problem to the download manager.  Firefox must keep scanning the list within the download manager, and the bigger the list, the longer Firefox pauses.  Regular cleanups of the download manager fix the problem, and I never have any trouble now.

      • #3133720

        Firefox has problems?

        by richard.s ·

        In reply to Firefox has problems?

        The latest Firefox 1.5 and 1.5.0.1 seem to “hang” while any animated graphics load. Because these are usually unwanted adverts, this is doubly annoying.

        Otherwise, Firefox is a great browser. All web sites (especially government sites) should be made compatible with it!

      • #3133706

        Firefox has problems?

        by brian.lovell ·

        In reply to Firefox has problems?

        One thing worth checking is the download facility. If this hasn’t been cleared recently it can slow down Firefox considerably. If the computer hangs up during this operation there are instructions on the Mozilla site for deleting the file from the windows subdirectory.

      • #3133700

        Firefox has problems?

        by macworm ·

        In reply to Firefox has problems?

        Firefox 1.5.0 had some problems with memory leaks which are fixed in the latest release (1.5.0.1). Also, the more extensions you have, the slower it will load. Check your extensions (Tools > Extensions), remove any you no longer use, get the latest updates, and check that they all still work – some extensions may stop working because they conflict with others, but changing the load order can fix this (right-click on the extension and move it). It's also a good idea to clean up your downloads folder when finished with a download and empty the folder when you close Firefox (Tools > Options > Privacy > Download History). Hope this helps.

      • #3133691

        Firefox has problems?

        by graham ·

        In reply to Firefox has problems?

        Yes, I have to agree that my system has similar problems.  When refreshing pages, for no apparent reason the system goes to dead slow and stop.  I usually close Firefox down and restart, but that doesn't cure the problem. What do they say at Mozilla?

      • #3133682

        Firefox has problems…

        by oromis ·

        In reply to Firefox has problems?

        On my Win98SE system, I updated FF to 1.5.0.1 from 1.0.7. My plugins/extensions have since been failing to update. I plan on completely removing FF and doing registry fixes, then re-installing FF and trying to re-apply the extensions, before giving up and going back to 1.0.7... but Firefox has otherwise earned its position as my secondary browser, right next to Opera. Between the two, I no longer use IE for anything but Windows Update.

      • #3091748

        Firefox has problems?

        by jfischer712 ·

        In reply to Firefox has problems?

        In reply to Richard.S about slow loading of graphical advertisements: what opinions does anyone have about the Adblock or Adblock Plus extensions for Firefox? I just heard about them a few days ago, and I have been using the Adblock Plus extension together with the Adblock Filterset.G Updater. So far, it has been great at blocking advertisements, especially in Yahoo mail. If you encounter an advertisement that is not yet automatically blocked, you just right-click on it to get rid of it.

      • #3091648

        Firefox has problems?

        by uplinkspider ·

        In reply to Firefox has problems?

        Maybe I've missed it, but has anyone noticed that the “Switch
        Proxy” feature disappeared after the update?  Also, the update
        wasn't optional; it was a forced update. The update downloaded
        without user permission, then followed with instructions to restart
        Firefox. Outright undermining!

      • #3253039

        Firefox has problems?

        by mckinnon1 ·

        In reply to Firefox has problems?

        I’ve had similar problems since updating, but the most annoying one is the fact that there is still an instance of FF running after closing it. This doesn't happen consistently, but when it does it is annoying. As for ad blocking, I have modified my Hosts file to point certain sites to my loopback address. Blocks most of the annoying ads! Do a Google search to find lists.
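        The hosts-file trick mentioned above is just a matter of mapping
        ad-serving hostnames to the loopback address, so lookups never leave
        the machine. A minimal sketch – the hostnames below are placeholders,
        not a real blocklist:

```
# /etc/hosts on Unix, or C:\Windows\System32\drivers\etc\hosts on Windows.
# Requests to these hosts resolve to the local machine, so the ad content
# never loads. The hostnames below are placeholder examples only.
127.0.0.1    ads.example.com
127.0.0.1    banners.example.net
```

        The downside of this approach is that it blocks whole hosts rather
        than individual URLs, so it can't distinguish ads from legitimate
        content served from the same domain.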

      • #3254450

        Firefox has problems?

        by denodave ·

        In reply to Firefox has problems?

        I just finished reading a note on Langalist (Fred Langa) dated January 5th that says there is a product called Firetune for Firefox which you can download at http://www.totalidea.com/freestuff4.htm

        The person who submitted this had the same experience and this download cured it.

      • #3254307

        Firefox has problems?

        by vaspersthegrate ·

        In reply to Firefox has problems?

        Firefox crashes on me every 10 minutes or so. I post lots of comments at lots of blogs, so I speed through the web. One thing that always causes it to crash is going to my Gmail account. Weird huh?

        I got so disgruntled, I dl’d Avant Browser, which was very fast, much faster than the oddly sluggish Firefox, or Fried Fox, but then I got a stupid PDF file caught in the Adobe Reader in the Avant, so I was in a mouse trap. No exit. The damn thing is still in that Avant Browser like a virus. I deleted all Adobe programs from my computer.

        What is good to try next?

      • #3253972

        Firefox has problems?

        by sterling “chip” camden ·

        In reply to Firefox has problems?

        I’ve been using Firefox 1.5.0.1 for a couple of weeks now.  I didn’t have any earlier version installed.  No problems with crashes or hangs.  It runs great for me (on Windows XP SP2) — in fact it out-performs both IE 8 and Opera 8.5.1: http://www.chipstips.com/microblog/index.php/post/41/

    • #3092749

      The evolution of spam from annoyance to serious crime

      by justin fielding ·

      In reply to In my own words…

      Spam. We all hate it! Spam affects us all whether we are
      responsible for maintaining e-mail systems or are simply e-mail users. In a
      more sinister vein, spam can have serious repercussions for businesses, and
      financial institutions are particularly vulnerable to the variety of spam that
      helps feed identity theft and fraud.

      On January 24th, 2004 (over two years ago), Bill Gates (of Microsoft
      fame) announced while speaking at the World Economic Forum: “Two
      years from now, spam will be solved” (CBS News). Mr Gates predicted
      that spam would become a thing of the past, with sender
      authentication playing a major role. He also predicted the creation
      of billing systems to allow the charging of fees on a per-mail basis.
      I remember wondering at the time what type of drugs he was taking;
      surely nobody could be foolish enough to think spam would be that
      easy to combat! I was obviously not alone, as a poll by TechWeb found
      that “more than 80 percent of IT security professionals surveyed at a
      security show in London don't think Bill Gates' promise to kill spam
      within two years is doable.” Two years later, spam is still rife;
      well done, Mr Gates. Enough said.

      Over the past few years, the methods used to spam people have changed
      dramatically since the introduction of stricter anti-spam laws (see
      http://www.spamlaws.com) and the successful prosecution of a few
      large spam operations (1, 2), but far too few in total. It is widely
      thought, however, that current anti-spam legislation won't work, not
      only because of the changing approach of spammers, but also because
      the harm caused by spam has been questioned and the ability of
      authorities to enforce such laws is in doubt. Without uniform and
      complete international cover, spammers will just move to a region
      where authorities are happy to profit from, or at least turn a blind
      eye to, their activities. This can be seen from the fact that China
      and Russia are in second and third place when it comes to countries
      of origin for spam. Surprisingly, the USA still holds the number one
      spot–interesting! Maybe if we could link profits from spam to
      terrorism, the problem would be solved much more quickly?

      So, as the noose tightens, spammers have
      turned to more underground methods of plying their dark trade. Simply moving
      their operation to localities where they won’t be prosecuted is one tactic;
      however, another more sinister and much harder to combat approach has been the
      use of Botnets. Botnets are
      basically farms of end user computers, and sometimes servers (all running “Windoze”,
      of course), which have been infected by worms or Trojans. While these malicious
      programs sometimes perform an annoying or destructive task (such as deleting
      files or mailing themselves to all of your contacts), the other and more
      subversive function is to open up your computer as a drone, normally without
      any clues given to the computer’s owner. These drones will then connect back to
      the mothership and allow the dark powers that be to take control, often
      installing an SMTP relay for the sending of more spam or worms, and a small Web
      server that will be used to host the (fake and fraudulent) Web sites advertised
      in the spam that they send out. Quite often the drones will also be scanned for
      any useful information which can be used in other fraudulent activities (credit
      card/bank details) or to aid identity theft.

      Botnets are often massive, with tens of
      thousands of drones in each, and some have been reported to contain over a
      hundred thousand. The combined power of these smaller entities is huge,
      allowing immense numbers of unsolicited and often fraudulent mail to be
      delivered. The combined bandwidth of these networks also allows for distributed
      denial of service attacks to be launched, the likes of which can bring major
      enterprises to their knees. Spam has come a long way from those unsolicited
      advertisements that used to arrive a few times per day; it is more and more
      intertwined with serious fraud, extortion, and organised crime.

      That’s our first look at the
      changing methods used by spammers to enter your inbox. Next week, I’ll continue
      this topic with a look at the real effects spam has on business, over and above
      the basic annoyance caused to our users who have to delete them.

      • #3092025

        The evolution of spam from annoyance to serious crime

        by roborat ·

        In reply to The evolution of spam from annoyance to serious crime

        The ISPs around the world need to work together on this one.  Basically, any ISP that receives a complaint about spam should contact the “offending” ISP.

        The offending ISP can then: 1) contact the spammer and tell them “hey, do something or we will disconnect you” – make it the individual's responsibility to protect themselves from botnets, as there is plenty of security software around now; 2) if it continues, shut down the client; 3) have their company locked out by the legitimate ISPs.

        Any customers the offending ISP has will either need to move or not have access to the rest of the Internet.  The offending ISP will soon not be able to do diddly-squat.

        Hopefully spammers will eventually run out of places to hide.

        Make it the responsibility of individuals and companies.

        Feel free to pick holes in this approach.

      • #3091924

        The evolution of spam from annoyance to serious crime

        by justin fielding ·

        In reply to The evolution of spam from annoyance to serious crime

        Not sure that will cut it; it's not always in the ISP's best interest to cut off their (paying) customers, especially when that customer is just a clueless PC user who happens to be infected with a worm/bot. Fines would be one way to force the ISP's hand; however, this would be hard to enforce and evidence would be hard to gather. People could block the offending ISPs, thereby rendering their e-mail service useless, but again this would be hard to enforce internet-wide, although there are attempts at this angle of attack. I'll look more in depth at ways in which we can try to fight spam in a later installment of this blog.

    • #3093599

      Recovered from Disaster!

      by justin fielding ·

      In reply to In my own words…

      This week we had a true test of our disaster recovery procedures.  At
      11am on Monday, water started pouring out of our comms room ceiling!
      True disaster!  Luckily the deluge narrowly missed our server racks,
      but it started to pool under the floor and didn't take long to start
      spreading across the room.  I instantly started to shut down servers,
      most critical first, and then had to cut power to the entire floor
      via the main breaker.  The water kept coming for two hours; the floor
      was soaked, with pools of water underneath the raised floor.  We
      waited to assess the extent of the damage and potential downtime, and
      decided that it didn't warrant switching to our offshore site.
      Within four hours of the flooding we had relocated the servers
      providing essential services to another part of the building,
      re-routed the internet links, and had all vital services running.  A
      professional flood recovery team was called in, and the source of the
      water was found to be a faulty toilet overflow which had been leaking
      for weeks; the water had been pooling underneath the ground floor
      until it finally found a way to break through into the basement!  The
      water was pumped out, and industrial fans and dehumidifiers were set
      up and left overnight.  By Tuesday morning the room was dry; we
      removed each server and checked for any signs of water or
      condensation, and luckily none were found.  By Tuesday lunchtime,
      within 24 hours of the incident, services were back up and running as
      usual.

      We were very lucky that the water didn't come down directly above our
      racks; that would have been a true disaster.  Thankfully, the fact
      that we shut down all services and then cut power to the floor as
      quickly as possible meant we suffered no hardware damage or data
      loss.  Phew!

      It just goes to show that disasters do happen, so make sure
      you have plans in place which you have tried and tested.

    • #3133686

      PC Tax?

      by justin fielding ·

      In reply to In my own words…

      I'm not sure if international readers will be aware of the BBC/TV
      licence scheme; there may be similar setups in other countries.  To
      put it simply, every household with a TV set capable of receiving
      broadcasts must pay a yearly TV licence fee.  This fee funds the BBC,
      which makes and broadcasts programmes free to air, some mainstream
      and movies, but also some subsidised programmes of limited interest,
      defined as specialist/niche and therefore good for diversity.  A lot
      of people are opposed to having to pay this when they already pay £40
      ($70) for monthly cable/satellite subscriptions and have no interest
      in viewing BBC broadcasts.  It seems things may be getting worse; I
      recently came across an article on The Register which suggests that
      this could be replaced by a generic “PC Tax”!  Why?  Well, apparently
      because a PC has the ability to receive a live or almost live video
      broadcast, it can technically be defined as a TV.  Obviously the BBC
      are looking for ways to ensure their survival; being non-commercial,
      they need some way of extracting money from as many people as
      possible.  We won't see this happening any time soon: “The Government
      reckons changes to the license fee will not be needed until 2017,
      when the BBC's next royal charter expires.”  I'm not sure what the
      public reaction to this would be; maybe something similar to the Poll
      Tax riots?

      • #3080828

        PC Tax?

        by apotheon ·

        In reply to PC Tax?

        I feel for you. What a bunch of crap. Of course, there’s already a PC tax, but it’s not paid to the government. It’s paid to Microsoft. Any time you buy a PC from a major vendor as a private party, even if you request it have no OS installed, Microsoft is getting paid for an OEM OS license.

    • #3253234

      True cost of spam to business

      by justin fielding ·

      In reply to In my own words…

      As we discussed last week, spam has moved away from being irritating
      but harmless unsolicited e-mail towards a more sinister and
      threatening problem. Spammers now use worms to infect both end-user
      systems and servers, which are then used either to act as a proxy for
      fraudulent activities (fake websites, spam relays, etc.) or to
      extract private data for organised criminal activities such as
      identity theft. This poses a threat for home users, but a greater
      threat is posed to corporations, as they have a duty to protect the
      privacy of customers' data.

      The classic focus point when discussing the cost of spam to a
      business is the lost productivity of staff, meaning the time it takes
      them to identify an e-mail as junk and then delete it. This doesn't
      sound like it would really have much of an impact, but research that
      I stumbled across from 2004 (Nucleus Research) estimated that the
      annual productivity cost per employee is about $1,934! Updated
      research carried out at the beginning of 2005 (InformationWeek)
      estimated the annual cost due to lost productivity at about $21.58
      billion! Now that's a lot of money. Even if your organisation filters
      spam somewhat effectively, most employees will check their personal
      e-mail accounts a few times per day, thus having to deal with spam on
      systems that are out of your control. Interestingly, the associated
      costs of spam, and the number of spam e-mails delivered, vary largely
      depending on geographical location.

      An article from the beginning of 2005, this time based on UK
      statistics (Personnel Today), suggested
      the total cost to UK businesses was £1.3 billion, with a cost per user of £374
      ($598), much lower than the previous estimate of $1,934! This research also
      suggested that the severity of spam varies by country, with the UK receiving
      more than France, Germany, Italy and even China.

      There are other business-related costs incurred because of
      spam; some of these are actually caused by anti-spam systems that mistakenly
      identify genuine e-mail as spam. Dana Blankenhorn discusses these here.

      From the IT department's point of view, there are different
      costs associated with spam. First, consider the cost of anti-spam software or
      appliances. While there are perfectly good open source implementations (in fact,
      many commercial products are based on these with a little additional eye
      candy), the majority of companies go for a commercial solution. I'll look more
      closely at various solutions and methods for fighting spam later on, but here
      are a few quick costs for some commercial systems:

      • Barracuda Spam Firewall 400 + 3-year updates = $7,292
      • Postini (500 users) = $10,000/year
      • Proofpoint Protection Server 1.2.1 = $10,000/year

      Not cheap, huh? Bandwidth costs
      associated with spam can also be considerable; some companies estimate that as
      much as 50% of their bandwidth usage can be attributed to spam (also taking into
      account bounces, file attachments generated by worm activity, etc.). I wouldn't
      estimate that this proportion of our internet bandwidth is used by spam traffic;
      however, we do notice when a large spam attack is in progress: our internet
      connection slows considerably. Additionally, even when an anti-spam solution
      rejects junk mail before it enters your delivery system, each message still has
      to be processed, scanned (in the case of malicious junk mail or worms), and then
      rejected. This all munches CPU power and generates high memory usage, meaning
      more powerful (and therefore more expensive) hardware needs to be put in place.
      Our company is currently in the process of replacing our SMTP gateway for just
      this reason; it's struggling to deal with the sporadic and often heavy spam
      attacks in which accounts are bombarded with junk. Traditional methods used to
      block known spammers before they even start an SMTP session are becoming
      ineffective, due to the increasing use of botnets to relay the traffic.

      Last, but not least, I thought I should mention another
      potential cost associated with the more malicious form of spam (generally due
      to worms). There have been several high-profile cases recently in which large
      corporations were fined over leaks of customers' private information,
      leading to identity theft on a massive scale. With the growing presence of Trojan
      and worm-based proxies, we need to be careful that the combination of an
      unknown (as-yet undiscovered) malicious program, badly enforced desktop
      security, and user carelessness doesn't end up causing major information leaks. This
      is especially important in the financial services industry, as the information held
      is more sensitive and more useful to criminals (or rival enterprises) than in
      other industries. Although we can never be sure everything is covered, we
      need to make sure that firewalls and traffic filters are used to their
      full ability. Being complacent about security could prove very costly!

      Next week, we'll take a look at some of the ways in which
      spam is being fought, and how different methods can be used together to
      form an overall anti-spam policy.

      • #3100490

        True cost of spam to business

        by wex ·

        In reply to True cost of spam to business

        check out Darren Brothers' "Spam Vampire"

    • #3253430

      We all need Windows?

      by justin fielding ·

      In reply to In my own words…

      I attended a conference today on the subjects of data/e-mail
      archiving (for compliance) and storage virtualisation.  Listening to the various addresses and
      looking at the commercial offerings of the vendors who were present, I was
      amazed how everything is geared towards Windows; not one vendor could offer a
      solution for a Linux/Unix platform.  One
      company came close with their e-mail archiving system, which acted as an SMTP
      proxy/relay and archived data as it passed through; the vendor, however, didn't
      know whether the application was limited to running on a Windows base or
      whether a Linux version is available (that inspires confidence!).  Symantec had a very nice solution; however,
      again they only offer solutions for a Windows base: strange, seeing as all of
      their software is developed in .NET and is therefore a great candidate for use
      with Mono.  Does anyone else find that large vendors are
      becoming increasingly dependent on Microsoft infrastructure?  Any suggestions for compliant e-mail and/or
      data archiving solutions which will run on a non-Windows base?

      • #3253406

        We all need Windows?

        by apotheon ·

        In reply to We all need Windows?

        There are a number of ways to very easily brew your own on a Linux platform, particularly if your distro has a fairly comprehensive software archive system associated with it.

      • #3091474

        We all need Windows?

        by justin fielding ·

        In reply to We all need Windows?

        Sure, but who wants to brew their own all of the time?  Wouldn’t it be nice to just buy a system which works out of the box, which you know is fully compliant, easy to administer and of course supported should problems arise?

      • #3101418

        We all need Windows?

        by apotheon ·

        In reply to We all need Windows?

        It’s not such a big deal. Usually, “brewing your own” business communications suite consists of a few installation statements (like apt-get install packagename on Debian), and voila, you’re in business.

      • #3101364

        We all need Windows?

        by jaqui ·

        In reply to We all need Windows?

        With email archiving, it’s a simple matter to configure all the linux based email servers to archive all traffic in or out.
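        For example, with Postfix this can be as little as one parameter. A minimal sketch, assuming a local archival mailbox exists (the address here is illustrative, not a recommendation):

        ```
        # /etc/postfix/main.cf -- send a blind copy of every message that
        # passes through this server to an archival mailbox
        # (mail-archive@example.com is an illustrative address)
        always_bcc = mail-archive@example.com
        ```

        Run postfix reload afterwards; note this copies inbound and outbound mail alike, so the archive mailbox needs plenty of space.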

        If you use email lists, which are great for newsletters etc., then one of the most common tools is the Mailman list manager, which creates an archive of all traffic by default. [It does default to a publicly accessible archive, but that is easily changed to private access only.]
        Any list can be publicly subscribed to or invitation only.

        You can, using NIS as the login/network system, have all user files copied by default to a single location for archival purposes.

        It's true, most of the "Name Brand" vendors don't support Linux at all, but the functionality they are trying to sell exists in every Unix OS by design; it's just that no pretty GUI tools have been written.

        IBM say they support Linux, yet they don't offer any end-user/desktop software for Linux, only for Windows, so I say they don't support Linux.

      • #3101218

        We all need Windows?

        by cbsitsec ·

        In reply to We all need Windows?

        Archiving with a home-grown system is easy. Using the archive (getting what you need out quickly, and automatically purging what you don't need to keep) is not so easy. I'm surprised someone like Sun, with its disk components/partnerships, non-MS mindset and messaging product set, doesn't come up with something for these SOX (and other regulated) companies where they already have a strong install base.

      • #3101445

        We all need Windows?

        by dstrickler ·

        In reply to We all need Windows?

        Ours doesn’t 😉

        Before I get chastised for doing a sales job (I promise
        I won't), your search is going in the right direction. Ask the top managed
        services providers (ASPs) if they run Windows. They'll bend over laughing. If you're going
        to build a large, scalable email archiving solution, Windows can't hack it. Its
        infrastructure just isn't built to handle the capacity needs. So when all you need
        to archive is 3 months of email, it will work, but don't throw 3 years at it.

        Most companies get about 11 emails/day/user of valid email (assuming
        you're not archiving spam). 1,000 employees * 11 * 365 is 4,015,000. Think about
        the record storage, the maintenance, the patches, and {gulp} the security
        problems with Windows. And in case you're curious, I run XP on my desktop. Windows isn't a bad OS, but it isn't designed to be scalable, no matter what their marketing wants you to believe.
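        That back-of-the-envelope figure checks out; as a quick sanity check, using the numbers from the comment above:

        ```python
        # Sanity-check the mail-volume arithmetic: ~11 legitimate emails
        # per user per day, 1,000 employees, one year of retention.
        valid_emails_per_user_per_day = 11
        employees = 1000
        days_per_year = 365

        messages_to_archive = valid_emails_per_user_per_day * employees * days_per_year
        print(messages_to_archive)  # 4015000 messages in a single year
        ```

        Four million records a year is the scale any archiving platform has to index, search, and back up.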

        Keep looking for Unix/Linux solutions. I promise
        it will be a solid platform for long-term, reliable storage.

        Dave Strickler
        MailWise
        http://www.mailwise.com
        “Intelligent E-mail Protection”

      • #3200085

        We all need Windows?

        by alex779 ·

        In reply to We all need Windows?

        I think not. Forgot about Mac OS X? It's a Unix-like OS built on a BSD-based kernel. And there are many other Unix-like OSes with interfaces arguably better than Windows. And OpenOffice > Microsoft Office.


    • #3101793

      Encrypted?

      by justin fielding ·

      In reply to In my own words…

      When does encrypted not mean secure? Maybe when your government is working with Microsoft
      to make sure it has "back door" entry to your data! That's what the BBC
      reported recently!

      The impending release of Windows Vista and its BitLocker
      Drive Encryption has been noted with concern by the authorities, since law
      enforcement agencies will find it much more difficult to access data on
      confiscated PCs. Cambridge academic Ross Anderson prompted the
      government to look at creating some type of "back door" to bypass the
      encryption. The government later
      confirmed that it is in talks with Microsoft, and Microsoft confirmed that
      it "will continue to partner with governments, law enforcement and industry
      to help make the internet a safer place to learn and communicate".

      It seems to me that any "back door" will of course be abused,
      and will also in time be discovered by people who are not supposed to know about
      it, rendering the whole thing a pointless waste of time. The very concept of building a "back door" into
      an encrypted system renders the whole encryption process pointless. Any thoughts?

      • #3101412

        Encrypted?

        by zlitocook ·

        In reply to Encrypted?

        I was posting about how they can read your email and cell phone, and get anything they want from your service provider, without even going to anybody, like a judge. Our government cannot help the homeless or the poor, or keep good-paying jobs in the States, but boy, they will spend lots of money on spying. Or send billions overseas to rebuild what we blow up, and don't get me started on oil prices.

         If there is a suggestion of an investigation into why gas prices keep going up, suddenly the prices go down. And why does BP (British Petroleum) always have lower prices?

      • #3101721

        Encrypted?

        by scoid ·

        In reply to Encrypted?

        You nailed it.  Of course, all you need are a few judges under your thumb - er - sorry - I meant on your side, and you can open encrypted e-mail messages and e-mail attachments without either sender or receiver knowing about it.  I suppose, with the existing e-mail spying that's going on, the people reading the messages have found out that there are significant portions of encrypted data they can't read.  Kind of like the censoring that happens whenever a citizen makes an information request under a Freedom of Information act.  Black ink all over the page!

        Technology is neither good nor evil.  The user is always one of the two, however.  How do you think Nobel felt the first time he heard that his invention, dynamite, was used to blow up a human being?

        So when are you going to have your microchip implanted?

      • #3101467

        Encrypted?

        by justin fielding ·

        In reply to Encrypted?

        Oh, I'm sure the day will come when every baby has
        an ID chip implanted at birth.  It's vital for the war on
        terror; whatever that is.

    • #3101285

      Commercial anti-spam solutions: Are they worth the price tag?

      by justin fielding ·

      In reply to In my own words…

      Previously, we looked at the ways in which spam has
      changed over the last few years, and at the various costs associated with it.
      As spam becomes more prevalent, and even malicious, it is ever more important for
      enterprises to do all they can to prevent it reaching their users' inboxes. Many
      anti-spam solutions exist, some commercial and some open source, but none of
      them is invulnerable to spam; most can be expected to let a small percentage
      slip through the net. You will probably find most companies using multiple
      anti-spam measures, each helping to fend off spam in a different way, but
      working together to provide a solid overall solution.

      Let's look at two commercial products:

      Barracuda
      Spam Firewall

      This is an
      all-in-one device which is compatible with all mail server architectures, as it
      sits between the outside world and the mail server at the SMTP level (pretty
      much like an SMTP proxy or mail gateway). Ease of use and simple installation
      are its selling points; it claims to offer the following protection:

      • Denial of service and
        security protection
      • IP block list
      • Rate control
      • Virus check with archive decompression
      • Barracuda virus check
      • User-specified rules
      • Spam fingerprint check
      • Intention analysis
      • Bayesian analysis
      • Rule-based scoring

      Costs are not too bad: a system for 300-1,000 active users
      sells for around £4,000 ($6,900), which includes three years of updates and a
      three-year instant replacement warranty. I haven't had a chance to use one of
      these devices, but if Barracuda wants to donate one for review/testing, I
      would be more than happy to write something up (subtle hint).

       
      Symantec
      Brightmail AntiSpam

      This is an application which can run on Windows, Solaris
      or Linux servers (very unusual for Symantec!). Like most anti-spam solutions,
      this application uses filters as its main defence. What makes Brightmail interesting
      is how it does this: filters are created remotely by Symantec, which collects spam
      and generates updated filters based on the content of what has been captured. Every
      5-10 minutes, these updated filters are sent down to customers' mail
      gateways for immediate use. This is claimed to be 95% effective, but I haven't
      seen it in action, so I would suggest that may be an optimistic figure. Here's
      the sales blurb:

      • 95% spam-catching rate
      • 99.9999% accuracy rate
      • Automatic updates every 5-10 minutes
      • Combination of 17 different technologies used (although what these are isn't mentioned)
      • Low administration
      • Performance and trend reporting

      If anyone's using this system, I would be interested to hear
      about the results.

      There are, of course, many different products out there;
      these are just two examples that pretty much describe most of the commercial
      offerings. Underneath, all of these systems use the same basic principles
      of content matching, IP/DNS checks, Bayesian analysis, fingerprinting, and rule-based
      scoring. The combination of these different methods makes for a pretty
      good overall defence; however, as you can see, that comes at a price. There are
      many open source implementations which offer all of these features. Configuration
      is obviously not as simple (you can't just plug and play); however, for a smaller
      business wanting to save money, or a large enterprise wanting to serve large
      numbers of employees, these may be a viable option. Next week, I'll take a look
      at these "free" solutions and how they can be used together to form an
      effective anti-spam policy.

    • #3100715

      Encryption for Linux

      by justin fielding ·

      In reply to In my own words…

      Following on from talk of Windows Vista encryption and Microsoft's
      collaboration with the authorities to create a backdoor, I have been snooping around
      to see what is offered to Linux users in the way of disk encryption.  Full disk encryption has been available to Linux
      users for quite some time; however, setup has been a tricky affair, and many
      average or desktop users will have steered clear.

      I stumbled across this interesting
      blog describing LUKS (Linux Unified Key Setup) / GNOME integration, which
      may be available in FC5.  The video
      (click on the picture) shows how easy mounting an encrypted disk can be. This
      is much easier than mounting an encrypted USB key in Windows, which normally involves
      executing a program to access the encrypted area.

      Not using Fedora myself (I liked it up until FC2, and after
      that decided I prefer Ubuntu), I went looking for some more info and found this great
      HOWTO guide, which I have just followed for my 2GB USB key.

    • #3272318

      Suck spammers dry

      by justin fielding ·

      In reply to In my own words…

      I came across this wonderful page today.  It approaches the issue of stopping spam in a
      very different way: by targeting the spammers and destroying their profit
      rather than blocking their emails.
      SpamVampire is a great idea, sucking up the spammers' bandwidth to the point
      that it costs them more than they make. This must be effective, as the
      spammers have gone so far as to make death threats against the creator!  This phone transcript
      was particularly amusing!

    • #3088181

      Fighting spam for free

      by justin fielding ·

      In reply to In my own words…

      After looking at some of the commercial spam-fighting
      products on the market, let's now take a look at the open source tools available
      to solve this problem.

      A mail gateway that is configured to filter out spam and/or
      redirect mail for one or more domains to the required mail server can be a very
      useful tool. I have just finished configuring a new gateway for our company, so I
      will outline the tools I have used to try to filter out spam/virus traffic.

      First of all, the operating system I'm using is Ubuntu; this
      is a server install with no GUI and with root activated, so for all
      intents and purposes it may as well be Debian. The MTA (Mail Transport Agent)
      I have chosen is Postfix. I won't make any
      wild claims that Postfix beats all others; I have used qmail in the past and it
      was okay. However, Postfix is very well supported, and there is a multitude of
      tutorials and add-on scripts floating around for it.

      So that's the basic mail relay/gateway system. What do we
      need to filter out spam and worms?

      • A spam filter
      • A virus filter/scanner
      • Something to link these into Postfix
      • Other goodies

      Okay, so let's deal with the spam filter first. By far the
      most popular spam-filtering software is SpamAssassin. SpamAssassin uses a
      number of different checks to score an email; based on this score, the
      administrator can set thresholds at which mail is either
      tagged as spam or discarded completely. Included in the range of checks are:

      • Content/signature-based checks (for example, the mention of Viagra or an
        all-caps subject would produce a positive score that is added to the tally
        for that particular mail).
      • Internet-based checks with the use of Pyzor, Razor and DCC (these match
        the mail and its content against known spam).
      • Sanity checks (MIME integrity, etc.).

      The combination of rules and internet tests gives pretty
      good accuracy; this will filter out a very high percentage of your spam.
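      To illustrate the scoring idea, here is a toy sketch in Python. The rules and score values are invented for illustration; they are not SpamAssassin's actual rules, but the score-and-threshold mechanism is the same shape:

      ```python
      # Toy illustration of rule-based spam scoring (not SpamAssassin's rules).
      # Each matching check contributes its score; thresholds decide the outcome.

      RULES = [
          # (description, predicate, score) -- all values are made up
          ("mentions viagra",  lambda s: "viagra" in s.lower(), 3.0),
          ("all-caps subject", lambda s: s.isupper(),           2.5),
          ("excessive !!!",    lambda s: "!!!" in s,            1.0),
      ]

      TAG_THRESHOLD = 5.0       # tag as spam at or above this score
      DISCARD_THRESHOLD = 10.0  # discard outright at or above this score

      def classify(subject: str) -> str:
          score = sum(points for _, test, points in RULES if test(subject))
          if score >= DISCARD_THRESHOLD:
              return "discard"
          if score >= TAG_THRESHOLD:
              return "tag"
          return "deliver"

      print(classify("Meeting minutes attached"))  # deliver
      print(classify("BUY VIAGRA NOW!!!"))         # tag
      ```

      The real software has hundreds of rules and network tests, but tuning comes down to the same two knobs: per-rule scores and the administrator's thresholds.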

      On the virus/worm blocking front, my tool of choice is ClamAV. Like SpamAssassin, ClamAV is
      free, well-maintained, and widely used. The updates are free and
      frequent: freshclam is a system service that runs at defined intervals to
      check for, download, and install new virus definition files automatically. I have noticed that
      ClamAV will sometimes filter out phishing attempts as well as classic
      'viruses,' which can only be a good thing.
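      As a minimal sketch of that update interval, freshclam's config file uses the Checks directive; the values below are illustrative, so adjust for your own install:

      ```
      # /etc/clamav/freshclam.conf (excerpt, illustrative values)
      # Check for new virus definitions 24 times per day, i.e. hourly
      Checks 24
      # Central database mirror to pull updates from
      DatabaseMirror database.clamav.net
      ```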

      Right, so now you need something to hook these up to
      Postfix (or whatever MTA you may have chosen). I have found the best way of
      using the previously mentioned tools with Postfix to be the amavisd-new daemon. It's easy to
      install (try apt-get install amavisd-new on Ubuntu/Debian), easy to configure,
      and performance is great. Okay, so you looked at the Web site and it's not
      very impressive (no snazzy graphics), but don't let that fool you. Take the time
      to read the documentation and the default config file, and you will see this is a
      very powerful tool.
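      The usual wiring, sketched here following the pattern in amavisd-new's own Postfix notes, is to hand mail to the daemon with a content_filter and accept it back on a reinjection port. Ports 10024/10025 are amavisd-new's conventional defaults, so check your install:

      ```
      # /etc/postfix/main.cf -- pass all mail through amavisd-new
      content_filter = smtp-amavis:[127.0.0.1]:10024

      # /etc/postfix/master.cf -- the transport that feeds amavisd-new...
      smtp-amavis unix -    -    n    -    2    smtp
          -o smtp_data_done_timeout=1200
          -o disable_dns_lookups=yes

      # ...and the listener where scanned mail re-enters Postfix
      127.0.0.1:10025 inet n    -    n    -    -    smtpd
          -o content_filter=
          -o mynetworks=127.0.0.0/8
          -o smtpd_recipient_restrictions=permit_mynetworks,reject
      ```

      Note the empty content_filter on the reinjection listener; without it, scanned mail would loop back through the scanner forever.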

      With the aforementioned tools, you will be able to build a
      pretty resilient anti-spam solution. Assuming that this machine will be acting
      as a gateway and not holding mailboxes (as in my case), you may want to allow
      users outside of your network to authenticate and send mail (roaming users). The
      easiest way I have found of doing this is with pop-before-smtp.
      It picks up IMAP/POP logins from the syslog (which is forwarded from
      your IMAP server to the SMTP gateway) and maintains a database of the IP
      addresses used to connect. If an SMTP client on a non-trusted network requests
      to send mail to a non-local domain, Postfix checks the pop-before-smtp
      database file and allows relay access if the IP is listed. It's not a perfect
      solution, but it is simple and effective. An additional spam countermeasure is greylisting. While
      greylisting by itself will not save your organisation from spam, it's a useful
      addition to your defence. I have just implemented greylisting for the first
      time, so I've yet to see what problems it may cause. I can imagine some issues
      arising with senders who retry from a different IP address; however, over time
      you can identify these and add them to your whitelists. The service I've
      decided to use, postfix-gld, has a
      nice feature which allows all mail from a particular domain once x number
      (defined in your config) of successfully greylisted mails have passed from it: a nice
      feature that could well save some training time.
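      Tying the two together in Postfix might look like the following sketch. The map path and gld's port 2525 are the tools' usual defaults, but verify them against your own install:

      ```
      # /etc/postfix/main.cf -- relay control plus greylisting (illustrative)
      smtpd_recipient_restrictions =
          permit_mynetworks,
          # IPs recently seen logging in via POP/IMAP (pop-before-smtp map)
          check_client_access hash:/etc/postfix/pop-before-smtp,
          # refuse relay for everyone else
          reject_unauth_destination,
          # finally, ask the gld greylisting daemon whether to defer
          check_policy_service inet:127.0.0.1:2525
      ```

      Order matters here: trusted and authenticated clients are permitted before the relay check, and greylisting only applies to mail that has survived the earlier restrictions.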

      I hope this has been a useful look at some of the freely
      available tools for stopping spam from infiltrating your organisation. They
      are by no means the only ones, but this overview goes to show that fighting spam
      effectively does not require a large budget or deep technical knowledge.

      Have you successfully implemented greylisting? It
      would be great to hear how you have overcome the issues associated with it, and
      whether or not you decided to keep the system running.

      • #3088212

        Fighting spam for free

        by nees ·

        In reply to Fighting spam for free

        Thanks. That’s very informative. I recently purchased the annual update to Norton at home, and now I have more SPAM than ever!

      • #3084689

        Fighting spam for free

        by jeffaaa10 ·

        In reply to Fighting spam for free

        I enjoyed this information; however, oddly enough, it doesn't seem like there were any "free" solutions for the home DHCP gateway arrangement. At work (a financial institution) I have an IDS in place and a firewall with very effective spam blocking. Not 100%, but probably 85-90%. My frustration is when I go home and get on my Comcast mail, where half of my mail is garbage. A $1,500 firewall is not in my budget, so I was hoping to find some kind of free Linux-based answer for that environment. I do of course run through a router and NAT, keep everything current, and run Spybot, Ad-Aware, and MS AntiSpyware. No pop-ups here, just Viagra and stock solicitations 🙁

        Did I miss something, or were all of the free suggestions essentially a server-based mail proxy for a dedicated IP/domain, as opposed to a simple POP3 client workgroup?

        Thanks,

        Jeff 

         

      • #3084643

        Fighting spam for free

        by justin fielding ·

        In reply to Fighting spam for free

        All of this software is free.  In your situation, I guess you could configure a similar machine at home which would connect to your Comcast account via POP every few minutes and retrieve mail, filtering on your side.  You could then configure your mail client to collect from your own server rather than directly from Comcast.  You could also consider getting a better ISP who actually makes some effort to stop spam (and doesn't originate massive amounts of it)!

      • #3084612

        Fighting spam for free

        by jeffaaa10 ·

        In reply to Fighting spam for free

        Justin,

         

        Thanks for the reply; I will dig deeper and attempt to sort it out. Unfortunately, if you want broadband in my area, your only choice is Comcast (too far away for DSL, which would be considerably slower anyhow). So I have to figure out how to put up with what I've got 🙁

         

        Thanks again

      • #3086123

        Fighting spam for free

        by romel_jacinto ·

        In reply to Fighting spam for free

        Jeff,

        Don’t feel that you absolutely need to use your ISP’s email.
        There are many, many mail hosting providers, and while you'll have to pay to get a quality mail provider, their spam control and filtering options are often better than many ISPs'. The cost is reasonable, around $20-40 per year, depending on the provider.
        An additional option would be to purchase your own domain name and you can pick and choose email addresses.

        Nancy McGough has put together a long list of mail hosting providers, although she focuses on those that provide IMAP.

        I used to run my own mail server until about 8 months ago and then I decided to outsource it. I haven’t looked back once.

        Good luck.


        Romel Jacinto


      • #3086894

        Fighting spam for free

        by bschmidt ·

        In reply to Fighting spam for free

        If you want a client-based spam filter (for home use as opposed to server based) and you use Outlook I’d suggest SpamBayes.  It’s free, it works rather well (cuts about 95% or more of my spam) and is fully integrated with later versions of Outlook.  You just install it, train it against preset folders of spam and non-spam (or train on the fly if you don’t have that available) and the rest it does itself.  I can’t get into the details but if you want something free that works for Outlook that’d be a good option.

        http://spambayes.sourceforge.net/

      • #3086237

        Fighting spam for free

        by gambo.id ·

        In reply to Fighting spam for free

        Hey, this is very informative and helpful! I know this will come in handy sometime later for me; moreover, I am new to Linux and would really appreciate it if you could help me with some stuff on getting around with some of the commands.
        Thank you, and keep it up!
        zee_id@yahoo.com

    • #3088351

      Issues with gld

      by justin fielding ·

      In reply to In my own words…

      Following up on my previous comments about greylisting, I
      have been having issues with the gld daemon timing out and leaving lots of dead
      processes.  This doesn't stop mail from
      passing through, but it will pose a problem in the future.  I tried to contact the authors about the issue
      but as yet have had no reply.  In an
      attempt to find some resolution, I dropped a message into the postfix-users
      mailing list describing my problem. I had a few responses, all telling me that
      they have had the same issue and have moved to using policyd.  I'm currently configuring the installation
      and will report on my findings later.

    • #3085693

      OS-X under scrutiny

      by justin fielding ·

      In reply to In my own words…

      Apple's OS X is a great-looking OS with a BSD base
      underneath; the interface is wonderful for non-technical users, and the
      flexibility and control give more hardcore users a nice balance between Linux
      and Windows.  OS X users have long boasted
      about the lack of vulnerabilities in the OS; however, a few issues have arisen
      recently. SecurityFocus
      reported that Apple released a patch fixing at least 20 flaws, including a vulnerability
      (allowing execution of arbitrary code) in the Safari web browser.  There has also been concern over the appearance
      of OS X viruses. Could
      it be that the only reason OS X has enjoyed such a trouble-free past was its
      low takeup?  With the increasing usage of
      this operating system, I would think we are likely to see an increased threat.

    • #3086154

      Data protection vs. individual rights

      by justin fielding ·

      In reply to In my own words…

      What would happen if a senior member of staff approached a
      member of your department and asked for the activities of a certain member of
      staff to be monitored?  Do you have definite
      procedures in place to deal with this type of request?  If the answer to that question is no, then even if
      you're a small company, the consequences could be quite serious.

      So what is the official line on monitoring of staff
      activities?

      In the UK, the "Data Protection Act" and the "Employment Practices
      Code" would be the main reference points for anyone wanting to know if
      and how they can legally monitor staff activities. The current Data
      Protection Act (1998) came into force on 1 March 2000. The act applies
      to personal data (data collected while monitoring staff usage of
      internet/email, for example, could be personal in nature and would
      therefore be deemed personal data) and works to protect individuals by
      giving data controllers clear guidelines on how that data should be
      handled. There are eight principles set out which require that data
      must be:

      • Fairly and lawfully
        processed;
      • Processed for limited
        purposes and not in any manner incompatible with those purposes;
      • Adequate, relevant and not
        excessive;
      • Accurate;
      • Not kept for longer than is
        necessary;
      • Processed in line with the
        data subject’s rights;
      • Secure;
      • Not transferred to
        countries without adequate protection.

      The act also stipulates the conditions under which processing of data
      may be carried out. For more information on the "Data Protection Act"
      take a look at this website.

      Perhaps a more useful (or useable) guide when it comes to monitoring of
      staff activities would be the "Employment Practices Code". This code is
      regulated and enforced by the Information Commissioner's Office, the
      same office which regulates the "Data Protection Act" and the "Freedom
      of Information Act".

      The employment practices code and its supplementary guides can be found
      here. Section three of the code specifically covers the topic of
      monitoring in the workplace; while the code doesn't prohibit
      monitoring, it notes that any monitoring activities must adhere not
      only to the "Data Protection Act" but also to the "European Convention
      on Human Rights," which dictates that respect must be shown for an
      individual's private life and correspondence.

      Section five of the quick guide recommends considering whether there
      are alternative approaches which could deliver similar benefits while
      being more acceptable to workers. Paragraph 3.1.4 of the Supplementary
      Guidance states, "Workers who are subject to monitoring should be aware
      when it is being carried out, and why it is being carried out. Simply
      telling them that, for example, their e-mails may be monitored may not
      be sufficient. They should be left with a clear understanding of when
      information about them is likely to be obtained, why it is being
      obtained, how it will be used and who, if anyone, it will be disclosed
      to. The necessary information can be provided, for example, through
      signage in areas subject to monitoring or through details given in a
      staff handbook. Workers should be kept aware of existing monitoring,
      perhaps by reminding them periodically. Where significant changes to
      monitoring arrangements are introduced, they should be told about
      these." This basically means that unless criminal activities are
      suspected, employees must be fully aware that monitoring is in
      progress, what form that monitoring takes, and how the information
      collected is being used.

      As can be seen, this area is a legal minefield which is best avoided in
      most cases; there have been cases of employers being ordered to halt
      unannounced monitoring of Internet usage (this case in 2001 involved a
      group of federal judges!). Our company has the policy that any requests
      for systems usage, telephone, email, or security logs must be submitted
      to the CEO in writing for consideration.

      It seems that in the States these issues are handled quite differently
      (going on the information here). I would be interested to hear how
      these issues are handled from any readers in the U.S. Do you think
      Europe's data protection laws are more stringent? Is employee
      monitoring more a matter of routine in the States? How do you usually
      handle requests to monitor staff activity?

    • #3266941

      Novell takes another government contract

      by justin fielding ·

      In reply to In my own words…

      ITWire reports that Novell have clinched a deal with the NSW
      government's Department of Commerce; this will place them on the
      government's preferred open source suppliers panel. Novell will also
      provide consulting services to departments considering a move to open
      source platforms.

      This underlines a trend being set by governments and large
      organisations; I previously reported on Novell's success in gaining
      contracts with the Swiss government. Another ITWire report highlights
      the move these organisations are making away from Microsoft: the US
      state of Massachusetts has stated that by 2007 all Executive Department
      documents must be stored in either PDF or Open Document Format (ODF).
      It's believed that using open standards ensures "official public
      records are freely and openly available for their full lifecycle";
      maybe Microsoft's refusal to adopt ODF is an own goal?

    • #3266454

      Eeeek!

      by justin fielding ·

      In reply to In my own words…

      I have just read a thread in the ubuntuforums security area; it seems a
      very worrying security breach has been discovered. It should, however,
      only be a problem in multi-user environments.

      The file /var/log/installer/cdebconf/questions.dat contains the install
      logs; here you will find the administrator username and password
      entered during installation. The file is world-readable plain text,
      therefore anyone with an account on the system can gain root
      privileges.
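      As a quick illustration of why the file mode matters, here's a hedged
      sketch; it uses a throwaway temp file rather than the real
      /var/log/installer/cdebconf/questions.dat, so it's safe to run
      anywhere, and simply shows how a 644 mode exposes a file to every local
      account while 600 restricts it to the owner:

      ```shell
      # Sketch only: a temp file stands in for the installer log
      f=$(mktemp)
      echo "passwd/user-password secret" > "$f"

      chmod 644 "$f"                 # world-readable, like the reported bug
      before=$(stat -c '%a' "$f")

      chmod 600 "$f"                 # owner-only, the safe state
      after=$(stat -c '%a' "$f")

      echo "before=$before after=$after"
      rm -f "$f"
      ```

      Note this only demonstrates the permission change involved; the actual
      fix Ubuntu shipped may do more than tighten the mode.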

      This only seems to affect the "Breezy" release of Ubuntu. A fix has
      been made, so make sure to update ASAP:

      # apt-get update
      # apt-get upgrade 

      That's all it takes.

      • #3266179

        Eeeek!

        by apotheon ·

        In reply to Eeeek!

        Actually, that’s a problem for anyone on the Internet, too. It ensures that anyone that can get non-root access to your system then has all the necessary information to get root access as well. Then again, on Ubuntu with it’s crappy sudo-only security model, getting non-root access allows root access anyway, eliminating some of the benefit of having a multi-user OS.

    • #3267327

      iSCSI anyone?

      by justin fielding ·

      In reply to In my own words…

      iSCSI is a technology which seems to have been cropping up a lot
      recently; while visiting a conference on the topic of data protection
      and compliance, I found iSCSI being pushed as "the next big thing" in
      storage.

      So what is iSCSI? iSCSI is a protocol defined by the
      Internet Engineering Task Force (IETF) which enables SCSI commands to be
      encapsulated in TCP/IP traffic, thus allowing access to remote storage over low
      cost IP networks.

      What advantages would using an iSCSI Storage Area Network
      (SAN) give to your organisation over using Direct Attached Storage (DAS) or a
      Fibre Channel SAN?

      • iSCSI
        is cost effective, allowing use of low cost Ethernet rather than expensive
        Fibre architecture.
      • Traditionally
        expensive SCSI controllers and SCSI disks no longer need to be used in
        each server, reducing overall cost.
      • Many
        iSCSI arrays enable the use of cheaper SATA disks without losing hardware
        RAID functionality.
      • The
        iSCSI storage protocol is endorsed by Microsoft, IBM and Cisco, therefore
        it is an industry standard.
      • Administrative/Maintenance
        costs are reduced.
      • Increased
        utilisation of storage resources.
      • Expansion
        of storage space without downtime.
      • Easy
        server upgrades without the need for data migration.
      • Improved
        data backup/redundancy.

      You'll notice that I mentioned reduced administrative costs; I was very
      interested to find this document prepared by Adaptec on the cost
      advantages of an iSCSI SAN over DAS or Fibre Channel SAN, most notably
      the Total Cost of Ownership analysis, stating that one administrator
      can manage 980GB of DAS storage, whereas the same administrator could
      manage 4800GB of SAN storage. Quite an increase!
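      To put those Adaptec figures in perspective, a quick bit of shell
      arithmetic (numbers from the TCO analysis above; the tenths trick just
      squeezes one decimal place out of integer-only shell math):

      ```shell
      das_gb=980       # storage one admin can manage as DAS, per Adaptec
      san_gb=4800      # storage the same admin can manage as SAN
      ratio_tenths=$((san_gb * 10 / das_gb))
      echo "roughly $((ratio_tenths / 10)).$((ratio_tenths % 10))x more storage per admin"
      ```

      That works out to roughly a 4.8x improvement in storage managed per
      administrator.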

      Isn't there going to be a bandwidth issue with all of this data flying
      around? Well, this is a question I had, but I found the answers in this
      very informative "iSCSI Technology Brief" from Westek UK. Direct
      attached U320 SCSI gives a theoretical data transfer rate of
      320Mbytes/s; on a standard Gigabit network, iSCSI will provide around
      120Mbytes/s; and Fibre Channel provides up to 200Mbytes/s, but at
      considerable cost. 120Mbytes/s is probably fast enough for all but the
      most demanding applications. All connectivity between the iSCSI storage
      and your servers would be on a dedicated Ethernet network, therefore
      not interfering with your standard network traffic (and vice versa). If
      this isn't enough, 10Gbit copper Ethernet is now pushing its way on to
      the market and costs are falling; this would give a possible 1Gbyte/s
      of throughput!
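      For the curious, the ~120Mbytes/s figure is roughly Gigabit line rate
      minus protocol overhead. The 4% overhead below is my own rough
      assumption for TCP/IP plus iSCSI framing, not a number taken from the
      Westek brief:

      ```shell
      line_mbit=1000                            # Gigabit Ethernet line rate, Mbit/s
      raw_mbyte=$((line_mbit / 8))              # 125 Mbytes/s before any headers
      payload_mbyte=$((raw_mbyte * 96 / 100))   # ~4% lost to TCP/IP + iSCSI framing
      echo "raw=${raw_mbyte}MB/s usable=~${payload_mbyte}MB/s"
      ```

      The same arithmetic with a 10Gbit line rate is where the "possible
      1Gbyte/s" figure comes from.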

      Most iSCSI devices I have seen give the ability to take "snapshots"; a
      snapshot will only save changes made to the file system since the
      previous snapshot, meaning you won't need to put aside huge amounts of
      storage while maintaining the possibility of rolling back to a previous
      state after disaster (data corruption/deletion). Snapshots only take a
      few seconds to perform (compared to hours for a traditional image to be
      created) and can be scheduled for regular, automatic creation.

      I have recently been asked to look at consolidating our
      storage, and iSCSI looks like an innovative, well supported, and cost effective
      way of doing this. The Power iSCSI range from Westek UK looks very promising
      with the option of 10GBit connectivity, Hardware RAID6 (offsetting reliability
      concerns due to SATA disks), plus an option of real-time replication and
      fail-over between two units.

      Have you deployed iSCSI-based SAN within your organisation?
      Do you know of any other iSCSI appliance providers offering innovative
      features? Maybe you decided to go with Fibre Channel instead? What kind of data
      transfer rates do you require for your storage? Do you feel modern SATA disks
      provide good enough performance and reliability or are expensive SCSI disks
      still worth the premium?

      It would be great to have your feedback on this topic
      so leave a comment or two!

      • #3077122

        iSCSI anyone?

        by tim ·

        In reply to iSCSI anyone?

        Here at iStor Networks we have designed a new range of hardware iSCSI target RAID controllers using our own IP Storage ASIC. This offers much higher performance than current (PC hardware/software) based appliances, along with 8 x 1GbE (LAG capable) or 1 x 10GbE (optical) iSCSI connections, up to 16 SATA disk channels and all the standard RAID levels: 0, 1, 5 and 10. We will be offering RAID 6 in the near future, along with basic snapshot capability, all in upgradeable firmware.

        I would be more than happy to answer any questions about iSCSI technology, connectivity and implementation. 

    • #3266090

      IBM Germany dropping Microsoft

      by justin fielding ·

      In reply to In my own words…

      Last week I mentioned that the US state of Massachusetts is moving away
      from using proprietary Microsoft Office formats in favor of the Open
      Document Format, and Novell also took another government contract to
      add to their growing list. Today I came across a story on neoseeker
      stating that IBM Germany will be moving their desktop systems over to
      Linux!

      A quote from the LinuxForum
      2006 Day 2
      is very interesting:

      “Andreas Pleschek also told that IBM has cancelled
      their contract with Microsoft as of October this year. That means that IBM will
      not use Windows Vista for their desktops. Beginning from July, IBM employees
      will begin using IBM Workplace on their new, Red Hat-based platform. Not all at
      once – some will keep using their present Windows versions for a while. But
      none will upgrade to Vista.”

      I wonder if this is a sign of things to come? Will the rest of IBM follow suit?

      • #3074904

        IBM Germany dropping Microsoft

        by jee_grover ·

        In reply to IBM Germany dropping Microsoft

        I think any smart IT professional knows that if it were not Microsoft, then it would be another major OS getting all the flak. People hate success and popularity, and the fact is that XP and Server can be configured to be very, very secure products; it's the person in the driving seat that makes the road insecure.

        In regard to this article, I don't think this is a sign of the future. Microsoft is getting its act together, offering a much improved business culture and morale, and fortunately for them I do not think it is too little too late.


      • #3074887

        IBM Germany dropping Microsoft

        by pedrorepublic ·

        In reply to IBM Germany dropping Microsoft

        The issue, I suspect, is the fact that any serious company cannot accept being forced to do things. It is unacceptable for any big company (I used to work for a major telecoms equipment maker) to sell products based on an OS belonging to another company. When problems occur, and they always do, it's not acceptable to hear: "A future release will correct this issue". Customers are becoming demanding, and closed OSs and proprietary solutions cannot provide you with the required flexibility. Microsoft still has a future with home PCs, but the big players like Oracle or IBM are increasingly going to use Open Source operating systems that will be tailored for their applications.

    • #3077203

      policyd updated

      by justin fielding ·

      In reply to In my own words…

      As mentioned previously, I have switched from using the gld greylisting implementation to policyd. I had originally used the version of policyd in the Ubuntu apt repositories, mainly for ease of installation and maintenance. After reading the policyd mailing list it became very clear that it would be of great advantage to upgrade from the modest v1.55 release in the repository to the latest, v1.73. There are quite a few new features in v1.73 compared to v1.55:

      • HELO check: blacklist hosts which use a random HELO
      • Recipient throttling
      • DNS based whitelists
      • Blacklist sender (envelope)
      • DNS based blacklisting
      • SSL and compression on the MySQL connection
      • MySQL5 compatibility

      I cleanly removed v1.55 with "apt-get remove"; v1.73 compiled with no problems. One thing to remember is that while the README.txt tells you to use "gmake", you in fact substitute this command with "make" on an Ubuntu system.

    • #3076610

      Cable Management

      by justin fielding ·

      In reply to In my own words…

      Cable management–it must be the bane of every
      network administrator?s life. You can make every effort to patch things neatly
      and keep cables in a particular order, but, it doesn’t take long before patch
      panels turn in to spaghetti junction. I don’t know about you, but for me, the
      most frustrating instance is one where I have a patch to make and one cable
      length is just too short, the next is way too long–this is really annoying and
      creates an unnecessary mess in cabinets.

      Have you tried tracing a cable from a particular patch port to the
      appropriate port in your switch lately? Find yourself tugging on
      something, hoping that you don't accidentally rip out another 10
      cables, tracing the movement through that mangled mess and then holding
      your breath as you unplug, hoping it's the right one? If that sounds
      familiar, I'm not surprised; take a look at some of these photos I
      found on Google:

      Example A: http://colossus.net/resell.patch.html
      Example B: http://www.competitivecomputers.com/messy%20closet.gif

      And these from the TechRepublic galleries:

      Example C

      Example D

      It’s clear to see that most of us have a major issue on our
      hands. Simply re-patching one cable can be a difficult experience; now imagine having
      to swap out a switch; not fun!

      With this being such an obvious problem, there must be lots of companies out
      there trying to sell us solutions–some with ‘do it yourself’ kits, some with
      full re-patching/install services (cue A-Team
      theme tune). Let?s take a look at what’s out there.

      We won't mention the "no management" option, which is the most obvious
      method and the one that we have either inherited or encountered most
      often. The most widespread form of management is the classic assortment
      of raceways, clips, ducts, cable management bays, and cable ties (even
      Velcro wraps for the more forward thinking individuals). Most server
      rooms I have seen use this collection of accessories to provide a basis
      for structured cable layout; when planned and set up properly, these
      can be very effective and could make any network admin proud when
      showing people around his (or her) den.

      Example E

      Here’s an impressive shot from Matrixforce
      Corporation
      :

      Example F

      Well, these look great; surely then it's just a case of discipline on
      the part of anyone needing to re-patch or put in new patches? In theory
      yes, although it seems that many cabinets and panels start off looking
      like those above but soon degrade to the mess to which we're all
      accustomed. The problem is that while everything looks neat in these
      installations, adding, removing, replacing or re-routing a cable can
      often become quite a headache. Look at Example F above: extensive use
      of cable ties keeps everything tight and tidy, but tracing one of those
      cables, or worse, trying to replace one, could be a pain. Things are
      made easier by using Velcro cable ties as seen in Example E. These
      allow you to simply loosen the bunch, which makes tracing a cable much
      easier (I prefer to use Velcro when possible). One thing I have been
      guilty of in the past is to use the vertical management space to take
      up excess cable. Typically, this is when you need to make a 2m patch
      and can only find 3m cables; I think most people have done this at some
      time.

      So we’ve looked at some of the messes that can happen and some pictures of the
      classic approach to tidying up. In the next installment, I’ll tell you about a
      couple of companies that I found with some even more innovative approaches to
      keeping your cable management neat and orderly.

      • #3076424

        Cable Management

        by amberhaze ·

        In reply to Cable Management

        It reminds me of a school I was having headaches in for months: 85 runs on the panel and the whole network would go into convulsions every couple of hours, but it was flakey; it drove us nuts. I finally had had enough of trying to "trace" the problem; we didn't know if it was the server, the gateways, the routers or what was to blame.

        One day we said enough is enough and pulled the whole panel out. Every last patch. Over the next day, we brought the server back up and then re-connected runs literally one at a time, testing after each one to make sure the network stayed stable. With just 10 runs to go, we finally found our problem: one of the teachers had resurrected an old computer and attached it to the network with a fixed IP. The worst part was, the IP she chose happened to be the same as one of the gateways. But what made diagnostics so hard was that she only turned it on once in a while, and only for a few minutes at a time, since she knew the old computers had been "banned" from the network (they were old 486's).

        Anyway, the point of all this: the messier your patch panels, the harder your diagnostics can become if you have an elusive problem.

    • #3076174

      Fedora Core 5

      by justin fielding ·

      In reply to In my own words…

      Fans of the Fedora distribution will be happy to see that Fedora Core 5
      "Bordeaux" has been released. The announcement on fedoranews.com is a
      little weird (and I'm being polite); I guess it seemed like a good idea
      at the time!

      So what's new in FC5?

      • GNOME
        2.14
      • OpenOffice
        2.0.2
      • KDE
        3.5.1
      • Mono
        support plus applications (F-Spot, Beagle and tomboy)
      • Xen
        Virtualization
      • Apache
        HTTP Server 2.2
      • Enhanced
        SELinux support
      • Kernel
        support for Broadcom 43xx wireless chipsets

      DVD and CD images are available for download via traditional HTTP/FTP
      or via bittorrent (very
      fast). If you want a preview before downloading
      a 3GB image then take a look at this screencast.

      I haven't been too impressed with the recent releases of Fedora,
      personally favouring Ubuntu; still, when I have a little spare time
      I'll install this latest offering and see how it squares up.

    • #3100126

      Flying junk!

      by justin fielding ·

      In reply to In my own words…

      Completely off topic in terms of IT, but slightly related. Having had an interest in radio controlled aircraft since childhood, I recently found myself browsing a few r/c hobby sites, and it was on one of these sites that I caught mention of a CD-ROM powered aircraft! Well, this was pretty intriguing, so I started digging around on Google and found out what was meant. I was surprised to find that people are using two components from broken/old CD-ROM drives to build brushless electric motors for r/c aircraft. The process is extremely simple: the stator and rotor from the old CD-ROM drive are stripped out (it doesn't matter if the motor was damaged or not working, as all of the windings are removed anyway). With the addition of some small magnets, epoxy glue and the appropriate copper wire, this can be re-engineered into a powerful and very lightweight power unit.

      This site gives lots of info. Magnets can be bought here. There are also lots of tips on building these motors found here. So next time you go to throw away that old CD-ROM drive or hard disk, just think: you may be able to recycle parts of it, or at least get a few pounds from someone who wants to buy it on eBay 🙂

      • #3100120

        Flying junk!

        by dawgit ·

        In reply to Flying junk!

        Now that’s interesting.!. It’s still techi, so, in my mind it fits here 🙂  (this is the Tech-Rep, not just IT Rep) I only wish you’d given a link on that info as it sounds like a fun project (as if I don’t have enough to do). Anyway There’s always enough junk, worn-out or CD drives that are just out-dated (as in too slow) laying around, the supply is endless. It’ll keep some out of the garbage problem cycle. Thanks……………………………………………

      • #3265263

        Flying junk!

        by justin fielding ·

        In reply to Flying junk!

        “I only wish you’d given a link on that info”

        I did, I included 3 links (click on the blue text)–tons of info there 🙂

        Have to admit I have already ripped apart one old CD-ROM drive and bought two others on eBay for a few pence!

    • #3265089

      Cable management: new options

      by justin fielding ·

      In reply to In my own words…

      So, last time we looked at the classic approach, but what other options
      are available? To be honest, there isn't a great deal else out there; I
      spent quite some time Googling for innovative new ways of tackling
      patch panels and the related tangle of cables, and nothing out of the
      ordinary appeared; most companies offer a continuation of the
      aforementioned clips, ducts and cable tie method of attack. I did come
      across one interesting company called NeatPatch; these guys use a
      combination of forethought (in the design/layout stage) and discipline
      (including having the correct cable lengths) to achieve a very nice end
      result. Their layouts also allow excess cable length to be stored
      horizontally rather than vertically, which inevitably leads to less
      mess. NeatPatch also claim to be the first patch panel system to
      introduce 'bend radius compliance'; this relates to any bends in your
      network cable, which in turn can introduce interference and therefore
      performance loss on your network (the bend radius is related to the
      wavelength of transmissions).

      All in all the results look pretty good–here are some samples from the
      NeatPatch site:


      NeatPatch provided this 'Cabling Guide' which has some interesting
      information in it and is probably worth a read if you're interested in
      the topic.

      One other solution I found was PatchView from RiT.
      This takes physical cable management to a new level. RiT describe it as an
      ‘Intelligent Physical Layer Management Solution (IPLMS)’. The system is a
      combination of the PatchView management software and smart patch panel units;
      these include an LED display to guide technicians and LED indicators for each
      port. The PatchView software allows all connectivity events/changes to be
      reported to a central network management station, immediately alerting the
      network administrator to any issues arising; the system can even direct a
      technician on what port to connect a new patch and will alert if an error has
      been made!

      These are obviously different approaches to slightly different problems. The
      NeatPatch system addresses the physical problem of cables, mess, and patch
      panel spaghetti; the PatchView system addresses the issue of keeping track of
      which ports should be patched together, changes made, and tracking down
      physical problems. These two systems would probably combine to make a very tidy
      and robust solution.

      For some tips on good practice while cabling in the server room, take a look at
      this weblog (http://www.oreillynet.com/pub/wlg/9263) by Chris Josephes–a sys
      admin for Internet Broadcasting.

      If you have any advice on keeping cabling tidy, tips and tricks or can suggest
      any good cable management products then leave a comment so we can all benefit.

    • #3265683

      Fedora Update

      by justin fielding ·

      In reply to In my own words…

      I mentioned previously that I would post an update on my experience with
      Fedora Core 5 once I had a chance to install and use it. What can I
      say, the install went without a hitch; the installation interface is
      nice and guides a user through the experience pretty painlessly. I had
      no issues with hardware drivers, although this is a pretty standard
      desktop PC without fancy graphics cards etc.

      Once installed and running I have had no problems with the system,
      everything works fine, updates download/install correctly and new
      packages can be installed via the yum utility (similar to apt on Debian
      based systems). All things considered I would say I am much happier
      with Fedora Core 5 than with previous releases 3 and 4 (which drove me
      crazy due to hardware issues).

      That said, it seems a lot of people are unhappy with the state of
      Fedora Core 5, some questioning whether it was really ready to be
      released (http://www.fedoraforum.org/forum/showthread.php?t=101470).
      Maybe hardware compatibility is pretty hit and miss with any Fedora
      release?
      I must say that although I have had no issues with Fedora's latest
      offering (this poll gives a less individual overview), I still prefer
      Ubuntu; so far I have installed it on many different hardware platforms
      and not once has an installation failed. The range of packages in the
      universe repository is also much more diverse than those available
      through yum (I couldn't find VLC for FC5; I installed it in 20 seconds
      using apt-get on Ubuntu).

      Has anyone else given FC5 a try yet? Any comments?

    • #3104773

      Linux Patch Management – How do you keep up?

      by justin fielding ·

      In reply to In my own words…

      There are many areas of system
      administration which pose a much bigger challenge to Linux sys admins
      than to our Windows counterparts. One of the biggest areas of
      difficulty I have personally come across is that of patch management.

      Every day new vulnerabilities are reported in all kinds of software, be
      it for Windows, Linux, BSD, or proprietary systems; all software
      suffers from one bug or another in its lifecycle, which can prove to be
      an Achilles' heel, opening up the opportunity for exploitation. To you
      and me that spells "trouble"; the last thing we want is a breach of our
      networks due to an "old", known, and perfectly preventable security
      hole!

      The first question is how to keep up with the latest news and alerts
      regarding newly discovered vulnerabilities, bugs, and potential issues.
      There are many sources of information on vulnerabilities that we can
      use to keep on top of these things, but no single source is
      definitive, so we need to use them together in order to keep up.
      Examples are the infamous SecurityFocus website (and BugTraq),
      SecurityTracker, CVE and ca (a bit slow compared to the
      aforementioned). RSS feeds are also available from some sources:
      sans.org offer their @RISK feed, which seems to be updated weekly, and
      SecurityFocus provide an RSS feed, as do SecuriTeam. Providers of your
      distribution (Debian, RedHat, Suse, etc.) may offer advisory services.
      RedHat offers this via mailing lists and RSS feeds; Suse/Novell
      e-mails its registered enterprise customers each time a critical patch
      is released; and Debian offers advisories on their website, as does
      OpenBSD.

      You will of course need an RSS client to take advantage of the RSS/live
      feed services. I personally use Mozilla Thunderbird as my e-mail
      client; this has built-in RSS support, which is great as it means I
      don't need to have yet another program running and slowing down my PC.
      If you don't use Thunderbird then you may want to try a desktop ticker
      like RDFTicker.

      Moving away from the issue of vulnerabilities to the wider area of
      patches and non-critical software updates, what are our options? The
      many programs, libraries and packages which go towards making up our
      Linux systems are scattered all over the internet in many different
      projects; these are developed, improved and fixed by various different
      development groups and are usually updated "as and when" rather than on
      a predefined roadmap/schedule. It would be impossible for an
      administrator to track each individual package, take note of every
      update made to each of those packages and then download/compile the
      update on each system. Luckily, pretty much all major distributions
      provide a way of keeping systems up to date with minimal effort (bar
      OpenBSD, which only updates a package when a security flaw appears or
      as part of a new release); next week we'll take a look and see what
      solutions the major players have on offer.

      • #3106726

        Linux Patch Management – How do you keep up?

        by apotheon ·

        In reply to Linux Patch Management – How do you keep up?

        It’s pretty simple, really. As a Debian user, I can sum it up with one acronym: APT.

        My Debian Etch/Testing laptop shows 21,680 packages in the archive cache. With that kind of breadth and depth of software availability and the universality and ease of software management that the Advanced Package Tool provides, software management is a matter of a few seconds a day. It’d be even less if I was using Debian Sarge/Stable on this machine.

        Many other distributions offer similar tools, albeit with considerably fewer packages in their archives.

      • #3285627

        Linux Patch Management – How do you keep up?

        by thrash cardiom ·

        In reply to Linux Patch Management – How do you keep up?

        I run SuSE on a number of computers.  On most of them I use the
        automated online update tool and on a couple I run it manually. 
        It takes very little time or effort to keep them up to date.

      • #3285340

        Linux Patch Management – How do you keep up?

        by joedcook ·

        In reply to Linux Patch Management – How do you keep up?

        I run SUSE Linux 9.3 - 10.0 on several systems and use the YaST-Online-Update tool. I do each manually whenever the tool informs me that updates are available, but I would not be concerned about setting it to automatic. None of the security updates installed this way have ever broken my system.
        I have used this distribution on my primary desktops since November 2003.

      • #3226308

        Linux Patch Management – How do you keep up?

        by brendlerjg ·

        In reply to Linux Patch Management – How do you keep up?

        It's true that most modern *nix systems have nice updating capabilities, so that's irrelevant for the most part. What IS worth better understanding is the relative speed and accuracy with which each system gets patches into the system and out to users. Reactive patching leaves a window of exploitable opportunity between the time hackers figure out the vulnerability and the time end-users apply the patches. So the speed and accuracy of implementing patches would seem to be an important security metric.

        For example, Edmund DeJesus of searchsecurity.com sampled how fast various Linux distros responded to 30 recent security vulnerabilities of various severity. While not entirely scientific, his analysis shows some interesting results. (See his article entitled "Linux patch problems: Your distro may vary" at http://searchsecurity.techtarget.com/.)

        According to his analysis (summary table below), Ubuntu and Fedora Core are the best at this, while SUSE and Slackware suck. I also read a post somewhere else where somebody applied this to OpenBSD, and they are well down the totem pole too. (Although they admittedly don't focus their limited resources on patching 3rd-party ports, this demonstrates why OpenBSD excels in applications such as firewalls and not as a basis for a desktop.) If this is accurate (and I have not verified the analysis), one might feel reluctant to use SUSE as a desktop as well!

        Name                       Free?                           Owner                                        Score
        Ubuntu                     Yes                             Ubuntu Project (sponsored by Canonical)      76
        Fedora Core                Yes                             Fedora Project (sponsored by Red Hat)        70
        Red Hat Enterprise Linux   No                              Red Hat                                      63
        Debian GNU/Linux           Yes                             Debian                                       61
        Mandriva Linux (Mandrake)  Yes (plus commercial versions)  Mandriva                                     54
        Gentoo Linux               Yes                             Gentoo Foundation                            39
        Trustix Secure Linux       Yes                             Trustix Project (sponsored by Comodo Group)  32
        SUSE Linux Enterprise      No                              Novell                                       32
        Slackware Linux            Yes                             Slackware Linux                              30

      • #3204426

        Linux Patch Management – How do you keep up?

        by justin fielding ·

        In reply to Linux Patch Management – How do you keep up?

        brendlerjg, I completely agree with what you have posted--I would rate Ubuntu (and therefore Debian) very highly, and SUSE has to be right at the bottom.  I noticed some comments above rate SUSE quite highly; however, these sound like desktop users.  Once you start dealing with farms of servers, you don't want to be required to interact after kicking off an update--SUSE frequently stops at random points of the update demanding that certain services be stopped, restarted, etc.  Ubuntu deals with all of this--the update mechanism is fire and forget.

    • #3263555

      SUSE Linux Enterprise Server 10

      by justin fielding ·

      In reply to In my own words…

      SUSE Linux Enterprise Server 10 is due for release in the summer of
      2006; so what's new?

      AppArmor -- application-level security service

      Xen 3.0 -- includes a fully integrated and supported version of Xen 3.0

      YaST2 -- YaST 2 updated to give a consistent experience across SUSE

      There isn't a great deal of new stuff mentioned, not even in the
      sixteen-page downloadable PDF offered on their main page; most of the
      content looks like a new marketing spin on old packages.

      AppArmor interested me personally, so I did a little digging around on
      the net--what is it, and what happened to SELinux? Well, the first
      question was not that hard to find an answer to; a quick search on
      Google came up with this source. AppArmor basically allows you to trap
      an application so that it can only do what you allow it to do in the
      policy definition, nothing more. I found it interesting to see
      "AppArmor is integrated with SUSE Linux Enterprise Server 9 SP3 and
      openSUSE"; so it isn't really a selling point of SUSE Linux Enterprise
      Server 10! On the question of AppArmor vs. SELinux, I found an
      interesting journal entry here (Thursday, February 9th, 2006):
      "Novell, who last year claimed to be the first Linux distribution to
      ship with SELinux technology, suddenly announced that they are
      dropping support for it. To replace it, they bought a product called
      AppArmor and are now asking third party developers to use it instead
      of SELinux... Not only is AppArmor divergent from upstream/community,
      but it is also not suitable as a real alternative to SELinux, because
      it lacks the flexibility and scalability of SELinux to address the
      full range of security concerns, and its limitations are not just in
      implementation but architectural." Good stuff--the full entry is
      worth a read if you're interested in the subject.

    • #3106278

      BT Yahoo up to 8Mbit

      by justin fielding ·

      In reply to In my own words…

      BT Yahoo, one of the UK's largest home broadband providers, is
      starting to upgrade existing customers to 8Mbps free of charge.  This
      comes only shortly after upgrading customers from 512Kbps to 2Mbps,
      again free of charge--a sign that BT Yahoo is having to offer more
      'value for money' in the very competitive field of home broadband;
      other DSL providers have been offering 8Mbps services for quite some
      time now, at a lower cost than BT's puny 2Mbps package.  Of course,
      customers who are upgraded to 8Mbps will be tied in to a new 12-month
      minimum contract--there's no such thing as a free lunch!

      BT Yahoo still imposes limits on customer bandwidth usage:
      20GB/month for 2Mbit users, increasing to 40GB/month for 8Mbit
      customers.  This is a policy which other high-bandwidth providers are
      trying to avoid--I personally dislike the limits; I could quite
      easily hit the limit in a few days!

      • #3105991

        BT Yahoo up to 8Mbit

        by dawgit ·

        In reply to BT Yahoo up to 8Mbit

        Is that the same ‘Yahoo’ as in free ‘Yahoo’ e-mail & IM sevice?   Or, is that a IP on your little island. I haven’t seen that yet in Germany, but here they’re not an IP. 

      • #3105845

        BT Yahoo up to 8Mbit

        by justin fielding ·

        In reply to BT Yahoo up to 8Mbit

        Yes, the very same Yahoo.  The UK's Yahoo Messenger has a BT-partnered voice calling function, and BT Broadband users are given some premium Yahoo services for free; 'LaunchCast Plus', for example.

    • #3285813

      WiMax Update

      by justin fielding ·

      In reply to In my own words…

      Back in October I mentioned progress being made in the area of WiMax
      wireless broadband.  One major problem was the limited amount of
      licensed radio bandwidth available--there seems to have been some
      movement on this front.

      The Register reports that Intel and Pipex have entered a joint
      venture to launch a wireless broadband service in the UK's larger
      metropolitan areas, with London and Manchester to be the first.
      Intel's venture capital arm, Intel Capital, has invested $25m--Pipex
      has signed over its 3.6GHz spectrum license to the new company (there
      goes our radio bandwidth problem).

      I'm not quite sure what's so groundbreaking about this venture--as I
      reported back in October, SOBroadband offer WiMAX-class broadband in
      the UK at up to 10Mbps. There has also been criticism of the decision
      to provide services to urban areas--the main point of WiMax services
      was that they can offer broadband in areas unable to receive wired
      DSL or cable services.

      • #3105533

        WiMax Update

        by dsb ·

        In reply to WiMax Update

        WiMax / Pipex

        I agree, there is nothing groundbreaking about this; we have been providing wireless connectivity for years, and WiMAX-class technology since the licence allowed it a couple of years ago. Some of our subscribers take 34Mbit circuits over it. We don't, however, concentrate on metropolitan areas; we try to cover all the areas where we operate--often the demand in rural/outskirt areas is higher.

        Darren Brown

        OrbitalNet Ltd 

    • #3285400

      Linux Patch Management: How Debian does it

      by justin fielding ·

      In reply to In my own words…

      Last week, we looked at the importance of patch management and keeping
      up to date with the most recent happenings in software bugs/fixes. I
      suggested a variety of sources from which the most recent alerts can
      be found and a variety of ways in which to receive this information
      (Web sites like SecurityFocus, mailing lists, and RSS feeds).

      It's impractical and unrealistic to expect administrators to patch
      source code, recompile, and reinstall each time a patch is released;
      this would take most of an administrator's time and prove a
      never-ending battle (the number of Linux admins sectioned to prevent
      self-harm would go through the roof!). Considering this, what options
      are provided for solving this problem? Let's take a look at the
      Debian-based distributions (from now on, by Debian, I refer to any
      Debian-based distribution) and see how they handle this.

      The name of Debian's package/update manager says it all: apt. The
      dictionary definition is close: "1. Exactly suitable; appropriate: an
      apt reply. 2. Quick to learn or understand: an apt student." However,
      Wikipedia gets it spot on: APT stands for 'Advanced Packaging Tool'.
      The main tools used are apt-get and apt-cache; the former allows
      installation, cache updates, upgrades, and removal of packages. All
      dependencies are calculated, and the user is prompted to approve any
      additional packages required to solve these dependencies. The latter
      tool, apt-cache, can be used to search the cache (generated with
      'apt-get update') of available packages and show information about
      specified packages. Let's see them in action:

      http://techrepublic.com.com/i/tr/NL_textfiles/Extract1_0412.txt
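
      If you can't open the extract, a typical session boils down to
      something like this--a minimal sketch run as root, with an
      illustrative package name rather than the one used in the extract:

      ```shell
      # Refresh the local package cache from the sources in sources.list
      apt-get update

      # Search the cache and show details for a package (name illustrative)
      apt-cache search irssi
      apt-cache show irssi

      # Install it; apt calculates dependencies and prompts for approval
      apt-get install irssi

      # Remove it again (add --purge to drop configuration files too)
      apt-get remove irssi
      ```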

      Great! Installing/removing packages just got a whole lot
      easier--dependency hell is no more. How about updating a package
      which has been upgraded, or installing a security patch? APT has a
      file called sources.list. Funnily enough, this file contains a list
      of sources--one of which is http://security.ubuntu.com
      breezy-security. Here, any critical updates and/or patches released
      by the security team are uploaded; when you run apt-get update,
      details of these patches are downloaded, then apt-get upgrade will
      show you which patches are due to be installed on your system and ask
      for approval to go ahead and apply them, as you see here:

      http://techrepublic.com.com/i/tr/NL_textfiles/Extract2_0412.txt
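
      For reference, the security source and the update/upgrade pair look
      something like the following--a sketch only, as the exact mirror
      path and release name (breezy here) will vary with your installation:

      ```
      # A line from /etc/apt/sources.list
      # (format: deb <archive-url> <distribution> <components>)
      deb http://security.ubuntu.com/ubuntu breezy-security main restricted

      # Then, as root:
      apt-get update     # fetch details of newly released patches
      apt-get upgrade    # list pending patches and ask for approval
      ```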

      It's that easy--I really like the APT set of update tools; they
      couldn't be much simpler or more effective.

      One thing we need to keep in mind is that Debian isn't a
      commercially-supported distribution; as such, there is no SLA
      covering the frequency of updates or the speed of reaction to
      potential vulnerabilities once they become public knowledge. The
      Debian security team claims that most problems are corrected within
      48 hours of being brought to their attention. Security advisories are
      posted to the debian-security-announce mailing list--patches are
      added to the 'security' APT source once available. Patches will
      continue to be released for one year after a distribution's stable
      successor has been released. Packages are also signed to allow their
      authenticity to be scrutinised.

      Next week, I will take you through a commercial
      distribution of Linux and see how they deal with patching and updating
      software.

    • #3103710

      Fedora Foundation scrapped

      by justin fielding ·

      In reply to In my own words…

      It was announced by Red Hat in 2005 that 'The Fedora Foundation'
      would be assembled as a non-profit organisation with several
      objectives, mainly serving the Fedora community in order to:

      • Provide a non-profit entity to organize and manage volunteers.

      • Ensure that the work of these volunteers remains free.

      • Provide a fund-raising arm for the development and protection of
      Fedora and related open source projects.

      • Provide an entity for copyright assignment, so that what is free is
      also defensible in a court of law.

      • Fund patent filings for inventors in the open source community, so
      that dedicated individuals can help to build a protective patent
      shield around open source code.

      Over time it has become obvious that people are expecting more from
      this foundation than it was ever planned to deliver--the scope of
      the foundation has been expanding so rapidly that it has hindered any
      decent progress.

      So what now?

      The foundation is to be replaced with the 'Fedora Project Board';
      this board consists of five Red Hat members and four community
      members. In addition to these nine members, a Red Hat-appointed
      chairman has veto over any decisions. This new board will make all
      operational decisions for the Fedora project, including the budget
      and the strategic direction of the project.

      For the full story take a look at this post to the fedora-announce list: http://www.redhat.com/archives/fedora-announce-list/2006-April/msg00016.html

    • #3105028

      Microsoft open Port 25

      by justin fielding ·

      In reply to In my own words…

      At last week's LinuxWorld conference, Microsoft announced the launch
      of Port 25, its new website for communicating with the open source
      community on the topic of Microsoft's interoperability efforts.
      Microsoft's 'Open Source Software Lab' houses over 300 servers which
      run more than 15 types of Unix and 50 different distributions of
      Linux. In charge of the lab is Bill Hilf, who previously worked with
      IBM and was key to driving their Linux technical strategy for
      emerging markets.

      There has been a lot of suspicion of Microsoft's motives for
      launching this project, and for housing an Open Source Lab at all.
      The claimed reason is to better understand and aid integration of
      Microsoft and open source technologies. Reading the blog comments and
      posts from site visitors, the reaction is mainly negative. Good
      points are raised:

      How can this be taken seriously when they think NetBSD is a Linux distribution?

      Why should the Open Source community help Microsoft when Microsoft doesn’t help
      them?

      If Microsoft wanted to aid interoperability why not make IE follow w3c
      standards? Make protocol information available to projects such as Samba
      etc?

      If interoperability is a serious goal, how about giving Microsoft Office users
      the chance to save in the Open Document Format?

      The site is a little sparse on content so we’ll just have to wait and see what
      the outcome is.

      Take a look for yourself: http://port25.technet.com/

    • #3103993

      Patch Management – How SUSE do it

      by justin fielding ·

      In reply to In my own words…

      Previously I took you through
      the importance of keeping systems ‘patched’ and up to date; we then
      looked at the apt update mechanism used by Debian–how it resolves
      dependencies of packages and allows for patches to be quickly and easily
      applied. This week, I want to take a look at one of the major commercial
      Linux distributions, SUSE Linux Enterprise Server, and see how they
      deal with the same issues.

      As most of you will know, Novell are behind SUSE Linux Enterprise
      Server (SLES). Many enterprises choose to go with this commercial
      version due to the peace of mind offered by full support, backed by
      Novell. Support means far more than simply being able to ring a call
      centre and be driven mad while they pass you from person to person;
      Novell support provides not only installation support and hardware
      certification/testing, but also protection from possible intellectual
      property issues which could arise in relation to Linux (Microsoft
      pushes this risk as a big negative factor when dismissing Linux). The
      last, and for us most important, part of the support package is the
      seven-year lifecycle of the product; from the launch of a SUSE Linux
      Enterprise release (SLES9, for example), operating system patches and
      security updates will be available via the SUSE Linux Portal for the
      length of your support subscription--for seven years. This means you
      don't need to worry about upgrading to the latest release just to
      maintain a secure system free from vulnerabilities.

      Although I have recently started to favour Debian-based distributions
      such as Ubuntu, I still use SUSE Linux Enterprise Server for core service-bearing
      machines.

      YaST (Yet another Setup Tool) Control Centre is the main hub for all
      administration work with SLES--from here, you can add/remove
      packages, perform 'Online Updates', change hardware configuration
      such as graphics mode, change system preferences (partition editing,
      network settings, hostname, firewall setup, time/date, etc.), control
      system services, and even manage users. Of course, we're interested
      in the online update functionality, YaST Online Update (YOU for
      short), which you can see in Figure A
      [http://i.i.com.com/cnwk.1d/i/tr/NL_images/Fielding0419_A.jpg] (click to view).

      YaST Online Update can be configured to pick updates from the SUSE
      Portal, a different HTTP or FTP source, CD, DVD, or local Windows/NFS
      shares. Most people will use the SUSE Portal directly; however, if
      you need to update multiple servers, there would be considerable
      advantage in mirroring the SUSE repository and performing the updates
      via a local share, saving considerable bandwidth.
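
      Incidentally, YOU can also be run from a terminal, which is handy
      when updating servers over SSH. A minimal sketch--the module name
      below is as found on my systems; list the modules your release
      actually provides to confirm it:

      ```
      # Text-mode (ncurses) YaST Online Update, run as root over SSH
      yast online_update

      # List the available YaST modules to check the name on your release
      yast -l
      ```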

      Not all patches need to be applied. Kernel updates, for example, will
      display a warning before installation and give you the chance to
      skip. This is pretty useful if the reason for the patch does not
      affect you, or you don't want to update due to module dependencies
      (for example, the HP CCISS module). There is one issue that I've come
      across: at some point, other patches/updates may not apply if the
      kernel is not up to date. If a specific patch requires a service to
      be restarted or stopped, or it requires a configuration modification,
      then a prompt will be displayed with any relevant information and
      instructions, such as in Figure B
      [http://i.i.com.com/cnwk.1d/i/tr/NL_images/Fielding0419_B.jpg] (click to view).

      All things considered, YaST Online Update is relatively trouble-free,
      solving all dependencies just as Debian's apt or Red Hat's yum do. As
      I said, the only issue I have had with YOU is in applying new updates
      without updating to the latest kernel patch.

      I approached Novell and asked how long they aim to take to patch a
      vulnerability once it's in the public domain. It was stressed that
      Novell work hand-in-hand with other members of the open source
      community to fix security holes as quickly as possible--the time
      scale varies from a few hours to a few days, depending on the
      severity and complexity. Upgrades to packages (e.g., version updates)
      are not usually provided unless there is a security fix in the newer
      version; however, service packs will sometimes contain version
      updates to add new functionality or compatibility. When I raised the
      issue of kernel updates/module dependencies (such as CCISS for HP
      servers), it was mentioned that the upcoming SUSE Linux Enterprise
      Server 10 will include a new way of dealing with kernel updates: a
      dedicated kernel update tool that will check loaded kernel modules
      and assess their compatibility with the new kernel--this sounds like
      an interesting development which I'm quite eager to see in action.

      As a general package management
      tool, YOU is not as good as apt due to the lack of available
      program updates; however, as a tool for delivering security patches,
      it is every bit as good–and of course, fully commercially supported.

      Have you been using SLES and
      YOU? How do you compare it to apt or yum? Do you prefer
      to run updates from the SUSE Portal or a local repository?

    • #3287415

      Cross-Platform virus in the wild

      by justin fielding ·

      In reply to In my own words…

      A ‘proof of concept’ (e.g. harmless) cross-platform virus has been
      reported by security firm Kaspersky Lab. The virus, named
      Virus.Linux.Bi.a/ Virus.Win32.Bi.a, does not seem to have any malicious
      function; it does however cause concern as it could mark the beginning
      of a new age where Linux users are no longer safe from the infections
      spread via Windows.

      Files infected with the virus will contain the following text:

      [CAPZLOQ TEKNIQ 1.0] (c) 2006 JPanic:

      This is Sepultura signing off…
      This is The Soul Manager saying goodbye…
      Greetz to: Immortal Riot, #RuxCon!

      If you don’t already have your systems protected by anti-virus software,
      it’s probably about time to start, the following solutions could be
      considered:

      BitDefender for Samba is an interesting product if you are running a
      samba fileserver

      BitDefender for Linux is an on-demand solution for Linux, this is freeware

      Clam AntiVirus is a well respected scanning engine, it can be used to
      perform on-demand scans, on-the-fly mail scanning and even on-access
      scanning (although the last time I checked this was unstable)
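
      If you opt for Clam AntiVirus, an on-demand scan is a one-liner--a
      minimal sketch, with an illustrative path, assuming freshclam is
      installed alongside clamscan:

      ```shell
      # Update the virus signature database first
      freshclam

      # Recursively scan home directories, listing only infected files
      clamscan -r -i /home
      ```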

    • #3271238

      ‘Linux Snobs’ are the barrier?

      by justin fielding ·

      In reply to In my own words…

      I found an interesting piece on reallylinux
      about the very real
      barrier being created by ‘Linux Snobs’. It makes an interesting read
      and raises some very good points.

      I remember when I first started 'playing' around with Linux. It took
      me at least a week to download the ISOs for Mandrake (56k)--I was
      utterly confused by the abundance of different distributions, all
      claiming to be 'the best'; hardware support was a major headache. It
      was such an ordeal (for a zero-experience user) to get online via a
      modem that I ended up connecting via a hub and my Windows 98 PC
      (using NAT). OK, so once I was online I needed to learn about using
      this new OS. Great, there was an abundance of documentation--however,
      I didn't really want to spend 4 hours reading a very comprehensive
      how-to when my only goal was to get my USB mouse working! Not a
      problem; I went to one of the helpful IRC rooms and asked if there
      was an easier way to get things running smoothly. Hmm--2 seconds
      after asking a question I was kicked and banned from the room with
      some useless comment like 'Read the manual n00b' or '$tick with
      Windoez'. There were some IRC rooms with more helpful members, rooms
      like #linuxnewbies and #linuxquestions, but these did tend to get
      quite crowded, and unless the question was relatively straightforward
      one was often overlooked.

      After this initial experience, lots of wasted time, and quite a bit
      of frustration, I gave up on Linux for a few years. Later I came back
      to Linux, mainly out of necessity--managing our web/db server, which
      was running Fedora Core 1; I had no choice but to dig my teeth in and
      learn the slow/hard way. I also installed Fedora on my laptop and
      worked on it as often as possible (even managing to have Dreamweaver
      running with WINE at one point), increasing my exposure to the OS. I
      found forums to be a very good source of information and direct help;
      the responses were also more detailed and quite mild-mannered
      compared to those found on IRC.

      I have found that as I become more experienced, working with Linux
      becomes easier--not because I know the ins and outs of a particular
      program, but rather because I have a better understanding of the
      underlying system and how/why things work the way they do. Working as
      a full-time Linux sysadmin, I now also find the advice and knowledge
      shared by more experienced colleagues to be invaluable. I find the
      mailing lists for a specific program or distribution to be very
      helpful, and finally there are still many forums, such as
      http://www.linuxquestions.org, which will normally return a quick
      answer to any question.

      What was your introduction to Linux like? How did you pick up your
      initial survival skills and how did you deal with the sometimes less
      than polite characters in the realm of Linux?

      • #3271225

        ‘Linux Snobs’ are the barrier?

        by tony hopkinson ·

        In reply to ‘Linux Snobs’ are the barrier?

        Haven’t bumped into many snobs. Community wise, we got in so late, the basics are sort of taken as a given though.

        So you see download the following tgz file, rebuild your kernel …

        Then you have to find out what tgz file is, how to unpack it , how to rebuild the kernel … Go through all that and you find it’s not the version of the kernel your distro uses.

        Perhaps we should have started at the bottom of the mountain with the other adventurous types instead of being parachuted onto the summit and then trying to understand the steps involved in climbing up there. Particularly when the pilot makes a bit of a mistake and you end up half way down on your ass.

        Keep buggering in !

        My version of Mandrake came on a CD; that solved a lot of problems. Along with Google and a reread of a Linux guide for complete idiots.

        When it’s so new to you, you even learn something from going down blind alleys. Though it may take a while to figure out what.

        The biggest help, remember at all times Linux is not windows.

      • #3148592

        ‘Linux Snobs’ are the barrier?

        by justin james ·

        In reply to ‘Linux Snobs’ are the barrier?

        This is reason #7,219 that I prefer working with BSD over Linux. The community is simply much more professional. They recognize that every user that gets welcomed and helped with getting their system set up is another person who may be giving back further down the road. I have never once received a “n00b” or “RTFM” from BSD users. More importantly, the documentation is so much better, I so rarely need to actually put a question to the community. I suggest that if you are having problems with a Linux system, and the documentation stinks, go to a BSD site and look there, anything in userland should be identical or nearly identical.

        J.Ja

      • #3149548

        ‘Linux Snobs’ are the barrier?

        by charliespencer ·

        In reply to ‘Linux Snobs’ are the barrier?

        My initial brush with Linux was almost three years ago.  After several weeks, I abandoned it for the same reasons you initially did – no motivating reason to pursue it.  I haven’t had a reason to go back since it’s not required for those technologies we have or will implement.  The only skills I’ve acquired are installing SUSE and RH9, and I couldn’t get Windows back on either of the hard drives I used.  One day I may stick my toe back in the water, or maybe even put my head under, but it probably won’t be this year.

        I too was (and still am) confused by the variety of distributions.  If the only flavor you’ve ever even heard of is vanilla, Baskin-Robbins can be a bit overwhelming.  Unlike ice cream, I dislike the suggestion “Keep trying, you’ll find a distro you like.”  Continuously reloading different operating systems is not my idea of a good time.

        I’ve found rude people in Windows forums too.  I respond to rude Linux respondents the same way I respond to rude Windows people.  I ignore their answers and patiently wait for someone else to post the answer.  The second or third poster often takes the rude one to task for not responding in a useful fashion.  (“RTFB” is not a useful answer if the distro didn’t come with a FB.  MAN pages are written assuming a familiarity with *nix OSs that newbies rarely have.)  Most of the Linux experts hanging out at TR are quite helpful,
        although you can expect to have your distro selection second-guessed on
        a regular basis.  While I was a TR member at the time of my Linux
        experiments, I wasn’t active in the Q&A or forums.  I would have
        benefited from the caliber of advice available here.

        Nothing says “I’m a Linux Snob” better than intentionally misspelling “Microsoft” or “Windows”.  Hey Penguinistas, that was funny in 1996 or so, but it got old in a hurry.  The second sign of a “Linux Snob” is to reply “Switch to Linux!” in response to every Windows question.  There are several links around TR to “Linux is not Windows”.  While it wonderfully explains what Windows users should expect, “Linux Snobs” should read it to learn why “Switch to Linux!” is not the answer to a Windows problem.

        I do often wonder how newbies get advice without a computer with working Internet access.  When I was actively trolling the newbie forums I would read questions about how to get a particular distro / hardware combination on line.  I can only assume they were using a Windows computer to ask their question.

        While I found my local Linux Users Group to be of great assistance via e-mail, I found their meetings to be much less than useful.  Based in the CSCI department of the local university, the two meetings I attended revolved around topics highly unsuited to a newbie (a program to debug programs, an upcoming developers’ conference in Europe).  I don’t recall any of the 15 or so in attendance who was not with the university or a software developer.  No one from the corporate world, no hobbyists, no SOHOs, and no other newbies.

        Would you mind posting a link to the reallylinux article you referenced above?

      • #3149544

        ‘Linux Snobs’ are the barrier?

        by charliespencer ·

        In reply to ‘Linux Snobs’ are the barrier?

        Oh, one more thing.  Many of us newbies are interested in learning how to use Linux and the applications that run under it.  We aren’t interested in the relative merits of open vs. closed source software, or the business models of companies that sell proprietary software for profit vs. those who make money supporting or adding value to OSS.  If I ask on a cooking web site how to prepare an artichoke, I don’t want the details of whether it was grown with chemicals or organically, or whether the artichoke pickers were union members or illegal immigrants.

      • #3149537

        ‘Linux Snobs’ are the barrier?

        by charliespencer ·

        In reply to ‘Linux Snobs’ are the barrier?

        Oh, one more thing.  Many of us newbies are interested in learning how to use Linux and the applications that run under it.  We aren’t interested in the relative merits of open vs. closed source software, or the business models of companies that sell proprietary software for profit vs. those who make money supporting or adding value to OSS.  If I ask on a cooking web site how to prepare an artichoke, I don’t want the details of whether it was grown with chemicals or organically, or whether the artichoke pickers were union members or illegal immigrants.

    • #3149177

      Saving bandwidth with a local mirror

      by justin fielding ·

      In reply to In my own words…

      Previously, I reviewed both the Debian apt and SUSE YOU update mechanisms. Both systems enable administrators to keep servers up to date with the latest security patches and updates downloaded from internet repositories.

      After touring the features of the apt and YaST update mechanisms, it's clear that if you want to update multiple computers within your LAN, there is a definite advantage in keeping a local mirror of the update repositories; a single update package can be 30 MB+ on its own. If you need to deploy this to 10-20 servers in a farm, it doesn't make sense to download 600 MB from the internet when 30 MB will suffice. The update programs can have their repository location customised; therefore, it makes sense to download the available packages once (to a local mirror) and point your apt or YaST there.
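
      For apt clients, repointing at the local mirror is just a sources.list change. A sketch, assuming a hypothetical internal hostname (substitute your own mirror server and release):

```
# /etc/apt/sources.list on each client -- hostname and release are examples
deb http://mirror.internal/debian stable main contrib
```

      After an apt-get update, subsequent upgrades pull from the LAN rather than the internet.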

      There are a few things which should be considered while planning your local mirror: first of all, you need to make sure that there is enough disk space available. A full Debian mirror can sport over 100 GB of data. Normally, you can get a major saving in disk space by excluding the architectures that are not used on your network: alpha, arm, hppa, m68k, mips, mipsel, powerpc, s390 and sparc are not required by the majority; the i386 and ia64 architectures should fulfill most needs. Excluding the unused types will return a massive reduction in wasted space. A second consideration should be the amount of bandwidth consumed by the initial synchronisation with an external mirror. Even with high-speed business lines, the huge amount of data being transferred will likely slow down internet connectivity for quite some time--it's probably best to run this initial sync overnight, or better still over a weekend. Future synchronisation runs will not need to run outside of office hours as the amount of data downloaded will be trivial.

      So, we have reviewed which architectures are used on our network, allocated the required resources (e.g. disk space) and decided on a good time to create our initial mirror--how do we move forward? I'm looking to create a local Debian repository, so I have two options: using the repository mirroring script provided here by Debian, or using a script/package called 'debmirror'. I am personally more interested in debmirror as it can also be used to mirror Ubuntu repositories without modification.

      Installation is of course a breeze:

      # apt-get install debmirror

      The default configuration is found in /usr/share/doc/debmirror/debmirror.conf; any directives found in /etc/debmirror.conf will override the default configuration. I would recommend putting a full copy of the file in /etc and modifying this one:

      # cp /usr/share/doc/debmirror/debmirror.conf /etc

      # vi /etc/debmirror.conf

      The configuration file has many customisable options, which are described in the man pages; the most important are:

      @dists="" – This specifies which distribution versions should be mirrored.

      @sections="" – This controls which repository sections will be mirrored; you may, for example, only want to take updates from the main section but not the universe.

      @arches="" – This dictates which architectures should be mirrored (as previously discussed).

      $post_cleanup=0/1 – Delete files no longer on the remote site, but only after a successful sync (i.e., no errors).

      $check_md5=0/1 – Check the MD5 sums of files after download to verify their integrity.

      $dry_run=0/1 – Perform a dry run; good for testing your settings.
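
      Put together, a minimal /etc/debmirror.conf might look like the following; the upstream host and values are illustrative only, so check the man page for the full list of defaults:

```perl
# Example /etc/debmirror.conf overrides -- all values are illustrative
$host="ftp.uk.debian.org";     # upstream mirror to sync from
$method="http";
@dists="stable";               # distribution versions to mirror
@sections="main,contrib";      # repository sections
@arches="i386,ia64";           # architectures, as discussed above
$post_cleanup=1;               # remove deleted files only after a clean sync
$check_md5=1;                  # verify downloads against their MD5 sums
$dry_run=0;                    # set to 1 to test your settings first
1;                             # required -- the file is loaded as Perl
```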

      Once the configuration has been completed, setting off a synchronisation run couldn't be easier:

      # debmirror /path/to/local/mirror

      Those wanting to use debmirror to create an Ubuntu mirror should take a look at this support post for some guidance. The main variables to take note of are:

      host=au.archive.ubuntu.com (this one for Australia)

      method=http

      root=ubuntu/

      dist=breezy (change to your dist)

      section=main,restricted,universe

      arch=x86

      I hope this has shown how simple the process of configuring a local mirror can be. The debmirror script can be called from crontab at your preferred interval; I would suggest that once per day is more than adequate. If you have any preferred method for mirroring repositories, why not leave a comment and share it with us?
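
      A once-per-day schedule is a one-line /etc/crontab entry; the time of day is arbitrary, and the mirror path matches the example above:

```
# Nightly repository sync at 02:30
30 2 * * * root debmirror /path/to/local/mirror
```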

      • #3149036

        Saving bandwidth with a local mirror

        by dawgit ·

        In reply to Saving bandwidth with a local mirror

        Interesting. Some very good info here. Thanks for that. I'll have to give it a whack as soon as I get some time. Again, Thanks. -d

    • #3151115

      NY county gets serious on WiFi

      by justin fielding ·

      In reply to In my own words…

      The New York county of Westchester has passed a new law in an attempt to
      curb the current growth in identity theft.  Businesses which may store
      personal information in an unencrypted manner are now required to install
      a firewall or change the default SSID of wireless access
      points--penalties for non-compliance range from an initial warning to a
      $500 fine.

      While it is accepted that simple measures like these will not stop
      identity theft, this is seen as a move in the right direction.
      According to the county's chief information officer, Norman Jacknis,
      officials picked up over 100 unsecured access points during a 20-minute
      drive while the law was being considered.

      Public hotspots such as cafes will be required to post a warning sign
      stating “For your own protection and privacy, you are advised to install
      a firewall or other computer security measure when accessing the Internet.”

      This new legislation has drawn attention from other areas of the US as
      well as the UK and Europe.

    • #3150712

      Pro-active security, waste of time?

      by justin fielding ·

      In reply to In my own words…

      The Everdream corporation (http://www.everdream.com) has released a
      service which will encrypt or delete files on a laptop or PC if stolen. 
      The software runs as an agent on the computer–this connects to the
      control centre of Everdream which extracts network location details from
      the computer and if the computer is stolen will either encrypt or delete
      files specified by the customer.  A policy is created by the customer to
      decide whether encryption or deletion would take place, and which files
      are targeted.

      The service costs around $6 per PC.

      The usefulness of such a system is debatable: it assumes that the thief
      is careless enough to connect to the internet.  Most stolen computers will
      be reformatted as soon as they are switched on; Windows logon passwords
      will prompt this.  If the computer was stolen for the information
      contained on the hard disk, it's unlikely that it would be connected to
      the internet before all of the desired information has been pulled off.

    • #3163424

      How secure is your data?

      by justin fielding ·

      In reply to In my own words…

      USB flash drives, pen drives, portable hard disks--these tiny,
      high-capacity storage devices have a thousand and one uses. They have rendered
      most other types of portable storage--floppy disks, ZIP drives, and even
      rewritable CDs--pretty much obsolete. Their high capacity and high reliability
      (with no moving/mechanical parts to fail), combined with extreme ease of use,
      makes them the ultimate in portable storage. You rarely need to install drivers
      for them, making them true plug and play devices. At the time of this writing,
      a 2GB USB2 flash drive will set you back £37--that's pretty affordable!

      There is no question that USB portable storage has changed
      the way people view portable media and the way in which people work.
      Previously, 3.5″ floppy disks were used to cart work back and forth; any
      IT support staff can tell you that these were less than reliable, and trying to
      explain to users why their disks had become corrupted was no easy task! Any
      files over the 1.4-MB limit of a floppy disk would require either a ZIP drive,
      or later, a writable CD to transfer. The ZIP drive was awkward to carry around,
      requiring drivers to be installed on most PCs, which was often not allowed by the
      Windows policies in place. CD-R/RW was a great improvement, but the media was
      delicate and still caused some compatibility issues, with older CD-ROM drives not
      being able to read the new disks. USB flash drives saved us all; however, they
      do now present administrators with a new set of problems, and re-present older
      issues. 

      An interesting report in the
      Los Angeles Times
      a few weeks back highlighted one major security issue
      which has been created with the introduction of USB storage. I'm sure we would
      all imagine military security to be of a high standard when compared to small
      businesses or even large corporations--how shocking it is, then, to see that
      reporters bought USB flash devices from a bazaar 200 yards outside of the Bagram
      military base in Afghanistan. These devices (stolen from the base by cleaners
      and other local workers) contained documents marked "Secret," which
      named suspected militants, documented U.S. efforts to remove Afghan government
      officials, and included a classified briefing on "man portable counter-mortar
      radar" now being used in Iraq. One device also listed over 700 service
      members with their social security details, opening them up to identity theft. 

      This highlights the greatest danger posed by plug and play
      portable storage--data loss. While the spread of viruses and the theft of data
      also pose an issue, there are clear and simple methods to deal with these
      problems. Virus outbreaks are handled by ensuring that on-access scanning is
      enabled on corporate machines and virus definitions are kept up to date. Data
      theft can be hampered by making sure that all machines are password-protected
      and locked while not in use (including once the screensaver has been
      activated). Data loss, however (e.g., lost or stolen USB drives which contain
      sensitive information), is much harder to deal with. One solution is to make
      sure that users are equipped with secure storage devices. These devices have
      encryption/conditional access programs included, which require a password to
      access the contained data. If lost or stolen, these will be pretty much
      useless to the new 'owner'. The real problem is that unless you equip every
      user with one of these devices, someone will still use their own unsecured
      device (even if you do equip everyone, they may still use their own devices).
      There are only two ways to stop this--both pretty drastic. 

      1. USB locks -- these little devices basically blank off USB ports,
        meaning that users can't plug in unauthorised devices. This means,
        however, that if you authorise them to use one USB device, they can
        basically use any. I'm also sure that any smart and determined user
        would find a way to remove these by themselves. 
      2. Windows Group Policy -- this addition to the Windows Group Policy
        will allow administrators to disable removable media. This seems like
        a more sensible approach as it still allows USB devices like mice to
        be used. It will also mean administrators can exclude certain users
        from the restrictions.
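
      Where that Group Policy addition isn't available, one widely documented alternative is to disable the Windows USB mass-storage driver via the registry. A sketch only--test it before any wide deployment:

```
Windows Registry Editor Version 5.00

; Disable the USB mass-storage driver (Start=4 means "disabled").
; USB mice and keyboards are unaffected; reverse by setting Start back to 3.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\USBSTOR]
"Start"=dword:00000004
```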

      All things considered, USB storage has been a godsend for
      both administrators and end users--however, it is important to be aware of the
      risks that it poses and to educate our users.

      What is your company’s policy on the usage of USB
      media? How do you control its use? It would be interesting to hear your
      experiences and opinions.

      • #3163404

        How secure is your data?

        by justin james ·

        In reply to How secure is your data?

        Turn off the Internet while you're at it! I don't carry around USB devices to transfer data, I stick stuff on my FTP server at home. Slower, yes, but I do not have to worry about physical devices (I am rather abusive to portable equipment). What is really needed is file system encryption that relies upon the user being actively authenticated against the network, in a fashion transparent to the user (and applications). That doesn't help with people emailing/FTPing/whatever data around, but it does help with the portable storage problem.

        J.Ja

      • #3159932

        How secure is your data?

        by flecavalier ·

        In reply to How secure is your data?

        The Ultimate in Secure Digital Identity and
        Encrypted Storage

        Stealth
        Stealth MXP(TM) is a Portable Security Device. It is a USB powered, secure, multi-functional
        product with on-board processing for seamless, hardware based encryption, secure
        storage and strong authentication. Stealth MXP provides unprecedented security,
        functionality and flexibility in the management of sensitive corporate information on a
        portable device as well as protection and assertion of personal and corporate digital
        identities.

        Stealth MXP is ideally suited for enterprise security applications such as Single Sign On,
        PKI, encryption, and remote access. It has management interfaces that enable
        organizations to fully control security policies, deployment and usage of the devices.

        As a digital identity device, Stealth MXP interoperates with virtually any security
        infrastructure providing unprecedented flexibility and mobility of digital credentials. It
        protects these credentials with strong user authentication (biometric, password or both)
        which allows systems to enforce 2 or 3 factor authentication. Stealth MXP is also ready for
        the emerging digital identity meta-system as a WS-Trust Portable Security Token Service
        (PSTS) capable of issuing SAML tokens for secure claims based identity transactions to
        target services. Stealth MXP offers a host of general purpose, industry standard, cryptographic
        services, including random number generation, key generation with internal or external entropy,
        encryption and decryption, signature generation and verification, one time password, and secure hash.
        Stealth MXP also offers transparent encryption (AES 256) of portable mass storage in a 'stick' format. The
        Stealth MXP uses a patent pending communication protocol that provides true portability on any environment that
        supports USB mass storage.

        Stealth MXP technology is available in other hardware versions for those who do not need the portable mass
        storage or biometric capability. These device alternatives are Stealth MXP(TM) Bio Token, which has no user mass
        storage, and Stealth MXP(TM) Token, which does not have mass storage or biometric authentication.

        http://www.mxisecurity.com/stealth_mxp/

    • #3162996

      Bots use in CyberCrime increasing

      by justin fielding ·

      In reply to In my own words…

      As mentioned while looking at spam filtering in previous blogs, botnets
      (networks of 'bots') are being noted as an increasing source of
      cyber-criminal activity. A SecurityFocus article
      (http://www.securityfocus.com/brief/195?ref=rss) quotes shocking figures
      from the Messaging Anti-Abuse Working Group: as many as 7% of PCs
      worldwide are infected and working as active bots--that's around 47
      million. They also estimate that bots now send 70% of all spam--this is
      an increasing problem for system administrators as it means many more
      residential broadband IP addresses are being blacklisted by projects
      like SpamHaus. I have had several instances where users trying to send
      mail from outside of the corporate network have been unable to, due to
      being in a DNS blacklist (while not infected themselves, a previous user
      of the dynamic IP address must have been).

    • #3162134

      Service Monitoring

      by justin fielding ·

      In reply to In my own words…

      Previously, I took readers through the merits of Nagios, the Open Source
      system monitoring tool.  For some people Nagios is too heavy and
      requires far too much time and effort to configure; what's needed is a
      lightweight monitoring tool which can keep an eye on daemons and let you
      know when things aren't as they should be.  'monit'
      (http://www.tildeslash.com/monit/) seems to fit the bill; it could also
      make a useful companion to Nagios as it will detect a dead process and
      restart it for you!  A handy HTTP interface is built in so you can check
      on the system status from the comfort of a web browser.  Services can be
      put into groups for easy management, dependencies can be configured
      between services, and files/disks can be monitored.
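
      To give a flavour of the configuration, a minimal monit control file might look like the following; the service, paths, and credentials here are examples based on the project's documented syntax:

```
# Sample monitrc fragment -- service name, paths and credentials are examples
set daemon 120                  # poll services every two minutes
set httpd port 2812             # enable the built-in web interface
    allow admin:monit           # hypothetical username:password

check process sshd with pidfile /var/run/sshd.pid
    start program = "/etc/init.d/ssh start"
    stop program  = "/etc/init.d/ssh stop"
    if failed port 22 protocol ssh then restart
```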

      All told, this looks
      like a useful little program (it compiles to just 200KB). I haven't tried it yet
      but intend to in the near future--maybe you can give it a try and let us
      know how you got on?

    • #3162417

      Securing your portable storage: CruzerLock and TrueCrypt

      by justin fielding ·

      In reply to In my own words…

      As previously discussed, portable storage (e.g., USB
      PenDrives) has changed the way that we move our data, but it brings the issue
      of potential data loss firmly into the forefront of our security strategy. These
      devices are very small, and they are all too easy to misplace (or have stolen),
      so how do we go about making sure that the data stored on these devices is
      secure and can only be accessed by those who are authorised to do so? 

      Some device manufacturers offer encryption/protection software; in
      SanDisk's case, this is CruzerLock. The basic features offered by these
      packages are pretty much equal across the board. The free version of
      CruzerLock includes both file encryption and compression: 

      • Encrypt entire directory
        structures, folders, drives, or files from a PC hard drive, or from the flash
        drive, into the CruzerLock-protected archive using an integrated MS
        Windows Explorer utility
      • Upgrades encryption to 448
        bit Advanced Encryption Standard from current 56 bit
      • Powerful compression engine
        (PKZip) built into the software, compresses data up to one tenth of its
        original size
      • Integrated password
        recovery

      Upgraded versions of the software are available; some of the
      additional features offered by these include the ability to assign permission
      rights (view, copy, delete etc.), use integrated filesharing, and lock content
      to specific machines. To be honest, I don't think it's worth paying up to $100
      for the upgraded features--I simply want to lock my files away so they are
      protected if I lose the device. I currently use a small USB key to store a
      backup of my financial records. This is protected by the manufacturer's
      included software, which is easy to use and portable, as the unlock program is
      run straight from a small unencrypted partition on the key. There is, however,
      one reason why I have not used encryption on my main 2GB USB key: very simply,
      it's that the manufacturers' programs never support Linux. I keep copies of
      letters, e-mail and bookmark backups, photos, etc. on my key (not exactly top-secret
      military documents, but I would rather not have people looking through them if
      I lost it). I am in Linux often, so if I want to update my bookmarks to put
      them in sync with my Windows bookmarks, I'm stuck--my key is encrypted by the Windows
      software, and I have no chance of getting to the bookmarks file while in Linux.
      Of course, if I set up an encrypted filesystem on the key from Linux, I can't
      access my data in Windows. 

      Well, now I have found a solution--an Open Source project
      called TrueCrypt.

      Not only is TrueCrypt available for Windows, but I was
      delighted to see that Linux packages are available in many flavours: Fedora
      rpm, Debian/Ubuntu deb, plus SuSE rpm. Add to that the source code for both the
      Windows and Linux applications, and the only major OS lacking is Apple's OS X. A
      future release is apparently planned. So what are the main features?

      • Creates
        a virtual encrypted disk within a file and mounts it as a real disk.
      • Encrypts
        an entire hard disk partition or a device, such as a USB flash drive.
      • Encryption
        is automatic, real-time (on-the-fly) and transparent.
      • Provides
        two levels of plausible deniability, in case an adversary forces you to
        reveal the password:

        1. Hidden
          volume (steganography -- more information may be found here).
        2. No
          TrueCrypt volume can be identified (volumes cannot be distinguished from
          random data).

      • Encryption
        algorithms: AES-256, Blowfish (448-bit key), CAST5, Serpent, Triple DES,
        and Twofish. Mode of operation: LRW (CBC supported as legacy).

      The documentation is very complete. It seems there is also a
      traveller mode for Windows use--this installs an application which can automatically
      launch the unlock program and then, once supplied with the password, it will
      mount your encrypted drive. It would be nice to have this functionality for
      Linux too, but I can't see it happening (due to differing kernel versions, etc.).
      Next week we'll take a look at setting up TrueCrypt in Windows. How well will
      it work?
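
      For those who want a preview of the Linux side, the command-line usage boils down to a few commands. This is a sketch only--exact flags vary between versions, so check truecrypt --help before relying on it:

```
# Create a new encrypted volume file (an interactive wizard asks for
# size, cipher, and password); the file path here is just an example
truecrypt -c /media/usbkey/private.tc

# Mount the volume on an existing mount point, then use it like any disk
truecrypt /media/usbkey/private.tc /mnt/tc

# Dismount when finished
truecrypt -d /mnt/tc
```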

      • #3162350

        Securing your portable storage: CruzerLock and TrueCrypt

        by dawgit ·

        In reply to Securing your portable storage: CruzerLock and TrueCrypt

        Ok, I'll byte.  As someone who is swapping things back & forth between Windows & Linux, doing so a little more securely sounds good to me.  Thanks for the heads-up on this. -d

      • #3154426

        Securing your portable storage: CruzerLock and TrueCrypt

        by dlandry9 ·

        In reply to Securing your portable storage: CruzerLock and TrueCrypt

        I use this program for all my on-the-fly encryption needs; it has been excellent. The recent addition of keyfiles is great, as it allows you to be secured against keyloggers.

      • #3152186

        Securing your portable storage: CruzerLock and TrueCrypt

        by lizseal ·

        In reply to Securing your portable storage: CruzerLock and TrueCrypt

        You can call this a blog, but it is really excellent Security Awareness information that I am glad to have.  And I will be passing this on to a couple of Security individuals that I have worked with.  Thinking of these devices as extended infrastructure puts a new twist into Security Policy and Desktop Support.  Thanks

    • #3153588

      Playstation 3 launch set

      by justin fielding ·

      In reply to In my own words…

      The long awaited Playstation 3 launch has been set for this November,
      one year (and 10 million units) behind the launch of Microsoft's
      Xbox 360. The Playstation 3 is claimed to offer twice the performance of
      the Xbox 360 with its 3.2GHz Cell PowerPC CPU. The wireless controllers
      will use Bluetooth-based connectivity, similar to those of the Xbox 360, and
      there will be two versions of the console on sale priced at $499 and
      $599--the major difference being cited is a 20GB hard disk in the $499
      model and a 60GB hard disk in the $599 model.

      What Sony have not been so forthcoming in advertising is that the
      cheaper model will lack support for removable media (SD, CF, Sony Duo);
      the WiFi will be removed and so will the HDMI outputs which are
      necessary for high definition--this could create a lot of unhappy owners
      when they are bought the base model for Christmas!

      The new controllers will be tilt sensitive; general opinion seems to be
      that although a novel feature, this could end up being more irritating
      than absorbing.

      • #3152916

        Playstation 3 launch set

        by frylock ·

        In reply to Playstation 3 launch set

        “…HDMI outputs which are
        necessary for high definition…”

        The $499 model won't even have component video outputs? HDMI is not
        necessary for high-def; you can get it from component analog
        connections as well. HDMI is digital but it's not necessarily better,
        and it is by no means the only way to get high definition video (by
        which I expect you mean >480p).

      • #3152897

        Playstation 3 launch set

        by gkurcon ·

        In reply to Playstation 3 launch set

        HDMI WILL be required if you want to watch Blu-Ray movies in full HD
        resolution--assuming the studios set the disc to disable HD over
        component...a main feature of the new HDCP rules being applied to
        Blu-Ray and HD-DVD. Most of the studios say they will not do
        this, for now. But come on, this is the same industry that sues
        an 11-year-old girl for millions of dollars because she downloaded some
        mp3s off of Kazaa. My advice is to bank on the industry crippling
        your PS3's component outs in the next year or two when you go to watch
        a Blu-Ray movie.

      • #3152891

        Playstation 3 launch set

        by frylock ·

        In reply to Playstation 3 launch set

        Ah, HDCP with Blu-Ray. There is no technical reason you couldn't send
        HD content over component, so they'll create a political one. Of course,
        I should have expected this! Still, I'm a little surprised they're
        considering copy protection for analog; in the past at least most of
        their paranoia was centered around digital copies.

      • #3152881

        Playstation 3 launch set

        by justin fielding ·

        In reply to Playstation 3 launch set

        It's a bit daft: whatever copy protection mechanism they try to implement will always be cracked, hacked and generally broken; they should stop wasting money on developing these schemes and take a few pounds off of the retail prices.  To be frank, DVDs can now generally be bought for under £10 a year or so after release--it isn't worth the time to copy/download the movie.  The only people who persistently copy/download/buy pirate copies are people who couldn't afford to buy the original if a copy were not available, so it's not like the studios really lose that much revenue--music companies have more right to complain than movie studios; they really do lose out.

      • #3152772

        Playstation 3 launch set

        by smorty71 ·

        In reply to Playstation 3 launch set

        I think Sony has blown it for this round of the console wars. Insisting on including a Blu-Ray player, just to support their movie studio business, is their big mistake. It raised the price A LOT and most consumers won’t care about it. MSFT took the smarter approach by offering an HD-DVD as an optional accessory for those XBOX 360 owners who want one.

        I already have an XBOX 360, so the only console that I am excited about is the Wii. If the Wii launches at $199 (like every other Nintendo console), you can get a 360 and a Wii for the price of the PS3.

      • #3152769

        Playstation 3 launch set

        by gkurcon ·

        In reply to Playstation 3 launch set

        The entire copy protection logic is so flawed.  All that HDCP and limiting resolution over non-HDMI outputs accomplishes is effectively rendering early adopters' HDTV sets useless.  I have a Toshiba HDTV RPTV with component in only.  I also have an InFocus HD projector that supports HDCP, but I'm not very happy about the plans they have for Blu-Ray and HD-DVD.  Anyway, back to the original topic...it does seem a little shady that they came out with two price points but are being very quiet about how the lower priced unit is really significantly less enabled.

    • #3153690

      Bot-Master banged up!

      by justin fielding ·

      In reply to In my own words…

      Evil bot-master Jeanson James Ancheta (aged 20) was sentenced to 57
      months (4 1/2+ years) in prison for spreading a bot program which
      compromised over 400,000 computers, including those used in national
      defence and healthcare.  The 20-year-old made over $100,000 in affiliate
      schemes involving the installation of adware on the victims' machines--he
      also admitted to selling access to his botnets for unknown activities
      which could have included DoS attacks and the sending of spam.

      After serving his prison term, Ancheta will be under 3 years of
      supervised release, during which time his access to computers will be limited.

      A full account of the whole investigation, trial and conviction can be
      found on the Department of Justice's cybercrime site
      (http://www.cybercrime.gov/anchetaSent.htm).

      • #3151916

        Bot-Master banged up!

        by dawgit ·

        In reply to Bot-Master banged up!

        1 down, X? to go.  I'd rather they let him have access to computers though, to explain just how he did it. It should be a requirement for release from prison for anyone convicted of such computer crimes: write a paper on how & why they did it. (And no, they don't get to keep the money.) -d

      • #3151636

        Bot-Master banged up!

        by bfilmfan ·

        In reply to Bot-Master banged up!

        His new name is “That Lifer’s Girlfriend.”

        I hope he REALLY enjoys learning that spam is very, very painful.

         

    • #3154159

      Encrypt portable storage in Windows/Linux with TrueCrypt

      by justin fielding ·

      In reply to In my own words…

      So, last week I talked about TrueCrypt, the disk/file encryption package
      for both Windows and Linux. With the rising threat posed by identity theft, we
      should all be careful of what information we store on portable media; the devices
      are easily lost, so we need to assume that all of the data stored can be
      accessed by any random individual (be they good or bad). Encrypting our stored
      data negates this risk. At the very worst, you simply lose your data, but you
      don't reveal anything. The most anyone who finds the media can do is format it
      and keep it. Losing your data may be a minor inconvenience, but it would be far
      worse to lose your data and also find out that £50k in debt has been created
      under your identity!

      Luckily, we now have a way to consistently encrypt and decrypt
      in both a Windows and Linux environment. Let's start by taking a look at
      TrueCrypt in Windows. The package can be downloaded here, and installation is
      pretty self-explanatory. Download the archive, unpack the files in a temporary
      directory, and then run the TrueCrypt_Setup.exe application. During the
      installation, there is an option to create a system restore point. I used this,
      but I don't think it's a necessary step; it's more for peace of mind.

      Once installed, a TrueCrypt shortcut will be accessible from
      either the Desktop or Start Menu. When closed, the application remains
      minimised in the system tray; in the preferences there is an option to have
      TrueCrypt run on startup.

      So let’s take a look at the main application:

      It's all pretty standard, with the main functions well
      placed. Creating a new encrypted volume is simple: Tools > Volume Creation Wizard. The wizard guides us through the
      creation of a new volume very smoothly, offering support content along the way
      to explain everything. First we need to select whether we want to create
      a hidden or standard volume; since I just want to keep my data secure in case
      of loss, I don't see the need for hiding the volume, so I'm creating a
      standard one.

      Next up, we need to select a file or device to encrypt. The
      nice thing about TrueCrypt is that it gives this choice: if encrypting an entire
      hard disk, we may well want to select the entire device or a partition on that
      device; however, for the USB key, I have taken a different approach and used an
      encrypted file. You'll see why later.

      I created a file called '123.iso' on my freshly
      formatted (FAT32) 2GB pen drive. Click next and you get a choice of encryption.
      Many algorithms are available for use, and I quite like the fact that it lets
      me choose. I have gone for Twofish, which uses a 256-bit key and a 128-bit
      block; good enough, I think. The next screen asks you how large the encrypted
      filesystem should be. Underneath the box, it shows you the amount of free
      space on the device where the filesystem will be created. I chose to use all
      but 10 MB of the space for my volume. The final steps before creation are
      setting a password and selecting a filesystem. For the password, over 20
      characters are recommended and up to 64 are allowed; ten characters seems a
      bit more realistic to me. Yes, it's not as secure as 20, but you try
      remembering a 20-character string of jumble and then typing it every time you
      switch computers! I opted for a FAT filesystem with the default cluster size.
      Hitting Format will create the volume, which can then be mounted in the main
      application.
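
      To put those password-length recommendations into rough numbers, here is a
      quick back-of-the-envelope sketch. It assumes each character is picked
      uniformly at random from the ~94 printable ASCII characters, which real
      passwords rarely are, so treat the figures as upper bounds:

```python
import math

def password_entropy_bits(length: int, charset_size: int) -> float:
    # A truly random password of `length` characters, each drawn from
    # `charset_size` possible symbols, has length * log2(charset_size)
    # bits of entropy.
    return length * math.log2(charset_size)

# ~94 printable ASCII characters to choose from per position
print(round(password_entropy_bits(10, 94)))  # 66 bits
print(round(password_entropy_bits(20, 94)))  # 131 bits
```

      Entropy scales linearly with length, so the 20-character recommendation
      really does double the strength of a 10-character password; the question is
      whether you can remember it.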

      Now you're probably wondering why I left 10 MB of 'wasted'
      space on the disk. TrueCrypt allows us to create a 'traveller disk'. The
      traveller disk basically contains the application and driver which need to be
      used to access the encrypted volume in Windows; this can be run on any Windows
      machine (if you have rights to run executables) and can even automatically
      launch when the disk is inserted (if using Windows XP SP2). The 'traveller
      disk' option in the Tools menu
      simply asks for the root directory of the disk and auto-mount options. It then
      creates the necessary files on the disk. It's a shame there isn't a feature
      like this for roaming on Linux machines; however, I doubt it would be possible
      due to the way in which the Linux program runs (a loadable kernel module is
      required). A few MB of leftover space will provide fast access to some
      unimportant small files which you want to quickly move from one place to
      another. How much you leave (if any at all) is down to personal preference.
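
      The automatic launch works through the standard Windows autorun mechanism.
      As an illustration only (this is a hand-written sketch of the general shape,
      not the exact file TrueCrypt generates, and the paths and switches shown are
      assumptions), an autorun.inf on the traveller disk would look something like:

```ini
[autorun]
; Illustrative sketch only -- not the exact file TrueCrypt writes.
label=TrueCrypt Traveller Disk
icon=TrueCrypt\TrueCrypt.exe
; Launch TrueCrypt quietly and point it at the encrypted container
open=TrueCrypt\TrueCrypt.exe /q /v "123.iso"
```

      Windows XP SP2 reads this file when the disk is inserted and offers (or
      performs) the `open` action, which is what makes the auto-mount possible.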

      Next week I'll take a final look at TrueCrypt; I plan
      to install the Linux variant on my Ubuntu workhorse and attempt to mount the
      volume I have created in Windows.

    • #3158795