In the first two installments of this series we took a look at the history of Linux and how it has grown into an essential force in many enterprise data centers. We also discussed some of the ways in which Windows and Linux are both similar and different, with the goal of helping to orient new Linux administrators with a background in Windows operations.

The entirety of this series will focus on the following introductory Linux concepts:

  • Creating and managing user accounts, passwords and groups
  • Working with files, directories and permissions
  • Granting administrative privileges
  • Installing programs/services and setting startup processes
  • Managing running processes
  • Working with user environments
  • Formatting drives/partitions
  • Mounting devices/configuring, accessing and sharing network resources
  • Working with shell scripts
  • Troubleshooting/checking logs
  • Useful Linux command-line tricks
  • Getting free help/resources for further information

In Parts I and II, we examined these six Linux concepts and how they work:

  • Creating and managing user accounts, passwords and groups
  • Working with files, directories and permissions
  • Granting administrative privileges
  • Installing programs/services and setting startup processes
  • Managing running processes
  • Working with user environments

As stated in Part I, this guide assumes familiarity with basic system administration concepts such as creating users, working with permissions, managing programs and processes, configuring hardware (network settings, hard drives) and reviewing event logs for errors. Furthermore, these tips and screenshots are based on Red Hat Linux administration, since that is generally the corporate standard, but they should also work on most if not all other Linux flavors.

Formatting drives/partitions

This document refers to “drive” or “disk” as terms representing anything seen by Linux as a disk drive, whether it is an actual internal or external hard disk or a CD or DVD drive.

Adding new hard drives and partitions in modern versions of Windows is fairly straightforward. You go to Computer Management (workstations) or Server Manager (servers), expand Storage then access the Disk Management Utility, where you can bring drives online and offline, initialize drives, create, format, shrink and extend partitions and volumes, and assign letters to drives, such as H: drive for a newly installed disk.

The Red Hat Gnome GUI environment includes a program called Disk Utility which can be accessed from the main menu under “Applications” / “System Tools.” It provides a graphical view of a Linux file system which offers control functions similar to Windows:

This is really just a front end to what’s happening behind the scenes on a Linux file system. There are also other ways to use the GUI for disk tasks, such as the Logical Volume Manager (LVM). However, you can view and administer drives and partitions right from the command line, and it is highly recommended that you do so to get a good feel for how the system operates and uses disk resources.

Take note of the sections in the screenshot above which read: “Device: /dev/sda” and “Device: /dev/sda1” since these factor in during the next section. But first, a background on how drive access works in Linux.

Drives and their access (“mount”) points work differently in Linux as compared to Windows. As a reminder, Part I focused on how files and directories appear on a Linux system – whereas a Windows server may have folders such as C:\Users, C:\ProgramData and C:\Windows, Linux doesn’t use a C: drive, or any drive letter at all. Instead, Linux organizes directories on file systems with each folder preceded by a “/” or forward slash. This starts at the root or topmost directory, which is represented by “/” and then applies to all top-level folders and subfolders – examples include /bin which contains programs needed for the boot process, /dev, where device files for hardware components are located, and /home which contains user folders.

In Windows you might format a new hard disk and set the new volume as the H: drive, but in Linux you would create the partition, set up a file system on it then mount the volume with a specific name underneath the root or / location. Say you wanted the new drive to be used for application storage – you could mount it as /appstorage, then it would appear in the main directory tree:





Since there is no drive letter associated with a volume, all volumes can be displayed under the root location, even if they’re on different disks or systems. This presents a more unified, inclusive view of all mounted volumes.

While you can use the “ls” command to show directories, a better option for examining storage is the “df” command (short for “disk free”), which reports file system disk space usage. Running this will show a screen similar to the following:

This shows you disk usage in 1 KB blocks by default, so using the “-h” switch to display it in “human readable” format (MB/GB) makes a bit more sense:

Now we get to the /dev/sda and /dev/sda1 entries referenced previously. Linux identifies drives using short letter codes.

The first two letters indicate the type of drive (“sd” for SCSI/SATA disks, “vd” for virtio disks on a virtual machine) and the third letter identifies the drive itself. The third letter increments with each subsequent drive added, so “sda” would be the first drive in a system, “sdb” the second, and “sdc” the third, for instance.

Partitions on each drive are associated with a number, NOT a letter, so /dev/sda1 would be the first partition on the first drive, /dev/sda2 the second partition on the first drive, /dev/sdb1 the first partition on the second drive, and so forth (note that partition numbering in these device names starts at 1; the legacy GRUB boot loader, by contrast, counts partitions from 0 in its own notation).
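The naming scheme is entirely mechanical, so a quick loop (purely illustrative) shows how the drive letters and partition numbers combine:

```shell
# Illustrative only: print the device names Linux would use for
# three "sd" drives with two partitions each.
for drive in a b c; do
  for part in 1 2; do
    echo "/dev/sd${drive}${part}"
  done
done
# The first line printed is /dev/sda1: first partition, first drive.
```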

Drives and partitions – whether actual or potential – appear under the /dev directory. You can view them by typing:

ls /dev

In the above example, the system is a virtual machine and so it has two drives:



The vda drive has three partitions:




For the purpose of this demonstration, a second drive, 26 GB in size, has been added to the virtual machine; it is now represented by vdb. The vdb drive is blank; it has no partitions.

So how can you actually create and format a volume on a new disk? First you’ll need to switch to root to perform the operation: type sudo -s, then enter the appropriate password.

Now we can use the fdisk command to examine and configure drive information. (This assumes drive sizes of 2 TB or smaller; larger drives require the parted command, but since hard drives are frequently smaller than 2 TB we’ll focus on using fdisk.)


fdisk -l

The new vdb drive is listed here and its size is displayed in bytes; in this case it is about 26 GB. We already identified the new drive as vdb, but using fdisk -l can be helpful in finding the name if you’re unsure.

Now we’re ready to create a partition on the new drive. It’s possible to create a partition and file system on a new drive and then mount it for access, or to add the drive as a physical volume in an existing volume group. We’ll focus on the first scenario; the second is outside the scope of this article, but Red Hat provides instructions for it.


fdisk /dev/(device)

In the example of the new virtual hard drive you would run:

fdisk /dev/vdb

In the above example, a warning appears that “DOS-compatible mode is deprecated. It’s strongly recommended to switch off the mode (command ‘c’) and change display units to sectors (command ‘u’).” If you receive this warning, type c and then u as recommended:

It’s worth typing m to display the help screen so you can learn more about the functions of fdisk:

If you type p to print the partition table it will show nothing present:

Type n to create the new partition. You will be given the choice of creating an extended or primary partition:

To use the entire disk as a single volume, choose p. An extended partition comes in handy if you intend to create more than four primary partitions (since that is the limit) but for the purpose of this example we’ll stick with a single primary partition:

The system will prompt for the partition number; 1 was entered since that is the first choice available. The defaults for the first and last sectors were also accepted, and the partition was created.

Type p to print the partition table:

You can see that the new vdb1 partition now exists.

Type w to write the partition table changes:

You can verify the new partition is present under the /dev directory by typing:

ls /dev/vd*

You have to create a file system on the partition and then mount it before it can be used. Windows uses file systems such as FAT (now rare), FAT32 and NTFS. Linux can use those too, but its native file systems are generally ext3 (the standard for many years) and ext4 (the default on more modern versions).

You can tell which file systems are in use by typing:

mount | grep "^/dev"

This will display something similar to the following:

In this case we can see that EXT4 file systems are in use, so this is what we’ll work with.
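The “^” in the grep pattern anchors the match to the beginning of the line, which is what keeps pseudo-filesystems such as proc out of the results. A self-contained sketch of the same filtering technique, using simulated mount output:

```shell
# Simulated "mount" output; grep "^/dev" keeps only lines that begin
# with /dev, dropping entries such as proc.
printf '%s\n' \
  '/dev/vda1 on / type ext4 (rw)' \
  'proc on /proc type proc (rw)' \
  '/dev/vdb1 on /mnt/appstorage type ext4 (rw)' |
  grep "^/dev"
# Prints only the two lines beginning with /dev.
```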

The utility to create an EXT4 file system is called mkfs.ext4 (a similar utility exists for EXT3 file systems and is called mkfs.ext3). Let’s say we want to create a volume and label it as appstorage. We would run:

mkfs.ext4 -L /appstorage /dev/vdb1

This creates the volume, formats it and applies the label:

Now that we’ve established how to format drives and partitions, let’s see what comes next.

Mounting devices/configuring, accessing and sharing network resources

We must mount the volume in an accessible location by setting up a mount point.


mkdir /mnt/appstorage

mount /dev/vdb1 /mnt/appstorage

(the first command sets up the mount point in a folder, and the second actually mounts the volume to that accessible location)

Both commands should complete successfully. Now run:

df -h

As you can see, the new volume is mounted at /mnt/appstorage and is available for use (you could also select /appstorage instead of /mnt/appstorage if you wanted the new volume to appear at the root level).

Typing mount will show all mounted file systems:

You can perform these same steps with external/USB flash drives (Linux may automatically make them accessible from the GUI but not from the command line):

  1. Find out what drive the USB stick represents (e.g. sdb)
  2. Create the partition on the USB stick
  3. Create a file system on the USB stick
  4. Create a mount point under the /mnt folder (e.g. /mnt/usbdrive)
  5. Mount the USB stick file system to the mount point

If the USB stick already has a file system on it you can skip steps #2 and #3.

To backtrack, if you need to unmount a file system simply type umount (mount point). For instance:

umount /mnt/appstorage

will work for the example above.

You can delete partitions in a similar fashion; run fdisk /dev/(device), then use d to delete the partition(s) and follow the respective prompts.

There’s one more thing to do if you want to make the mount point permanent: add the details to the /etc/fstab file, which is the file systems table in Linux; this controls what gets mounted at startup.


vi /etc/fstab

The fstab file is then displayed:

Each column in the fstab file represents a certain field:

  • File system name/label
  • Mount point
  • File system type
  • Mount options (defaults are generally fine)
  • Dump options (determines whether the dump utility backs up the file system; selecting 0 is generally fine)
  • File system check order (select 2 since this is the second drive on the system, or 0 if you don’t want the volume checked)

Press insert to enter text then use the arrow keys to go to the bottom of the file and add a new line.

For the /appstorage mount point we would add:

/dev/vdb1 /mnt/appstorage ext4 defaults 0 2

In Part II we discussed mounting a DVD drive for access to installation packages. To set up a DVD as a permanent mount you would add this line to fstab:

/dev/dvd /mnt/dvdrom auto defaults 0 0

(it should be noted you would have to unmount the DVD volume via umount (mount point) every time you wanted to switch discs then remount it once the new one is loaded)

Save and exit the file (using :wq!, remember), then reboot and confirm the volume remains present by running df -h:

If you should need to take out failed mount points (such as on a dead drive), simply edit the file again and add a “#” in front of the line, so Linux will not try to mount it the next time it starts up.
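Commenting out an entry can also be scripted; the sketch below (hypothetical device name, operating on a sample file rather than the real /etc/fstab) shows how sed might do it:

```shell
# Hypothetical sketch: disable a failed mount entry by prefixing it
# with "#", working on a sample file instead of the real /etc/fstab.
printf '%s\n' \
  '/dev/vda1 / ext4 defaults 0 1' \
  '/dev/vdb1 /mnt/appstorage ext4 defaults 0 2' > /tmp/fstab.sample
sed -i 's|^/dev/vdb1|#/dev/vdb1|' /tmp/fstab.sample
cat /tmp/fstab.sample
# The vdb1 line now begins with "#" and would be skipped at boot.
```

On a live system you would back up /etc/fstab before editing it, whether by hand or with sed.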

Connecting to other Linux systems

This is an overview of how to mount local disks for use. But what about accessing network disks on other Linux systems? In Windows it’s possible to map a drive from Windows Explorer or the command line via the “net use [drive letter] \\server\share” command. With Linux you would follow the same mount process as with local disks, with a few stipulations:

  1. The target file server must have the NFS (network file system) or SMB (Server Message Block) service installed and running (this article works with NFS, though SMB can be a good choice to share files between Linux and Windows systems).
  2. The directory must be exported (shared) on the target server in a file under /etc/exports.
  3. You must have network connectivity and permissions to the directory (a group works best) for remote access. This guide assumes no firewall restrictions are in place between hosts, but this should be analyzed in a live environment and adjusted accordingly.

Let’s refer back to the fstab file on the test system:

The bottom entry for “devnfs:/tools” shows an NFS volume on the devnfs server mounted locally (and set to do so every time Linux boots).

The command to perform this works as follows:

mount (server):/directory /(local mount point)

To mount the above example, you would use:

mount devnfs:/tools /tools

How are directories set up for remote access?

In this example the devnfs server has an entry in the /etc/exports file which appears as follows:

/tools *(rw,sync)

  • The /tools entry is the name of the directory to be shared.
  • The asterisk (*) signifies access is permitted from any machine (this can be replaced with a specific host name, a hostname pattern that uses asterisks as wildcards, or IP network addresses/ranges).
  • The rw entry signifies read/write access for remote users. This can be removed to apply the default of read-only access.
  • The sync entry indicates that the server will not reply to remote requests before related changes are written to disk; this helps to ensure the integrity of read-write data. For a read-only volume you can use async instead.
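Beyond the wildcard form shown above, /etc/exports entries can restrict access per host or per network; these sample lines (hypothetical host names and addresses) illustrate the common variations:

```
/tools        *(rw,sync)               # any host, read-write
/mnt/backup   buildserver1(ro,sync)    # one specific host, read-only
/home/shared  192.168.1.0/24(rw,sync)  # an entire IP subnet
```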

So, let’s say you have a folder on a system you want to export for others to access. Follow these steps:

  1. Make sure they have the appropriate local accounts and permissions to that location (covered in Part I). For testing you could run chmod a+rwx (folder) to grant all users read, write and execute capabilities on the folder, but in a live production environment you would likely want to restrict access more carefully with a more selective use of the chmod command.
  2. Make sure network access exists between the local and remote machines (and again, that no firewalls are in use, or if there are exceptions have been set up – see this Red Hat article for more information)
  3. Your local and remote systems must have NFS installed (we covered how to install packages in Part II; this package is called nfs-utils)
  4. Create an /etc/exports file on the system on which you want to export (share) a directory using the following command:

vi /etc/exports

(this will create a new file if it does not already exist)

Model the content after the above example. A basic entry will include mount point, access, read-write capability (if desired) and sync/async capability. Make sure to eliminate any spaces between the hostname and options in order to ensure the appropriate access is applied.

Let’s say we want to share out the local volume created previously (/mnt/appstorage) and the system name is smatteso-vm1.

We would add this line to /etc/exports:

/mnt/appstorage *(rw,sync)

Entering :wq! saves and exits the file.

Now type service nfs reload and the mount point can be accessed remotely (any time the /etc/exports file is changed you will need to reload the NFS service in this way for the change to take effect)

showmount -e will show available NFS mounts on the local system:

showmount -e (hostname/IP), run from the system you want to connect from, will show the NFS mounts available on the remote server:

rpcinfo -p (hostname/IP) will confirm whether the NFS server on the target is listening:

Now you have to create a mount point folder to use on the system you want to connect from. You can create any folder name you wish in any location to which you have access. For instance, this command will create a new folder called appstorage under the /mnt directory:

mkdir /mnt/appstorage

Now mount the remote volume to the local /mnt/appstorage (or whatever folder you have designated as the mount point) location:

mount smatteso-vm1:/mnt/appstorage /mnt/appstorage

(you could also use “mount -t nfs” instead to signify that you are mounting an nfs file system)

No confirmation will be given that the remote volume mounted, but you can confirm by typing:

df -h

which should reveal:

You can see the remote volume mounted, and if you access the local /mnt folder you will see the appstorage directory listed. You can read, write and execute files here as needed depending on available permissions.

Want to automatically mount this network folder every time the system boots? Just edit the /etc/fstab file and add a line similar to the following:

smatteso-vm1:/mnt/appstorage /mnt/appstorage nfs defaults 0 0

Need to disconnect a network folder? Use the umount (mount location) command, such as:

umount /mnt/appstorage

(don’t run this if you’re actually in the directory or you’ll get a “device is busy” error).

Finally, if you would like to stop exporting a folder on a system, edit /etc/exports to remove the related information, save and exit the file, then run service nfs reload.

Working with shell scripts

The Linux command line gives you flexibility and power, but typing commands over and over can be repetitive and time-consuming. Windows administrators are likely to be familiar with scripting options such as PowerShell, Visual Basic scripts and even DOS-based batch (.BAT) files, which can store a series of commands to run later (or schedule for execution at a predetermined time). Linux offers a similar facility in the form of shell scripts, which can greatly ease administrative burden and streamline operations.

A shell script is a standard file in Linux (though it often ends in .sh to connote that it is a shell script; this makes it easier to organize and search for items of this nature) with execute (x) capabilities so that it can be run by users with the appropriate rights.

Shell scripts can control all aspects of the operating system, from setting up new users to copying files to remote shares to installing applications, and they can use variables just as in Windows to facilitate their operation. They are plain text files that run commands or chain them together to achieve the desired result.
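As a quick preview (using a hypothetical file name, hello.sh), the full lifecycle of a simple script takes only a few commands: create the file, mark it executable, and run it:

```shell
# Create a script file (hypothetical name), mark it executable, run it.
cat > /tmp/hello.sh <<'EOF'
#!/bin/bash
# A trivial demonstration script.
echo "Hello from a shell script"
EOF
chmod +x /tmp/hello.sh
/tmp/hello.sh
# Prints "Hello from a shell script".
```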

Let’s work with a few samples. Use vi to create a new file and add these lines:


#!/bin/bash

# This is a sample script

echo "Linux script has completed!"

Save and exit the file.

Let’s examine the above script line-by-line.

#!/bin/bash (the “shebang” line) ensures the script is executed by the bash shell (covered in Part II), regardless of which shell it is launched from

# This is a sample script is a “remark” or comment, signified by the # at the beginning. Any line preceded with a # will not actually run in the script (except for the special #!/bin/bash line), but can be used to explain details about elements within the file.

echo "Linux script has completed!" will display the words “Linux script has completed!” on the screen. The echo statement is used to show information regarding the success, failure or status of commands or other elements in a script.

Use chmod +x (file name) to assign the execute permission on the file for your account.

Run it from the current directory by prefixing the file name with ./

Alternatively, invoke it with the bash or sh command (both can run scripts).

The script returns the following results:

It doesn’t get more basic than this, but this is a good introduction to the scripting world.

Let’s try a more detailed example which will use variables to show who is logged in as well as the current date and time.

Use vi to create a new script and add these details:


#!/bin/bash

# This script will show who is logged in and current date/time

echo "Hello $USER"

echo "Today is"; date

echo "Number of user logins:"; who | wc -l

echo "Calendar"
cal

exit 0

Save and exit the file.

In the above script, $USER is an environment variable holding the name of the logged-on user; placing it inside double quotes still allows the shell to expand it.

The semicolon before date separates two commands on one line: the echo statement prints its label, then the date command prints the current date and time.

who | wc -l chains two commands together: who lists the logged-in users, and the pipe symbol (|) sends that output to wc -l, which prints a count of the lines on the next line. This is a good example of how commands can be chained together, and the pipe comes in handy in many other areas, both in scripting and in direct commands.
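The same counting pattern works with any command that produces lines of output; a self-contained sketch:

```shell
# Three lines of sample output piped to wc -l, which counts them,
# mirroring the who | wc -l chain used in the script above.
printf 'alice\nbob\ncarol\n' | wc -l
# Prints 3.
```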

Run chmod +x on the new script to make it executable.

Now run it from the current directory and the following information will be displayed on the screen:

Here’s a useful script which will display important statistics about your system such as connected users, disk and memory usage, utilization, processes and more:

#!/bin/bash
date
echo "uptime:"
uptime
echo "Currently connected:"
w
echo "--------------------"
echo "Last logins:"
last -a |head -3
echo "--------------------"
echo "Disk and memory usage:"
df -h | xargs | awk '{print "Free/total disk: " $11 " / " $9}'
free -m | xargs | awk '{print "Free/total memory: " $17 " / " $8 " MB"}'
echo "--------------------"
start_log=`head -1 /var/log/messages |cut -c 1-12`
oom=`grep -ci kill /var/log/messages`
echo -n "OOM errors since $start_log :" $oom
echo ""
echo "--------------------"
echo "Utilization and most expensive processes:"
top -b |head -3
top -b |head -10 |tail -4
echo "--------------------"
echo "Open TCP ports:"
nmap -p- -T4
echo "--------------------"
echo "Current connections:"
ss -s
echo "--------------------"
echo "vmstat:"
vmstat 1 5
echo "--------------------"
echo "processes:"
ps auxf --width=200

(courtesy of, which provided the free sample)

This displays the following output on a test system:

Wed Sep 17 17:32:13 EDT 2014


17:32:13 up 5:09, 3 users, load average: 0.09, 0.06, 0.01

Currently connected:

17:32:13 up 5:09, 3 users, load average: 0.09, 0.06, 0.01


smatteso tty7 :0 12:24 5:09m 3.13s 0.01s pam: gdm-password

smatteso pts/0 :0.0 12:25 5:06m 0.12s 0.19s gnome-terminal

smatteso pts/1 smatteso-t5500.l 17:31 0.00s 0.07s 0.01s sshd: smatteso [priv]


Last logins:

smatteso pts/1 Wed Sep 17 17:31 still logged in

smatteso pts/2 Wed Sep 17 13:13 – 13:13 (00:00) localhost:10.0

smatteso pts/2 Wed Sep 17 13:13 – 13:13 (00:00) localhost:10.0


Disk and memory usage:

Free/total disk: 48G / 73G

Free/total memory: 15447 / 15949 MB


OOM errors since Sep 16 17:17 : 16


Utilization and most expensive processes:

top – 17:32:14 up 5:09, 3 users, load average: 0.09, 0.06, 0.01

Tasks: 279 total, 1 running, 278 sleeping, 0 stopped, 0 zombie

Cpu(s): 0.4%us, 0.4%sy, 0.0%ni, 98.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.6%st


4934 root 20 0 15156 1328 880 R 2.0 0.0 0:00.01 top

1 root 20 0 19360 1560 1240 S 0.0 0.0 0:00.78 init

2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd


Open TCP ports: line 26: nmap: command not found


Current connections:

Total: 643 (kernel 750)

TCP: 31 (estab 4, closed 2, orphaned 0, synrecv 0, timewait 2/0), ports 23

Transport Total IP IPv6

750 – –

RAW 0 0 0

UDP 28 17 11

TCP 29 18 11

INET 57 35 22

FRAG 0 0 0



procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----

r b swpd free buff cache si so bi bo in cs us sy id wa st

0 0 0 15093148 120316 605236 0 0 4 4 53 54 0 0 99 0 1

1 0 0 15093096 120316 605236 0 0 0 0 300 347 0 0 100 0 0

0 0 0 15093832 120324 605228 0 0 0 36 383 390 0 0 99 0 0

0 0 0 15093220 120324 605236 0 0 0 0 206 348 0 0 100 0 0

0 0 0 15093344 120324 605236 0 0 0 0 185 326 0 0 100 0 0

(it also displays all processes via ps auxf; these have been left out for space reasons)

Some common shell script variables within the Bash shell:

$? = the exit status of the last run command; 0 represents success and any nonzero value represents failure

$GROUPS = groups the logged-on user belongs to

$HOME = user home directory

$HOSTNAME = host name of system

$HOSTTYPE = system CPU hardware

$PATH = path to executable files

$PWD = current directory

$SECONDS = the number of seconds a script has been running

$UID = user ID number (similar to a SID in Windows)

It’s easy to see how scripts can be created to run functions based on the hostname or CPUs involved, who is logging in or what groups they belong to. Just as with Windows scripts, Linux scripts can use “if then” conditions whereby “if” a certain condition applies (a user ID is a certain identifier) “then” a command can execute. You can also supply an “else” statement to run another command if the condition does NOT apply.
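A minimal sketch of that if/then/else pattern, branching on the $? status variable described above:

```shell
#!/bin/bash
# Try a command that fails, then branch on its exit status ($?).
ls /no/such/directory > /dev/null 2>&1
if [ $? -eq 0 ]; then
  echo "Command succeeded"
else
  echo "Command failed"
fi
# Prints "Command failed" because the directory does not exist.
```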

You can set your own variables quite easily by typing (variable name)=(what you want to set the variable to). For instance:

var="The command ran successfully"

will assign the phrase “The command ran successfully” to the variable of var. Now if you type:

echo $var

this will return the result “The command ran successfully.”

You can add variable assignments this way within scripts as well.
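You can also assign the output of a command to a variable using $( ) (or the older backtick syntax seen in the statistics script above):

```shell
# Capture the output of a command pipeline in a variable with $( ),
# then reuse it in a message.
linecount=$(printf 'one\ntwo\n' | wc -l)
echo "The sample produced $linecount lines"
```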

This covers the basics of shell scripting, but there is much more information available to help expand your scripting knowledge. Resources which can help are discussed later in the article.

Troubleshooting/checking logs

Like any other operating system, Linux sometimes has problems with applications, hardware, user accounts and other elements of daily computing life. General common sense troubleshooting tips can apply across all operating systems:

  • Make sure applications are installed properly and match the hardware (64-bit programs with 64-bit CPUs for instance)
  • Make sure users have appropriate rights and are using correct passwords
  • Make sure hardware is functional and appropriate drivers are in use
  • Make sure proper network access exists
  • Make sure system processes are running and doing what they should
  • If something worked previously but won’t work now find out what changed
  • Research errors to find root causes
  • Review logs to see what’s going on behind the scenes

Whereas Windows has the Event Viewer for system logs, Linux does not have a centralized “one stop shopping” place to check when problems crop up; different log files are used for different functions, but many of these are stored in the /var/log folder. Some examples:

/var/log/anaconda.log = installation related messages

/var/log/boot.log = boot information

/var/log/dmesg = kernel ring buffer messages (hardware and driver detection information)

/var/log/messages = General messages and system related items

/var/log/cron.log = Crond logs (scheduled tasks, also known as cron jobs)

/var/log/dpkg.log = Information about package installations/removals

/var/log/httpd/ = Apache access and error logs directory

/var/log/secure or /var/log/auth.log = Authentication log

/var/log/wtmp = Login records file

/var/log/yum.log = Yum command log

Red Hat provides the option to configure global logging through a daemon called rsyslogd. It works in conjunction with the /etc/rsyslog.conf file, which can be edited to specify (or look up) where different types of messages are logged. A sample appears as follows:

If used, this can help pinpoint what log files to look at for certain functions; the file indicates mail problems are kept in /var/log/maillog and boot messages in /var/log/boot.log. This is the default, but the file can be edited to change this if necessary.

You can use the cat command to display the contents of a log file. For instance:

cat /var/log/boot.log

will display the contents of the boot.log file.

You may see an overload of information as several screens flow by, however, so better options for displaying files are the more, tail and grep commands.

more will display a file page by page; press the spacebar (or f) to show the next page and Enter to advance one line at a time. It also displays your current position in the file as a percentage in the lower left (for instance, if you are 10% through the log). If you need to go back to view previous pages you can press the b key. To exit, simply press q (many paging functions in Linux can be exited this way).

(It may be handy to use the more -f switch if the lines of the log file don’t display properly)

The tail command will display the last 10 lines of a file by default. You can add the -n (number) switch to specify how many lines to display, and the -f switch puts tail into a “live” mode whereby new entries appear as they are added to the file. For instance:

tail -n 25 /var/log/messages

will show the last 25 lines of the messages log.
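A self-contained sketch of the same idea, using a generated sample file instead of a real log:

```shell
# Build a 25-line sample file, then display only its last 3 lines.
seq 1 25 | sed 's/^/log entry /' > /tmp/sample.log
tail -n 3 /tmp/sample.log
# Prints "log entry 23" through "log entry 25".
```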

What’s especially helpful about tail is that you can use it with multiple files:

tail logfile1 logfile2

(As with more, the -f switch is useful here: tail -f follows a log file in real time, displaying new lines as they are written)

Finally, grep can be used to search within a log file and return only the results you’re interested in. This example checks the /var/log/messages file for any instances of the term “error”:

grep error /var/log/messages

(use grep -i to check for a term in case-insensitive format, meaning it would look for any instance of the word error regardless of upper or lower case)
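The case-insensitive search, plus grep's -c counting switch, can be sketched with a small sample file standing in for a real log:

```shell
# Sample log with mixed-case entries; grep -i matches regardless of
# case, and -c prints a count instead of the matching lines.
printf '%s\n' \
  'kernel: ERROR disk failure' \
  'cron: job completed' \
  'sshd: error bad password' > /tmp/messages.sample
grep -ci error /tmp/messages.sample
# Prints 2: both the "ERROR" and "error" lines match.
```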

The resolution to problems you find in these (or other) logs will depend on the situation involved; they may indicate component failures, authentication or permission problems, missing files or stranger elements. However, these should provide sufficient insight into what’s happening so that you can then research solutions online.

Useful Linux command-line tricks

The possibilities with command line options in Linux are vast and diverse. Here’s a list of some helpful examples which can make your administrative tasks easier. Try experimenting with these and see what other commands you can find:

For instance, the history command displays a numbered list of previously run commands. Typing an exclamation point followed by one of those numbers reruns the corresponding command; if entry 948 were “service nfs reload,” typing !948 would run it again.

Getting free help/resources for further information

Training courses in Linux are available from companies such as New Horizons and Learning Tree, and there is plenty of further information in the form of websites and books which can advance your knowledge.

One of the historical – but misguided – arguments against free or open-source software like Linux is that without official paid support it can be difficult to find assistance. After all, businesses are made or broken on good technical support to keep their systems healthy. It should be noted that companies like Red Hat actually do sell their software with support options (Red Hat Enterprise Linux), but there are plenty of support forums and websites that are just a Google search away, and which contain questions and answers from all walks of Linux life.

For Red Hat Linux users, the first place to seek general information and documentation will be the Red Hat site, of course:

Here are some other helpful Linux links based on category.

Guides and how-tos: (The Linux Documentation Project)

Support forums for advice:

Linux News:

In addition, TechRepublic’s Open Source blog offers useful, timely tips and information about Linux and open-source applications:

Some noteworthy books about Linux which can also be useful:

The Linux Command Line: A Complete Introduction by William E. Shotts Jr

Linux for Beginners by Jason Cannon

UNIX and Linux System Administration Handbook by Evi Nemeth, Garth Snyder, Trent R. Hein and Ben Whaley

Linux Bible by Christopher Negus

Linux Cookbook by Carla Schroder


We’ve covered a lot of ground to help get you up to speed with Linux: working with user accounts (both regular and administrative), environments and groups; files, folders and permissions; setting up programs and services; configuring and using local and remote drives; shell scripts; troubleshooting and reviewing log files; and handy commands and resources for finding more information.

There is much more to learn; so much, in fact, that even experts who have worked with Linux for years regularly find new ways to do things, and many of the techniques explored here can be accomplished via other methods, both basic and advanced. In addition, some elements or procedures not deemed necessary to this guide have been left out to streamline the learning process. This guide is intended as an introductory overview of multiple concepts rather than a deep dive into one particular area.

Effective system administration involves a diverse understanding of multiple operating systems, so when armed with the right tools and information the similarities and differences between Windows and Linux can often be a cause for intrigue and inspiration rather than confusion or frustration. We hope this has been a useful introduction as you begin to explore the Linux realm and that this serves as a good foundation for your knowledge so you can build upon it further.