If you need to work with remote directories between two Linux machines, here's how to do so securely, with the help of sshfs.
Linux is one of the most flexible platforms on the market. If you need to do something, chances are Linux can do it. Take, for instance, the ability to securely mount remote file systems and work with them on a local machine. Although this feature isn't built into the operating system by default, it can easily be added with sshfs.
Sshfs stands for Secure Shell File System and works as a filesystem for Linux, capable of operating on files and directories on a remote computer over the Secure Shell (SSH) protocol. It's secure, reliable, and actually pretty easy to use.
I want to walk you through installing and using sshfs. I'll be demonstrating on an Ubuntu 16.04 Server platform, but know that sshfs can be installed on nearly any Linux distribution, directly from the standard repositories.
Installing sshfs is very simple. Open up a terminal window on your machine and issue the commands:
sudo apt-get update
sudo apt-get install sshfs
Once the installation is complete, you're ready to go.
You will also need to have openssh-server installed on the remote machine. To accomplish this, issue the command:
sudo apt-get install openssh-server
Done and done.
The sshfs command is used in similar fashion to the secure copy (scp) command. But before we actually run the command to mount a remote directory, we must create a directory on the local system, for mounting purposes. For our example, I am going to be mounting the directory /data found on the machine at 192.168.1.146 onto the directory /remote_data, found on the machine at 192.168.1.139.
The first thing we must do is create the local folder on 192.168.1.139 with the command sudo mkdir /remote_data. Now we need to give our user access (jack) to that folder. I'll make it simple and assume only one user needs access to that folder and use the chown command like so:
sudo chown -R jack:jack /remote_data
Now, on the remote machine, we need to make sure that our user has access to the /data directory with the command:
sudo chown -R jack:jack /data
Of course, you could make this even more flexible, by changing the ownership to a group and then adding specific users to that group. However, for the sake of example, we'll keep it simple.
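A minimal sketch of that group-based approach, assuming a hypothetical group name of "datateam" (any group name will do), run on the remote machine:

```shell
# Create a shared group and add each user who needs access (group name is an assumption)
sudo groupadd datateam
sudo usermod -aG datateam jack

# Hand the /data directory to the group
sudo chown -R root:datateam /data

# Group read/write/execute; the setgid bit (2) keeps new files owned by the group
sudo chmod -R 2775 /data
```

Users added to the group will need to log out and back in before the new membership takes effect.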
Now we're ready to mount the remote directory. On the local machine (at IP 192.168.1.139), issue the command:
sudo sshfs jack@192.168.1.146:/data /remote_data
Since this is a first-time connection, the remote host's key fingerprint will be displayed and you'll be asked if you want to continue. Type yes and hit Enter. Now, type the user's password and the connection will be made. You should now be able to work with the remote directory, /data, as if it were the local directory, /remote_data.
This will give you full read/write access to the remote directory.
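If you want to confirm the mount succeeded before working in it, standard tools will show it (paths here match the example above):

```shell
# List active sshfs mounts
mount | grep sshfs

# Show the remote filesystem's size and usage at the local mount point
df -h /remote_data
```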
When you're done working with the remote directory, you can unmount it with the command sudo umount /remote_data.
The only caveat to using sshfs this way is that the remote directory will not survive a reboot; you'll have to remount it. If you want to set up automounting in /etc/fstab, it requires the use of passwordless SSH key pairs.
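A sketch of that automount setup, assuming the same user, IP addresses, and directories as the example above (the key path is an assumption; adjust it to your setup):

```shell
# On the local machine: create a passwordless key pair and copy it to the remote host
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id jack@192.168.1.146

# Then add a line like this (all on one line) to /etc/fstab:
# jack@192.168.1.146:/data /remote_data fuse.sshfs _netdev,IdentityFile=/home/jack/.ssh/id_rsa,allow_other 0 0
```

The _netdev option delays the mount until the network is up, and allow_other lets users other than root see the mount (it requires user_allow_other to be uncommented in /etc/fuse.conf).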
How easy was that? If you have a need to work with a directory on a remote Linux machine, you can make that process really simple with sshfs.
- How to install Config Server Firewall on CentOS 7 (TechRepublic)
- How to harden Ubuntu Server 16.04 security in five steps (TechRepublic)
- Why Cyborg Essentials should be your penetration testing platform (TechRepublic)
- How to protect secure shell on CentOS 7 with Fail2ban (TechRepublic)
- Big Linux bug, low security concerns (ZDNet)