Using SSH or Rclone to mount remote directories on Ubuntu or macOS

; Date: Sun Mar 03 2024

Tags: Linux

SSH is a multifaceted tool that can be used for more than terminal sessions to a remote server. With SSHFS, we can mount a remote filesystem.

With SSH we can mount any directory on a server reachable with SSH. This is more convenient than using protocols like SMB or NFS, because those require installing and configuring additional server software on the remote system.

In some cases the use of SSH to mount a remote file-system is described as using SFTP. SFTP is a command similar to the venerable old ftp program that's existed on the Internet since the early 1970s. SFTP and SSH are closely related programs, but SSH is focused on command-line sessions on remote systems, while SFTP is focused on transferring files to and from them.

The man page for sftp says it is a file transfer program, similar to ftp, which performs all its operations over an encrypted ssh connection. In other words, whether we call it an SSH or an SFTP file-system makes little practical difference, since sftp itself runs over an ssh connection.
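For a quick sense of what sftp does on its own, an interactive session looks like this (using the same placeholder names as the rest of this article, with hypothetical file names):

$ sftp USER-NAME@REMOTE
sftp> ls
sftp> get remote-file.txt
sftp> put local-file.txt
sftp> quit

The get and put commands transfer individual files, which is exactly the capability SSHFS and Rclone build upon to present a mounted file-system.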

A potentially important point is that SSH connections are automatically encrypted. Whatever data you're sending across the Internet should be encrypted to avoid snoops stealing your data.

The most obvious tool to use is SSHFS, since it is a file-system driver which runs over SSH/SFTP. However, we will also use Rclone, in its SFTP mode, to compare the two.

SSHFS is a FUSE file-system driver that "allows you to mount a remote file-system using SSH (more precisely, the SFTP subsystem)". This means it integrates with the regular file-system, and we can use regular tools to manipulate the mounted files.

Rclone is a tool primarily for synchronizing files with a huge list of remote cloud storage providers. It also happens to support mounting those systems as file-systems. It is a very mature tool that is widely used and supported.

Installing SSHFS

On Ubuntu (or Debian) systems:

$ sudo apt-get update
$ sudo apt-get install sshfs

For other Linux distros you'll no doubt find it in the package repository, possibly under this name.
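For example, on Fedora and related distributions the package is (assuming current package naming) fuse-sshfs:

$ sudo dnf install fuse-sshfs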

For installation on macOS: (osxfuse.github.io) https://osxfuse.github.io/

On the OSXFUSE site it says to download code from the GitHub repository. However, it is also available through MacPorts, and presumably through Homebrew. For MacPorts, run:

$ sudo port install sshfs

Once installed this way, sshfs on macOS acts the same as on Linux.

For Windows, see: (github.com) https://github.com/winfsp/sshfs-win. I have not tested this alternative. Where SSHFS is a FUSE (File-system in Userspace) file-system, sshfs-win runs on the Windows File System Proxy (WinFsp). For more information about that, see (winfsp.dev) https://winfsp.dev/

Mounting a remote filesystem using SSHFS

Start by making a local directory on which to mount the remote directory:

$ mkdir -p ~/mounts/USER-NAME@REMOTE

In this command USER-NAME is the user name on the remote system, and REMOTE is your name for that system. The mountpoint directory can have any name you like. This particular name acts as documentation of where the mount comes from.

For example:

$ sudo sshfs -o allow_other,default_permissions \
    USER-NAME@REMOTE:/home/USER-NAME \
    ~/mounts/USER-NAME@REMOTE
The authenticity of host 'HOST-NAME (nn.nnn.nnn.nnn)' can't be established.
ED25519 key fingerprint is SHA256:TtGS7EIXwA5McPTLUEDV8EiXYA0MnPwLfEadLgaXFJZE.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
USER-NAME@REMOTE's password: 
read: Interrupted system call

This is a good start, but it is asking for the remote password. Because it is running under root (using sudo) it is not picking up the personal SSH key.

The sshfs man page recommends that in normal operation sshfs be run by a regular user ID. This is different from what we normally do when mounting real disk drives, which is typically done as root using sudo. When run by a regular user, sshfs automatically does user ID mapping.

$ sshfs \
  -o allow_other,default_permissions,identityfile=/home/USER/.ssh/id_rsa.pub \
  USER-NAME@REMOTE:/home/USER-NAME \
  ~/mounts/USER-NAME@REMOTE
fusermount3: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf

The identityfile option says to use an SSH key. In this example we are using one from the standard ~/.ssh directory.

The fusermount message tells us that the allow_other option is only permitted when user_allow_other is set in /etc/fuse.conf, so edit that file and uncomment the corresponding line.
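On Ubuntu that line ships commented out. After editing, /etc/fuse.conf should contain this line, uncommented:

user_allow_other

With that change saved, rerun the mount: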

$ sshfs \
    -o allow_other,default_permissions \
    USER-NAME@REMOTE:/home/USER-NAME \
    ~/mounts/USER-NAME@REMOTE

This works with no messages. It seems that because it is not running as root (using sudo), the identityfile option is not required; ssh picks up the default key from ~/.ssh on its own.

But, in this case the remote user account is the SSH account for a shared hosting server on my web hosting provider (Dreamhost). On accessing the remote directory, this error is printed:

$ ls -l ~/mounts/USER-NAME@REMOTE/
ls: cannot open directory '/home/USER/mounts/USER-NAME@REMOTE/': Permission denied

This is because my user ID on the remote system is not my local user ID. This error came from the local filesystem, because my user ID (user 1000) did not have sufficient clout to view the files which had user ID and permissions from the remote filesystem.

Consulting the sshfs manual page (man sshfs) I see an option, idmap, that will solve the problem:

$ sshfs \
    -o allow_other,default_permissions,idmap=user \
    USER-NAME@REMOTE:/home/USER-NAME \
    ~/mounts/USER-NAME@REMOTE

And, now, listing the files in the remote directory works perfectly. The idmap=user option says to map ownership of remote files to the local user who mounted the filesystem.
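As a quick check, list the mount point again. With idmap=user the ownership column should now show your local user name:

$ ls -l ~/mounts/USER-NAME@REMOTE/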

Using SSH to mount a remote filesystem using a PEM key

So far we've seen how to use a regular SSH key to authenticate the SSH session used by SSHFS. Sometimes PEM keys are used rather than regular SSH keys, for example with AWS EC2 servers.

In my case, I have a virtual server provisioned in the DreamCompute service, which is a pseudo-clone of EC2. The ubuntu account on that server is accessed over SSH with a PEM key, exactly as on an EC2 server.

In my ~/.ssh/config file, I have the following snippet:

Host host001
HostName ###.###.###.###
User ubuntu
IdentityFile ~/.ssh/host001-key.pem

This says the host alias host001 corresponds to the IP address given in the HostName option, uses the user name ubuntu, and authenticates with the named PEM file. With this setup, accessing the server with SSH is simple:

$ ssh host001

To replicate this on SSHFS:

$ sshfs \
  -o allow_other,default_permissions,identityfile=/home/USER/.ssh/KEY-FILE.pem \
  USER-NAME@SERVER-DOMAIN:/opt \
  /home/USER/mounts/HOST-opt/

We discussed this earlier when showing use of a normal SSH key. We've simply replaced the SSH key file name with the PEM key file name, and everything else is the same.

Also notice we're not using the idmap option. The USER-NAME@SERVER-DOMAIN string gives both the remote user name and the domain name.

To test the ability to create a file over this connection (here using a mount of the remote home directory at ~/mounts/host001):

$ touch ~/mounts/host001/aaa
$ ls -l ~/mounts/host001/aaa
-rw-rw-r-- 1 david david 0 Jun 19 00:59 /home/david/mounts/host001/aaa

Then SSH to the remote server and run this:

$ ls -l aaa
-rw-rw-r-- 1 ubuntu ubuntu 0 Jun 18 21:59 aaa
$ echo 'Hello, world' >aaa
$ logout
Connection to nnn.nnn.nnn.nnn closed.
$ cat ~/mounts/host001/aaa
Hello, world
$ echo 'Hello, world!' >~/mounts/host001/aaa
$ ssh host001 cat aaa
Hello, world!

This demonstrates we can view and edit files, using normal everyday tools, over an SSH connection to a remote server. First we created the file from our laptop on the remote server, then modified it on the remote, saw the change on our laptop, then edited it again on the laptop, and saw the new change on the remote server.

The file-system mounts show up when using df -h and also in Nautilus (GNOME Files), the default file manager on Ubuntu.

When done with the mount, simply unmount it in the normal way:

$ umount ~/mounts/host001
$ umount ~/mounts/host001-opt 

This is the normal way to unmount a file-system on Linux and other Unix-like systems.
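If umount complains about permissions for a mount created without sudo, the FUSE helper does the same job (it is named fusermount3 on newer systems and fusermount on older ones):

$ fusermount -u ~/mounts/host001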

Installing Rclone

(rclone.org) Rclone is a very cool tool whose primary purpose is data synchronization. It also handles mounting remote file-systems to the local system. It supports a long list of cloud storage systems. We discussed Rclone at length in Mounting S3 (compatible) buckets as a filesystem on Linux or macOS using S3FS or Rclone

On Ubuntu you can find this:

$ apt-cache search rclone
rclone - rsync for commercial cloud storage
rclone-browser - Simple cross platform GUI for rclone

Installing the rclone-browser package provides the GUI, and also installs the rclone CLI.
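If you only want the command-line tool, install the rclone package by itself:

$ sudo apt-get install rclone

Otherwise, install rclone-browser, which pulls in the CLI as a dependency:

$ sudo apt-get install rclone-browser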

For macOS, packages are available in both MacPorts and Homebrew.
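Something like one of these should work, depending on which package manager you use:

$ sudo port install rclone
$ brew install rclone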

For Windows, the Rclone website offers binaries to download.

Using Rclone to mount an SSH/SFTP file system

Rclone uses the word SFTP to describe using SSH to mount a remote file-system.

The concepts are similar to what we just did with SSHFS. Namely, we use either a normal SSH key, or a PEM key, to authenticate ourselves against a remote host. We then mount a remote directory onto a local mount point.

When mounting S3 buckets using Rclone, the Rclone Browser GUI application perfectly handled the task without having to use the command-line tool. But, for mounting SFTP/SSH file-systems, I found it easier to use the command-line tool.

For my remote server which uses a PEM file, this configuration worked:

[REMOTEHOST]
type = sftp
host = nnn.nnn.nnn.nnn
use_insecure_cipher = true
disable_hashcheck = true
ask_password = false
key_file = /home/david/.ssh/host001-key.pem
user = ubuntu

For the host parameter give either an IP address or a domain name, if a domain name is assigned to the host.

For a remote host secured with a normal SSH key, use this configuration:

[SSH-KEY-REMOTE]
type = sftp
host = DOMAIN.NAME
user = USERNAME
key_use_agent = true
use_insecure_cipher = false
disable_hashcheck = false
key_file = /home/david/.ssh/id_rsa

In either case, you enter the config mode like so:

$ rclone config
Current remotes:

Name                 Type
====                 ====
SSH-KEY-REMOTE       sftp
dropbox              dropbox
google-USER1         drive
google-USER2         drive
REMOTEHOST           sftp
nextcloud-USER3      webdav
shared               s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> 

It lists any current configurations. To create a new one, type n and then start answering the questions. To edit an existing one, type e and start answering the questions.

When you're finished with the questions, you'll be shown a summary like the above, and then it asks you:

y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

If you're not happy with it, type e to go through the questions again.
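As an alternative to the interactive questions, a remote can also be created non-interactively with the rclone config create subcommand. Here is a sketch matching the SSH-key configuration above (same placeholder names; adjust the key path for your system):

$ rclone config create SSH-KEY-REMOTE sftp \
    host DOMAIN.NAME \
    user USERNAME \
    key_file /home/david/.ssh/id_rsa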

To test the connection use the lsd command (list directory) like so:

$ rclone lsd SSH-KEY-REMOTE:
2024/06/19 12:16:05 Failed to create file system for "SSH-KEY-REMOTE:": \
  failed to read public key file: open id_rsa.pub: no such file or directory

This error was generated because key_file was set to simply id_rsa.pub. To use a normal SSH key, you must give the full pathname of the private key and leave off the .pub extension, as shown in the configuration above.

With a successful configuration, you will be shown a listing of files in the remote directory.

To mount a remote file-system:

$ rclone mount REMOTENAME: ~/path/to/mountpoint

In the Rclone CLI, there are three syntaxes for paths:

  1. /path/to/file -- Refers to a file, or directory, in the local file-system.
  2. remote: -- Refers to the default directory for the remote connection
  3. remote:/path/to/file -- Refers to a file at /path/to/file on the remote system

In this case we've mounted the remote user's home directory.
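The same path syntaxes work with the other rclone commands. For example (REMOTENAME and the paths here are placeholders), these list the directories in the default remote directory, list the files under /opt on the remote, and copy /opt from the remote to a local directory, respectively:

$ rclone lsd REMOTENAME:
$ rclone ls REMOTENAME:/opt
$ rclone copy REMOTENAME:/opt /path/to/local/copy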

If you want to mount another directory:

$ rclone mount REMOTENAME:/opt ~/path/to/mountpoint

On my VPS, the /opt directory contains directories for all the Docker containers.

The rclone mount command stays in the foreground. If you want it to run in the background:

$ rclone mount REMOTENAME: ~/path/to/mountpoint --daemon

This spelling, daemon, has a long history in computer systems. In the olden days of computing I recall hearing of ultra-Christians having conniptions that we're summoning demons or something.

This flag simply means the process will detach from the controlling terminal, and run in the background. In Docker, the flag for this purpose is --detach, which is both more descriptive, and doesn't run the risk of ire from ultra-Christians worrying about our souls.

It tells us something about the age of the Rclone project that this is the flag used. According to (en.wikipedia.org) Wikipedia, the daemon terminology began in 1963 with an MIT project. The originator thought of "Maxwell's demon" which was an "imaginary agent" in Physics that helped to sort molecules.

Okay, the --daemon flag causes the rclone command to detach from the terminal and become a background process. It's just a piece of software. There's no need to worry about our immortal souls.

Once you've run the above command, the df -h command shows the mount, and you can see the files using

$ ls -l ~/path/to/mountpoint

Any normal program will be able to access files in the directory. Also, file ownership is mapped automatically to your local user ID.

When you're done:

$ umount ~/path/to/mountpoint

This is the normal way to unmount any mounted file-system.

Summary

Everything shown above runs on both macOS and Linux. The same tools are available for Windows, but I did not test that.

It is simple and easy to use SSH/SFTP to mount a remote file-system. Doing so gives access to the remote files, such as on a VPS or shared hosting environment, as if they're on your local machine.

My primary goal is to use an editor, like Visual Studio Code, on my laptop to directly edit files on a remote machine. I have both a VPS and a shared hosting account, and often need to edit files on those systems. While I'm quite adept at SSH'ing into the machine and using /usr/bin/vi to edit remote files, as I've done for about 40 years, it's a nicer experience to use a local editor.

Of the two choices, SSHFS or Rclone, I strongly recommend Rclone. As I noted in the exploration of mounting S3 buckets, Rclone supports a huge list of cloud storage systems, and has a ton of other features including both one-directional and bi-directional synchronization.

About the Author(s)

(davidherron.com) David Herron : David Herron is a writer and software engineer focusing on the wise use of technology. He is especially interested in clean energy technologies like solar power, wind power, and electric cars. David worked for nearly 30 years in Silicon Valley on software ranging from electronic mail systems, to video streaming, to the Java programming language, and has published several books on Node.js programming and electric vehicles.