Tags: Ubuntu
Ubuntu is supposed to be fairly simple to use, with lots of attention paid to a solid, reliable, well-thought-out Linux user experience. My experience setting up a multi-drive Ubuntu computer showed it's not quite as smooth as the Ubuntu reputation would lead you to believe.
What I have is a 5th-generation Intel NUC with a Core i5 CPU, an M.2 SSD, and a regular 2.5" spinning hard disk. See Intel NUC's perfectly supplant Apple's Mac Mini as a lightweight desktop computer for a writeup and pictures.
I spent a while pondering how to set this up, and have some high-level observations:
- Linux has the LVM (Logical Volume Manager) feature, which seems capable of gluing multiple drives together and setting up partitions (a.k.a. volumes) that are dynamically resizable.
- There seems to be an LVM mode for a hybrid combination of SSD and hard disk. The description sounds like the Fusion Drive on macOS. This mode was mentioned as a high-level feature with no documentation on implementation, and I was unable to find an implementation tutorial. I did find a warning that in such a setup, if/when the SSD dies it will take down the whole hybrid volume.
- In general it's difficult to find tutorials on LVM setup. Further, the Ubuntu installer simply does not support multi-drive LVM configuration. It lets you designate the drive as LVM, but...
- In general the Ubuntu installer has a section dealing with custom drive partitioning, but the user interface is horridly complicated and confusing.
Therefore, after looking at several LVM tutorials and the warnings against a hybrid LVM setup, I decided against that route. That's a shame since it's very attractive to use an SSD as a dynamic cache for a regular spinning hard disk.
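For reference, had I gone the LVM route, gluing the two drives together would look roughly like the following. This is a minimal sketch, assuming both drives can be wiped and using vg0 and home as placeholder names:
$ sudo pvcreate /dev/sda /dev/sdb       # register both drives as LVM physical volumes
$ sudo vgcreate vg0 /dev/sda /dev/sdb   # pool them into a single volume group
$ sudo lvcreate -L 100G -n home vg0     # carve out a 100 GB logical volume
$ sudo mkfs.ext4 /dev/vg0/home          # format it like any ordinary partition
The attraction is that a logical volume can be grown later with lvextend (plus resize2fs for the filesystem), which ordinary fixed partitions don't allow.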
In the Ubuntu installer I selected the SSD as the installation target. It did not offer any option to put anything on the HDD - unless I were to use the custom setup mode. As I said above, that mode is downright unusably complex.
The result was:
- /dev/sda1 was 200 MB, FAT32, mounted on /boot/efi
- /dev/sda2 was unallocated and unformatted, holding the remaining nearly 300 GB
- /dev/sdb1 was 500 MB, FAT32, unmounted, but marked "boot, esp"
- /dev/sdb2 was 118 GB, ext4, mounted on /
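To double-check a layout like this yourself, lsblk prints a compact tree of drives, partitions, and mount points without changing anything (the column list here is just one I find useful):
$ lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT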
Avoiding too many writes on the SSD
SSD lifetime is lengthened if you reduce the number of write operations. Read operations don't shorten an SSD's lifetime, but write operations do. Hence, for the longest possible lifetime, we must keep the SSD mostly for read-only files. See SSD drive lifetime expectancy explained.
On Ubuntu that means:
- Keep the swap partition on the HDD
- Move the /home partition to the HDD
- Configure Docker to keep its data on the HDD
I may decide to move other things like /var or /tmp to the HDD as well.
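To keep an eye on how much is actually being written to the SSD, the smartmontools package can read the drive's SMART counters. A quick sketch, assuming the SSD is /dev/sdb as it is on my system (the attribute names vary by drive vendor):
$ sudo apt install smartmontools
$ sudo smartctl -a /dev/sdb | grep -i -e written -e wear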
Formatting strategy
The SSD is what it is - a small boot partition, and the remainder for the operating system.
The HDD already has one partition. It will be useful to set up a proper Swap partition, and an ext4
partition for the /home
file system. Rather than set up another file system for the other purposes, I'll configure Docker and other services to use a directory on /home
. Technically that's not the correct use of /home
but I don't care.
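As a sketch of that Docker configuration (the directory name /home/docker-data is just an illustrative choice), Docker's data-root setting in /etc/docker/daemon.json points the daemon at a different storage location:
$ sudo mkdir -p /home/docker-data
$ sudo tee /etc/docker/daemon.json <<'EOF'
{
    "data-root": "/home/docker-data"
}
EOF
$ sudo systemctl restart docker
Note this only tells Docker where to store things from now on; any existing images and containers stay under /var/lib/docker until they're moved or re-pulled.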
Maybe, rather than one big ext4 partition, I should have set up the remaining space on the HDD as an LVM volume. That way I could have created custom-sized, resizable volumes and mounted them at specific places in the file system. But the following is simpler.
A problem cropped up at this point. For simplicity I wanted to use gparted to format the HDD but it wouldn't launch out of the box. For the solution, see: Fixing Ubuntu to allow running gparted to format/configure disks
For the swap partition, I used gparted to create a linux-swap partition, /dev/sda2, at 48 GB. The rationale for 48 GB is: a) main memory is 16 GB, and b) 48 GB is a 3x multiple of that. Maybe that's too much swap. I don't really care; it seems like a good round number.
I then used gparted to create an ext4 partition for the remainder of the disk.
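For anyone who prefers the command line, roughly the same result is available with mkswap and mkfs.ext4. A sketch, assuming the partitions /dev/sda2 and /dev/sda3 already exist (created with gparted, fdisk, or parted):
$ sudo mkswap /dev/sda2      # write a swap signature to the 48 GB partition
$ sudo swapon /dev/sda2      # start using it immediately
$ sudo mkfs.ext4 /dev/sda3   # format the rest of the HDD as ext4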
At that point I expected gparted would be able to help edit /etc/fstab and otherwise assign mount points for the new partitions. But nothing let me do so. Eventually I noticed "3 operations pending" in the footer, and then in the Edit menu a choice "Apply all operations". At that point I realized the partitioning I'd done had not yet been written to the disk.
Running Apply all operations set up the partitions, formatting the new /home partition with ext4.
/etc/fstab setup
The real source of truth for mounted filesystems is /etc/fstab. Unfortunately, gparted does not help with editing that file.
The current best practice for /etc/fstab is to identify each partition by its UUID. In the past you'd use a device name like /dev/sda2, but the UUID is a more reliable identifier since device names can change between boots.
For example, here's the resulting /etc/fstab for my system, using UUIDs to identify the partitions:
UUID=a7b9766c-7b35-4684-b6cc-cdbe3ee2ed5e / ext4 errors=remount-ro 0 1
UUID=67E3-17ED /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
UUID=8f47f9bf-3988-482d-b263-e96d819ca293 none swap sw 0 0
UUID=7b9210d7-219f-4fc8-8501-677277a05f19 /home-tmp ext4 errors=remount-ro 0 2
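For reference, each /etc/fstab line has six fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass order. Here's the /home-tmp line from above, annotated:
# device                                      mount point  type  options            dump  pass
UUID=7b9210d7-219f-4fc8-8501-677277a05f19     /home-tmp    ext4  errors=remount-ro  0     2
The pass value of 2 means the filesystem is checked at boot after the root filesystem (which uses 1); 0 means skip the check.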
The question is: how do you get those UUID values? The gparted program can show the UUID by right-clicking the partition, then selecting Information in the popup menu.
Another method is the blkid program, which is run like so:
$ blkid
/dev/sdb2: UUID="a7b9766c-7b35-4684-b6cc-cdbe3ee2ed5e" TYPE="ext4" PARTUUID="58f9da9b-fd97-4c7d-8331-b5e3511fb0e9"
/dev/sda2: UUID="8f47f9bf-3988-482d-b263-e96d819ca293" TYPE="swap" PARTLABEL="UNTITLED" PARTUUID="3b4f3c8b-2830-403a-85df-ff2773dc6388"
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/sda1: LABEL="EFI" UUID="67E3-17ED" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="dd0db366-a6cb-4f12-9373-0001e8f4a07c"
/dev/sda3: UUID="7b9210d7-219f-4fc8-8501-677277a05f19" TYPE="ext4" PARTUUID="dee834c8-6784-417a-bb1f-a849fcfaedfa"
/dev/sdb1: UUID="8DBA-4413" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="2346e846-d2c3-432a-9049-2e6cca95c3ad"
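Two other ways to see the same UUIDs, if blkid's output feels noisy: lsblk has a filesystem-oriented view, and the kernel keeps symlinks from each UUID to its device node under /dev/disk/by-uuid:
$ lsblk -f
$ ls -l /dev/disk/by-uuid/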
Once you have the UUIDs, it's just a matter of setting up /etc/fstab. After that, run mount -a (as root) and the file systems are mounted.
The result looks like so:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 11M 1.6G 1% /run
/dev/sdb2 117G 15G 97G 14% /
tmpfs 7.8G 196M 7.6G 3% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/loop0 89M 89M 0 100% /snap/darktable/4
/dev/loop1 193M 193M 0 100% /snap/vlc/131
/dev/loop2 82M 82M 0 100% /snap/core/3887
/dev/loop3 140M 140M 0 100% /snap/slack/4
/dev/loop4 198M 198M 0 100% /snap/polarr/3
/dev/loop5 175M 175M 0 100% /snap/atom/116
/dev/sda3 246G 9.4G 225G 4% /home-tmp
/dev/sda1 197M 4.6M 193M 3% /boot/efi
Why does /dev/sda3 say /home-tmp? There's an issue with moving the /home contents to the new partition, and mounting the new partition on /home, WHILE YOU ARE LOGGED IN.
A logged-in user has a bunch of open files in their home directory; the X11 server providing the GUI environment is the culprit. It simply does not make sense to move /home while the user is logged in.
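You can see the problem for yourself with lsof, which lists the processes holding files open under a directory (it can take a moment on a large home directory):
$ sudo lsof +D /home | head -20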
Taking care of final tasks - /home and Docker
We've gotten to a point where both drives are fully formatted and allocated. But the Docker data and the /home files are still on the SSD. I'll cover both of those in separate blog posts.