Linux Disk Mount and Management Guide: From lsblk to fstab

Linux tutorial - IT technology blog

Adding a disk to a server, a full / partition, a /var partition running out of space — these situations happen more often than you’d think. On the old CentOS 7 server at work, I had to deal with disk management quite a few times, and it didn’t always go smoothly. This article captures what actually proved useful after nearly two years of wrestling with disks on Linux production systems.

Understanding “mount” in Linux

Unlike Windows, which uses drive letters (C:, D:, E:), Linux uses a single directory tree starting at /. Every storage device — SSD, HDD, USB drive, network drive — is “attached” to a point in that tree. That’s called a mount point.

When you plug a new drive into a server, Linux recognizes it as a block device (typically /dev/sdb, /dev/nvme1n1, etc.) but you can’t use it right away. There are 4 steps that must be done in order:

  1. Create a partition on the disk (if one doesn’t exist)
  2. Format the partition with the appropriate filesystem
  3. Mount it to a directory
  4. Configure it to mount automatically on reboot
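Put together, the four steps boil down to a short command sequence — shown here on a hypothetical new disk /dev/sdb, with each step explained in detail below:

```shell
# 1. Partition (GPT, whole disk) — parted's -s flag runs non-interactively
sudo parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%

# 2. Format the new partition
sudo mkfs.ext4 /dev/sdb1

# 3. Mount it to a directory
sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data

# 4. Persist across reboots: append the UUID to /etc/fstab
echo "UUID=$(sudo blkid -s UUID -o value /dev/sdb1)  /mnt/data  ext4  defaults  0  2" | sudo tee -a /etc/fstab
```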

Core concepts before you start

Block device naming

Linux names devices using a consistent convention:

  • /dev/sda, /dev/sdb — SATA/SCSI disks (a = first, b = second…)
  • /dev/sda1, /dev/sda2 — partitions 1 and 2 of /dev/sda
  • /dev/nvme0n1, /dev/nvme0n1p1 — NVMe SSDs
  • /dev/vda — virtual disk (KVM, VPS)


Important: The name /dev/sdb can change after a reboot if the device detection order changes. Always use UUIDs in fstab, never device names — I learned this the hard way after a server failed to boot because the disks swapped order.

Common filesystems

  • ext4: Default on Ubuntu/Debian, stable, suitable for most use cases
  • xfs: Default on RHEL/CentOS, better for large files and high I/O
  • btrfs: Supports snapshots and copy-on-write, but think carefully before using it in production
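Before formatting, it's worth a quick sanity check of which filesystems the running kernel can actually mount:

```shell
# Filesystems the running kernel supports
# ("nodev" entries are virtual filesystems with no backing device)
cat /proc/filesystems
```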

Step-by-step walkthrough

Step 1: List current disks

The first command I run on any server is lsblk:

lsblk
# NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
# sda      8:0    0   50G  0 disk
# ├─sda1   8:1    0    1G  0 part /boot
# └─sda2   8:2    0   49G  0 part /
# sdb      8:16   0  200G  0 disk        ← new disk, not yet mounted

To also see filesystem info and UUIDs:

lsblk -f
# Shows FSTYPE, UUID, LABEL, MOUNTPOINTS

Or use fdisk -l to inspect the partition table in detail:

sudo fdisk -l /dev/sdb

Step 2: Create a partition

I prefer parted over fdisk because it supports GPT and disks larger than 2TB:

sudo parted /dev/sdb

# Inside parted interactive mode:
(parted) mklabel gpt              # Create a GPT partition table
(parted) mkpart primary ext4 0% 100%   # One partition using the entire disk
(parted) print                    # Verify the result
(parted) quit

For disks smaller than 2TB using traditional MBR:

sudo fdisk /dev/sdb
# Press: n → p → 1 → Enter → Enter → w

Step 3: Format the filesystem

Format the newly created partition (e.g., /dev/sdb1):

# ext4 — the most common choice
sudo mkfs.ext4 /dev/sdb1

# xfs — use on RHEL/CentOS or when high throughput is needed
sudo mkfs.xfs /dev/sdb1

# Add a label for easier identification later
sudo mkfs.ext4 -L "data-disk" /dev/sdb1

On the company’s CentOS 7 server, I went with xfs because it handles concurrent writes better than ext4 — which matters when nginx, PHP-FPM, and MySQL are all writing logs at the same time. After moving /var/log to xfs, I/O wait during peak hours dropped noticeably, especially with log rotation running every night at midnight.

Step 4: Mount manually

Create the mount point first, then mount to it:

# Create the directory to use as the mount point
sudo mkdir -p /mnt/data

# Mount
sudo mount /dev/sdb1 /mnt/data

# Verify
df -h /mnt/data
# Filesystem      Size  Used Avail Use% Mounted on
# /dev/sdb1       197G   28M  197G   1% /mnt/data

This mount only persists until the next reboot. For a permanent mount, you need to configure /etc/fstab.

Step 5: Configure fstab with UUID

A single typo here can prevent the server from booting — trust me, I know. First, get the UUID of the partition:

sudo blkid /dev/sdb1
# /dev/sdb1: UUID="a1b2c3d4-e5f6-7890-abcd-ef1234567890" TYPE="ext4"

# Or use lsblk:
lsblk -f /dev/sdb1

Then add an entry to /etc/fstab:

sudo nano /etc/fstab

Add this line at the end of the file:

UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890  /mnt/data  ext4  defaults  0  2

Explanation of the 6 fields in fstab:

  1. Device: UUID=… (recommended) or /dev/sdb1
  2. Mount point: /mnt/data
  3. Filesystem type: ext4, xfs, btrfs…
  4. Options: defaults (rw, suid, exec, auto, nouser, async)
  5. Dump: 0 = do not back up with the dump utility
  6. Pass: 0 = skip fsck at boot, 1 = the root partition, 2 = other partitions (checked after root)
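For a secondary data disk, one option worth adding is nofail, so a missing or dead disk doesn't drop the server into emergency mode at boot (reusing the example UUID from above):

```
UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890  /mnt/data  ext4  defaults,nofail  0  2
```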

Always test fstab before rebooting — this is a step I never skip after one bad fstab entry left a server unable to boot:

sudo mount -a   # Mount all entries in fstab
# No errors means you're good
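Recent versions of util-linux also ship a dedicated fstab linter, which catches problems mount -a can miss, such as unknown options or unresolvable UUIDs:

```shell
# Check /etc/fstab for syntax errors, unknown options, and unresolvable UUIDs
sudo findmnt --verify

# Add --verbose for a per-line report
sudo findmnt --verify --verbose
```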

Step 6: Common management operations

Unmounting a disk:

sudo umount /mnt/data

# If you get "target is busy", check which process is using it:
sudo lsof /mnt/data
# Or:
sudo fuser -m /mnt/data
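If the offending process can't be stopped right away, umount -l performs a lazy unmount: the mount point is detached immediately and cleanup happens once the last user lets go. Treat it as a last resort, since writes still in flight can be lost:

```shell
# Lazy unmount — detaches now, finishes cleanup when the mount is no longer busy
sudo umount -l /mnt/data
```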

Mount with advanced options:

# Mount read-only
sudo mount -o ro /dev/sdb1 /mnt/data

# noexec, nosuid — good for partitions storing uploaded files
sudo mount -o noexec,nosuid /dev/sdb1 /mnt/data

# Remount on-the-fly without unmounting
sudo mount -o remount,rw /mnt/data

Check disk usage:

# Overview of all partitions
df -hT   # -T shows filesystem type

# Check inode usage (sometimes a disk appears "full" because inodes are exhausted, not space)
df -i

# Find directories consuming the most space
du -sh /* 2>/dev/null | sort -rh | head -20
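The usage checks above can be turned into a simple alert: flag any filesystem at or above a threshold. A minimal sketch with df and awk — check_usage is a hypothetical helper, not a standard tool:

```shell
#!/usr/bin/env bash
# Flag any filesystem whose usage meets or exceeds a threshold (percent).
check_usage() {
  local threshold="$1"
  # -P forces POSIX output: one line per filesystem, Use% in column 5
  df -P | awk -v t="$threshold" 'NR > 1 {
    use = $5; sub(/%/, "", use)
    if (use + 0 >= t) printf "%s is at %s%% (mounted on %s)\n", $1, use, $6
  }'
}

# Example: warn on anything at 80% or more
check_usage 80
```

Wire this into cron or a monitoring agent to get the 80% alerts mentioned in the conclusion.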

Troubleshooting common errors

“wrong fs type” error on mount: Occurs when the filesystem type doesn’t match. Use lsblk -f to confirm the correct type, then specify it explicitly:

sudo mount -t xfs /dev/sdb1 /mnt/data

Filesystem corruption after a crash: Run fsck while the partition is unmounted:

sudo umount /dev/sdb1
sudo fsck -y /dev/sdb1   # -y automatically answers yes to all prompts
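One caveat: fsck does real repair work only on ext-family filesystems. For xfs, fsck.xfs is a no-op, and the repair tool from xfsprogs is xfs_repair, again run only while the partition is unmounted:

```shell
sudo umount /dev/sdb1
sudo xfs_repair /dev/sdb1

# If a dirty log won't replay, -L zeroes it — a last resort that can lose recent metadata
# sudo xfs_repair -L /dev/sdb1
```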

Conclusion

After nearly two years and well over 20 disk operations on production systems, there are a few principles I apply every single time:

  • Always use UUIDs in fstab — never device names like /dev/sdb1
  • Test with mount -a before rebooting — one bad fstab entry that keeps a server from booting is a lesson you only need once
  • Add a label when formatting partitions so they’re easy to identify when you have multiple disks
  • Monitor disk usage regularly and set alerts at 80% capacity so you have time to act before things fill up

Looking back, what I wish I’d learned earlier wasn’t the commands — it was the workflow: identify the device → partition → format → mount → fstab. Once you’ve internalized that sequence, you can handle any server. Even adding a 200GB data disk to production at 2 AM with zero downtime and zero panic.
