
Serially concatenating disks in NetBSD


Live demo in BSD Now Episode 028 | Originally written by Christian Koch | Last updated: 2014/03/12

NOTE: the author/maintainer of the tutorial(s) is no longer with the show, so the information below may be outdated or incorrect.

In this tutorial, we are going to create a number of virtual disks and concatenate them in such a way that our file system can grow on demand. We'll see how it should be possible to "plug in" extra storage hardware whenever we want, and then leverage this extra space however we want.

There are two different ways disks can be concatenated: either with a "striping" effect, or "serially." We're going to concatenate the disks serially. The idea is that I should be able to "plug in" more storage space whenever I want, without affecting any pre-existing data. For the purposes of this exercise, we're going to go after extra brownie points and use virtual disks instead of real disks. First I create an empty file 1 GB in size.

$ dd if=/dev/zero of=DISK1 bs=1024 count=1048576

Then I create another empty file 2 GB in size.

$ dd if=/dev/zero of=DISK2 bs=1024 count=2097152
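As a quick sanity check (plain shell arithmetic, not part of the original session), bs multiplied by count should equal the intended size in bytes:

```shell
# bs * count = total size in bytes
echo $((1024 * 1048576))   # DISK1: 1 GB -> prints 1073741824
echo $((1024 * 2097152))   # DISK2: 2 GB -> prints 2147483648
```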

DISK1 and DISK2 are just files, but we're going to use vnconfig(8) to make the system treat these files as if they were empty disks.

# vnconfig -cv /dev/vnd0 DISK1
/dev/vnd0d: 1073741824 bytes on DISK1
# vnconfig -cv /dev/vnd1 DISK2
/dev/vnd1d: 2147483648 bytes on DISK2

We confirm that they've been successfully created:

# vnconfig -l
vnd0: / (/dev/wd0a) inode 16660613
vnd1: / (/dev/wd0a) inode 16660615
vnd2: not in use
vnd3: not in use

Now we're going to concatenate the disks with the help of ccd.

# ccdconfig -cv ccd0 0 none /dev/vnd0d /dev/vnd1d
ccd0: 2 components (vnd0d, vnd1d), 6291456 blocks concatenated
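That block count is exactly what we'd expect: ccd counts in 512-byte sectors, so our 1 GB + 2 GB of backing store comes to 6291456 blocks. A quick back-of-the-envelope check (just shell arithmetic, for illustration):

```shell
# total bytes of DISK1 + DISK2, divided by the 512-byte sector size
echo $(( (1073741824 + 2147483648) / 512 ))   # prints 6291456
```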

To my surprise, this concatenated disk seems to already have a disklabel applied to it:

# disklabel /dev/ccd0

type: ccd
disk: ccd
label: fictitious
bytes/sector: 512
sectors/track: 2048
tracks/cylinder: 1
sectors/cylinder: 2048
cylinders: 3072
total sectors: 6291456
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0        # microseconds
track-to-track seek: 0    # microseconds
drivedata: 0

4 partitions:
#        size    offset     fstype [fsize bsize cpg/sgs]
a:   6291456         0     4.2BSD      0     0     0  # (Cyl.      0 -   3071)
d:   6291456         0     unused      0     0        # (Cyl.      0 -   3071)

It seems that partition "a" has already been labeled to expect a FFS file system there. I'll go ahead and try to put a new filesystem on it:

# newfs /dev/rccd0a

/dev/rccd0a: 3072.0MB (6291456 sectors) block size 16384, fragment size 2048
  using 17 cylinder groups of 180.72MB, 11566 blks, 22784 inodes.
super-block backups (for fsck_ffs -b #) at:
32, 370144, 740256, 1110368, 1480480, 1850592, 2220704, 2590816, 2960928,

Well, that was easy. Notice we use /dev/rccd0a, not /dev/ccd0a. newfs(8) requires a raw device, not a block device. On NetBSD and OpenBSD, we refer to the "raw access" version of a block device by prepending "r" to the device name. Anyway, let's mount it.
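If you're curious, ls(1) shows the difference between the two device nodes: the block device has type "b" and the raw (character) device has type "c". The output below is a sketch of what to expect, not captured from the demo session:

```shell
ls -l /dev/ccd0a /dev/rccd0a
# brw-r-----  ...  /dev/ccd0a    <- block device (buffered access)
# crw-r-----  ...  /dev/rccd0a   <- raw "character" device
```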

# mount_ffs /dev/ccd0a /home/christian/mnt

We confirm it's been mounted:

$ df -hl

Filesystem         Size       Used      Avail %Cap Mounted on
/dev/wd0a          286G       264G       7.9G  97% /
kernfs             1.0K       1.0K         0B 100% /kern
ptyfs              1.0K       1.0K         0B 100% /dev/pts
procfs             4.0K       4.0K         0B 100% /proc
/dev/ccd0a         3.0G       2.0K       2.8G   0% /home/christian/mnt

Observe that the original files DISK1 and DISK2 have been updated accordingly. Only DISK1 is recognized as containing a file system, since the FFS superblock lives at the beginning of the concatenated disk; DISK2 holds only later data blocks.

$ file DISK1

DISK1: Unix Fast File system [v1] (little-endian), last mounted on
/home/christian/mnt, last written at Fri Jan 24 12:46:39 2014, clean
flag 2, number of blocks 1572864, number of data blocks 1548375, number
of cylinder groups 17, block size 16384, fragment size 2048, minimum
percentage of free blocks 5, rotational delay 0ms, disk rotational speed
60rps, TIME optimization

$ file DISK2
DISK2: data

We'll write some stuff to the concatenated disk image, then unmount the disk.
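(The tutorial doesn't show this step, so here's a hypothetical example; the file names are made up, and any data will do.)

```shell
# Write some throwaway files to the mounted file system
echo "hello from ccd" > /home/christian/mnt/hello.txt
dd if=/dev/zero of=/home/christian/mnt/filler bs=1024 count=10240
```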

# umount /home/christian/mnt

You can mount /dev/ccd0a again and see your data reappear. But actually we're going to unconfigure the concatenated disk. (Be sure to unmount the file system first.)

# ccdconfig -uv ccd0
ccd0 unconfigured

Now here's the super fun and interesting part. We're going to add another virtual disk, despite the fact that we already have a file system with data on it. This third disk will be 1 GB in size.

$ dd if=/dev/zero of=DISK3 bs=1024 count=1048576
# vnconfig -c vnd2 DISK3
# ccdconfig -cv ccd0 0 none /dev/vnd0d /dev/vnd1d /dev/vnd2d

ccd0: 3 components (vnd0d, vnd1d, vnd2d), 8388608 blocks concatenated

Note that the first two disks must be concatenated in exactly the same order as last time. After all, we already confirmed that DISK1 holds the disklabel and superblock.

Here's the catch. The (concatenated) disk just got bigger, yes, but the file system on that disk is still the same size as before. We have to grow the FFS partition to fill the rest of the disk.

# disklabel -i /dev/ccd0
> a
> yes
> *
> W
> y
> Q

Essentially, you request to modify partition "a" with the "a" command. You confirm that it's an FFS file system (a.k.a. "4.2BSD") by answering "yes". You grow the partition to cover the rest of the disk with the asterisk. Then you write the new label with "W" and confirm with "y". Finally, "Q" quits disklabel. Now we'll check the file system for consistency, then resize it to fill the enlarged partition.

# fsck -fy /dev/ccd0a
# resize_ffs -y /dev/ccd0a

Finally we can mount it again for real.

# mount /dev/ccd0a /home/christian/mnt
$ df -hl

Filesystem         Size       Used      Avail %Cap Mounted on
/dev/wd0a          286G       265G       6.9G  97% /
kernfs             1.0K       1.0K         0B 100% /kern
ptyfs              1.0K       1.0K         0B 100% /dev/pts
procfs             4.0K       4.0K         0B 100% /proc
/dev/ccd0a         3.9G        25M       3.7G   0% /home/christian/mnt

Wow. We just plugged in an extra storage device and our file system grew! We only used virtual disks in this tutorial (vnd), but just imagine what you can do with an array of actual hard drives.

As far as I can tell, the only really scary part is making sure you concatenate the disks in the exact same way every single time you need to mount the file system. The only place to safely append a disk is at the "end" of the array, never in the middle. I haven't tested how robust NetBSD and its various subsystems are when it comes to putting things in the wrong order. It would be worth investigating, for sure.
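One way to make the ordering repeatable is to record the exact component list in /etc/ccd.conf; ccdconfig -C reads that file, and NetBSD's rc scripts assemble the arrays at boot if it exists. For our three-disk array, the entry would look something like this (a sketch, not taken from the demo session):

```
# /etc/ccd.conf
# ccd   ileave  flags   component devices (order matters!)
ccd0    0       none    /dev/vnd0d /dev/vnd1d /dev/vnd2d
```

Of course, vnd-backed components would also need their vnconfig(8) step done first; with real disks, the device names are simply available at boot.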
