
A crash course on HAMMER FS

2014-09-03

Live demo in BSD Now Episode 053. | Originally written by Toby for bsdnow.tv | Last updated: 2014/09/03

NOTE: the author/maintainer of the tutorial(s) is no longer with the show, so the information below may be outdated or incorrect.

HAMMER is a 64-bit filesystem developed by Matthew Dillon for DragonFly BSD. Like ZFS, HAMMER was designed as a next-generation filesystem, bringing storage features to UNIX-based operating systems that were previously limited to UFS/FFS. HAMMER has a maximum storage capacity of 1 exabyte (10^18 bytes), or more than 1 million terabytes. It contains several features that provide data integrity, resistance against crashes, and fine-grained backup controls that rival ZFS.


Requirements

Currently, HAMMER is limited to DragonFly BSD, so that OS is the only practical way to explore the filesystem. A single HAMMER filesystem can currently span at most 256 volumes. You can download DragonFly BSD by fetching a bzip2-compressed ISO from one of the mirrors. Unlike ZFS, HAMMER does not assert control over the underlying hardware, so you can use HAMMER on top of a hardware or software RAID device without trouble. It is designed for large drives, so a device of at least 50 GB is suggested for your installation. We'll begin with an existing DragonFly installation, then add a new storage device, configure it with a HAMMER filesystem, and explain some of its features and caveats.


Creation and Initial Setup

Once you have a suitable storage device installed, you can find it in the dmesg output (assuming a SATA device):

# dmesg | grep da0
disk scheduler: set policy of da0 to noop
da0 at ahci0 bus 0 target 0 lun 0
da0: <VBOX HARDDISK 1.0> Fixed Direct Access SCSI-4 device
da0: Serial Number VB0a457314-9e7252da
da0: 300.000MB/s transfers
da0: 51200MB (104857600 512 byte sectors: 255H 63S/T 6527C)

Note that DragonFly includes a line for the device's serial number. This is a fixed value that can be used to manage the drive regardless of where it is cabled inside the machine, or if it is moved to another DragonFly machine later. You can find all such devices in DragonFly BSD under /dev/serno, including partition information where applicable:

# ls -l /dev/serno/
total 0
crw-r-----  1 root  operator   27, 0x1e110007 Jul 28 21:08 VB0a457314-9e7252da
crw-r-----  1 root  operator   27, 0x1e100007 Jul 28 19:56 VB0a457314-9e7252da.s0
crw-r-----  1 root  operator   21, 0x1e110007 Jul 28 19:56 VB2-01700376
crw-r-----  1 root  operator   20, 0x1e110007 Jul 28 19:56 VB8447d4a8-660fa274
crw-r-----  1 root  operator   20, 0x1e120007 Jul 28 19:56 VB8447d4a8-660fa274.s1
crw-r-----  1 root  operator   20, 0x00020000 Jul 28 19:56 VB8447d4a8-660fa274.s1a
crw-r-----  1 root  operator   20, 0x00020001 Jul 28 19:56 VB8447d4a8-660fa274.s1b
crw-r-----  1 root  operator   20, 0x00020003 Jul 28 19:56 VB8447d4a8-660fa274.s1d

Create a new HAMMER filesystem with "newfs_hammer". This command requires a filesystem label and at least one device:

# newfs_hammer -L MYLABEL /dev/serno/VB0a457314-9e7252da
Volume 0 DEVICE /dev/serno/VB0a457314-9e7252da size  50.00GB
initialize freemap volume 0
initializing the undo map (504 MB)
---------------------------------------------
1 volume total size  50.00GB version 6
boot-area-size:       64.00MB
memory-log-size:     128.00MB
undo-buffer-size:    504.00MB
total-pre-allocated:   0.51GB
fsid:                4ac6b735-16a2-11e4-bcca-090027b5e454

NOTE: Please remember that you may have to manually set up a
cron(8) job to prune and reblock the filesystem regularly.
By default, the system automatically runs 'hammer cleanup'
on a nightly basis.  The periodic.conf(5) variable
'daily_clean_hammer_enable' can be unset to disable this.
Also see 'man hammer' and 'man HAMMER' for more information.

Once your HAMMER filesystem has been created, you can mount it manually with "mount_hammer":

# mkdir /data
# mount_hammer /dev/serno/VB0a457314-9e7252da /data

Inspect your mount points and see that mount_hammer has succeeded:

# mount
ROOT on / (hammer, local)
devfs on /dev (devfs, local)
/dev/serno/VB8447d4a8-660fa274.s1a on /boot (ufs, local)
/pfs/@@-1:00001 on /var (null, local)
/pfs/@@-1:00002 on /tmp (null, local)
/pfs/@@-1:00003 on /usr (null, local)
/pfs/@@-1:00004 on /home (null, local)
/pfs/@@-1:00005 on /usr/obj (null, local)
/pfs/@@-1:00006 on /var/crash (null, local)
/pfs/@@-1:00007 on /var/tmp (null, local)
procfs on /proc (procfs, local)
MYLABEL on /data (hammer, local)
# df -h /data
Filesystem                           Size   Used  Avail Capacity  Mounted on
MYLABEL                               49G   265M    49G     1%    /data

MYLABEL exists and has about 49 GB of storage available. You can also instruct DragonFly BSD to mount this filesystem automatically at boot by adding this line to /etc/fstab:

echo /dev/serno/VB0a457314-9e7252da /data hammer rw 1 1 >> /etc/fstab
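
You can verify the new entry without rebooting by unmounting the filesystem and remounting it by path, which makes mount(8) consult /etc/fstab:

# umount /data
# mount /data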

If you provide multiple devices to newfs_hammer, HAMMER will combine them into a single HAMMER filesystem similar to a software RAID-0, ZFS's "striping" feature, or the old concatenated disk device (ccd) driver. This does NOT let you do failover between hardware devices and it does NOT give you mirroring or make redundant copies of your data! The recommended way to run a HAMMER filesystem safely across multiple disks with automatic failover is to use a hardware RAID.
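
As a sketch, a striped filesystem across two disks could be created like this (both serial numbers here are hypothetical):

# newfs_hammer -L BIGDATA /dev/serno/VBdisk1-serial /dev/serno/VBdisk2-serial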


Pseudo-filesystems

HAMMER is up and running now, so you can start reading and writing immediately, as well as making snapshots and using HAMMER's file history tools. However, the real strength of HAMMER lies in its pseudo-filesystems (PFSes), which compartmentalize data. Similar to ZFS's "datasets", PFSes break an entire HAMMER filesystem into smaller pieces to give administrators more flexibility. Best practice is to create all PFSes under a ./pfs directory beneath the HAMMER filesystem mountpoint (in this case /data); the root of the filesystem itself is always considered PFS #0. To make a new PFS:

# hammer pfs-status /data
/data   PFS #0 {
    sync-beg-tid=0x0000000000000000
    sync-end-tid=0x0000000100010150
    shared-uuid=4ac6b735-16a2-11e4-bcca-090027b5e454
    unique-uuid=4ac6b735-16a2-11e4-bcca-090027b5e454
    label="MYLABEL"
    prune-min=00:00:00
    operating as a MASTER
    snapshots directory defaults to /var/hammer/
}
# mkdir /data/pfs
# hammer pfs-master /data/pfs/myfiles
Creating PFS #1 succeeded!
/data/pfs/myfiles
    sync-beg-tid=0x0000000000000001
    sync-end-tid=0x0000000100008080
    shared-uuid=6c136aa6-16a5-11e4-bcca-090027b5e454
    unique-uuid=6c136ae1-16a5-11e4-bcca-090027b5e454
    label=""
    prune-min=00:00:00
    operating as a MASTER
    snapshots directory defaults to /var/hammer/

You can mount this PFS with DragonFly BSD's "mount_null":

# mkdir /data/myfiles
# mount_null /data/pfs/myfiles /data/myfiles
# mount
ROOT on / (hammer, local)
devfs on /dev (devfs, local)
/dev/serno/VB8447d4a8-660fa274.s1a on /boot (ufs, local)
/pfs/@@-1:00001 on /var (null, local)
/pfs/@@-1:00002 on /tmp (null, local)
/pfs/@@-1:00003 on /usr (null, local)
/pfs/@@-1:00004 on /home (null, local)
/pfs/@@-1:00005 on /usr/obj (null, local)
/pfs/@@-1:00006 on /var/crash (null, local)
/pfs/@@-1:00007 on /var/tmp (null, local)
procfs on /proc (procfs, local)
MYLABEL on /data (hammer, local)
/data/pfs/@@-1:00001 on /data/myfiles (null, local)
# df -h
Filesystem                           Size   Used  Avail Capacity  Mounted on
ROOT                                 8.5G   1.1G   7.4G    13%    /
devfs                                1.0K   1.0K     0B   100%    /dev
/dev/serno/VB8447d4a8-660fa274.s1a   756M   112M   584M    16%    /boot
/pfs/@@-1:00001                      8.5G   1.1G   7.4G    13%    /var
/pfs/@@-1:00002                      8.5G   1.1G   7.4G    13%    /tmp
/pfs/@@-1:00003                      8.5G   1.1G   7.4G    13%    /usr
/pfs/@@-1:00004                      8.5G   1.1G   7.4G    13%    /home
/pfs/@@-1:00005                      8.5G   1.1G   7.4G    13%    /usr/obj
/pfs/@@-1:00006                      8.5G   1.1G   7.4G    13%    /var/crash
/pfs/@@-1:00007                      8.5G   1.1G   7.4G    13%    /var/tmp
procfs                               4.0K   4.0K     0B   100%    /proc
MYLABEL                               49G   273M    49G     1%    /data
/data/pfs/@@-1:00001                  49G   273M    49G     1%    /data/myfiles

You can also add a null mount line to /etc/fstab:

echo /data/pfs/myfiles /data/myfiles null rw 0 0 >> /etc/fstab

N.B.: A HAMMER filesystem can have up to 65,536 PFSes, and there are currently no quota mechanisms in HAMMER to restrict the size of a PFS; any PFS can consume the full storage capacity of the filesystem unless you impose quotas at some higher level.

You can now start writing data to the PFS like any other filesystem:

# cd /data/myfiles
# echo hello world. > hw.txt
# dd if=/dev/zero of=file.dat bs=12345 count=1
# ls -la
total 16
drwxr-xr-x  1 root  wheel      0 Jul 28 22:34 .
drwxr-xr-x  1 root  wheel      0 Jul 28 22:28 ..
-rw-r--r--  1 root  wheel  12345 Jul 28 22:34 file.dat
-rw-r--r--  1 root  wheel     13 Jul 28 22:33 hw.txt

Mirroring

To make a copy of these files, you can create a read-only slave PFS, either on the same machine or on a different machine, with the "hammer" command's mirror-copy option:

# hammer mirror-copy /data/pfs/myfiles /data/pfs/mybackup
PFS slave /data/pfs/mybackup does not exist.
Do you want to create a new slave PFS? (yes|no) yes
Creating PFS #2 succeeded!
/data/pfs/mybackup
    sync-beg-tid=0x0000000000000001
    sync-end-tid=0x0000000000000001
    shared-uuid=b20c176a-16a5-11e4-bcca-090027b5e454
    unique-uuid=4d865e0e-16a8-11e4-bcca-090027b5e454
    slave
    label=""
    prune-min=00:00:00
    operating as a SLAVE
    snapshots directory defaults to /var/hammer/
Prescan to break up bulk transfer
Prescan 1 chunks, total 0 MBytes (17392)
Mirror-read /data/pfs/myfiles succeeded
# ls -l /data/pfs/mybackup/
total 16
-rw-r--r--  1 root  wheel  12345 Jul 28 22:34 file.dat
-rw-r--r--  1 root  wheel     13 Jul 28 22:33 hw.txt
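
mirror-copy also accepts a remote PFS specification of the form [user@]host:filesystem and transports the stream over ssh, so the same command can seed a backup on another DragonFly machine (a sketch; the host name is hypothetical):

# hammer mirror-copy /data/pfs/myfiles backuphost:/data/pfs/mybackup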

mirror-copy is a one-time copy operation. If you want to continuously propagate updates from /data/pfs/myfiles to /data/pfs/mybackup, use mirror-stream:

# hammer mirror-stream /data/pfs/myfiles /data/pfs/mybackup &
[1] 950
Prescan to break up bulk transfer
Prescan 1 chunks, total 0 MBytes (0)

Now if you add a new file to /data/myfiles, it will be mirrored to /data/pfs/mybackup:

# echo This is a new file > /data/myfiles/newfile.txt
# sync
# cat /data/pfs/mybackup/newfile.txt
This is a new file
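
Should the master ever be lost, a read-only slave can be promoted to a read-write master with pfs-upgrade (stop any running mirror-stream first). A sketch, assuming the slave PFS created above:

# hammer pfs-upgrade /data/pfs/mybackup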

File History

If you edit a file in a HAMMER PFS, you can track its changes over time:

# echo Hello world! > /data/myfiles/hw.txt
# sync
# hammer history /data/myfiles/hw.txt
/data/myfiles/hw.txt    00000001000105f6 clean {
    0000000100018280 28-Jul-2014 22:33:40
    00000001000184c0 28-Jul-2014 22:48:24
}

With "undo", you can look at complete file version history and even restore old versions:

# undo -a /data/myfiles/hw.txt
/data/myfiles/hw.txt: ITERATE ENTIRE HISTORY

>>> /data/myfiles/hw.txt 0001 0x00000001000182e0 28-Jul-2014 22:33:40

hello world.

>>> /data/myfiles/hw.txt 0002 0x00000001000184c0 28-Jul-2014 22:48:24

Hello world!

# undo -o hw.txt.original hw.txt

# cat hw.txt.original
hello world.

Even if someone deletes a file, you can recover it with its history intact:

# rm newfile.txt
# ls -l newfile.txt
ls: newfile.txt: No such file or directory
# undo -o newfile.txt newfile.txt
# cat newfile.txt
This is a new file
# undo -a newfile.txt
newfile.txt: ITERATE ENTIRE HISTORY

>>> newfile.txt 0001 0x0000000100018440 28-Jul-2014 22:47:06

This is a new file

>>> newfile.txt 0002 0x0000000100018660 28-Jul-2014 22:51:45


>>> newfile.txt 0003 0x0000000100018740 28-Jul-2014 22:52:17

This is a new file

You can also look at the changes to a file as a unified diff by using undo with the -d option:

# undo -d /data/myfiles/hw.txt
diff -N -r -u /data/myfiles/hw.txt@@0x00000001000182e0 /data/myfiles/hw.txt@@0x00000001000184c0 (to 28-Jul-2014 22:48:24)
--- /data/myfiles/hw.txt@@0x00000001000182e0    2014-07-28 22:51:05.333482000 +0000
+++ /data/myfiles/hw.txt@@0x00000001000184c0    2014-07-28 22:51:05.333482000 +0000
@@ -1 +1 @@
-hello world.
+Hello world!

By default, undo shows the diff of the most recent change to a file. If you want to compare the current file to an earlier version, specify a transaction ID with -t to pick a specific revision from the file's history. For example, to compare the file to its first version:

# echo This > /data/myfiles/hw.txt && sync
# echo file > /data/myfiles/hw.txt && sync
# echo has > /data/myfiles/hw.txt && sync
# echo many > /data/myfiles/hw.txt && sync
# echo edits > /data/myfiles/hw.txt && sync
# echo but > /data/myfiles/hw.txt && sync
# echo now > /data/myfiles/hw.txt && sync
# echo it > /data/myfiles/hw.txt && sync
# echo says > /data/myfiles/hw.txt && sync
# echo 'HELLO WORLD AGAIN\!\!' > /data/myfiles/hw.txt
# sync
# undo -d -t 0x00000001000182e0 /data/myfiles/hw.txt
diff -N -r -u /data/myfiles/hw.txt@@0x00000001000182e0 /data/myfiles/hw.txt (to 01-Jan-1970 00:00:00)
--- /data/myfiles/hw.txt@@0x00000001000182e0    2014-07-28 22:51:05.333482000 +0000
+++ /data/myfiles/hw.txt        2014-07-28 22:58:35.333482000.411506000 +0000
@@ -1 +1 @@
-hello world.
+HELLO WORLD AGAIN!!
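
You can also combine -t with -o to extract a specific version of a file; for example, to save the first revision (using the transaction ID shown above) under a new name:

# cd /data/myfiles
# undo -t 0x00000001000182e0 -o hw.txt.first hw.txt
# cat hw.txt.first
hello world.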

Snapshots

Just like ZFS, HAMMER also allows you to make snapshots of the entire filesystem, or any directory:

# hammer snap /data/myfiles snapshot_1

# hammer snapls /data/myfiles
Snapshots on /data/myfiles      PFS #1
Transaction ID          Timestamp               Note
0x0000000100018780      2014-07-28 22:53:42 UTC snapshot_1

# mkdir -p /data/myfiles/path/to/an/important/dir

# echo "Important Data Here" > /data/myfiles/path/to/an/important/dir/file.txt

# hammer snap /data/myfiles/path/to/an/important/dir important_snapshot_1

# hammer snapls /data/myfiles
Snapshots on /data/myfiles      PFS #1
Transaction ID          Timestamp               Note
0x0000000100018780      2014-07-28 22:53:42 UTC snapshot_1
0x0000000100040a90      2014-07-28 22:58:55 UTC important_snapshot_1
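
Each snapshot is exposed as a softlink pointing at a transaction-ID path inside the PFS, so its contents can be browsed like any other directory, either through the softlink or through the @@ path directly (output omitted):

# ls /data/myfiles/snap-20140728-2253/
# ls /data/myfiles/@@0x0000000100018780/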

Snapshots are considered "live": no special steps are needed to access their contents, and you restore data from a snapshot simply by copying it to the target location. N.B.: HAMMER snapshots set files' mtime and atime to the snapshot ctime, so the original stat() values of a file will not persist if it is restored from a HAMMER snapshot:

# stat file.txt
1449880784 4295200070 -rw-r--r-- 1 root wheel 4294967295 17 "Jul 30 23:11:14 2014"
 "Jul 28 22:58:36 2014" "Jul 28 22:58:36 2014" 16384 0 0 file.txt
# stat ./snap-20140728-2258/path/to/an/important/dir/file.txt
1450141249 4295200070 -rw-r--r-- 1 root wheel 4294967295 17 "Jul 28 22:58:36 2014"
 "Jul 28 22:58:36 2014" "Jul 28 22:58:36 2014" 16384 0 0
 ./snap-20140728-2258/path/to/an/important/dir/file.txt

Because HAMMER snapshots don't preserve source mtimes and atimes, restoring from a snapshot with a utility that compares mtimes first won't work as expected. You can instead use rsync with the "--checksum" option, or DragonFly's built-in cpdup utility with the "-VV" argument, to avoid having to copy everything:
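
A minimal rsync invocation might look like this (a sketch; rsync is a third-party package, not part of the DragonFly base system):

# rsync -a --checksum /data/myfiles/snap-20140728-2253/ /data/myfiles/

The session below demonstrates the cpdup approach: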

# cd /data/myfiles
# rm file.dat
# cpdup -o -v -VV ./snap-20140728-2253/. .
./file.dat                       copy-ok
./snap-20140728-2253             not-removed
./path/to/an/important/dir/file.txt not-removed
./path/to/an/important/dir/snap-20140728-2258 not-removed
./path/to/an/important/dir       not-removed
./path/to/an/important           not-removed
./path/to/an                     not-removed
./path/to                        not-removed
./path                           not-removed
# ls -l
total 16
-rw-r--r--  1 root  wheel  12345 Jul 28 22:34 file.dat
-rw-r--r--  1 root  wheel     13 Jul 28 22:33 hw.txt
-rw-r--r--  1 root  wheel     13 Jul 28 22:50 hw.txt.original
-rw-r--r--  1 root  wheel     19 Jul 28 22:52 newfile.txt
drwxr-xr-x  1 root  wheel      0 Jul 31 01:00 path
lrwxr-xr-x  1 root  wheel     34 Jul 28 22:53 snap-20140728-2253 -> /data/myfiles/@@0x0000000100018780
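
When a snapshot is no longer needed, it can be removed with "hammer snaprm", which accepts either the snapshot softlink path or a transaction ID:

# hammer snaprm /data/myfiles/snap-20140728-2253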

Cleanup

HAMMER filesystems require regular maintenance through "hammer cleanup". This command creates new snapshots, prunes old ones, and reorganizes filesystem internals to preserve performance. By default, it runs automatically once per day via "/etc/periodic/daily/160.clean-hammer", but you can also run it manually. hammer cleanup collectively snapshots, prunes, rebalances, deduplicates, and reblocks the specified HAMMER filesystem, or every HAMMER filesystem and null mount it finds if none is specified:

# hammer cleanup /data/myfiles
cleanup /data/myfiles/       - handle PFS #1 using /var/hammer/data/myfiles/
           snapshots - run
               prune - run
           rebalance - run..
             reblock - run....
              recopy - skip

By default, DragonFly BSD makes daily snapshots of your HAMMER filesystems and keeps them for 60 days. With "hammer viconfig" you can adjust how long HAMMER retains its snapshots, as well as the frequency and maximum runtime of the cleanup actions (pruning, rebalancing, reblocking, and deduplication), if you want to tune your HAMMER setup. Be careful here, and make sure you've read man hammer(8) before doing any tweaking. For example, you might choose to increase the runtime for HAMMER's cleanup operations on a large disk that would otherwise hit the default 5-minute limit during cleanup and stop. If your filesystem is busy, reblocking may not finish in five minutes, even though a cyclefile lets it restart where it left off.

# hammer viconfig /data/myfiles
# No configuration present, here are some defaults
# you can uncomment.  Also remove these instructions
#
#snapshots 1d 60d
#prune     1d 5m
#rebalance 1d 5m
#dedup     1d 5m
#reblock   1d 5m
#recopy    30d 10m

The first column is the category, the second is the frequency (1d is one day), and the third is the retention period for snapshots or the runtime limit for the other categories. To keep snapshots for only 7 days, apply this change. Note that the other cleanup lines are uncommented but left at their defaults, so those steps won't be skipped:

# hammer viconfig /data/myfiles
snapshots 1d 7d
prune     1d 5m
rebalance 1d 5m
dedup     1d 5m
reblock   1d 5m
recopy    30d 10m

When "hammer cleanup" is next run, it will follow the provided time values. If you choose a snapshot period less than one day, be sure to run "hammer cleanup" more frequently.

Snapshots are removed based on their age, not as the filesystem fills up. On a busy filesystem with files that change frequently (log files or packet captures, for example), you may need to reclaim space manually. Deleting a file does not immediately free space, because its history is retained; reblocking a HAMMER filesystem gives unused space back. The default reblock fill percentage is 100%, which completely defragments the PFS and reclaims all reusable space. In a pinch, you can free space more quickly by providing a smaller fill percentage, like 80% or 90%:

# hammer reblock /data/myfiles 80
reblock start 8000000000000000:0000 free level 1677722
Reblock /data/myfiles succeeded
Reblocked:
    0/660 btree nodes
    11639/24352 data elements
    560660480/1178872015 data bytes

This frees up some space on the PFS when you need it, without taking the time to completely reorder everything. You can still run "hammer reblock" with a 100% fill percentage later, when you have time to let it be thorough:

# hammer reblock /data/myfiles 100
reblock start 8000000000000000:0000 free level 0
Reblock /data/myfiles succeeded
Reblocked:
    660/660 btree nodes
    24352/24352 data elements
    1178872015/1178872015 data bytes

Lastly, you can delete a HAMMER filesystem's history. This removes all snapshots and file history from the filesystem metadata, so be very sure you want to erase the complete historical record of a PFS before you run it:

# hammer info /data/myfiles
Volume identification
        Label               MYLABEL
        No. Volumes         1
        FSID                0fc1f6b3-1e85-11e4-8542-090027b5e454
        HAMMER Version      6
Big block information
        Total            6310
        Used              631 (10.00%)
        Reserved           33 (0.52%)
        Free             5646 (89.48%)
Space information
        No. Inodes          7
        Total size        49G (52932116480 bytes)
        Used             4.9G (10.00%)
        Reserved         264M (0.52%)
        Free              44G (89.48%)
PFS information
        PFS ID  Mode    Snaps  Mounted on
             0  MASTER      0  /data/myfiles
             1  MASTER      0  /data/myfiles
# hammer history /data/myfiles/file.dat | wc -l
     268
# hammer prune-everything /data/myfiles
Prune /data/myfiles/: EVERYTHING
Prune /data/myfiles/: objspace 8000000000000000:0000 7fffffffffffffff:ffff pfs_id 1
Prune /data/myfiles/: prune_min is 0d/00:00:00
Prune /data/myfiles/ succeeded
Pruned 59233/59240 records (0 directory entries) and 0 bytes
# hammer history /data/myfiles/file.dat | wc -l
       3
# hammer info /data/myfiles
Volume identification
  Label               MYLABEL
  No. Volumes         1
  FSID                0fc1f6b3-1e85-11e4-8542-090027b5e454
  HAMMER Version      6
Big block information
  Total            6310
  Used                2 (0.03%)
  Reserved           33 (0.52%)
  Free             6275 (99.45%)
Space information
  No. Inodes          7
  Total size        49G (52932116480 bytes)
  Used              16M (0.03%)
  Reserved         264M (0.52%)
  Free              49G (99.45%)
PFS information
  PFS ID  Mode    Snaps  Mounted on
       0  MASTER      0  /data/myfiles
       1  MASTER      0  /data/myfiles

Both "hammer reblock" and "hammer prune-everything" apply on a per-PFS basis, so you can selectively reblock or prune one PFS and reclaim space that can be used by the other PFSes on the same HAMMER filesystem. These are powerful commands you can use to manage your filesystem history, possibly destroying it, so be mindful of how you use them.


Conclusion

HAMMER is a very sophisticated storage mechanism, providing functionality similar to many components of ZFS as well as a few unique features not found in the other BSDs. Its combination of large-filesystem support, data integrity, crash resistance, snapshotting, powerful mirroring, and file history and recovery tools that rival source control utilities like git and Subversion gives it an advantage over many other filesystem choices.

For more information about HAMMER:

DragonFly HAMMER overview
man HAMMER(5)
man hammer(8)
How to implement master PFS replication
