<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Fri, 01 May 2026 09:33:38 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>BSD Now - Episodes Tagged with “Intel”</title>
    <link>https://www.bsdnow.tv/tags/intel</link>
    <pubDate>Thu, 21 Apr 2022 03:00:00 -0400</pubDate>
    <description>Created by three guys who love BSD, we cover the latest news and have an extensive series of tutorials, as well as interviews with various people from all areas of the BSD community. It also serves as a platform for support and questions. We love and advocate FreeBSD, OpenBSD, NetBSD, DragonFlyBSD and TrueOS. Our show aims to be helpful and informative for new users who want to learn about them, while staying entertaining for people who are already pros.
The show airs on Wednesdays at 2:00PM (US Eastern time) and the edited version is usually up the following day. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>A weekly podcast and the place to B...SD</itunes:subtitle>
    <itunes:author>JT Pennington</itunes:author>
    <itunes:summary>Created by three guys who love BSD, we cover the latest news and have an extensive series of tutorials, as well as interviews with various people from all areas of the BSD community. It also serves as a platform for support and questions. We love and advocate FreeBSD, OpenBSD, NetBSD, DragonFlyBSD and TrueOS. Our show aims to be helpful and informative for new users who want to learn about them, while staying entertaining for people who are already pros.
The show airs on Wednesdays at 2:00PM (US Eastern time) and the edited version is usually up the following day. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/c/c91b88f1-e824-4815-bcb8-5227818d6010/cover.jpg?v=4"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>berkeley,freebsd,openbsd,netbsd,dragonflybsd,trueos,trident,hardenedbsd,tutorial,howto,guide,bsd,interview</itunes:keywords>
    <itunes:owner>
      <itunes:name>JT Pennington</itunes:name>
      <itunes:email>feedback@bsdnow.tv</itunes:email>
    </itunes:owner>
<itunes:category text="News">
  <itunes:category text="Tech News"/>
</itunes:category>
<itunes:category text="Education">
  <itunes:category text="How To"/>
</itunes:category>
<item>
  <title>451: Tuning ZFS recordsize</title>
  <link>https://www.bsdnow.tv/451</link>
  <guid isPermaLink="false">e05f4b5e-9285-42ae-87ba-151ec71f80b7</guid>
  <pubDate>Thu, 21 Apr 2022 03:00:00 -0400</pubDate>
  <author>JT Pennington</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/c91b88f1-e824-4815-bcb8-5227818d6010/e05f4b5e-9285-42ae-87ba-151ec71f80b7.mp3" length="35683176" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>JT Pennington</itunes:author>
  <itunes:subtitle>Full system backups with FFS snapshots, ZFS and dump(8), tuning recordsize in OpenZFS, Optimizing FreeBSD Power Consumption on Modern Intel Laptops, remember to check for ZFS filesystems being mounted, Use tcpdump to save wireless bridge, and more</itunes:subtitle>
  <itunes:duration>1:00:45</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/c/c91b88f1-e824-4815-bcb8-5227818d6010/cover.jpg?v=4"/>
  <description>Full system backups with FFS snapshots, ZFS and dump(8), tuning recordsize in OpenZFS, Optimizing FreeBSD Power Consumption on Modern Intel Laptops, remember to check for ZFS filesystems being mounted, Use tcpdump to save wireless bridge, and more
NOTES
This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow)
Headlines
Full system backups with FFS snapshots, ZFS and dump(8) (https://www.unitedbsd.com/d/705-full-system-backups-with-ffs-snapshots-zfs-and-dump8)
Tuning Recordsize in OpenZFS (https://klarasystems.com/articles/tuning-recordsize-in-openzfs/)
News Roundup
Optimizing FreeBSD Power Consumption on Modern Intel Laptops (https://www.neelc.org/posts/optimize-freebsd-for-intel-tigerlake/)
I need to remember to check for ZFS filesystems being mounted (https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSCheckForMounted)
Use tcpdump to save wireless bridge (https://adventurist.me/posts/0027)
Beastie Bits
FreeBSD on the Vortex86DX CPU (https://www.cambus.net/freebsd-on-the-vortex86dx-cpu/)
HAMMER2 vs USB stick pulls (https://www.dragonflydigest.com/2022/03/22/26800.html)
New US mirror for DragonFly (https://www.dragonflydigest.com/2022/03/09/26742.html)
HelloSystem 13.1 RC1 (https://github.com/helloSystem/ISO/releases/tag/experimental-13.1-RC1)
Video introduction to OpenBSD 7.0 (https://www.youtube.com/watch?v=KeUsE-3nSes)
Losses in the community (https://minnie.tuhs.org/pipermail/tuhs/2022-April/025643.html)
Tarsnap
This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.
Feedback/Questions
Sam - BSD Laptops (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/451/feedback/Sam%20-%20BSD%20Laptops.md)
Reese - Electric Groff (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/451/feedback/Reese%20-%20Electric%20Groff.md)
Alexandra - New to BSD (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/451/feedback/Alexandra%20-%20New%20to%20BSD.md)
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
***
</description>
  <itunes:keywords>freebsd, openbsd, netbsd, dragonflybsd, trueos, trident, hardenedbsd, tutorial, howto, guide, bsd, operating system, open source, shell, unix, os, berkeley, software, distribution, release, zfs, zpool, dataset, interview, ports, packages, backups, dump, tuning, recordsize, optimizing, power consumption, intel, laptop, mount, mounting, mounted, tcpdump, wireless bridge</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Full system backups with FFS snapshots, ZFS and dump(8), tuning recordsize in OpenZFS, Optimizing FreeBSD Power Consumption on Modern Intel Laptops, remember to check for ZFS filesystems being mounted, Use tcpdump to save wireless bridge, and more</p>

<p><strong><em>NOTES</em></strong><br>
This episode of BSDNow is brought to you by <a href="https://www.tarsnap.com/bsdnow" rel="nofollow">Tarsnap</a> and the <a href="https://www.patreon.com/bsdnow" rel="nofollow">BSDNow Patreon</a></p>

<h2>Headlines</h2>

<h3><a href="https://www.unitedbsd.com/d/705-full-system-backups-with-ffs-snapshots-zfs-and-dump8" rel="nofollow">Full system backups with FFS snapshots, ZFS and dump(8)</a></h3>

<hr>

<h3><a href="https://klarasystems.com/articles/tuning-recordsize-in-openzfs/" rel="nofollow">Tuning Recordsize in OpenZFS</a></h3>

<hr>

<h2>News Roundup</h2>

<h3><a href="https://www.neelc.org/posts/optimize-freebsd-for-intel-tigerlake/" rel="nofollow">Optimizing FreeBSD Power Consumption on Modern Intel Laptops</a></h3>

<hr>

<h3><a href="https://utcc.utoronto.ca/%7Ecks/space/blog/solaris/ZFSCheckForMounted" rel="nofollow">I need to remember to check for ZFS filesystems being mounted</a></h3>

<hr>

<h3><a href="https://adventurist.me/posts/0027" rel="nofollow">Use tcpdump to save wireless bridge</a></h3>

<hr>

<h2>Beastie Bits</h2>

<ul>
<li><a href="https://www.cambus.net/freebsd-on-the-vortex86dx-cpu/" rel="nofollow">FreeBSD on the Vortex86DX CPU</a></li>
<li><a href="https://www.dragonflydigest.com/2022/03/22/26800.html" rel="nofollow">HAMMER2 vs USB stick pulls</a></li>
<li><a href="https://www.dragonflydigest.com/2022/03/09/26742.html" rel="nofollow">New US mirror for DragonFly</a></li>
<li><a href="https://github.com/helloSystem/ISO/releases/tag/experimental-13.1-RC1" rel="nofollow">HelloSystem 13.1 RC1</a></li>
<li><a href="https://www.youtube.com/watch?v=KeUsE-3nSes" rel="nofollow">Video introduction to OpenBSD 7.0</a></li>
<li><a href="https://minnie.tuhs.org/pipermail/tuhs/2022-April/025643.html" rel="nofollow">Losses in the community</a></li>
</ul>

<hr>

<h3>Tarsnap</h3>

<ul>
<li>This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.</li>
</ul>

<h2>Feedback/Questions</h2>

<ul>
<li><a href="https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/451/feedback/Sam%20-%20BSD%20Laptops.md" rel="nofollow">Sam - BSD Laptops</a></li>
<li><a href="https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/451/feedback/Reese%20-%20Electric%20Groff.md" rel="nofollow">Reese - Electric Groff</a></li>
<li><a href="https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/451/feedback/Alexandra%20-%20New%20to%20BSD.md" rel="nofollow">Alexandra - New to BSD</a></li>
</ul>

<hr>

<ul>
<li>Send questions, comments, show ideas/topics, or stories you want mentioned on the show to <a href="mailto:feedback@bsdnow.tv" rel="nofollow">feedback@bsdnow.tv</a></li>
</ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Full system backups with FFS snapshots, ZFS and dump(8), tuning recordsize in OpenZFS, Optimizing FreeBSD Power Consumption on Modern Intel Laptops, remember to check for ZFS filesystems being mounted, Use tcpdump to save wireless bridge, and more</p>

<p><strong><em>NOTES</em></strong><br>
This episode of BSDNow is brought to you by <a href="https://www.tarsnap.com/bsdnow" rel="nofollow">Tarsnap</a> and the <a href="https://www.patreon.com/bsdnow" rel="nofollow">BSDNow Patreon</a></p>

<h2>Headlines</h2>

<h3><a href="https://www.unitedbsd.com/d/705-full-system-backups-with-ffs-snapshots-zfs-and-dump8" rel="nofollow">Full system backups with FFS snapshots, ZFS and dump(8)</a></h3>

<hr>

<h3><a href="https://klarasystems.com/articles/tuning-recordsize-in-openzfs/" rel="nofollow">Tuning Recordsize in OpenZFS</a></h3>

<hr>

<h2>News Roundup</h2>

<h3><a href="https://www.neelc.org/posts/optimize-freebsd-for-intel-tigerlake/" rel="nofollow">Optimizing FreeBSD Power Consumption on Modern Intel Laptops</a></h3>

<hr>

<h3><a href="https://utcc.utoronto.ca/%7Ecks/space/blog/solaris/ZFSCheckForMounted" rel="nofollow">I need to remember to check for ZFS filesystems being mounted</a></h3>

<hr>

<h3><a href="https://adventurist.me/posts/0027" rel="nofollow">Use tcpdump to save wireless bridge</a></h3>

<hr>

<h2>Beastie Bits</h2>

<ul>
<li><a href="https://www.cambus.net/freebsd-on-the-vortex86dx-cpu/" rel="nofollow">FreeBSD on the Vortex86DX CPU</a></li>
<li><a href="https://www.dragonflydigest.com/2022/03/22/26800.html" rel="nofollow">HAMMER2 vs USB stick pulls</a></li>
<li><a href="https://www.dragonflydigest.com/2022/03/09/26742.html" rel="nofollow">New US mirror for DragonFly</a></li>
<li><a href="https://github.com/helloSystem/ISO/releases/tag/experimental-13.1-RC1" rel="nofollow">HelloSystem 13.1 RC1</a></li>
<li><a href="https://www.youtube.com/watch?v=KeUsE-3nSes" rel="nofollow">Video introduction to OpenBSD 7.0</a></li>
<li><a href="https://minnie.tuhs.org/pipermail/tuhs/2022-April/025643.html" rel="nofollow">Losses in the community</a></li>
</ul>

<hr>

<h3>Tarsnap</h3>

<ul>
<li>This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.</li>
</ul>

<h2>Feedback/Questions</h2>

<ul>
<li><a href="https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/451/feedback/Sam%20-%20BSD%20Laptops.md" rel="nofollow">Sam - BSD Laptops</a></li>
<li><a href="https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/451/feedback/Reese%20-%20Electric%20Groff.md" rel="nofollow">Reese - Electric Groff</a></li>
<li><a href="https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/451/feedback/Alexandra%20-%20New%20to%20BSD.md" rel="nofollow">Alexandra - New to BSD</a></li>
</ul>

<hr>

<ul>
<li>Send questions, comments, show ideas/topics, or stories you want mentioned on the show to <a href="mailto:feedback@bsdnow.tv" rel="nofollow">feedback@bsdnow.tv</a></li>
</ul>]]>
  </itunes:summary>
</item>
<item>
  <title>360: Full circle</title>
  <link>https://www.bsdnow.tv/360</link>
  <guid isPermaLink="false">69d88af7-54da-4612-9fc2-84ffae001c46</guid>
  <pubDate>Thu, 23 Jul 2020 08:00:00 -0400</pubDate>
  <author>JT Pennington</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/c91b88f1-e824-4815-bcb8-5227818d6010/69d88af7-54da-4612-9fc2-84ffae001c46.mp3" length="42925160" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>JT Pennington</itunes:author>
  <itunes:subtitle>Chasing a bad commit, New FreeBSD Core Team elected, Getting Started with NetBSD on the Pinebook Pro, FreeBSD on the Intel 10th Gen i3 NUC, pf table size check and change, and more.</itunes:subtitle>
  <itunes:duration>42:27</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/c/c91b88f1-e824-4815-bcb8-5227818d6010/cover.jpg?v=4"/>
  <description>Chasing a bad commit, New FreeBSD Core Team elected, Getting Started with NetBSD on the Pinebook Pro, FreeBSD on the Intel 10th Gen i3 NUC, pf table size check and change, and more.
NOTES
This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/)
Headlines
Chasing a bad commit (https://vishaltelangre.com/chasing-a-bad-commit/)
While working on a big project where multiple teams merge their feature branches frequently into a release Git branch, developers often find that some of their work has been removed, modified, or affected by someone else's work accidentally. It can happen in smaller teams as well: two features may each work perfectly until they are merged together and break something. Many other situations can produce subtle, hard-to-understand bugs that even continuous integration (CI) systems running a project's entire test suite cannot catch.
We are not going to discuss how such bugs get into a release branch; that is wild territory. Instead, we can discuss how to find the commit that made a feature deviate from its expected outcome. The deviation can be any behaviour of the code that we can measure distinctly as either good or bad.
New FreeBSD Core Team Elected (https://www.freebsdnews.com/2020/07/14/new-freebsd-core-team-elected/)
The FreeBSD Project is pleased to announce the completion of the 2020 Core Team election. Active committers to the project have elected your Eleventh FreeBSD Core Team!
Baptiste Daroussin (bapt)
Ed Maste (emaste)
George V. Neville-Neil (gnn)
Hiroki Sato (hrs)
Kyle Evans (kevans)
Mark Johnston (markj)
Scott Long (scottl)
Sean Chittenden (seanc)
Warner Losh (imp)
***
News Roundup
Getting Started with NetBSD on the Pinebook Pro (https://bentsukun.ch/posts/pinebook-pro-netbsd/)
If you buy a Pinebook Pro now, it comes with Manjaro Linux on the internal eMMC storage. Let’s install NetBSD instead!
The easiest way to get started is to buy a decent micro-SD card (what sort of markings it should have is a science of its own, by the way) and install NetBSD on that. On a warm boot (i.e. when rebooting a running system), the micro-SD card has priority compared to the eMMC, so the system will boot from there.
+ A FreeBSD developer has borrowed some of the NetBSD code to get audio working on RockPro64 and Pinebook Pro: https://twitter.com/kernelnomicon/status/1282790609778905088
FreeBSD on the Intel 10th Gen i3 NUC (https://adventurist.me/posts/00300)
I have ended up with some 10th Gen i3 NUC's (NUC10i3FNH to be specific) to put to work in my testbed. These are quite new devices, the build date on the boxes is 13APR2020. Before I figure out what their true role is (one of them might have to run linux) I need to install FreeBSD -CURRENT and see how performance and hardware support is.
pf table size check and change (https://www.dragonflydigest.com/2020/06/29/24698.html)
Did you know there’s a default size limit to pf’s state table?  I did not, but it makes sense that there is one.  If for some reason you bump into this limit (difficult for home use, I’d think), here’s how you change it (http://lists.dragonflybsd.org/pipermail/users/2020-June/381261.html)
There is a table-entries limit specified, you can see current settings with
'pfctl -s all'.  You can adjust the limits in the /etc/pf.conf file
containing the rules with a line like this near the top:
set limit table-entries 100000
+ In the original mail thread, there is mention of the FreeBSD sysctl net.pf.request_maxcount, which controls the maximum number of entries that can be sent as a single ioctl(). This allows the user to adjust the memory limit for how big of a list the kernel is willing to allocate memory for.
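Putting the advice above in one place, the limit is a single option line in pf.conf (the 100000 figure is just the value quoted in the mailing-list post; size it to your own tables):

```
# /etc/pf.conf fragment -- options go near the top, before the filter rules
set limit table-entries 100000
```

Reload the ruleset with `pfctl -f /etc/pf.conf`, then confirm the new hard limit with `pfctl -s memory` (or `pfctl -s all`, as in the post).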
Beastie Bits
tmux and bhyve (https://callfortesting.org/tmux/)
Azure and FreeBSD (https://azuremarketplace.microsoft.com/en-us/marketplace/apps/thefreebsdfoundation.freebsd-12_1)
Groff Tutorial (https://www.youtube.com/watch?v=bvkmnK6-qao&amp;feature=youtu.be)
***
Tarsnap
This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.
Tarsnap Mastery (https://mwl.io/nonfiction/tools#tarsnap)
Feedback/Questions
Chris - ZFS Question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/360/feedback/Chris%20-%20zfs%20question.md)
Patrick - Tarsnap (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/360/feedback/Patrick%20-%20Tarsnap.md)
Pin - pkgsrc (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/360/feedback/pin%20-%20pkgsrc.md)
***
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
***
</description>
  <itunes:keywords>freebsd, openbsd, netbsd, dragonflybsd, trueos, trident, hardenedbsd, tutorial, howto, guide, bsd, operating system, os, berkeley, software, distribution, zfs, interview, commit, core team, freebsd core team, election, elected, pinebook, pinebook pro, i3, Intel, Intel i3, i3 NUC, pf, packet filter, table size, table size check</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Chasing a bad commit, New FreeBSD Core Team elected, Getting Started with NetBSD on the Pinebook Pro, FreeBSD on the Intel 10th Gen i3 NUC, pf table size check and change, and more.</p>

<p><strong><em>NOTES</em></strong><br>
This episode of BSDNow is brought to you by <a href="https://www.tarsnap.com/" rel="nofollow">Tarsnap</a></p>

<h2>Headlines</h2>

<h3><a href="https://vishaltelangre.com/chasing-a-bad-commit/" rel="nofollow">Chasing a bad commit</a></h3>

<blockquote>
<p>While working on a big project where multiple teams merge their feature branches frequently into a release Git branch, developers often find that some of their work has been removed, modified, or affected by someone else&#39;s work accidentally. It can happen in smaller teams as well: two features may each work perfectly until they are merged together and break something. Many other situations can produce subtle, hard-to-understand bugs that even continuous integration (CI) systems running a project&#39;s entire test suite cannot catch.<br>
We are not going to discuss how such bugs get into a release branch; that is wild territory. Instead, we can discuss how to find the commit that made a feature deviate from its expected outcome. The deviation can be any behaviour of the code that we can measure distinctly as either good or bad.</p>
</blockquote>

<hr>

<h3><a href="https://www.freebsdnews.com/2020/07/14/new-freebsd-core-team-elected/" rel="nofollow">New FreeBSD Core Team Elected</a></h3>

<blockquote>
<p>The FreeBSD Project is pleased to announce the completion of the 2020 Core Team election. Active committers to the project have elected your Eleventh FreeBSD Core Team!</p>
</blockquote>

<ul>
<li>Baptiste Daroussin (bapt)</li>
<li>Ed Maste (emaste)</li>
<li>George V. Neville-Neil (gnn)</li>
<li>Hiroki Sato (hrs)</li>
<li>Kyle Evans (kevans)</li>
<li>Mark Johnston (markj)</li>
<li>Scott Long (scottl)</li>
<li>Sean Chittenden (seanc)</li>
<li>Warner Losh (imp)</li>
</ul>

<h2>News Roundup</h2>

<h3><a href="https://bentsukun.ch/posts/pinebook-pro-netbsd/" rel="nofollow">Getting Started with NetBSD on the Pinebook Pro</a></h3>

<blockquote>
<p>If you buy a Pinebook Pro now, it comes with Manjaro Linux on the internal eMMC storage. Let’s install NetBSD instead!<br>
The easiest way to get started is to buy a decent micro-SD card (what sort of markings it should have is a science of its own, by the way) and install NetBSD on that. On a warm boot (i.e. when rebooting a running system), the micro-SD card has priority compared to the eMMC, so the system will boot from there.</p>

<ul>
<li>A FreeBSD developer has borrowed some of the NetBSD code to get audio working on RockPro64 and Pinebook Pro: <a href="https://twitter.com/kernelnomicon/status/1282790609778905088" rel="nofollow">https://twitter.com/kernelnomicon/status/1282790609778905088</a></li>
</ul>
</blockquote>

<h3><a href="https://adventurist.me/posts/00300" rel="nofollow">FreeBSD on the Intel 10th Gen i3 NUC</a></h3>

<blockquote>
<p>I have ended up with some 10th Gen i3 NUC&#39;s (NUC10i3FNH to be specific) to put to work in my testbed. These are quite new devices, the build date on the boxes is 13APR2020. Before I figure out what their true role is (one of them might have to run linux) I need to install FreeBSD -CURRENT and see how performance and hardware support is.</p>
</blockquote>

<hr>

<h3><a href="https://www.dragonflydigest.com/2020/06/29/24698.html" rel="nofollow">pf table size check and change</a></h3>

<blockquote>
<p>Did you know there’s a default size limit to pf’s state table?  I did not, but it makes sense that there is one.  If for some reason you bump into this limit (difficult for home use, I’d think), <a href="http://lists.dragonflybsd.org/pipermail/users/2020-June/381261.html" rel="nofollow">here’s how you change it</a><br>
There is a table-entries limit specified, you can see current settings with<br>
&#39;pfctl -s all&#39;.  You can adjust the limits in the /etc/pf.conf file<br>
containing the rules with a line like this near the top:<br>
<code>set limit table-entries 100000</code></p>

<ul>
<li>In the original mail thread, there is mention of the FreeBSD sysctl net.pf.request_maxcount, which controls the maximum number of entries that can be sent as a single ioctl(). This allows the user to adjust the memory limit for how big of a list the kernel is willing to allocate memory for.</li>
</ul>
</blockquote>

<h2>Beastie Bits</h2>

<ul>
<li><a href="https://callfortesting.org/tmux/" rel="nofollow">tmux and bhyve</a></li>
<li><a href="https://azuremarketplace.microsoft.com/en-us/marketplace/apps/thefreebsdfoundation.freebsd-12_1" rel="nofollow">Azure and FreeBSD</a></li>
<li><a href="https://www.youtube.com/watch?v=bvkmnK6-qao&feature=youtu.be" rel="nofollow">Groff Tutorial</a>
***
###Tarsnap</li>
<li>This weeks episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.
<a href="https://mwl.io/nonfiction/tools#tarsnap" rel="nofollow">Tarsnap Mastery</a></li>
</ul>

<h2>Feedback/Questions</h2>

<ul>
<li><a href="https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/360/feedback/Chris%20-%20zfs%20question.md" rel="nofollow">Chris - ZFS Question</a></li>
<li><a href="https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/360/feedback/Patrick%20-%20Tarsnap.md" rel="nofollow">Patrick - Tarsnap</a></li>
<li><a href="https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/360/feedback/pin%20-%20pkgsrc.md" rel="nofollow">Pin - pkgsrc</a>
***</li>
<li>Send questions, comments, show ideas/topics, or stories you want mentioned on the show to <a href="mailto:feedback@bsdnow.tv" rel="nofollow">feedback@bsdnow.tv</a>
***</li>
</ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Chasing a bad commit, New FreeBSD Core Team elected, Getting Started with NetBSD on the Pinebook Pro, FreeBSD on the Intel 10th Gen i3 NUC, pf table size check and change, and more.</p>

<p><strong><em>NOTES</em></strong><br>
This episode of BSDNow is brought to you by <a href="https://www.tarsnap.com/" rel="nofollow">Tarsnap</a></p>

<h2>Headlines</h2>

<h3><a href="https://vishaltelangre.com/chasing-a-bad-commit/" rel="nofollow">Chasing a bad commit</a></h3>

<blockquote>
<p>While working on a big project where multiple teams merge their feature branches frequently into a release Git branch, developers often find that some of their work has been removed, modified, or affected by someone else&#39;s work accidentally. It can happen in smaller teams as well: two features may each work perfectly until they are merged together and break something. Many other situations can produce subtle, hard-to-understand bugs that even continuous integration (CI) systems running a project&#39;s entire test suite cannot catch.<br>
We are not going to discuss how such bugs get into a release branch; that is wild territory. Instead, we can discuss how to find the commit that made a feature deviate from its expected outcome. The deviation can be any behaviour of the code that we can measure distinctly as either good or bad.</p>
</blockquote>

<hr>

<h3><a href="https://www.freebsdnews.com/2020/07/14/new-freebsd-core-team-elected/" rel="nofollow">New FreeBSD Core Team Elected</a></h3>

<blockquote>
<p>The FreeBSD Project is pleased to announce the completion of the 2020 Core Team election. Active committers to the project have elected your Eleventh FreeBSD Core Team!</p>
</blockquote>

<ul>
<li>Baptiste Daroussin (bapt)</li>
<li>Ed Maste (emaste)</li>
<li>George V. Neville-Neil (gnn)</li>
<li>Hiroki Sato (hrs)</li>
<li>Kyle Evans (kevans)</li>
<li>Mark Johnston (markj)</li>
<li>Scott Long (scottl)</li>
<li>Sean Chittenden (seanc)</li>
<li>Warner Losh (imp)</li>
</ul>

<h2>News Roundup</h2>

<h3><a href="https://bentsukun.ch/posts/pinebook-pro-netbsd/" rel="nofollow">Getting Started with NetBSD on the Pinebook Pro</a></h3>

<blockquote>
<p>If you buy a Pinebook Pro now, it comes with Manjaro Linux on the internal eMMC storage. Let’s install NetBSD instead!<br>
The easiest way to get started is to buy a decent micro-SD card (what sort of markings it should have is a science of its own, by the way) and install NetBSD on that. On a warm boot (i.e. when rebooting a running system), the micro-SD card has priority compared to the eMMC, so the system will boot from there.</p>

<ul>
<li>A FreeBSD developer has borrowed some of the NetBSD code to get audio working on RockPro64 and Pinebook Pro: <a href="https://twitter.com/kernelnomicon/status/1282790609778905088" rel="nofollow">https://twitter.com/kernelnomicon/status/1282790609778905088</a></li>
</ul>
</blockquote>

<h3><a href="https://adventurist.me/posts/00300" rel="nofollow">FreeBSD on the Intel 10th Gen i3 NUC</a></h3>

<blockquote>
<p>I have ended up with some 10th Gen i3 NUC&#39;s (NUC10i3FNH to be specific) to put to work in my testbed. These are quite new devices, the build date on the boxes is 13APR2020. Before I figure out what their true role is (one of them might have to run linux) I need to install FreeBSD -CURRENT and see how performance and hardware support is.</p>
</blockquote>

<hr>

<h3><a href="https://www.dragonflydigest.com/2020/06/29/24698.html" rel="nofollow">pf table size check and change</a></h3>

<blockquote>
<p>Did you know there’s a default size limit to pf’s state table?  I did not, but it makes sense that there is one.  If for some reason you bump into this limit (difficult for home use, I’d think), <a href="http://lists.dragonflybsd.org/pipermail/users/2020-June/381261.html" rel="nofollow">here’s how you change it</a><br>
There is a table-entries limit specified, you can see current settings with<br>
&#39;pfctl -s all&#39;.  You can adjust the limits in the /etc/pf.conf file<br>
containing the rules with a line like this near the top:<br>
<code>set limit table-entries 100000</code></p>

<ul>
<li>In the original mail thread, there is mention of the FreeBSD sysctl net.pf.request_maxcount, which controls the maximum number of entries that can be sent as a single ioctl(). This allows the user to adjust the memory limit for how big of a list the kernel is willing to allocate memory for.</li>
</ul>
</blockquote>

<h2>Beastie Bits</h2>

<ul>
<li><a href="https://callfortesting.org/tmux/" rel="nofollow">tmux and bhyve</a></li>
<li><a href="https://azuremarketplace.microsoft.com/en-us/marketplace/apps/thefreebsdfoundation.freebsd-12_1" rel="nofollow">Azure and FreeBSD</a></li>
<li><a href="https://www.youtube.com/watch?v=bvkmnK6-qao&feature=youtu.be" rel="nofollow">Groff Tutorial</a>
***
###Tarsnap</li>
<li>This weeks episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.
<a href="https://mwl.io/nonfiction/tools#tarsnap" rel="nofollow">Tarsnap Mastery</a></li>
</ul>

<h2>Feedback/Questions</h2>

<ul>
<li><a href="https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/360/feedback/Chris%20-%20zfs%20question.md" rel="nofollow">Chris - ZFS Question</a></li>
<li><a href="https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/360/feedback/Patrick%20-%20Tarsnap.md" rel="nofollow">Patrick - Tarsnap</a></li>
<li><a href="https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/360/feedback/pin%20-%20pkgsrc.md" rel="nofollow">Pin - pkgsrc</a>
***</li>
<li>Send questions, comments, show ideas/topics, or stories you want mentioned on the show to <a href="mailto:feedback@bsdnow.tv" rel="nofollow">feedback@bsdnow.tv</a>
***</li>
</ul>]]>
  </itunes:summary>
</item>
<item>
  <title>Episode 251: Crypto HAMMER | BSD Now 251</title>
  <link>https://www.bsdnow.tv/251</link>
  <guid isPermaLink="false">http://feed.jupiter.zone/bsdnow#entry-2136</guid>
  <pubDate>Thu, 21 Jun 2018 05:00:00 -0400</pubDate>
  <author>JT Pennington</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/c91b88f1-e824-4815-bcb8-5227818d6010/034d5002-639f-4744-a773-9c000ce91d1c.mp3" length="53300210" type="audio/mp3"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>JT Pennington</itunes:author>
  <itunes:subtitle>DragonflyBSD’s hammer1 encrypted master/slave setup, second part of our BSDCan recap, NomadBSD 1.1-RC1 available, OpenBSD adds an LDAP client to base, FreeBSD gets pNFS support, Intel FPU Speculation Vulnerability confirmed, and what some Unix command names mean.</itunes:subtitle>
  <itunes:duration>1:28:43</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/c/c91b88f1-e824-4815-bcb8-5227818d6010/cover.jpg?v=4"/>
  <description>DragonflyBSD’s hammer1 encrypted master/slave setup, second part of our BSDCan recap, NomadBSD 1.1-RC1 available, OpenBSD adds an LDAP client to base, FreeBSD gets pNFS support, Intel FPU Speculation Vulnerability confirmed, and what some Unix command names mean.
&lt;p&gt;##Headlines&lt;br&gt;
&lt;a href="https://www.reddit.com/r/dragonflybsd/comments/8riwtx/towards_a_hammer1_masterslave_encrypted_setup/"&gt;DragonflyBSD: Towards a HAMMER1 master/slave encrypted setup with LUKS&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I just wanted to share my experience with setting up DragonFly master/slave HAMMER1 PFS’s on top of LUKS&lt;br&gt;
So after a long time using a Synology for my NFS needs, I decided it was time to rethink my setup a little, since I had several issues with it:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;You cannot run NFS on top of encrypted partitions easily&lt;/li&gt;
&lt;li&gt;I suspect I am having some data corruption (bitrot) on the ext4 filesystem&lt;/li&gt;
&lt;li&gt;The NIC was stuck at 100 Mbps instead of 1 Gbps even after swapping cables, switches, you name it&lt;/li&gt;
&lt;li&gt;It’s proprietary&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;I had played with DragonFly in the past and knew about HAMMER; now I just had the perfect excuse to actually use it in production :) After setting up the OS, creating the LUKS partition and HAMMER FS was easy:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;code&gt;kldload dm&lt;/code&gt;&lt;br&gt;
&lt;code&gt;cryptsetup luksFormat /dev/serno/&amp;lt;id1&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;code&gt;cryptsetup luksOpen /dev/serno/&amp;lt;id1&amp;gt; fort_knox&lt;/code&gt;&lt;br&gt;
&lt;code&gt;newfs_hammer -L hammer1_secure_master /dev/mapper/fort_knox&lt;/code&gt;&lt;br&gt;
&lt;code&gt;cryptsetup luksFormat /dev/serno/&amp;lt;id2&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;code&gt;cryptsetup luksOpen /dev/serno/&amp;lt;id2&amp;gt; fort_knox_slave&lt;/code&gt;&lt;br&gt;
&lt;code&gt;newfs_hammer -L hammer1_secure_slave /dev/mapper/fort_knox_slave&lt;/code&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Mount the two drives:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;code&gt;mount /dev/mapper/fort_knox /fort_knox&lt;/code&gt;&lt;br&gt;
&lt;code&gt;mount /dev/mapper/fort_knox_slave /fort_knox_slave&lt;/code&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You can now put your data under /fort_knox&lt;br&gt;
Now, off to setting up the replication, first get the shared-uuid of /fort_knox&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;code&gt;hammer pfs-status /fort_knox&lt;/code&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Create a PFS slave “linked” to the master&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;code&gt;hammer pfs-slave /fort_knox_slave/pfs/slave shared-uuid=f9e7cc0d-eb59-10e3-a5b5-01e6e7cefc12&lt;/code&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;And then stream your data to the slave PFS!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;code&gt;hammer mirror-stream /fort_knox /fort_knox_slave/pfs/slave&lt;/code&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;After that, setting up NFS is fairly trivial, even though I had problems with the /etc/exports syntax, which differs from Linux’s&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;There are a few things I wish were better, though nothing too problematic or without workarounds:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Cannot unlock LUKS partitions at boot time, afaik (an acceptable tradeoff for the added security LUKS gives me vs my old Synology setup), but this forces me to run a script to unlock LUKS, mount HAMMER, and start mirror-stream at each boot&lt;/li&gt;
&lt;li&gt;No S1/S3 sleep, so I made a script to shut down the system when there are no network neighbors to serve NFS&lt;/li&gt;
&lt;li&gt;As my system isn’t online 24/7 for energy reasons, I guess I will have to run hammer cleanup myself from time to time&lt;/li&gt;
&lt;li&gt;Some uncertainty because hey, it’s kind of exotic but exciting too :)&lt;/li&gt;
&lt;/ul&gt;
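The boot-time workaround described above could be sketched as a small script. This is a hedged sketch, not the author's actual script: ID1 and ID2 stand in for the real /dev/serno device serial numbers, and the fort_knox names follow the commands earlier in these notes.

```shell
#!/bin/sh
# Sketch of the manual unlock / mount / replicate step described above.
# ID1 and ID2 are placeholders for real /dev/serno device serial numbers.
unlock_and_sync() {
    kldload dm                                         # device-mapper module
    cryptsetup luksOpen /dev/serno/ID1 fort_knox       # prompts for passphrase
    cryptsetup luksOpen /dev/serno/ID2 fort_knox_slave
    mount /dev/mapper/fort_knox /fort_knox
    mount /dev/mapper/fort_knox_slave /fort_knox_slave
    # keep the slave PFS in sync until interrupted
    hammer mirror-stream /fort_knox /fort_knox_slave/pfs/slave
}
# Run manually after each boot:
# unlock_and_sync
```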
&lt;blockquote&gt;
&lt;p&gt;Overall, I am happy, HAMMER1 and PFS are looking really good, DragonFly is a neat Unix and the community is super friendly (Matthew Dillon actually provided me with a kernel patch to fix the broken ACPI on the PC holding this setup, many thanks!), the system is still a “work in progress” but it is already serving my files as I write this post.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Let’s see in 6 months how it goes in the longer run!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Helpful resources : &lt;a href="https://www.dragonflybsd.org/docs/how_to_implement_hammer_pseudo_file_system__40___pfs___41___slave_mirroring_from_pfs_master/"&gt;https://www.dragonflybsd.org/docs/how_to_implement_hammer_pseudo_file_system__40___pfs___41___slave_mirroring_from_pfs_master/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;###BSDCan 2018 Recap&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;As promised, here is the second part of our BSDCan report, covering the conference proper. The last tutorials/devsummit of that day led directly into the conference, as people could pick up their registration packs at the Red Lion and have a drink with fellow BSD folks.&lt;/li&gt;
&lt;li&gt;Allan and I were there only briefly, as we wanted to get back to the “Newcomers orientation and mentorship” session led by Michael W. Lucas. This session is intended for people who are new to BSDCan (maybe their first BSD conference ever?) and may have questions. Michael explained everything from the 6-2-1 rule (hours of sleep, meals per day, and number of showers that attendees should have at a minimum), to the partner and widowers program (led by his wife Liz), to the sessions that people should not miss (opening, closing, and hallway track). Old-time BSDCan folks were asked to stand up so that people could recognize them and ask them any questions they might have during the conference. The session was well attended. Afterwards, people went for dinner in groups, a big one led by Michael Lucas to his favorite Shawarma place, followed by gelato (of course). This allowed newbies to mingle over dinner and ice cream, creating a welcoming atmosphere.&lt;/li&gt;
&lt;li&gt;The next day, after Dan Langille opened the conference, Benno Rice gave the keynote presentation about “The Tragedy of Systemd”.&lt;/li&gt;
&lt;li&gt;Benedict went to the following talks:&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;“Automating Network Infrastructures with Ansible on FreeBSD” in the DevSummit track. A good talk that connected well with his Ansible tutorial and even allowed some discussions among participants.&lt;br&gt;
“All along the dwatch tower”: Devin delivered a well prepared talk. I first thought that the number of slides would not fit into the time slot, but she even managed to give a demo of her work, which was well received. The dwatch tool she wrote should make it easy for people to get started with DTrace without learning too much about the syntax at first. The visualizations were certainly nice to see, combining different tools together in a new way.&lt;br&gt;
ZFS BoF, led by Allan and Matthew Ahrens&lt;br&gt;
SSH Key Management by Michael W. Lucas. Yet another great talk where I learned a lot. I did not get to the SSH CA chapter in the new SSH Mastery book, so this was a good way to whet my appetite for it and motivated me to look into creating one for the cluster that I’m managing.&lt;br&gt;
The rest of the day was spent at the FreeBSD Foundation table, talking to various folks. Then, Allan and I had an interview with Kirk McKusick for National FreeBSD Day, then we had a core meeting, followed by a core dinner.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Day 2:
&lt;blockquote&gt;
&lt;p&gt;“Flexible Disk Use in OpenZFS”: Matthew Ahrens talking about the feature he is implementing to expand a RAID-Z with a single disk, as well as device removal.&lt;br&gt;
Allan’s talk about his efforts to implement ZSTD in OpenZFS as another compression algorithm. I liked his overview slides with the numbers comparing the algorithms for their effectiveness and his personal story about the sometimes rocky road to get the feature implemented.&lt;br&gt;
“zrepl - ZFS replication” by Christian Schwarz was well prepared and even had a demo to show what his snapshot replication tool can do. We covered it on the show before, and people can find it under sysutils/zrepl. Feedback and help are welcome.&lt;br&gt;
“The Evolution of FreeBSD Governance” by Kirk McKusick was yet another great talk by him covering the early days of FreeBSD until today, detailing some of the progress and challenges the project faced over the years in terms of leadership and governance. This is an ongoing process that everyone in the community should participate in to keep the project healthy and infused with fresh blood.&lt;br&gt;
Closing session and auction were funny and great as always.&lt;br&gt;
All in all, yet another amazing BSDCan. Thank you Dan Langille and your organizing team for making it happen! Well done.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Digital Ocean&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;###&lt;a href="http://nomadbsd.org/index.html#rel1.1-rc1"&gt;NomadBSD 1.1-RC1 Released&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The first – and hopefully final – release candidate of NomadBSD 1.1 is available!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Changes&lt;/li&gt;
&lt;li&gt;The base system has been upgraded to FreeBSD 11.2-RC3&lt;/li&gt;
&lt;li&gt;EFI booting has been fixed.&lt;/li&gt;
&lt;li&gt;Support for modern Intel GPUs has been added.&lt;/li&gt;
&lt;li&gt;Support for installing packages has been added.&lt;/li&gt;
&lt;li&gt;Improved setup menu.&lt;/li&gt;
&lt;li&gt;More software packages:&lt;/li&gt;
&lt;li&gt;benchmarks/bonnie++&lt;/li&gt;
&lt;li&gt;DSBDisplaySettings&lt;/li&gt;
&lt;li&gt;DSBExec&lt;/li&gt;
&lt;li&gt;DSBSu&lt;/li&gt;
&lt;li&gt;mail/thunderbird&lt;/li&gt;
&lt;li&gt;net/mosh&lt;/li&gt;
&lt;li&gt;ports-mgmt/octopkg&lt;/li&gt;
&lt;li&gt;print/qpdfview&lt;/li&gt;
&lt;li&gt;security/nmap&lt;/li&gt;
&lt;li&gt;sysutils/ddrescue&lt;/li&gt;
&lt;li&gt;sysutils/fusefs-hfsfuse&lt;/li&gt;
&lt;li&gt;sysutils/fusefs-sshfs&lt;/li&gt;
&lt;li&gt;sysutils/sleuthkit&lt;/li&gt;
&lt;li&gt;www/lynx&lt;/li&gt;
&lt;li&gt;x11-wm/compton&lt;/li&gt;
&lt;li&gt;x11/xev&lt;/li&gt;
&lt;li&gt;x11/xterm&lt;/li&gt;
&lt;li&gt;Many improvements and bugfixes&lt;br&gt;
The image and instructions can be found &lt;a href="http://nomadbsd.org/download.html"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;##News Roundup&lt;br&gt;
&lt;a href="https://undeadly.org/cgi?action=article;sid=20180616115514"&gt;LDAP client added to -current&lt;/a&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;CVSROOT:    /cvs
Module name:    src
Changes by: reyk@cvs.openbsd.org    2018/06/13 09:45:58

Log message:
    Import ldap(1), a simple ldap search client.
    We have an ldapd(8) server and ypldap in base, so it makes sense to
    have a simple LDAP client without depending on the OpenLDAP package.
    This tool can be used in an ssh(1) AuthorizedKeysCommand script.
    
    With feedback from many including millert@ schwarze@ gilles@ dlg@ jsing@
    
    OK deraadt@
    
    Status:
    
    Vendor Tag: reyk
    Release Tags:   ldap_20180613
    
    N src/usr.bin/ldap/Makefile
    N src/usr.bin/ldap/aldap.c
    N src/usr.bin/ldap/aldap.h
    N src/usr.bin/ldap/ber.c
    N src/usr.bin/ldap/ber.h
    N src/usr.bin/ldap/ldap.1
    N src/usr.bin/ldap/ldapclient.c
    N src/usr.bin/ldap/log.c
    N src/usr.bin/ldap/log.h
    
    No conflicts created by this import
&lt;/code&gt;&lt;/pre&gt;
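The commit notes that ldap(1) can back an ssh(1) AuthorizedKeysCommand. A minimal sketch of that wiring, under stated assumptions: the host, base DN, and sshPublicKey attribute are illustrative, and the exact ldap(1) flags should be checked against the man page.

```shell
#!/bin/sh
# Hypothetical AuthorizedKeysCommand helper: print a user's public key(s)
# from LDAP. Host, base DN, and attribute name are assumptions.
fetch_authorized_keys() {
    user="$1"
    ldap search -H ldap.example.com -b "dc=example,dc=com" \
        "(uid=${user})" sshPublicKey
}
# In sshd_config (sshd runs the command as the named unprivileged user):
#   AuthorizedKeysCommand /usr/local/libexec/ldap-keys %u
#   AuthorizedKeysCommandUser nobody
```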
&lt;hr&gt;
&lt;p&gt;###&lt;a href="https://undeadly.org/cgi?action=article;sid=20180614064341"&gt;Intel® FPU Speculation Vulnerability Confirmed&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Earlier this month, Philip Guenther (guenther@) &lt;a href="https://marc.info/?l=openbsd-cvs&amp;amp;m=152818076013158&amp;amp;w=2"&gt;committed&lt;/a&gt; (to amd64 -current) a change from lazy to semi-eager FPU switching to mitigate against rumored FPU state leakage in Intel® CPUs.&lt;/li&gt;
&lt;li&gt;Theo de Raadt (deraadt@) discussed this in &lt;a href="https://undeadly.org/cgi?action=article;sid=20180611101817"&gt;his BSDCan 2018 session&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Using information disclosed in Theo’s talk, &lt;a href="https://twitter.com/cperciva/status/1007010583244230656"&gt;Colin Percival&lt;/a&gt; developed a proof-of-concept exploit in around 5 hours. This seems to have prompted an early end to an embargo (in which OpenBSD was not involved), and the &lt;a href="https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00145.html"&gt;official announcement&lt;/a&gt; of the vulnerability.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://svnweb.freebsd.org/base?view=revision&amp;amp;revision=335072"&gt;FPU change in FreeBSD&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;Summary:
System software may utilize the Lazy FP state restore technique to delay the restoring of state until an instruction operating on that state is actually executed by the new process. Systems using Intel® Core-based microprocessors may potentially allow a local process to infer data utilizing Lazy FP state restore from another process through a speculative execution side channel.
Description:
System software may opt to utilize Lazy FP state restore instead of eager save and restore of the state upon a context switch. Lazy restored states are potentially vulnerable to exploits where one process may infer register values of other processes through a speculative execution side channel that infers their value.
CVSS - 4.3 Medium CVSS:3.0/AV:L/AC:L/PR:N/UI:N/S:C/C:L/I:N/A:N
Affected Products:
Intel® Core-based microprocessors.
Recommendations:
If an XSAVE-enabled feature is disabled, then we recommend either its state component bitmap in the extended control register (XCR0) is set to 0 (e.g. XCR0[bit 2]=0 for AVX, XCR0[bits 7:5]=0 for AVX512) or the corresponding register states of the feature should be cleared prior to being disabled. Also for relevant states (e.g. x87, SSE, AVX, etc.), Intel recommends system software developers utilize Eager FP state restore in lieu of Lazy FP state restore.
Acknowledgements:
Intel would like to thank Julian Stecklina from Amazon Germany, Thomas Prescher from Cyberus Technology GmbH (https://www.cyberus-technology.de/), Zdenek Sojka from SYSGO AG (http://sysgo.com), and Colin Percival for reporting this issue and working with us on coordinated disclosure.
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;iXsystems&lt;/strong&gt;&lt;br&gt;
iX Ad Spot&lt;br&gt;
&lt;a href="https://www.ixsystems.com/blog/bsdcan-2018-recap/"&gt;iX Systems - BSDCan 2018 Recap&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;###&lt;a href="https://svnweb.freebsd.org/base?view=revision&amp;amp;revision=335012"&gt;FreeBSD gets pNFS support&lt;/a&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Merge the pNFS server code from projects/pnfs-planb-server into head.

This code merge adds a pNFS service to the NFSv4.1 server. Although it is
a large commit it should not affect behaviour for a non-pNFS NFS server.
Some documentation on how this works can be found at:
http://people.freebsd.org/~rmacklem/pnfs-planb-setup.txt
and will hopefully be turned into a proper document soon.
This is a merge of the kernel code. Userland and man page changes will
come soon, once the dust settles on this merge.
It has passed a "make universe", so I hope it will not cause build problems.
It also adds NFSv4.1 server support for the "current stateid".

Here is a brief overview of the pNFS service:
A pNFS service separates the Read/Write operations from all the other NFSv4.1
Metadata operations. It is hoped that this separation allows a pNFS service
to be configured that exceeds the limits of a single NFS server for either
storage capacity and/or I/O bandwidth.
It is possible to configure mirroring within the data servers (DSs) so that
the data storage file for an MDS file will be mirrored on two or more of
the DSs.
When this is used, failure of a DS will not stop the pNFS service and a
failed DS can be recovered once repaired while the pNFS service continues
to operate.  Although two way mirroring would be the norm, it is possible
to set a mirroring level of up to four or the number of DSs, whichever is
less.
The Metadata server will always be a single point of failure,
just as a single NFS server is.

A Plan B pNFS service consists of a single MetaData Server (MDS) and K
Data Servers (DS), all of which are recent FreeBSD systems.
Clients will mount the MDS as they would a single NFS server.
When files are created, the MDS creates a file tree identical to what a
single NFS server creates, except that all the regular (VREG) files will
be empty. As such, if you look at the exported tree on the MDS directly
on the MDS server (not via an NFS mount), the files will all be of size 0.
Each of these files will also have two extended attributes in the system
attribute name space:
pnfsd.dsfile - This extended attribute stores the information that
    the MDS needs to find the data storage file(s) on DS(s) for this file.
pnfsd.dsattr - This extended attribute stores the Size, AccessTime, ModifyTime
    and Change attributes for the file, so that the MDS doesn't need to
    acquire the attributes from the DS for every Getattr operation.
For each regular (VREG) file, the MDS creates a data storage file on one
(or more if mirroring is enabled) of the DSs in one of the "dsNN"
subdirectories.  The name of this file is the file handle
of the file on the MDS in hexadecimal so that the name is unique.
The DSs use subdirectories named "ds0" to "dsN" so that no one directory
gets too large. The value of "N" is set via the sysctl vfs.nfsd.dsdirsize
on the MDS, with the default being 20.
For production servers that will store a lot of files, this value should
probably be much larger.
It can be increased when the "nfsd" daemon is not running on the MDS,
once the "dsK" directories are created.

For pNFS aware NFSv4.1 clients, the FreeBSD server will return two pieces
of information to the client that allows it to do I/O directly to the DS.
DeviceInfo - This is relatively static information that defines what a DS
             is. The critical bits of information returned by the FreeBSD
             server is the IP address of the DS and, for the Flexible
             File layout, that NFSv4.1 is to be used and that it is
             "tightly coupled".
             There is a "deviceid" which identifies the DeviceInfo.
Layout     - This is per file and can be recalled by the server when it
             is no longer valid. For the FreeBSD server, there is support
              for two types of layout, called File and Flexible File layout.
             Both allow the client to do I/O on the DS via NFSv4.1 I/O
             operations. The Flexible File layout is a more recent variant
             that allows specification of mirrors, where the client is
             expected to do writes to all mirrors to maintain them in a
             consistent state. The Flexible File layout also allows the
             client to report I/O errors for a DS back to the MDS.
             The Flexible File layout supports two variants referred to as
             "tightly coupled" vs "loosely coupled". The FreeBSD server always
             uses the "tightly coupled" variant where the client uses the
             same credentials to do I/O on the DS as it would on the MDS.
             For the "loosely coupled" variant, the layout specifies a
             synthetic user/group that the client uses to do I/O on the DS.
             The FreeBSD server does not do striping and always returns
             layouts for the entire file. The critical information in a layout
              is Read vs Read/Write and DeviceID(s) that identify which
             DS(s) the data is stored on.

At this time, the MDS generates File Layout layouts to NFSv4.1 clients
that know how to do pNFS for the non-mirrored DS case unless the sysctl
vfs.nfsd.default_flexfile is set non-zero, in which case Flexible File
layouts are generated.
The mirrored DS configuration always generates Flexible File layouts.
For NFS clients that do not support NFSv4.1 pNFS, all I/O operations
are done against the MDS which acts as a proxy for the appropriate DS(s).
When the MDS receives an I/O RPC, it will do the RPC on the DS as a proxy.
If the DS is on the same machine, the MDS/DS will do the RPC on the DS as
a proxy and so on, until the machine runs out of some resource, such as
session slots or mbufs.
As such, DSs must be separate systems from the MDS.
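The MDS-side layout described above can be inspected directly. A hedged sketch (the file path and sysctl value are placeholders; getextattr(8) and vfs.nfsd.dsdirsize are named in the merge message):

```shell
#!/bin/sh
# Sketch: inspect the pNFS metadata of an exported file on the MDS.
inspect_pnfs_file() {
    # both attributes live in the "system" extended-attribute namespace
    getextattr system pnfsd.dsfile "$1"
    getextattr system pnfsd.dsattr "$1"
}
# Example (placeholder path): inspect_pnfs_file /export/data/somefile
# Raise the number of dsNN subdirectories while nfsd is stopped:
#   sysctl vfs.nfsd.dsdirsize=64
```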

***

###[What does {some strange unix command name} stand for?](http://www.unixguide.net/unix/faq/1.3.shtml)

+ awk = "Aho Weinberger and Kernighan"
+ grep = "Global Regular Expression Print"
+ fgrep = "Fixed GREP"
+ egrep = "Extended GREP"
+ cat = "CATenate"
+ gecos = "General Electric Comprehensive Operating Supervisor"
+ nroff = "New ROFF"
+ troff = "Typesetter new ROFF"
+ tee = "T" (as in a T-shaped pipe fitting)
+ bss = "Block Started by Symbol"
+ biff = named after a dog at Berkeley that barked at the mailman
+ rc (as in ".cshrc" or "/etc/rc") = "RunCom"
+ Don Libes' book "Life with Unix" contains lots more of these tidbits.
***

##Beastie Bits
+ [RetroBSD: Unix for microcontrollers](http://retrobsd.org/wiki/doku.php)
+ [On the matter of OpenBSD breaking embargos (KRACK)](https://marc.info/?l=openbsd-tech&amp;amp;m=152910536208954&amp;amp;w=2)
+ [Theo's Basement Computer Paradise (1998)](https://zeus.theos.com/deraadt/hosts.html)
+ [Airport Extreme runs NetBSD](https://jcs.org/2018/06/12/airport_ssh)
+ [What UNIX shell could have been](https://rain-1.github.io/shell-2.html)

***
Tarsnap ad
***

##Feedback/Questions
+ We need more feedback and questions. Please email feedback@bsdnow.tv 
+ Also, many of you owe us BSDCan trip reports! We have shared what our experience at BSDCan was like, but we want to hear about yours. What can we do better next year? What was it like being there for the first time?
+ [Jason writes in](https://slexy.org/view/s205jU58X2)
    + https://www.wheelsystems.com/en/products/wheel-fudo-psm/
+ [June 19th was National FreeBSD Day](https://twitter.com/search?src=typd&amp;amp;q=%23FreeBSDDay)
***

- Send questions, comments, show ideas/topics, or stories you want mentioned on the show to [feedback@bsdnow.tv](mailto:feedback@bsdnow.tv)
***

&lt;/code&gt;&lt;/pre&gt; 
</description>
  <itunes:keywords>freebsd,openbsd,netbsd,dragonflybsd,trueos,trident,hardenedbsd,tutorial,howto,guide,bsd,interview,hammer,Intel,NomadBSD,LDAP,pNFS,RetroBSD</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>DragonflyBSD’s hammer1 encrypted master/slave setup, second part of our BSDCan recap, NomadBSD 1.1-RC1 available, OpenBSD adds an LDAP client to base, FreeBSD gets pNFS support, Intel FPU Speculation Vulnerability confirmed, and what some Unix command names mean.</p>

<p>##Headlines<br>
###<a href="https://www.reddit.com/r/dragonflybsd/comments/8riwtx/towards_a_hammer1_masterslave_encrypted_setup/">DragonflyBSD: Towards a HAMMER1 master/slave encrypted setup with LUKS</a></p>

<blockquote>
<p>I just wanted to share my experience with setting up DragonFly master/slave HAMMER1 PFS’s on top of LUKS<br>
So after a long time using a Synology for my NFS needs, I decided it was time to rethink my setup a little, since I had several issues with it:</p>
</blockquote>

<ul>
<li>You cannot run NFS on top of encrypted partitions easily</li>
<li>I suspect I am having some data corruption (bitrot) on the ext4 filesystem</li>
<li>The NIC was stuck at 100 Mbps instead of 1 Gbps even after swapping cables, switches, you name it</li>
<li>It’s proprietary</li>
</ul>

<blockquote>
<p>I had played with DragonFly in the past and knew about HAMMER; now I just had the perfect excuse to actually use it in production :) After setting up the OS, creating the LUKS partition and HAMMER FS was easy:</p>
</blockquote>

<p><code>kldload dm</code><br>
<code>cryptsetup luksFormat /dev/serno/&lt;id1&gt;</code><br>
<code>cryptsetup luksOpen /dev/serno/&lt;id1&gt; fort_knox</code><br>
<code>newfs_hammer -L hammer1_secure_master /dev/mapper/fort_knox</code><br>
<code>cryptsetup luksFormat /dev/serno/&lt;id2&gt;</code><br>
<code>cryptsetup luksOpen /dev/serno/&lt;id2&gt; fort_knox_slave</code><br>
<code>newfs_hammer -L hammer1_secure_slave /dev/mapper/fort_knox_slave</code></p>

<ul>
<li>Mount the two drives:</li>
</ul>

<p><code>mount /dev/mapper/fort_knox /fort_knox</code><br>
<code>mount /dev/mapper/fort_knox_slave /fort_knox_slave</code></p>

<blockquote>
<p>You can now put your data under /fort_knox<br>
Now, off to setting up the replication, first get the shared-uuid of /fort_knox</p>
</blockquote>

<p><code>hammer pfs-status /fort_knox</code></p>

<blockquote>
<p>Create a PFS slave “linked” to the master</p>
</blockquote>

<p><code>hammer pfs-slave /fort_knox_slave/pfs/slave shared-uuid=f9e7cc0d-eb59-10e3-a5b5-01e6e7cefc12</code></p>

<blockquote>
<p>And then stream your data to the slave PFS!</p>
</blockquote>

<p><code>hammer mirror-stream /fort_knox /fort_knox_slave/pfs/slave</code></p>

<blockquote>
<p>After that, setting up NFS is fairly trivial, even though I had problems with the /etc/exports syntax, which differs from Linux’s</p>
</blockquote>

<blockquote>
<p>There are a few things I wish were better, though nothing too problematic or without workarounds:</p>
</blockquote>

<ul>
<li>Cannot unlock LUKS partitions at boot time, afaik (an acceptable tradeoff for the added security LUKS gives me vs my old Synology setup), but this forces me to run a script to unlock LUKS, mount HAMMER, and start mirror-stream at each boot</li>
<li>No S1/S3 sleep, so I made a script to shut down the system when there are no network neighbors to serve NFS</li>
<li>As my system isn’t online 24/7 for energy reasons, I guess I will have to run hammer cleanup myself from time to time</li>
<li>Some uncertainty because hey, it’s kind of exotic but exciting too :)</li>
</ul>
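The boot-time workaround described above could be sketched as a small script. This is a hedged sketch, not the author's actual script: ID1 and ID2 stand in for the real /dev/serno device serial numbers, and the fort_knox names follow the commands earlier in these notes.

```shell
#!/bin/sh
# Sketch of the manual unlock / mount / replicate step described above.
# ID1 and ID2 are placeholders for real /dev/serno device serial numbers.
unlock_and_sync() {
    kldload dm                                         # device-mapper module
    cryptsetup luksOpen /dev/serno/ID1 fort_knox       # prompts for passphrase
    cryptsetup luksOpen /dev/serno/ID2 fort_knox_slave
    mount /dev/mapper/fort_knox /fort_knox
    mount /dev/mapper/fort_knox_slave /fort_knox_slave
    # keep the slave PFS in sync until interrupted
    hammer mirror-stream /fort_knox /fort_knox_slave/pfs/slave
}
# Run manually after each boot:
# unlock_and_sync
```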

<blockquote>
<p>Overall, I am happy, HAMMER1 and PFS are looking really good, DragonFly is a neat Unix and the community is super friendly (Matthew Dillon actually provided me with a kernel patch to fix the broken ACPI on the PC holding this setup, many thanks!), the system is still a “work in progress” but it is already serving my files as I write this post.</p>
</blockquote>

<blockquote>
<p>Let’s see in 6 months how it goes in the longer run!</p>
</blockquote>

<ul>
<li>Helpful resources : <a href="https://www.dragonflybsd.org/docs/how_to_implement_hammer_pseudo_file_system__40___pfs___41___slave_mirroring_from_pfs_master/">https://www.dragonflybsd.org/docs/how_to_implement_hammer_pseudo_file_system__40___pfs___41___slave_mirroring_from_pfs_master/</a></li>
</ul>

<p><hr></p>

<p>###BSDCan 2018 Recap</p>

<ul>
<li>As promised, here is the second part of our BSDCan report, covering the conference proper. The last tutorials/devsummit of that day led directly into the conference, as people could pick up their registration packs at the Red Lion and have a drink with fellow BSD folks.</li>
<li>Allan and I were there only briefly, as we wanted to get back to the “Newcomers orientation and mentorship” session led by Michael W. Lucas. This session is intended for people who are new to BSDCan (maybe their first BSD conference ever?) and may have questions. Michael explained everything from the 6-2-1 rule (hours of sleep, meals per day, and number of showers that attendees should have at a minimum), to the partner and widowers program (led by his wife Liz), to the sessions that people should not miss (opening, closing, and hallway track). Old-time BSDCan folks were asked to stand up so that people could recognize them and ask them any questions they might have during the conference. The session was well attended. Afterwards, people went for dinner in groups, a big one led by Michael Lucas to his favorite Shawarma place, followed by gelato (of course). This allowed newbies to mingle over dinner and ice cream, creating a welcoming atmosphere.</li>
<li>The next day, after Dan Langille opened the conference, Benno Rice gave the keynote presentation about “The Tragedy of Systemd”.</li>
<li>Benedict went to the following talks:</li>
</ul>

<blockquote>
<p>“Automating Network Infrastructures with Ansible on FreeBSD” in the DevSummit track. A good talk that connected well with his Ansible tutorial and even allowed some discussions among participants.<br>
“All along the dwatch tower”: Devin delivered a well prepared talk. I first thought that the number of slides would not fit into the time slot, but she even managed to give a demo of her work, which was well received. The dwatch tool she wrote should make it easy for people to get started with DTrace without learning too much about the syntax at first. The visualizations were certainly nice to see, combining different tools together in a new way.<br>
ZFS BoF, led by Allan and Matthew Ahrens<br>
SSH Key Management by Michael W. Lucas. Yet another great talk where I learned a lot. I did not get to the SSH CA chapter in the new SSH Mastery book, so this was a good way to whet my appetite for it and motivated me to look into creating one for the cluster that I’m managing.<br>
The rest of the day was spent at the FreeBSD Foundation table, talking to various folks. Then, Allan and I had an interview with Kirk McKusick for National FreeBSD Day, then we had a core meeting, followed by a core dinner.</p>
</blockquote>

<ul>
<li>Day 2:
<blockquote>
<p>“Flexible Disk Use in OpenZFS”: Matthew Ahrens talking about the feature he is implementing to expand a RAID-Z with a single disk, as well as device removal.<br>
Allan’s talk about his efforts to implement ZSTD in OpenZFS as another compression algorithm. I liked his overview slides with the numbers comparing the algorithms for their effectiveness and his personal story about the sometimes rocky road to get the feature implemented.<br>
“zrepl - ZFS replication” by Christian Schwarz was well prepared and even had a demo to show what his snapshot replication tool can do. We covered it on the show before, and people can find it under sysutils/zrepl. Feedback and help are welcome.<br>
“The Evolution of FreeBSD Governance” by Kirk McKusick was yet another great talk by him covering the early days of FreeBSD until today, detailing some of the progress and challenges the project faced over the years in terms of leadership and governance. This is an ongoing process that everyone in the community should participate in to keep the project healthy and infused with fresh blood.<br>
Closing session and auction were funny and great as always.<br>
All in all, yet another amazing BSDCan. Thank you Dan Langille and your organizing team for making it happen! Well done.</p>
</blockquote>
</li>
</ul>

<p><hr></p>

<p><strong>Digital Ocean</strong></p>

<p>###<a href="http://nomadbsd.org/index.html#rel1.1-rc1">NomadBSD 1.1-RC1 Released</a></p>

<blockquote>
<p>The first – and hopefully final – release candidate of NomadBSD 1.1 is available!</p>
</blockquote>

<ul>
<li>Changes</li>
<li>The base system has been upgraded to FreeBSD 11.2-RC3</li>
<li>EFI booting has been fixed.</li>
<li>Support for modern Intel GPUs has been added.</li>
<li>Support for installing packages has been added.</li>
<li>Improved setup menu.</li>
<li>More software packages:</li>
<li>benchmarks/bonnie++</li>
<li>DSBDisplaySettings</li>
<li>DSBExec</li>
<li>DSBSu</li>
<li>mail/thunderbird</li>
<li>net/mosh</li>
<li>ports-mgmt/octopkg</li>
<li>print/qpdfview</li>
<li>security/nmap</li>
<li>sysutils/ddrescue</li>
<li>sysutils/fusefs-hfsfuse</li>
<li>sysutils/fusefs-sshfs</li>
<li>sysutils/sleuthkit</li>
<li>www/lynx</li>
<li>x11-wm/compton</li>
<li>x11/xev</li>
<li>x11/xterm</li>
<li>Many improvements and bugfixes<br>
The image and instructions can be found <a href="http://nomadbsd.org/download.html">here</a>.</li>
</ul>

<p><hr></p>

<p>##News Roundup<br>
###<a href="https://undeadly.org/cgi?action=article;sid=20180616115514">LDAP client added to -current</a></p>

<pre><code>CVSROOT:    /cvs
Module name:    src
Changes by: reyk@cvs.openbsd.org    2018/06/13 09:45:58

Log message:
    Import ldap(1), a simple ldap search client.
    We have an ldapd(8) server and ypldap in base, so it makes sense to
    have a simple LDAP client without depending on the OpenLDAP package.
    This tool can be used in an ssh(1) AuthorizedKeysCommand script.
    
    With feedback from many including millert@ schwarze@ gilles@ dlg@ jsing@
    
    OK deraadt@
    
    Status:
    
    Vendor Tag: reyk
    Release Tags:   ldap_20180613
    
    N src/usr.bin/ldap/Makefile
    N src/usr.bin/ldap/aldap.c
    N src/usr.bin/ldap/aldap.h
    N src/usr.bin/ldap/ber.c
    N src/usr.bin/ldap/ber.h
    N src/usr.bin/ldap/ldap.1
    N src/usr.bin/ldap/ldapclient.c
    N src/usr.bin/ldap/log.c
    N src/usr.bin/ldap/log.h
    
    No conflicts created by this import
</code></pre>
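<p>Since the commit notes that ldap(1) can back an ssh(1) AuthorizedKeysCommand, here is a hedged sketch of what that hookup could look like. The helper script path, server name, base DN, and the sshPublicKey attribute are all placeholders for your own directory, not anything from the commit:</p>

<pre><code>#!/bin/sh
# /usr/local/libexec/ssh-ldap-keys (hypothetical helper)
# Print a user's public keys from LDAP, one per line, for sshd.
ldap search -H ldap.example.com -b "ou=people,dc=example,dc=com" \
    "(uid=$1)" sshPublicKey | sed -n 's/^sshPublicKey: //p'
</code></pre>

<p>Wired into sshd_config roughly as:</p>

<pre><code>AuthorizedKeysCommand /usr/local/libexec/ssh-ldap-keys %u
AuthorizedKeysCommandUser nobody
</code></pre>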

<p><hr></p>

<p>###<a href="https://undeadly.org/cgi?action=article;sid=20180614064341">Intel® FPU Speculation Vulnerability Confirmed</a></p>

<ul>
<li>Earlier this month, Philip Guenther (guenther@) <a href="https://marc.info/?l=openbsd-cvs&amp;m=152818076013158&amp;w=2">committed</a> (to amd64 -current) a change from lazy to semi-eager FPU switching to mitigate against rumored FPU state leakage in Intel® CPUs.</li>
<li>Theo de Raadt (deraadt@) discussed this in <a href="https://undeadly.org/cgi?action=article;sid=20180611101817">his BSDCan 2018 session</a>.</li>
<li>Using information disclosed in Theo’s talk, <a href="https://twitter.com/cperciva/status/1007010583244230656">Colin Percival</a> developed a proof-of-concept exploit in around 5 hours. This seems to have prompted an early end to an embargo (in which OpenBSD was not involved), and the <a href="https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00145.html">official announcement</a> of the vulnerability.</li>
<li><a href="https://svnweb.freebsd.org/base?view=revision&amp;revision=335072">FPU change in FreeBSD</a></li>
</ul>

<pre><code>Summary:

System software may utilize the Lazy FP state restore technique to delay the restoring of state until an instruction operating on that state is actually executed by the new process. Systems using Intel® Core-based microprocessors may potentially allow a local process to infer data utilizing Lazy FP state restore from another process through a speculative execution side channel.

Description:

System software may opt to utilize Lazy FP state restore instead of eager save and restore of the state upon a context switch. Lazy restored states are potentially vulnerable to exploits where one process may infer register values of other processes through a speculative execution side channel that infers their value.

CVSS - 4.3 Medium CVSS:3.0/AV:L/AC:L/PR:N/UI:N/S:C/C:L/I:N/A:N

Affected Products:

Intel® Core-based microprocessors.

Recommendations:

If an XSAVE-enabled feature is disabled, then we recommend either its state component bitmap in the extended control register (XCR0) is set to 0 (e.g. XCR0[bit 2]=0 for AVX, XCR0[bits 7:5]=0 for AVX512) or the corresponding register states of the feature should be cleared prior to being disabled. Also for relevant states (e.g. x87, SSE, AVX, etc.), Intel recommends system software developers utilize Eager FP state restore in lieu of Lazy FP state restore.

Acknowledgements:

Intel would like to thank Julian Stecklina from Amazon Germany, Thomas Prescher from Cyberus Technology GmbH (https://www.cyberus-technology.de/), Zdenek Sojka from SYSGO AG (http://sysgo.com), and Colin Percival for reporting this issue and working with us on coordinated disclosure.
</code></pre>

<p><hr></p>

<p><strong>iXsystems</strong><br>
iX Ad Spot<br>
###<a href="https://www.ixsystems.com/blog/bsdcan-2018-recap/">iX Systems - BSDCan 2018 Recap</a></p>

<p>###<a href="https://svnweb.freebsd.org/base?view=revision&amp;revision=335012">FreeBSD gets pNFS support</a></p>

<pre><code>Merge the pNFS server code from projects/pnfs-planb-server into head.

This code merge adds a pNFS service to the NFSv4.1 server. Although it is
a large commit it should not affect behaviour for a non-pNFS NFS server.
Some documentation on how this works can be found at:
http://people.freebsd.org/~rmacklem/pnfs-planb-setup.txt
and will hopefully be turned into a proper document soon.
This is a merge of the kernel code. Userland and man page changes will
come soon, once the dust settles on this merge.
It has passed a &quot;make universe&quot;, so I hope it will not cause build problems.
It also adds NFSv4.1 server support for the &quot;current stateid&quot;.

Here is a brief overview of the pNFS service:
A pNFS service separates the Read/Write operations from all the other NFSv4.1
Metadata operations. It is hoped that this separation allows a pNFS service
to be configured that exceeds the limits of a single NFS server for either
storage capacity and/or I/O bandwidth.
It is possible to configure mirroring within the data servers (DSs) so that
the data storage file for an MDS file will be mirrored on two or more of
the DSs.
When this is used, failure of a DS will not stop the pNFS service and a
failed DS can be recovered once repaired while the pNFS service continues
to operate.  Although two way mirroring would be the norm, it is possible
to set a mirroring level of up to four or the number of DSs, whichever is
less.
The Metadata server will always be a single point of failure,
just as a single NFS server is.

A Plan B pNFS service consists of a single MetaData Server (MDS) and K
Data Servers (DS), all of which are recent FreeBSD systems.
Clients will mount the MDS as they would a single NFS server.
When files are created, the MDS creates a file tree identical to what a
single NFS server creates, except that all the regular (VREG) files will
be empty. As such, if you look at the exported tree on the MDS directly
on the MDS server (not via an NFS mount), the files will all be of size 0.
Each of these files will also have two extended attributes in the system
attribute name space:
pnfsd.dsfile - This extended attribute stores the information that
    the MDS needs to find the data storage file(s) on DS(s) for this file.
pnfsd.dsattr - This extended attribute stores the Size, AccessTime, ModifyTime
    and Change attributes for the file, so that the MDS doesn't need to
    acquire the attributes from the DS for every Getattr operation.
For each regular (VREG) file, the MDS creates a data storage file on one
(or more if mirroring is enabled) of the DSs in one of the &quot;dsNN&quot;
subdirectories.  The name of this file is the file handle
of the file on the MDS in hexadecimal so that the name is unique.
The DSs use subdirectories named &quot;ds0&quot; to &quot;dsN&quot; so that no one directory
gets too large. The value of &quot;N&quot; is set via the sysctl vfs.nfsd.dsdirsize
on the MDS, with the default being 20.
For production servers that will store a lot of files, this value should
probably be much larger.
It can be increased when the &quot;nfsd&quot; daemon is not running on the MDS,
once the &quot;dsK&quot; directories are created.

For pNFS aware NFSv4.1 clients, the FreeBSD server will return two pieces
of information to the client that allows it to do I/O directly to the DS.
DeviceInfo - This is relatively static information that defines what a DS
             is. The critical bits of information returned by the FreeBSD
             server is the IP address of the DS and, for the Flexible
             File layout, that NFSv4.1 is to be used and that it is
             &quot;tightly coupled&quot;.
             There is a &quot;deviceid&quot; which identifies the DeviceInfo.
Layout     - This is per file and can be recalled by the server when it
             is no longer valid. For the FreeBSD server, there is support
             for two types of layout, called File and Flexible File layout.
             Both allow the client to do I/O on the DS via NFSv4.1 I/O
             operations. The Flexible File layout is a more recent variant
             that allows specification of mirrors, where the client is
             expected to do writes to all mirrors to maintain them in a
             consistent state. The Flexible File layout also allows the
             client to report I/O errors for a DS back to the MDS.
             The Flexible File layout supports two variants referred to as
             &quot;tightly coupled&quot; vs &quot;loosely coupled&quot;. The FreeBSD server always
             uses the &quot;tightly coupled&quot; variant where the client uses the
             same credentials to do I/O on the DS as it would on the MDS.
             For the &quot;loosely coupled&quot; variant, the layout specifies a
             synthetic user/group that the client uses to do I/O on the DS.
             The FreeBSD server does not do striping and always returns
             layouts for the entire file. The critical information in a layout
             is Read vs Read/Write and DeviceID(s) that identify which
             DS(s) the data is stored on.

At this time, the MDS generates File Layout layouts to NFSv4.1 clients
that know how to do pNFS for the non-mirrored DS case unless the sysctl
vfs.nfsd.default_flexfile is set non-zero, in which case Flexible File
layouts are generated.
The mirrored DS configuration always generates Flexible File layouts.
For NFS clients that do not support NFSv4.1 pNFS, all I/O operations
are done against the MDS which acts as a proxy for the appropriate DS(s).
When the MDS receives an I/O RPC, it will do the RPC on the DS as a proxy.
If the DS is on the same machine, the MDS/DS will do the RPC on the DS as
a proxy and so on, until the machine runs out of some resource, such as
session slots or mbufs.
As such, DSs must be separate systems from the MDS.
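
# Not part of the commit message: a hedged sketch of the two sysctl knobs
# described above, run on the MDS. The values here are illustrative only;
# per the commit, vfs.nfsd.dsdirsize may only be increased while the nfsd
# daemon is not running on the MDS.
sysctl vfs.nfsd.dsdirsize=1000       # more "dsNN" subdirectories for servers with many files
sysctl vfs.nfsd.default_flexfile=1   # generate Flexible File layouts even for non-mirrored DSs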

***

###[What does {some strange unix command name} stand for?](http://www.unixguide.net/unix/faq/1.3.shtml)

+ awk = &quot;Aho Weinberger and Kernighan&quot; 
+ grep = &quot;Global Regular Expression Print&quot; 
+ fgrep = &quot;Fixed GREP&quot; 
+ egrep = &quot;Extended GREP&quot; 
+ cat = &quot;CATenate&quot; 
+ gecos = &quot;General Electric Comprehensive Operating Supervisor&quot; 
+ nroff = &quot;New ROFF&quot; 
+ troff = &quot;Typesetter new ROFF&quot; 
+ tee = T 
+ bss = &quot;Block Started by Symbol&quot; 
+ biff = &quot;BIFF&quot; 
+ rc (as in &quot;.cshrc&quot; or &quot;/etc/rc&quot;) = &quot;RunCom&quot; 
+ Don Libes' book &quot;Life with Unix&quot; contains lots more of these 
tidbits. 
***

##Beastie Bits
+ [RetroBSD: Unix for microcontrollers](http://retrobsd.org/wiki/doku.php)
+ [On the matter of OpenBSD breaking embargos (KRACK)](https://marc.info/?l=openbsd-tech&amp;m=152910536208954&amp;w=2)
+ [Theo's Basement Computer Paradise (1998)](https://zeus.theos.com/deraadt/hosts.html)
+ [Airport Extreme runs NetBSD](https://jcs.org/2018/06/12/airport_ssh)
+ [What UNIX shell could have been](https://rain-1.github.io/shell-2.html)

***
Tarsnap ad
***

##Feedback/Questions
+ We need more feedback and questions. Please email feedback@bsdnow.tv 
+ Also, many of you owe us BSDCan trip reports! We have shared what our experience at BSDCan was like, but we want to hear about yours. What can we do better next year? What was it like being there for the first time?
+ [Jason writes in](https://slexy.org/view/s205jU58X2)
    + https://www.wheelsystems.com/en/products/wheel-fudo-psm/
+ [June 19th was National FreeBSD Day](https://twitter.com/search?src=typd&amp;q=%23FreeBSDDay)
***

- Send questions, comments, show ideas/topics, or stories you want mentioned on the show to [feedback@bsdnow.tv](mailto:feedback@bsdnow.tv)
***

</code></pre>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>DragonflyBSD’s hammer1 encrypted master/slave setup, second part of our BSDCan recap, NomadBSD 1.1-RC1 available, OpenBSD adds an LDAP client to base, FreeBSD gets pNFS support, Intel FPU Speculation Vulnerability confirmed, and what some Unix command names mean.</p>

<p>##Headlines<br>
###<a href="https://www.reddit.com/r/dragonflybsd/comments/8riwtx/towards_a_hammer1_masterslave_encrypted_setup/">DragonflyBSD: Towards a HAMMER1 master/slave encrypted setup with LUKS</a></p>

<blockquote>
<p>I just wanted to share my experience with setting up DragonFly master/slave HAMMER1 PFS’s on top of LUKS<br>
So after a long time using a Synology for my NFS needs, I decided it was time to rethink my setup a little, since I had several issues with it:</p>
</blockquote>

<ul>
<li>You cannot run NFS on top of encrypted partitions easily</li>
<li>I suspect I am having some data corruption (bitrot) on the ext4 filesystem</li>
<li>the NIC was stuck at 100 Mbps instead of 1 Gbps even after swapping cables, switches, you name it</li>
<li>It’s proprietary</li>
</ul>

<blockquote>
<p>I have played with DragonFly in the past and knew about HAMMER; now I just had the perfect excuse to actually use it in production :) After setting up the OS, creating the LUKS partition and HAMMER FS was easy:</p>
</blockquote>

<p><code>kldload dm</code><br>
<code>cryptsetup luksFormat /dev/serno/&lt;id1&gt;</code><br>
<code>cryptsetup luksOpen /dev/serno/&lt;id1&gt; fort_knox</code><br>
<code>newfs_hammer -L hammer1_secure_master /dev/mapper/fort_knox</code><br>
<code>cryptsetup luksFormat /dev/serno/&lt;id2&gt;</code><br>
<code>cryptsetup luksOpen /dev/serno/&lt;id2&gt; fort_knox_slave</code><br>
<code>newfs_hammer -L hammer1_secure_slave /dev/mapper/fort_knox_slave</code></p>

<ul>
<li>Mount the 2 drives :</li>
</ul>

<p><code>mount /dev/mapper/fort_knox /fort_knox</code><br>
<code>mount /dev/mapper/fort_knox_slave /fort_knox_slave</code></p>

<blockquote>
<p>You can now put your data under /fort_knox<br>
Now, off to setting up the replication, first get the shared-uuid of /fort_knox</p>
</blockquote>

<p><code>hammer pfs-status /fort_knox</code></p>

<blockquote>
<p>Create a PFS slave “linked” to the master</p>
</blockquote>

<p><code>hammer pfs-slave /fort_knox_slave/pfs/slave shared-uuid=f9e7cc0d-eb59-10e3-a5b5-01e6e7cefc12</code></p>

<blockquote>
<p>And then stream your data to the slave PFS !</p>
</blockquote>

<p><code>hammer mirror-stream /fort_knox /fort_knox_slave/pfs/slave</code></p>

<blockquote>
<p>After that, setting up NFS was fairly trivial, even though I had problems with the /etc/exports syntax, which is different from the Linux one</p>
</blockquote>

<blockquote>
<p>There are a few things I wish were better, though nothing too problematic or without workarounds:</p>
</blockquote>

<ul>
<li>Cannot unlock LUKS partitions at boot time afaik (an acceptable tradeoff for the added security LUKS gives me vs my old Synology setup), but this forces me to run a script to unlock LUKS, mount HAMMER, and start mirror-stream at each boot</li>
<li>No S1/S3 sleep, so I made a script to shut down the system when there are no network neighbors to serve NFS</li>
<li>As my system isn’t online 24/7 for energy reasons, I guess I will have to run hammer cleanup myself from time to time</li>
<li>Some uncertainty because hey, it’s kind of exotic but exciting too :)</li>
</ul>
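
<p>The unlock-at-boot workaround mentioned above can be sketched as a small shell script that replays the commands quoted earlier; the script itself is our illustration (not the author’s actual script), and the <code>serno</code> device ids remain placeholders:</p>

<pre><code>#!/bin/sh
# Hypothetical boot helper: unlock LUKS, mount HAMMER, start replication.
kldload dm
cryptsetup luksOpen /dev/serno/&lt;id1&gt; fort_knox
cryptsetup luksOpen /dev/serno/&lt;id2&gt; fort_knox_slave
mount /dev/mapper/fort_knox /fort_knox
mount /dev/mapper/fort_knox_slave /fort_knox_slave
# Run the mirror stream in the background so boot can continue.
hammer mirror-stream /fort_knox /fort_knox_slave/pfs/slave &amp;
</code></pre>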

<blockquote>
<p>Overall, I am happy, HAMMER1 and PFS are looking really good, DragonFly is a neat Unix and the community is super friendly (Matthew Dillon actually provided me with a kernel patch to fix the broken ACPI on the PC holding this setup, many thanks!), the system is still a “work in progress” but it is already serving my files as I write this post.</p>
</blockquote>

<blockquote>
<p>Let’s see in 6 months how it goes in the longer run!</p>
</blockquote>

<ul>
<li>Helpful resources : <a href="https://www.dragonflybsd.org/docs/how_to_implement_hammer_pseudo_file_system__40___pfs___41___slave_mirroring_from_pfs_master/">https://www.dragonflybsd.org/docs/how_to_implement_hammer_pseudo_file_system__40___pfs___41___slave_mirroring_from_pfs_master/</a></li>
</ul>

<p><hr></p>

<p>###BSDCan 2018 Recap</p>

<ul>
<li>As promised, here is the second part of our BSDCan report, covering the conference proper. The last tutorials/devsummit of that day led directly into the conference, as people could pick up their registration packs at the Red Lion and have a drink with fellow BSD folks.</li>
<li>Allan and I were there only briefly, as we wanted to get back to the “Newcomers orientation and mentorship” session led by Michael W. Lucas. This session is intended for people who are new to BSDCan (maybe their first BSD conference ever?) and may have questions. Michael explained everything from the 6-2-1 rule (hours of sleep, meals per day, and number of showers that attendees should have at a minimum), to the partner and widowers program (led by his wife Liz), to the sessions that people should not miss (opening, closing, and hallway track). Old-time BSDCan folks were asked to stand up so that people could recognize them and ask them any questions they might have during the conference. The session was well attended. Afterwards, people went for dinner in groups, a big one led by Michael Lucas to his favorite Shawarma place, followed by gelato (of course). This allowed newbies to mingle over dinner and ice cream, creating a welcoming atmosphere.</li>
<li>The next day, after Dan Langille opened the conference, Benno Rice gave the keynote presentation about “The Tragedy of Systemd”.</li>
<li>Benedict went to the following talks:</li>
</ul>

<blockquote>
<p>“Automating Network Infrastructures with Ansible on FreeBSD” in the DevSummit track. A good talk that connected well with his Ansible tutorial and even allowed some discussions among participants.<br>
“All along the dwatch tower”: Devin delivered a well prepared talk. I first thought that the number of slides would not fit into the time slot, but she even managed to give a demo of her work, which was well received. The dwatch tool she wrote should make it easy for people to get started with DTrace without learning too much about the syntax at first. The visualizations were certainly nice to see, combining different tools together in a new way.<br>
ZFS BoF, led by Allan and Matthew Ahrens<br>
SSH Key Management by Michael W. Lucas. Yet another great talk where I learned a lot. I did not get to the SSH CA chapter in the new SSH Mastery book, so this was a good way to whet my appetite for it and motivated me to look into creating one for the cluster that I’m managing.<br>
The rest of the day was spent at the FreeBSD Foundation table, talking to various folks. Then Allan and I had an interview with Kirk McKusick for National FreeBSD Day, followed by a core meeting and a core dinner.</p>
</blockquote>

<ul>
<li>Day 2:
<blockquote>
<p>“Flexible Disk Use in OpenZFS”: Matthew Ahrens talking about the feature he is implementing to expand a RAID-Z with a single disk, as well as device removal.<br>
Allan’s talk about his efforts to implement ZSTD in OpenZFS as another compression algorithm. I liked his overview slides with the numbers comparing the algorithms for their effectiveness and his personal story about the sometimes rocky road to get the feature implemented.<br>
“zrepl - ZFS replication” by Christian Schwarz was well prepared and even included a demo of what his snapshot replication tool can do. We covered it on the show before, and people can find it under sysutils/zrepl. Feedback and help are welcome.<br>
“The Evolution of FreeBSD Governance” by Kirk McKusick was yet another great talk by him covering the early days of FreeBSD until today, detailing some of the progress and challenges the project faced over the years in terms of leadership and governance. This is an ongoing process that everyone in the community should participate in to keep the project healthy and infused with fresh blood.<br>
Closing session and auction were funny and great as always.<br>
All in all, yet another amazing BSDCan. Thank you Dan Langille and your organizing team for making it happen! Well done.</p>
</blockquote>
</li>
</ul>

<p><hr></p>

<p><strong>Digital Ocean</strong></p>

<p>###<a href="http://nomadbsd.org/index.html#rel1.1-rc1">NomadBSD 1.1-RC1 Released</a></p>

<blockquote>
<p>The first – and hopefully final – release candidate of NomadBSD 1.1 is available!</p>
</blockquote>

<ul>
<li>Changes</li>
<li>The base system has been upgraded to FreeBSD 11.2-RC3</li>
<li>EFI booting has been fixed.</li>
<li>Support for modern Intel GPUs has been added.</li>
<li>Support for installing packages has been added.</li>
<li>Improved setup menu.</li>
<li>More software packages:</li>
<li>benchmarks/bonnie++</li>
<li>DSBDisplaySettings</li>
<li>DSBExec</li>
<li>DSBSu</li>
<li>mail/thunderbird</li>
<li>net/mosh</li>
<li>ports-mgmt/octopkg</li>
<li>print/qpdfview</li>
<li>security/nmap</li>
<li>sysutils/ddrescue</li>
<li>sysutils/fusefs-hfsfuse</li>
<li>sysutils/fusefs-sshfs</li>
<li>sysutils/sleuthkit</li>
<li>www/lynx</li>
<li>x11-wm/compton</li>
<li>x11/xev</li>
<li>x11/xterm</li>
<li>Many improvements and bugfixes<br>
The image and instructions can be found <a href="http://nomadbsd.org/download.html">here</a>.</li>
</ul>

<p><hr></p>

<p>##News Roundup<br>
###<a href="https://undeadly.org/cgi?action=article;sid=20180616115514">LDAP client added to -current</a></p>

<pre><code>CVSROOT:    /cvs
Module name:    src
Changes by: reyk@cvs.openbsd.org    2018/06/13 09:45:58

Log message:
    Import ldap(1), a simple ldap search client.
    We have an ldapd(8) server and ypldap in base, so it makes sense to
    have a simple LDAP client without depending on the OpenLDAP package.
    This tool can be used in an ssh(1) AuthorizedKeysCommand script.
    
    With feedback from many including millert@ schwarze@ gilles@ dlg@ jsing@
    
    OK deraadt@
    
    Status:
    
    Vendor Tag: reyk
    Release Tags:   ldap_20180613
    
    N src/usr.bin/ldap/Makefile
    N src/usr.bin/ldap/aldap.c
    N src/usr.bin/ldap/aldap.h
    N src/usr.bin/ldap/ber.c
    N src/usr.bin/ldap/ber.h
    N src/usr.bin/ldap/ldap.1
    N src/usr.bin/ldap/ldapclient.c
    N src/usr.bin/ldap/log.c
    N src/usr.bin/ldap/log.h
    
    No conflicts created by this import
</code></pre>

<p><hr></p>

<p>###<a href="https://undeadly.org/cgi?action=article;sid=20180614064341">Intel® FPU Speculation Vulnerability Confirmed</a></p>

<ul>
<li>Earlier this month, Philip Guenther (guenther@) <a href="https://marc.info/?l=openbsd-cvs&amp;m=152818076013158&amp;w=2">committed</a> (to amd64 -current) a change from lazy to semi-eager FPU switching to mitigate against rumored FPU state leakage in Intel® CPUs.</li>
<li>Theo de Raadt (deraadt@) discussed this in <a href="https://undeadly.org/cgi?action=article;sid=20180611101817">his BSDCan 2018 session</a>.</li>
<li>Using information disclosed in Theo’s talk, <a href="https://twitter.com/cperciva/status/1007010583244230656">Colin Percival</a> developed a proof-of-concept exploit in around 5 hours. This seems to have prompted an early end to an embargo (in which OpenBSD was not involved), and the <a href="https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00145.html">official announcement</a> of the vulnerability.</li>
<li><a href="https://svnweb.freebsd.org/base?view=revision&amp;revision=335072">FPU change in FreeBSD</a></li>
</ul>

<pre><code>Summary:

System software may utilize the Lazy FP state restore technique to delay the restoring of state until an instruction operating on that state is actually executed by the new process. Systems using Intel® Core-based microprocessors may potentially allow a local process to infer data utilizing Lazy FP state restore from another process through a speculative execution side channel.

Description:

System software may opt to utilize Lazy FP state restore instead of eager save and restore of the state upon a context switch. Lazy restored states are potentially vulnerable to exploits where one process may infer register values of other processes through a speculative execution side channel that infers their value.

CVSS - 4.3 Medium CVSS:3.0/AV:L/AC:L/PR:N/UI:N/S:C/C:L/I:N/A:N

Affected Products:

Intel® Core-based microprocessors.

Recommendations:

If an XSAVE-enabled feature is disabled, then we recommend either its state component bitmap in the extended control register (XCR0) is set to 0 (e.g. XCR0[bit 2]=0 for AVX, XCR0[bits 7:5]=0 for AVX512) or the corresponding register states of the feature should be cleared prior to being disabled. Also for relevant states (e.g. x87, SSE, AVX, etc.), Intel recommends system software developers utilize Eager FP state restore in lieu of Lazy FP state restore.

Acknowledgements:

Intel would like to thank Julian Stecklina from Amazon Germany, Thomas Prescher from Cyberus Technology GmbH (https://www.cyberus-technology.de/), Zdenek Sojka from SYSGO AG (http://sysgo.com), and Colin Percival for reporting this issue and working with us on coordinated disclosure.
</code></pre>

<p><hr></p>

<p><strong>iXsystems</strong><br>
iX Ad Spot<br>
###<a href="https://www.ixsystems.com/blog/bsdcan-2018-recap/">iX Systems - BSDCan 2018 Recap</a></p>

<p>###<a href="https://svnweb.freebsd.org/base?view=revision&amp;revision=335012">FreeBSD gets pNFS support</a></p>

<pre><code>Merge the pNFS server code from projects/pnfs-planb-server into head.

This code merge adds a pNFS service to the NFSv4.1 server. Although it is
a large commit it should not affect behaviour for a non-pNFS NFS server.
Some documentation on how this works can be found at:
http://people.freebsd.org/~rmacklem/pnfs-planb-setup.txt
and will hopefully be turned into a proper document soon.
This is a merge of the kernel code. Userland and man page changes will
come soon, once the dust settles on this merge.
It has passed a &quot;make universe&quot;, so I hope it will not cause build problems.
It also adds NFSv4.1 server support for the &quot;current stateid&quot;.

Here is a brief overview of the pNFS service:
A pNFS service separates the Read/Write operations from all the other NFSv4.1
Metadata operations. It is hoped that this separation allows a pNFS service
to be configured that exceeds the limits of a single NFS server for either
storage capacity and/or I/O bandwidth.
It is possible to configure mirroring within the data servers (DSs) so that
the data storage file for an MDS file will be mirrored on two or more of
the DSs.
When this is used, failure of a DS will not stop the pNFS service and a
failed DS can be recovered once repaired while the pNFS service continues
to operate.  Although two way mirroring would be the norm, it is possible
to set a mirroring level of up to four or the number of DSs, whichever is
less.
The Metadata server will always be a single point of failure,
just as a single NFS server is.

A Plan B pNFS service consists of a single MetaData Server (MDS) and K
Data Servers (DS), all of which are recent FreeBSD systems.
Clients will mount the MDS as they would a single NFS server.
When files are created, the MDS creates a file tree identical to what a
single NFS server creates, except that all the regular (VREG) files will
be empty. As such, if you look at the exported tree on the MDS directly
on the MDS server (not via an NFS mount), the files will all be of size 0.
Each of these files will also have two extended attributes in the system
attribute name space:
pnfsd.dsfile - This extended attribute stores the information that
    the MDS needs to find the data storage file(s) on DS(s) for this file.
pnfsd.dsattr - This extended attribute stores the Size, AccessTime, ModifyTime
    and Change attributes for the file, so that the MDS doesn't need to
    acquire the attributes from the DS for every Getattr operation.
For each regular (VREG) file, the MDS creates a data storage file on one
(or more if mirroring is enabled) of the DSs in one of the &quot;dsNN&quot;
subdirectories.  The name of this file is the file handle
of the file on the MDS in hexadecimal so that the name is unique.
The DSs use subdirectories named &quot;ds0&quot; to &quot;dsN&quot; so that no one directory
gets too large. The value of &quot;N&quot; is set via the sysctl vfs.nfsd.dsdirsize
on the MDS, with the default being 20.
For production servers that will store a lot of files, this value should
probably be much larger.
It can be increased when the &quot;nfsd&quot; daemon is not running on the MDS,
once the &quot;dsK&quot; directories are created.

For pNFS aware NFSv4.1 clients, the FreeBSD server will return two pieces
of information to the client that allows it to do I/O directly to the DS.
DeviceInfo - This is relatively static information that defines what a DS
             is. The critical bits of information returned by the FreeBSD
             server is the IP address of the DS and, for the Flexible
             File layout, that NFSv4.1 is to be used and that it is
             &quot;tightly coupled&quot;.
             There is a &quot;deviceid&quot; which identifies the DeviceInfo.
Layout     - This is per file and can be recalled by the server when it
             is no longer valid. For the FreeBSD server, there is support
             for two types of layout, called File and Flexible File layout.
             Both allow the client to do I/O on the DS via NFSv4.1 I/O
             operations. The Flexible File layout is a more recent variant
             that allows specification of mirrors, where the client is
             expected to do writes to all mirrors to maintain them in a
             consistent state. The Flexible File layout also allows the
             client to report I/O errors for a DS back to the MDS.
             The Flexible File layout supports two variants referred to as
             &quot;tightly coupled&quot; vs &quot;loosely coupled&quot;. The FreeBSD server always
             uses the &quot;tightly coupled&quot; variant where the client uses the
             same credentials to do I/O on the DS as it would on the MDS.
             For the &quot;loosely coupled&quot; variant, the layout specifies a
             synthetic user/group that the client uses to do I/O on the DS.
             The FreeBSD server does not do striping and always returns
             layouts for the entire file. The critical information in a layout
             is Read vs Read/Write and DeviceID(s) that identify which
             DS(s) the data is stored on.

At this time, the MDS generates File Layout layouts to NFSv4.1 clients
that know how to do pNFS for the non-mirrored DS case unless the sysctl
vfs.nfsd.default_flexfile is set non-zero, in which case Flexible File
layouts are generated.
The mirrored DS configuration always generates Flexible File layouts.
For NFS clients that do not support NFSv4.1 pNFS, all I/O operations
are done against the MDS which acts as a proxy for the appropriate DS(s).
When the MDS receives an I/O RPC, it will do the RPC on the DS as a proxy.
If the DS is on the same machine, the MDS/DS will do the RPC on the DS as
a proxy and so on, until the machine runs out of some resource, such as
session slots or mbufs.
As such, DSs must be separate systems from the MDS.

***

###[What does {some strange unix command name} stand for?](http://www.unixguide.net/unix/faq/1.3.shtml)

+ awk = &quot;Aho Weinberger and Kernighan&quot; 
+ grep = &quot;Global Regular Expression Print&quot; 
+ fgrep = &quot;Fixed GREP&quot; 
+ egrep = &quot;Extended GREP&quot; 
+ cat = &quot;CATenate&quot; 
+ gecos = &quot;General Electric Comprehensive Operating Supervisor&quot; 
+ nroff = &quot;New ROFF&quot; 
+ troff = &quot;Typesetter new ROFF&quot; 
+ tee = T 
+ bss = &quot;Block Started by Symbol&quot; 
+ biff = &quot;BIFF&quot; 
+ rc (as in &quot;.cshrc&quot; or &quot;/etc/rc&quot;) = &quot;RunCom&quot; 
+ Don Libes' book &quot;Life with Unix&quot; contains lots more of these 
tidbits. 
***

##Beastie Bits
+ [RetroBSD: Unix for microcontrollers](http://retrobsd.org/wiki/doku.php)
+ [On the matter of OpenBSD breaking embargos (KRACK)](https://marc.info/?l=openbsd-tech&amp;m=152910536208954&amp;w=2)
+ [Theo's Basement Computer Paradise (1998)](https://zeus.theos.com/deraadt/hosts.html)
+ [Airport Extreme runs NetBSD](https://jcs.org/2018/06/12/airport_ssh)
+ [What UNIX shell could have been](https://rain-1.github.io/shell-2.html)

***
Tarsnap ad
***

##Feedback/Questions
+ We need more feedback and questions. Please email feedback@bsdnow.tv 
+ Also, many of you owe us BSDCan trip reports! We have shared what our experience at BSDCan was like, but we want to hear about yours. What can we do better next year? What was it like being there for the first time?
+ [Jason writes in](https://slexy.org/view/s205jU58X2)
    + https://www.wheelsystems.com/en/products/wheel-fudo-psm/
+ [June 19th was National FreeBSD Day](https://twitter.com/search?src=typd&amp;q=%23FreeBSDDay)
***

- Send questions, comments, show ideas/topics, or stories you want mentioned on the show to [feedback@bsdnow.tv](mailto:feedback@bsdnow.tv)
***

</code></pre>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
