<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="../assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>pV-tech (Posts about zfs)</title><link>https://pv-tech.eu/</link><description></description><atom:link href="https://pv-tech.eu/categories/zfs.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><copyright>Contents © 2023 &lt;a href="mailto:a13x7rsk@gmail.com"&gt;Paul Witek&lt;/a&gt; </copyright><lastBuildDate>Sun, 07 May 2023 18:26:50 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>Common pitfall when benchmarking ZFS with fio</title><link>https://pv-tech.eu/posts/common-pitfall-when-benchmarking-zfs-with-fio/</link><dc:creator>Paul Witek</dc:creator><description>&lt;div&gt;&lt;p&gt;Let's say you build yourself a new ZFS pool on top of some pretty fast NVMe drives
and want to benchmark it to see how fast it can go. You create a zvol and fire
up fio to sequentially read some data from it. But anticipating a large
number of IOPS, you don't want your CPU to bottleneck the performance, so
naturally you include &lt;em&gt;--numjobs=8&lt;/em&gt; to be sure you get the most out of your NAND
gates. Fio completes and TA-DAH: IOPS through the roof. But wait a minute... Your
pool, consisting of three NVMe drives each capable of 3.2 GB/s sequential read,
is being read at 24 GB/s, well above the 9.6 GB/s the hardware can deliver combined!
Obviously the disk vendor would not understate its product's performance, so
something must be wrong with the test.&lt;/p&gt;
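&lt;p&gt;For illustration, here is a minimal sketch of the kind of fio invocation described above, assuming the zvol is exposed at &lt;em&gt;/dev/zvol/tank/bench&lt;/em&gt; (the pool and volume names are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# All 8 jobs sequentially read the same region of the same zvol, so
# much of the data can end up served from ZFS's in-memory ARC instead
# of the NVMe drives, inflating the reported throughput.
fio --name=seqread \
    --filename=/dev/zvol/tank/bench \
    --rw=read --bs=1M \
    --ioengine=libaio --iodepth=32 \
    --numjobs=8 --size=10G \
    --group_reporting&lt;/code&gt;&lt;/pre&gt;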
&lt;p&gt;&lt;a href="https://pv-tech.eu/posts/common-pitfall-when-benchmarking-zfs-with-fio/"&gt;Read more…&lt;/a&gt; (2 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>fio</category><category>zfs</category><guid>https://pv-tech.eu/posts/common-pitfall-when-benchmarking-zfs-with-fio/</guid><pubDate>Wed, 26 Jan 2022 22:23:20 GMT</pubDate></item></channel></rss>