The ARC: FreeBSD's Secret Weapon (and Common Misconfiguration)
The single most impactful tunable in any ZFS deployment is the ARC -- the Adaptive Replacement Cache. The ARC is an in-memory read cache that sits between your applications and the underlying storage. On a properly tuned FreeBSD server, the ARC can absorb the vast majority of read I/O, making disk speed almost irrelevant for hot data. Get this wrong, and your expensive NVMe pool performs like spinning rust.
Start by checking your current ARC state. On FreeBSD, every ZFS tunable lives under the vfs.zfs sysctl namespace:
# Current ARC size and limits
sysctl vfs.zfs.arc.max
sysctl vfs.zfs.arc.min
sysctl kstat.zfs.misc.arcstats.size
# ARC hit ratio -- the number that matters most
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
The hit ratio tells you how often the ARC satisfies a read without touching disk. On a well-tuned production server, you want this above 95%. Below 90%, you are leaving significant performance on the table.
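The hit ratio is not exported as a single sysctl; it has to be derived from the two counters above. A minimal sketch follows -- the counter values are hypothetical stand-ins, and on a live system you would substitute the `sysctl -n` output as shown in the comments:

```shell
# On a live FreeBSD system, read the real counters:
#   hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
#   misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
# Hypothetical sample values for illustration:
hits=19500000
misses=500000

# Hit ratio = hits / (hits + misses), as a percentage
ratio=$(awk -v h="$hits" -v m="$misses" 'BEGIN { printf "%.1f", 100 * h / (h + m) }')
echo "ARC hit ratio: ${ratio}%"
```

Note that these counters are cumulative since boot, so the result is a lifetime average; to see current behavior, sample twice and diff, or use a tool like sysutils/zfs-stats from ports.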
vfs.zfs.arc.max -- When to Set It
By default, ZFS on FreeBSD will claim up to 5/8 of physical RAM for the ARC. On a dedicated storage server or NAS, this is often too conservative -- you want the ARC to use as much RAM as possible. On a server running memory-hungry applications alongside ZFS (databases, jails, bhyve VMs), you need to cap it so the ARC does not starve other processes.
The rule is simple: if your server's primary job is serving data from ZFS, leave vfs.zfs.arc.max alone or raise it. If you are running applications that need guaranteed memory, set it explicitly. On a 64GB server running MySQL in a jail, you might set:
# /boot/loader.conf -- must be set at boot
vfs.zfs.arc.max="34359738368" # 32GB -- leave 32GB for jails and apps
The common mistake is the opposite: administrators see ZFS "using all the RAM" in top, panic, and set arc.max to something absurdly low like 1GB. The ARC is designed to release memory under pressure. Crippling it preemptively defeats the entire purpose of ZFS.
vfs.zfs.arc.min -- Setting a Floor
While arc.max sets a ceiling, vfs.zfs.arc.min sets a floor. When the system is under memory pressure, the ARC will shrink -- but it will not shrink below this value. On servers where consistent read performance matters more than anything else, set a meaningful minimum:
# /boot/loader.conf
vfs.zfs.arc.min="4294967296" # 4GB -- ARC never drops below this
Without a floor, aggressive memory consumers can push the ARC down to almost nothing, causing a cascade of cache misses that hammers your disks right when the system is already under stress.
The Documentation Problem
ZFS tuning on FreeBSD is made harder than it needs to be by a documentation landscape that is fragmented, stale, and frequently wrong about FreeBSD-specific details.
The FreeBSD zfs(4) man page was imported from the OpenZFS project and still contains Linux-specific parameter names that do not exist on FreeBSD. The OpenZFS documentation at openzfs.github.io has improved its FreeBSD coverage, but FreeBSD's own man pages lag behind. The FreeBSD Wiki's ZFSTuningGuide is explicitly marked as stale and has not been meaningfully updated in years.
The practical result: you will find tuning guides that tell you to set parameters like zfs_arc_free_target using the Linux module parameter naming convention. On FreeBSD, this fails silently or throws an error. The correct FreeBSD sysctl name is vfs.zfs.arc.free_target. This is not a minor namespace difference -- it affects every single ZFS tunable.
The fix is straightforward. Before adding any tunable to /boot/loader.conf or /etc/sysctl.conf, verify it exists on your system:
# Verify a tunable exists and read its description
sysctl -d vfs.zfs.arc.free_target
# This will fail on FreeBSD -- Linux naming convention
sysctl zfs_arc_free_target
# sysctl: unknown oid 'zfs_arc_free_target'
# This works -- FreeBSD sysctl namespace
sysctl vfs.zfs.arc.free_target
# vfs.zfs.arc.free_target: 1048576
Always test with sysctl -d first. If the tunable does not exist, you have either a typo, a Linux-specific parameter, or a tunable that was renamed or removed in your OpenZFS version. Do not blindly paste sysctl.conf blocks from forum posts without verifying each line.
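That verification step can be wrapped in a small helper. This is a sketch -- `verify_and_set` is a name invented here, not a stock FreeBSD command -- that refuses to apply a value unless the OID actually exists:

```shell
# Apply a runtime sysctl only if the OID exists on this system.
# (verify_and_set is a hypothetical helper, not a FreeBSD utility.)
verify_and_set() {
    oid=$1
    value=$2
    if sysctl -d "$oid" >/dev/null 2>&1; then
        sysctl "${oid}=${value}"
    else
        echo "skipping ${oid}: not present on this system" >&2
        return 1
    fi
}

# Usage on a live system:
#   verify_and_set vfs.zfs.txg.timeout 5        # applied
#   verify_and_set zfs_arc_free_target 1048576  # skipped: Linux name
```

Running pasted tuning blocks through a gate like this turns a silent misconfiguration into a visible warning.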
Recordsize: Match Your Workload
The recordsize property controls the maximum block size ZFS uses when writing data to a dataset. The default is 128K, which is a reasonable general-purpose choice. But for specific workloads, matching the record size to your application's I/O pattern can dramatically improve performance.
Database Workloads
MySQL's InnoDB engine uses 16K pages. PostgreSQL uses 8K pages. When ZFS stores a 16K database page inside a 128K record, it reads and writes 8x more data than necessary for every page operation. Set the recordsize to match:
# MySQL/InnoDB -- 16K pages
zfs set recordsize=16K zroot/mysql
# PostgreSQL -- 8K pages
zfs set recordsize=8K zroot/postgres
This must be set on the dataset before writing data. Changing recordsize only affects newly written blocks -- existing data retains its original record size. For production database migrations, create a new dataset with the correct recordsize and migrate the data.
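For a new deployment, set the property at creation time so every block is written at the right size. A sketch of both paths -- the dataset names here are examples, not a prescribed layout:

```shell
# New deployment: bake the recordsize in at creation time
zfs create -o recordsize=16K zroot/mysql

# Migrating existing data: create a new dataset, copy at the file
# level, then swap mountpoints. (zroot/mysql-new is an example name.)
zfs create -o recordsize=16K zroot/mysql-new
# Stop the database first so the copy is consistent, then:
cp -a /zroot/mysql/. /zroot/mysql-new/
# Note: zfs send | zfs recv preserves the OLD block sizes in the
# stream, so a file-level copy (cp, rsync) is what rewrites the
# blocks at the new recordsize.
```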
Large Sequential Files
For datasets storing large files -- media, VM images, backups, log archives -- increase the recordsize to 1M. Larger records mean fewer metadata operations and better sequential throughput:
# Media storage, backups, VM images
zfs set recordsize=1M zroot/media
zfs set recordsize=1M zroot/backups
The tradeoff is write amplification: modifying a small region inside a 1M record forces ZFS to read, modify, and rewrite the entire record. For large sequential workloads such partial writes are rare enough to be negligible. The throughput improvement is not.
Compression: Always On
There is almost no reason to run a ZFS dataset without compression in 2026. The lz4 algorithm is so fast that it often improves overall performance -- compressing data means less I/O to disk, and the CPU cost of LZ4 compression is lower than the time saved by writing fewer bytes.
# Enable LZ4 compression on the entire pool
zfs set compression=lz4 zroot
# Verify compression ratio
zfs get compressratio zroot
On a typical server workload with logs, configuration files, database files, and application data, LZ4 compression ratios of 1.5x to 3x are common. That is 33% to 66% less disk I/O for effectively zero CPU overhead.
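The I/O savings follow directly from the ratio: data compressed at ratio r occupies 1/r of its logical size, so the reduction is 1 - 1/r. A quick check of the two figures above:

```shell
# I/O reduction implied by a compressratio of r is (1 - 1/r)
for r in 1.5 3; do
    awk -v r="$r" \
        'BEGIN { printf "ratio %.1fx -> %.0f%% less I/O\n", r, 100 * (1 - 1/r) }'
done
```

The same arithmetic works on the live `compressratio` property of any dataset.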
For cold storage and archival datasets where CPU time is cheap and space savings matter more than latency, zstd provides significantly better compression ratios than LZ4:
# Cold storage / archives -- better ratio, higher CPU
zfs set compression=zstd zroot/archives
Do not use gzip on production datasets. zstd matches or beats its compression ratio at a fraction of the CPU cost, which makes gzip a legacy option at this point.
L2ARC and SLOG: When They Help (and When They Don't)
L2ARC and SLOG are the two most over-recommended ZFS features. Both involve adding SSDs to your pool for specific caching purposes, and both are useless -- or actively harmful -- when applied to the wrong workload.
L2ARC (SSD Read Cache)
The L2ARC is a second-level read cache that sits on a fast SSD. When the ARC (in RAM) cannot hold your entire working set, the L2ARC catches reads that would otherwise hit spinning disks. This sounds great in theory, but there is a critical caveat: the L2ARC consumes RAM to track its own index. Every block cached in L2ARC requires a small amount of ARC memory to store its metadata pointer.
On a system with 64GB of RAM, a 500GB L2ARC device can consume a substantial slice of ARC just for its index -- a few hundred megabytes at the default 128K recordsize, and multiple gigabytes on small-record datasets such as a 16K-recordsize database. That is memory that would have been better spent as actual ARC cache. If your working set fits in RAM (or close to it), an L2ARC makes your ARC smaller and your performance worse.
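The index cost can be estimated with back-of-the-envelope arithmetic: each block resident in L2ARC keeps a header in ARC, so the overhead scales with the block count. The 88 bytes per header used below is an assumed figure (current OpenZFS headers are on that order), so treat the results as rough estimates:

```shell
# Estimate ARC memory consumed by L2ARC headers for a 500GB device.
# header_bytes is an assumption (~88 B per block in current OpenZFS).
l2arc_bytes=$((500 * 1024 * 1024 * 1024))
header_bytes=88

for recordsize in 131072 16384; do          # 128K vs 16K records
    awk -v l2="$l2arc_bytes" -v rs="$recordsize" -v h="$header_bytes" \
        'BEGIN { printf "recordsize %6d: ~%.1f MB of ARC for headers\n",
                 rs, (l2 / rs) * h / (1024 * 1024) }'
done
```

The same device costs roughly eight times more ARC when it caches 16K records than 128K records, which is why small-recordsize datasets feel the tradeoff hardest.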
Only add an L2ARC when you have proven, through monitoring kstat.zfs.misc.arcstats, that your ARC hit ratio is low and your working set genuinely exceeds available RAM. If your ARC hit ratio is above 90%, an L2ARC will not help.
SLOG (ZFS Intent Log on SSD)
The SLOG is a dedicated device for the ZFS Intent Log (ZIL). The ZIL records synchronous write transactions -- writes where the application demands confirmation that data has reached stable storage before continuing. NFS with sync=standard, databases with fsync(), and any application using O_SYNC generate synchronous writes.
A SLOG accelerates synchronous writes by allowing the ZIL to commit to a fast SSD instead of the main pool. For NFS servers and database workloads with heavy sync write traffic, a quality SLOG device (with power-loss protection) can transform performance.
But here is what the blogs do not tell you: most workloads are asynchronous. Standard file serving, web applications, and general-purpose storage rarely issue synchronous writes. Adding a SLOG to an async workload does nothing. The ZIL is only written for sync operations, so if your applications are not issuing them, the SLOG device sits idle.
Check your sync write volume before investing in SLOG hardware:
# Check ZIL commit statistics
sysctl kstat.zfs.misc.zil
If zil_commit_count is low or zero, you do not need a SLOG.
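To measure that directly, sample the commit counter over an interval. A sketch -- `zil_commit_delta` is a name made up here, and it assumes the `kstat.zfs.misc.zil.zil_commit_count` counter exported by OpenZFS on FreeBSD 13 and later:

```shell
# Count ZIL commits over an interval (default 10 seconds).
# zil_commit_delta is a hypothetical helper, not a stock tool.
zil_commit_delta() {
    interval=${1:-10}
    a=$(sysctl -n kstat.zfs.misc.zil.zil_commit_count)
    sleep "$interval"
    b=$(sysctl -n kstat.zfs.misc.zil.zil_commit_count)
    echo "ZIL commits in ${interval}s: $((b - a))"
}

# Usage: run during a busy period, e.g.  zil_commit_delta 10
# A near-zero delta means a SLOG device would sit idle.
```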
Practical sysctl.conf for Production
Below are commented /boot/loader.conf and /etc/sysctl.conf blocks with recommended production tunables for a FreeBSD ZFS server. Every value here has been verified with sysctl -d on FreeBSD 13.x and 14.x. Adjust the ARC values for your hardware and workload:
# /boot/loader.conf -- ZFS production tunables (boot-time only)
# ARC maximum -- set based on total RAM and competing workloads
# Example: 32GB on a 64GB server running jails
vfs.zfs.arc.max="34359738368"
# ARC minimum -- prevent the ARC from being starved under pressure
vfs.zfs.arc.min="4294967296"
# /etc/sysctl.conf -- ZFS production tunables (runtime)
# TXG timeout -- seconds between transaction group commits
# Lower values reduce data-at-risk window, higher values improve throughput
# Default is 5; reduce to 3 for databases, leave at 5 for general use
vfs.zfs.txg.timeout=5
# Async read queue depth -- increase for heavy random read workloads
# Default is 3; raise on NVMe pools with high IOPS capacity
vfs.zfs.vdev.async_read_max_active=3
# Sync write queue depth -- increase for sync-heavy workloads (NFS, databases)
# Default is 10; raise if you have a SLOG device
vfs.zfs.vdev.sync_write_max_active=10
A few critical reminders for production deployments:
- Always verify tunables exist on your FreeBSD version with sysctl -d vfs.zfs.X
- ARC tunables (arc.max, arc.min) must go in /boot/loader.conf, not sysctl.conf
- Runtime tunables in /etc/sysctl.conf can be tested live with sysctl vfs.zfs.X=Y
- Monitor ARC hit ratio after changes -- if it drops, revert
- Never copy sysctl blocks from Linux guides without converting to the vfs.zfs.* namespace
Need help designing your ZFS storage architecture? Schedule a consultation to discuss your storage needs.