Here are my results for an SSD, and for two encrypted, mirrored HDDs carrying both a normal dataset and an lz4-compressed dataset.
SSD read : 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.529213 s, 2.0 GB/s
SSD write : 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.534627 s, 2.0 GB/s
ZFS read : 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.38249 s, 168 MB/s
ZFS write : 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.60697 s, 125 MB/s
ZFS read (lz4) : 1110415360 bytes (1.1 GB, 1.0 GiB) copied, 2.23535 s, 497 MB/s
ZFS write (lz4) : 1110415360 bytes (1.1 GB, 1.0 GiB) copied, 2.75426 s, 403 MB/s
The first two lines test my SSD, which holds my rootfs (the operating system and all programs). It sits in the new M.2 connector, which promises up to 32 Gbit/s (roughly 4 GB/s), but I'm getting 2 GB/s, about half of that. I'm not sure what the bottleneck is without further investigation.
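If I do dig into it, the first thing to rule out would be the drive negotiating a slower PCIe link than it supports. A minimal sketch, assuming an NVMe drive that shows up as nvme0 (the device name is a guess on my part):
$ # Negotiated vs. maximum PCIe link speed and width (sysfs paths assume nvme0)
$ cat /sys/class/nvme/nvme0/device/current_link_speed
$ cat /sys/class/nvme/nvme0/device/max_link_speed
$ cat /sys/class/nvme/nvme0/device/current_link_width
If the current link speed or width is below the maximum, the slot rather than the drive is the bottleneck.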
The next two lines are two 4 TB hard drives (7200 rpm) in ZFS "mirror" mode. You might wonder why they're so slow. The reason is an encryption layer sitting between the HDDs and ZFS, which tops out at about 230 MiB/s. Even at this speed I don't notice much impact in practice; ZFS does a good job of caching files to make up for it.
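That 230 MiB/s ceiling can be sanity-checked in isolation. If the encryption layer is dm-crypt/LUKS (I'm using it here as an example), cryptsetup can benchmark the ciphers in memory, without touching the disks:
$ # In-memory cipher benchmark, no disk I/O involved
$ cryptsetup benchmark
Comparing those numbers to the observed ceiling shows whether the CPU's cipher throughput is the actual limit.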
The last two lines write and read text files (from the uncompressed kernel tarball described below) on an LZ4-compressed dataset. You can see that even though the HDDs are capped at the 230 MiB/s encryption ceiling, compressing the data gives a significant boost: fewer bytes have to cross the slow encrypted path, so effective throughput scales roughly with the compression ratio.
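For reference, compression in ZFS is a per-dataset property (the pool and dataset names below are made up):
$ # Enable lz4 for newly written blocks on a dataset
$ zfs set compression=lz4 tank/text
$ # See how well the stored data actually compressed
$ zfs get compressratio tank/text
The reported compressratio is a good first guess at how much speedup to expect through a bandwidth-limited path like this one.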
The first four results were measured with:
$ dd if=/dev/zero of=/path/to/partition bs=1M count=1024
$ dd if=/path/to/partition of=/dev/null bs=1M
(basically just writing 1 GiB worth of zeros to the drive and reading it back)
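One caveat with dd numbers like these: without an explicit flush, part of the write can still be sitting in the page cache when dd reports its speed. GNU dd can account for that:
$ # conv=fdatasync flushes the output file to disk before dd prints its timing
$ dd if=/dev/zero of=/path/to/partition bs=1M count=1024 conv=fdatasync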
The last two were tested by copying an uncompressed tarball of the Linux kernel instead; a stream of zeros is useless for the lz4 test, since it compresses down to practically nothing.
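I won't reproduce the exact commands for the tarball test, but the shape of it was something like this (the file names are placeholders):
$ # Write test: stream the tarball onto the lz4 dataset
$ dd if=linux.tar of=/path/to/lz4/linux.tar bs=1M
$ # Read test: stream it back out
$ dd if=/path/to/lz4/linux.tar of=/dev/null bs=1M
To avoid just measuring the ARC on the read side, export and re-import the pool (or reboot) between the two steps.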