iSCSI CTL Configuration


After having explored Ceph on Proxmox, my next experiment was going to be ZFS over iSCSI, to allow for some easier backup systems to be put into place. For this, however, I'd first have to set up some sort of iSCSI server (target) so that my Proxmox box could be its client (initiator).

The issue I ran into when trying to set up a simple iSCSI target was twofold: firstly, I had never worked with disks/ZFS under BSD (which XigmaNAS is based on), and secondly, the more modern iSCSI provider is the so-called "CTL" system, for which I could find very little documentation online.

As such, this blog post serves as documentation of what I did to set up my experiment.

Baby steps: ZFS on BSD

The core issue for me when trying to build a zpool on a bunch of disks/partitions was that BSD works rather differently from my familiar Ubuntu. By default, my disks appeared in the /dev/ folder with various daX labels. However, this runs into the classic ZFS problem of not wanting these "random" labels as the identifiers for your ZFS pool, as the labels might change if you replace or reorder drives. Under Ubuntu I would simply have looked at the UUIDs under /dev/disk/by-uuid/ or something like that. After looking around a little, it seems the preferred solution on BSD is to use GPT labels (which show up under /dev/gpt/). As such, I used gpart to create a GPT partitioning scheme on the drives, then simply added a labeled freebsd-zfs partition to each drive.
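The steps above can be sketched roughly as follows. The device name (ada0) and the label are stand-ins for illustration, not the exact values from my box:

```shell
# Destructive: wipe any existing partition table on the disk
gpart destroy -F ada0
# Create a fresh GPT partitioning scheme
gpart create -s gpt ada0
# Add a single freebsd-zfs partition carrying a GPT label
gpart add -t freebsd-zfs -l 10567ab9-4tb ada0
# Verify: show the partitions with their GPT labels
gpart show -l ada0
```

The label then becomes available as a stable device node under /dev/gpt/.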

These labels can then be used natively in the zfs commands (via the gpt/ prefix): zpool create tank-backup mirror gpt/10567ab9-4tb gpt/95e6fdc3-4tb. A bunch of other commands I used to verify my actions:

geom part list (list all partitions and their attributes)
geom disk list (list all disks)
gpart modify -i 2 -l 96edc1d5-4tb ada0 (add the label 96edc1d5-4tb to partition 2 on drive ada0)


iSCSI on XigmaNAS: CTL

Again, the core issue with setting up iSCSI on XigmaNAS was the lack of documentation on how the CTL flavor differs from the older ISTGT flavor. As such, take what I demonstrate in the proper context: it works, but I have no clue about potential issues (aside from the obvious lack of security). Between each of my steps I restarted the ctld daemon on the box to spot any upcoming issues, using service ctld restart (I really do prefer systemd...).
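The restart-and-check loop looks something like the following; ctladm is FreeBSD's tool for inspecting the running CTL state, though exactly which subcommands your XigmaNAS build ships is something to verify yourself:

```shell
# Restart the CTL daemon after each configuration change
service ctld restart
# Inspect the running CTL state: configured LUNs and ports
ctladm devlist
ctladm portlist
# Configuration parse errors typically land in the system log
tail /var/log/messages
```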

1. Auth/Portal Groups

Step one was building an authentication group. For this I first added an initiator portal, which holds my client IPs; for me, all initiators would be in a single subnet range. The next step was creating the auth group itself, which had the previously created initiator portal selected and its authentication type set to "None". I will refer to this group as ag-test.

The next step was building a portal group, pg-test. For this I used ag-test as its Discovery Auth Group. The discovery filter was set to its most lenient setting, returning all targets for the group. The portal group was set to listen on all interfaces and networks (i.e. not restricted to my local network).
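XigmaNAS generates /etc/ctl.conf from these UI settings; written by hand, the two groups would correspond to stanzas roughly like the following. The initiator subnet is a placeholder assumption, not the range from my network:

```
auth-group ag-test {
    auth-type none                  # no CHAP authentication
    initiator-portal   # placeholder: restrict to your client subnet
}

portal-group pg-test {
    discovery-auth-group ag-test    # use ag-test for discovery authentication
    discovery-filter none           # most lenient: return all targets
    listen                  # listen on all interfaces
}
```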

2. The LUN: ZFS Pool

iSCSI allows for various layers of abstraction and isolation, and LUNs are one such layer. They offer a way to logically export disks, or in my case a ZFS pool. I made sure my ZFS pool was mounted, so that the LUN could use the file path. The LUN was set up as a block device with a 4KB block size. The device type was set to disk, with the path being the mount path of my ZFS pool: /tank-test. No other settings were touched.
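In ctl.conf terms, a LUN definition matching those UI settings would look roughly like this (normally nested inside a target block); the backend choice mirrors the UI selection described above, and the device type "disk" is what the UI option sets:

```
lun 0 {
    backend block        # block backend, as selected in the UI
    path /tank-test      # mount path of the ZFS pool
    blocksize 4096       # 4KB block size
}
```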

3. The Target Combo

An important point for naming the target is the iSCSI naming convention (IQN), which by default includes a date and domain (e.g. iqn.2004-04.com.example:storage-name) as well as some sort of name for the underlying storage. I personally didn't much like the date aspect, and as such gave my target a simpler name. The target was then tied to the created portal group and authentication group, and the LUN was attached.
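Putting the pieces together, the full target stanza in /etc/ctl.conf would be shaped roughly like this. The IQN is a made-up placeholder (target names are site-specific); the group names match the ones created above:

```
target iqn.2005-01.org.example:tank-test {
    auth-group ag-test       # who may connect, and how they authenticate
    portal-group pg-test     # where and how the target is discovered
    lun 0 {
        backend block
        path /tank-test      # mount path of the ZFS pool
        blocksize 4096
    }
}
```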