Disclaimer: I struggled a lot with the various packages and implementations of iSCSI that exist in the BSD and Unix world, so don't take anything in this post as proper procedure.
This post serves as a description of my explorations of using iSCSI to serve ZFS volumes to my Proxmox cluster. This was mostly done to evaluate how I liked the features and limitations offered by ZFS over iSCSI from within Proxmox as I had never worked with that technology stack.
Overall, my goal was to create a Debian/Ubuntu-based iSCSI target server, so that the same software could also be installed on my Proxmox hosts.
- Create iSCSI target on Ubuntu box
- Create a ZFS over iSCSI storage in Proxmox pointing at the target box
- Add a virtual hard drive backed by the ZFS over iSCSI storage to a test VM
- Test basic performance
iSCSI implementations & quirks
For iSCSI there seem to be quite a few competing implementations, each with varying levels of support in general and in Proxmox specifically.
For the general case, I could find various comparisons covering performance, features and community.
- Comparison from SCST. Has a general comparison and compares the SCST implementation against STGT and LIO. Seems strongly biased against STGT and LIO: it marks STGT as obsolete, dislikes LIO, and laments that SCST doesn't have the support that LIO has
- Comparison from LIO. Further reinforces that IET and STGT are obsolete. Seems to indicate that SCST simply lacks support
- A Comparative Analysis Of Open Source Storage Area Networks With ESXi 5.1: a thesis from a student at Purdue University that looks at IET, SCST, LIO and istgt within the context of ESXi. Does not seem to favor any particular implementation, apart from noting stronger performance from IET and STGT, even though they are obsolete
Proxmox itself ships with a couple of implementations:
- Comstar: for Solaris-based storage appliances, I guess
- IET: Seems to be mostly obsolete, targeting Debian
- istgt: targeting FreeBSD or FreeNAS
- LIO: the mainline in-kernel Linux implementation
- Community supported TrueNAS plugin
For Proxmox, the back-ends are actually mostly thin wrappers around the various CLIs, operated over SSH. This makes the implementations fairly fragile, in the sense that they depend on specific file locations and on each operating system's modus operandi (for istgt, see the corresponding LunCmd source file in the Proxmox codebase).
For example, to get the free space on the storage back-end, it could use the command:

```
/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/<target_ip>_id_rsa \
    root@<target_ip> zfs get -o value -Hp available,used tank/iscsitarget
```
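To illustrate what the wrapper has to deal with: `zfs get` with `-H` (no headers) and `-p` (exact numeric values) prints one raw byte count per line. A small sketch, with made-up numbers, of splitting that output the way a wrapper would:

```shell
# Simulated output of:
#   zfs get -o value -Hp available,used tank/iscsitarget
# -H: no headers, -p: exact byte values; one value per line.
zfs_output='16106127360
36864'

# Take the first line as "available" and the second as "used":
available=$(printf '%s\n' "$zfs_output" | sed -n '1p')
used=$(printf '%s\n' "$zfs_output" | sed -n '2p')

echo "available=${available} used=${used}"
# prints: available=16106127360 used=36864
```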
For ZFS with SCST there is a patch available that adds support in Proxmox.
To experiment with iSCSI targets without messing up my Proxmox host, I spun up a Debian host and decided to mess around with the istgt software (even though it is obsolete, it seemed to have the most documentation available).
As a high-level overview, these are the steps I took:
- Create a zpool with filesystem dataset
- Install istgt & Configure istgt targets
- Link various files for Proxmox initiator
- Add iSCSI storage in Proxmox
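For reference, the last step boils down to an entry in `/etc/pve/storage.cfg` roughly along these lines (the storage ID, portal address and target name here are made up for illustration):

```
zfs: istgt-tank
        blocksize 4k
        iscsiprovider istgt
        pool tank/iscsitarget
        portal 192.168.100.10
        target istgt1.test.portegi.es:disk1
        content images
        sparse 1
```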
Creating the ZFS pool
The Debian host I spun up had one extra disk attached. I started out with creating a tank without any special vdevs.
```
apt install zfsutils-linux
zpool create tank /dev/sdb
```
On this pool I created a filesystem dataset. Importantly, for ZFS over iSCSI, this should not be a volume (zvol): Proxmox creates a zvol per VM disk inside the dataset itself. From the man page:
```
zfs create [-p] [-o property=value] ... filesystem
zfs create [-ps] [-b blocksize] [-o property=value] ... -V size volume
```
The top option is what I wanted:
```
zfs create tank/iscsitarget
```
```
root@istgt1:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
tank               214K  15.0G    24K  /tank
tank/iscsitarget    36K  15.0G    24K  /tank/iscsitarget
```
Install istgt & Configure istgt targets
Installing istgt is pretty straightforward (but configuration is not):
```
apt install istgt
```
Now for configuration.
On Debian, the default configuration dir is /etc/istgt/.
In this folder, there is a reference to the example documentation at /usr/share/doc/istgt/examples/, which is what I based most of my configuration on.
Additionally, I found this helpful thread on the freebsd forums.
My basic istgt configuration (with no regard for security at all!):
```
[Global]
  Comment "Global section"
  # node name (not include optional part)
  NodeBase "istgt1.test.portegi.es"
  # files
  PidFile /var/run/istgt.pid
  AuthFile /etc/istgt/auth.conf
  # directories
  # for removable media (virtual DVD/virtual Tape)
  MediaDirectory /var/istgt
  # syslog facility
  LogFacility "local7"
  # socket I/O timeout sec. (polling is infinity)
  Timeout 30
  # NOPIN sending interval sec.
  NopInInterval 20
  # authentication information for discovery session
  DiscoveryAuthMethod Auto
  MaxSessions 16
  MaxConnections 4
  MaxR2T 32
  # iSCSI initial parameters negotiate with initiators
  # NOTE: incorrect values might crash
  MaxOutstandingR2T 16
  DefaultTime2Wait 2
  DefaultTime2Retain 60
  FirstBurstLength 262144
  MaxBurstLength 1048576
  MaxRecvDataSegmentLength 262144

[UnitControl]
  Comment "Internal Logical Unit Controller"
  #AuthMethod Auto
  AuthMethod CHAP Mutual
  AuthGroup AuthGroup10000
  # this portal is only used as controller (by istgtcontrol)
  # if it's not necessary, no portal is valid
  #Portal UC1 [::1]:3261
  Portal UC1 127.0.0.1:3261
  # accept IP netmask
  #Netmask [::1]
  Netmask 127.0.0.1
  # wildcard address you may need if use DHCP
  # DO NOT USE WITH OTHER PORTALS

[PortalGroup1]
  Comment "ANY IP"
  Portal DA1 0.0.0.0:3260

[InitiatorGroup1]
  Comment "Initiator Group1"
  # No security here at all, all IPs on my subnet can initiate the target
  InitiatorName "ALL"
  Netmask 192.168.100.0/24

[LogicalUnit1]
  TargetName disk1
  Mapping PortalGroup1 InitiatorGroup1
  AuthGroup AuthGroup1
  UnitType Disk
  # Unsure if specifying QueueDepth is useful
  QueueDepth 32
  # The basic zvol at its mount place
  LUN0 Storage /dev/zvol/tank/iscsitarget 15GB
```
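The [UnitControl] section above references AuthGroup10000, whose credentials live in the AuthFile (/etc/istgt/auth.conf). Based on the shipped example documentation, a matching auth.conf would look roughly like this (the usernames and secrets are placeholders, not working values):

```
[AuthGroup10000]
  Comment "Unit controller credentials (used by istgtcontrol)"
  # Auth "user" "secret" [ "mutual user" "mutual secret" ]
  Auth "ctluser" "ctlsecret" "mutualuser" "mutualsecret"
```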
Keep in mind that the configuration above has absolutely zero regard for security, performance or best practices. I only followed the basic comments found in the example documentation and left out all security aspects.
After configuration, starting istgt by hand is very simple: the daemon just takes its config file (/usr/sbin/istgt -c /etc/istgt/istgt.conf). One wrinkle for the Proxmox integration, though: the istgt plugin hardcodes the configuration and daemon paths of FreeBSD, FreeNAS and NAS4Free, which is why the earlier step of linking files is needed on Debian:

```perl
my @CONFIG_FILES = (
    '/usr/local/etc/istgt/istgt.conf',  # FreeBSD, FreeNAS
    '/var/etc/iscsi/istgt.conf'         # NAS4Free
);

my @DAEMONS = (
    '/usr/local/etc/rc.d/istgt',        # FreeBSD, FreeNAS
    '/var/etc/rc.d/istgt'               # NAS4Free
);
```
Creating a Systemd service file
Of course I want the istgt server to run on boot so for that I used systemd. I added a systemd service based on an OpenSUSE Patch.
```
[Unit]
Description=istgt iSCSI Daemon
After=syslog.target network.target

[Service]
Type=forking
PIDFile=/run/istgt.pid
ExecStart=/usr/sbin/istgt -c /etc/istgt/istgt.conf
Restart=on-abort
ExecReload=/usr/bin/kill -HUP $MAINPID
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target
```
For the most part, this was not a particularly pleasant experiment. The many different iSCSI implementations, each with their own problems, made it difficult to get started. The thin wrappers in Proxmox didn't make life much easier.
Lastly, the features and general benefits seemed rather limited compared to other methods of backing VMs with block files (e.g. .raw images served over CIFS).
I haven't even tried benchmarking my half-broken setup against my Ceph cluster or against serving data via CIFS.
Maybe I'll return to this at some point, if I either write a better Proxmox plugin myself or run into performance issues with CIFS (highly unlikely for my bulk storage needs).
- Discussion on FS via iSCSI using LIO
- Mailing list from SCST complaining about LIO