Lately I’ve been working on setting up ownCloud on one of my virtual machines. I have grand schemes of building a storage server in the future, but monetary concerns dictate that my “storage server” is just another VM on the network.

I ran into a couple of snags setting this up that weren’t immediately evident from googling my problems, so I thought I’d take the excuse to write a post about it. Most of this is from the Red Hat documentation and the Arch Wiki.


The target is the storage server in iSCSI terms. RHEL 7 (and Fedora) have a handy tool for managing the target configuration called targetcli. You’ll need to install targetcli and enable it using systemd:

# yum install targetcli
# systemctl enable target.service
# systemctl start target.service

The built-in help is actually pretty helpful (try help). The tool is organized like a filesystem hierarchy, where the different “directories” let you change various parts of the configuration. Tab completion works too.
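For orientation, the top of the configuration tree looks roughly like this (the exact annotations and counts vary by version):

```
/> ls
o- / ......................................... [...]
  o- backstores .............................. [...]
  | o- block ................................. [...]
  | o- fileio ................................ [...]
  | o- pscsi ................................. [...]
  | o- ramdisk ............................... [...]
  o- iscsi ................................... [...]
  o- loopback ................................ [...]
```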

Next you’ll need to create a backstore. In my case, I had an LVM volume which was added to the virtual machine as a disk: sda. If you’re using a different kind of backstore, then change the following command according to the Red Hat documentation linked above.

/> /backstores/block
/backstores/block> create name=sda dev=/dev/sda

Next, create the iSCSI target:

/backstores/block> /iscsi
/iscsi> create
Created target
Created TPG1

Note that the auto-generated target name incorporates freeze, the hostname of the storage server.

Next step is to create a portal:

/iscsi/iqn.20...d34db33f/tpg1> portals/ create

This will create a portal that binds to all IP addresses ( If you want to do something fancier, you’ll have to specify the IP address explicitly.

Create a LUN:

/iscsi/iqn.20...d34db33f/tpg1> luns/ create /backstores/block/sda

You can use ls to view a summary of the changes you’ve made.

Lastly, you need to create an ACL to allow your iSCSI client (the initiator) to talk to your server (the target). To do that, you need the iSCSI qualified name (IQN) of the initiator, which is auto-generated and stored in /etc/iscsi/initiatorname.iscsi. In this example, the target was, and the initiator was

/iscsi/iqn.20...d34db33f/tpg1> acls/
/iscsi/iqn.20...d34db33f/tpg1/acls> create
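Pulling the IQN out of that file is just a matter of stripping the InitiatorName= prefix. A quick sketch, shown against a sample line since the real IQN is auto-generated (the iqn.1994-05.com.redhat:example value here is made up):

```shell
# Extract the IQN from an initiatorname.iscsi-style file by stripping
# the "InitiatorName=" prefix.
get_iqn() { sed -n 's/^InitiatorName=//p' "$1"; }

# Demo against a sample file; on a real initiator you would point this
# at /etc/iscsi/initiatorname.iscsi instead.
printf 'InitiatorName=iqn.1994-05.com.redhat:example\n' > /tmp/initiatorname.iscsi
get_iqn /tmp/initiatorname.iscsi
```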


The target service doesn’t install a service description file for firewalld, so I had to create one myself. Save the following as /etc/firewalld/services/iscsi.xml.

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>iSCSI</short>
  <description>iSCSI Target</description>
  <port protocol="tcp" port="3260"/>
</service>

Next, add the service to the default zone. The first command changes the running configuration; the second makes it persist across reboots.

# firewall-cmd --add-service=iscsi
# firewall-cmd --permanent --add-service=iscsi


First, install the iscsi-initiator-utils package and enable the iscsid service.

# yum install iscsi-initiator-utils
# systemctl enable iscsid.service
# systemctl start iscsid.service

On the initiator, you should now be able to scan the target like so:

# iscsiadm -m discovery -t sendtargets -p freeze.home.lan

Note that iscsiadm will tell you there’s no route to the host if it has trouble reaching the storage server (for example, if the firewall is dropping traffic). This was one of the snags I ran into.
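A quick way to tell a firewall problem apart from a genuine routing problem is to probe the iSCSI port (3260) directly; bash’s built-in /dev/tcp pseudo-device is enough for that. A sketch, assuming freeze.home.lan is your target’s hostname:

```shell
# check_iscsi_port HOST PORT: report whether a TCP connection to the
# given host/port succeeds, using bash's /dev/tcp pseudo-device.
check_iscsi_port() {
  if timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "port $2 on $1 reachable"
  else
    echo "port $2 on $1 unreachable (check firewalld on the target)"
  fi
}

check_iscsi_port freeze.home.lan 3260
```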

You should now be able to login to the target you just created.

# iscsiadm -m node -L all

This will create a device node that you can use to create a new filesystem. To find out what the device node is, you can run:

# iscsiadm -m session -P 3

Among other things, this will print out something like the following:

            Attached SCSI devices:
            Host Number: 2  State: running
            scsi2 Channel 00 Id 0 Lun: 0
                    Attached scsi disk sda          State: running

Note that the created device was sda because my initiator is a virtual machine (so the virtual disk it uses as a hard drive is /dev/vda). If your initiator is not a VM, then the created device will probably be sdb. Also, the fact that the backing store on the target was sda is irrelevant.

You should be able to use fdisk to inspect the partition table on the device.

# fdisk /dev/sda

Create a filesystem if necessary.

# mkfs.ext4 /dev/sda


Next, we’re going to setup the filesystem to mount at boot and move ownCloud’s storage onto the iSCSI device.

First, stop ownCloud, as we don’t want it writing to its index files while we’re moving things over.

# systemctl stop httpd.service

I opted to mount the filesystem by UUID so that it will still mount if the device is renamed for whatever reason.

Look in /dev/disk/by-uuid and see what points to sda. Take note of the UUID.

# ls -l /dev/disk/by-uuid
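If you have several disks, scanning the symlinks by hand gets tedious; the loop below resolves them for you. It’s demonstrated against a throwaway directory, since /dev/disk/by-uuid only exists on a real system (the deadbeef-1234 UUID is made up):

```shell
# find_uuid_for DIR DEVNAME: print the name of each symlink in DIR whose
# target's basename matches DEVNAME (mirrors the /dev/disk/by-uuid layout).
find_uuid_for() {
  for link in "$1"/*; do
    target=$(readlink "$link")
    if [ "${target##*/}" = "$2" ]; then
      echo "${link##*/}"
    fi
  done
}

# Demo with a fake by-uuid directory; on a real system use /dev/disk/by-uuid.
mkdir -p /tmp/by-uuid-demo
ln -sf ../../sda /tmp/by-uuid-demo/deadbeef-1234
find_uuid_for /tmp/by-uuid-demo sda
```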

Now add the following to /etc/fstab:

UUID=insert_uuid_here /var/lib/owncloud ext4 defaults,_netdev       0 0

The _netdev option is important as it tells the init system to wait until the network is available before mounting the filesystem. The server will hang on boot without it.
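If you’re scripting this step, the fstab line is easy to generate from the UUID. The build_fstab_line helper below is my own invention; on a real system the UUID would come from blkid -s UUID -o value /dev/sda rather than the made-up value shown here:

```shell
# build_fstab_line UUID: emit the /etc/fstab entry for mounting the
# iSCSI-backed filesystem at /var/lib/owncloud with the _netdev option.
build_fstab_line() {
  printf 'UUID=%s /var/lib/owncloud ext4 defaults,_netdev 0 0\n' "$1"
}

# Demo with a placeholder UUID; append the real line to /etc/fstab yourself.
build_fstab_line "deadbeef-0000-1111-2222-333344445555"
```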

I temporarily mounted the device at /mnt/tmp in order to copy the ownCloud data over to it. Note the trailing /. on the source path, which makes cp pick up dotfiles (such as ownCloud’s .htaccess) that a bare * glob would miss.

# mkdir /mnt/tmp
# mount /dev/sda /mnt/tmp
# cp -a /var/lib/owncloud/. /mnt/tmp

Now we can unmount sda and remount it on /var/lib/owncloud.

# umount /mnt/tmp
# mount /var/lib/owncloud

Finally, we can restart owncloud.

# systemctl start httpd.service

That’s it! You should be able to access ownCloud again and have seamless remote storage. I should probably set up authentication for the iSCSI target, but as it’s running on a network that’s private to two virtual machines, I haven’t bothered. That’s on the TODO list.