I have installed ZFS (0.6.5) on CentOS 7 and created a zpool. Everything works fine apart from the fact that my datasets disappear on reboot.
I have been trying to debug this with the help of various online resources and blogs, but haven't been able to fix it.
After a reboot, zfs list reports "no datasets available" and zpool list reports "no pools available".
After a lot of online research, I could make it work by manually importing the cache file with zpool import -c cachefile, but I still had to run zpool set cachefile=/etc/zfs/zpool.cache Pool before the reboot so that the pool could be imported again afterwards.
This is what systemctl status zfs-import-cache looks like:
zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; static)
   Active: inactive (dead)
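As far as I understand, zfs-import-cache.service is supposed to be pulled in via zfs.target at boot, so these are the additional checks I can run if the output would help (nothing exotic, just standard systemctl queries):

systemctl status zfs.target
systemctl is-enabled zfs.target
systemctl list-dependencies zfs.target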
cat /etc/sysconfig/zfs
# ZoL userland configuration.

# Run `zfs mount -a` during system start?
ZFS_MOUNT='yes'

# Run `zfs unmount -a` during system stop?
ZFS_UNMOUNT='yes'

# Run `zfs share -a` during system start?
# nb: The shareiscsi, sharenfs, and sharesmb dataset properties.
ZFS_SHARE='yes'

# Run `zfs unshare -a` during system stop?
ZFS_UNSHARE='yes'

# Specify specific path(s) to look for device nodes and/or links for the
# pool import(s). See zpool(8) for more information about this variable.
# It supersedes the old USE_DISK_BY_ID which indicated that it would only
# try '/dev/disk/by-id'.
# The old variable will still work in the code, but is deprecated.
#ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"

# Should the datasets be mounted verbosely?
# A mount counter will be used when mounting if set to 'yes'.
VERBOSE_MOUNT='no'

# Should we allow overlay mounts?
# This is standard in Linux, but not ZFS which comes from Solaris where this
# is not allowed).
DO_OVERLAY_MOUNTS='no'

# Any additional option to the 'zfs mount' command line?
# Include '-o' for each option wanted.
MOUNT_EXTRA_OPTIONS=""

# Build kernel modules with the --enable-debug switch?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_ENABLE_DEBUG='no'

# Build kernel modules with the --enable-debug-dmu-tx switch?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_ENABLE_DEBUG_DMU_TX='no'

# Keep debugging symbols in kernel modules?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_DISABLE_STRIP='no'

# Wait for this many seconds in the initrd pre_mountroot?
# This delays startup and should be '0' on most systems.
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='0'

# Wait for this many seconds in the initrd mountroot?
# This delays startup and should be '0' on most systems. This might help on
# systems which have their ZFS root on a USB disk that takes just a little
# longer to be available
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_INITRD_POST_MODPROBE_SLEEP='0'

# List of additional datasets to mount after the root dataset is mounted?
#
# The init script will use the mountpoint specified in the 'mountpoint'
# property value in the dataset to determine where it should be mounted.
#
# This is a space separated list, and will be mounted in the order specified,
# so if one filesystem depends on a previous mountpoint, make sure to put
# them in the right order.
#
# It is not necessary to add filesystems below the root fs here. It is
# taken care of by the initrd script automatically. These are only for
# additional filesystems needed. Such as /opt, /usr/local which is not
# located under the root fs.
# Example: If root FS is 'rpool/ROOT/rootfs', this would make sense.
#ZFS_INITRD_ADDITIONAL_DATASETS="rpool/ROOT/usr rpool/ROOT/var"

# List of pools that should NOT be imported at boot?
# This is a space separated list.
#ZFS_POOL_EXCEPTIONS="test2"

# Optional arguments for the ZFS Event Daemon (ZED).
# See zed(8) for more information on available options.
#ZED_ARGS="-M"
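In case the problem is below systemd, I can also verify after the next reboot whether the kernel module and the cache file are actually present (zdb with no arguments should dump the pool configurations stored in the cache file, if I read the man page correctly):

lsmod | grep zfs                 # is the zfs module loaded at all?
ls -l /etc/zfs/zpool.cache       # did the cache file survive the reboot?
zdb                              # dump the cached pool configuration(s)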
I am not sure whether this is a known issue. If it is, is there any workaround? Ideally I would like an easy way to preserve my datasets across reboots, preferably without the overhead of a cache file.