Install LucidLink within Proxmox cluster


Proxmox Virtual Environment (PVE) is an open-source hyper-converged platform that integrates compute, storage, and networking resources and manages highly available clusters, backup/restore, and disaster recovery. The project is developed and maintained by Proxmox Server Solutions GmbH.

Because PVE is based on Debian, LucidLink can be installed directly on your PVE nodes. Filespaces are linked and mounted on the hosts within your cluster as PVE storage resources, backed by virtually unlimited object storage.

Filespaces give PVE a universal mount-point for presenting storage resources across datacenters, without the perimeter limitations of traditional storage. Geographically separate clusters access data in the same manner as individual servers or clusters.


Filespaces utilize cloud or on-premises object storage and present as a local disk, a natural extension of PVE's file system. Backups, containers, container templates, ISOs, and virtual machine images are accessible and streamed to your PVE cluster nodes on demand.

Implementation is easy: in our environment we have a three-node cluster with a Filespace mounted at /mnt/filespace on all nodes. This mount-point is presented to the PVE cluster as a universal repository.

Download the latest LucidLink client and install it, via the shell, on every Proxmox server in the cluster that requires access. We will then create a systemd service to link and mount your Filespace:

wget -O lucidinstaller.deb <lucidlink-client-download-url>
dpkg -i lucidinstaller.deb
apt-get install -f -y
nano /etc/systemd/system/LucidLink.service

Register with LucidLink and create a Filespace to populate the <filespace.domain>, <lluser>, and <lluserpwd> properties of your systemd service. Please consult our Getting Started Guide and Knowledge Base articles for further assistance:


ExecStart=/usr/bin/lucid daemon --fs <filespace.domain> --user <lluser> --password <lluserpwd> --mount-point /mnt/filespace --fuse-allow-other
ExecStop=/usr/bin/lucid exit
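
The two directives above belong in the [Service] section of the unit file. A minimal complete unit might look like the following sketch; the Type, Restart, and network-dependency settings are assumptions you may wish to tune for your environment:

[Unit]
Description=LucidLink Filespace mount
After=network-online.target
Wants=network-online.target

[Service]
# Assumption: lucid daemon stays in the foreground; adjust Type if it forks.
Type=simple
ExecStart=/usr/bin/lucid daemon --fs <filespace.domain> --user <lluser> --password <lluserpwd> --mount-point /mnt/filespace --fuse-allow-other
ExecStop=/usr/bin/lucid exit
Restart=on-failure

[Install]
WantedBy=multi-user.target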


Enable and start your systemd service with systemctl, then confirm it is running. Running lucid status in the server shell shows you are successfully linked and connected, with your Filespace mount-point in place:

systemctl daemon-reload
systemctl enable LucidLink
systemctl start LucidLink
systemctl status LucidLink
lucid status


We are now ready to present our newly created persistent mount-point to our cluster nodes. The systemd service ensures the mount-point survives node reboots. Any data on this mount-point resides in our object storage account and is available to all PVE nodes where it is mounted.

From any node of your cluster (or via the UI) you can configure the storage and assign it to the required nodes within your cluster. Use the Directory (dir) storage pool type:

nano /etc/pve/storage.cfg

Specify the content types your Filespace should hold. In our example PVE cluster, the Filespace will host backups, containers, container templates, ISO images, and virtual machine images across all nodes. The available content types are:

images (virtual machine images), vztmpl (container templates), iso (ISO images), rootdir (container data), backup (backup files), snippets (snippet files, scripts)

dir: Filespace
    path /mnt/filespace/pve
    nodes pve1,pve2,pve3
    content images,backup,rootdir,iso,vztmpl

You can repeat these operations across multiple clusters. All clusters can then share datasets such as templates and backup images to schedule backups, perform recoveries, and present consistent resources across locally available or highly distributed clustered environments.
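
Equivalently, the same storage definition can be created from the shell with Proxmox's pvesm tool instead of editing storage.cfg by hand; the storage ID, path, and node names below match the example above:

pvesm add dir Filespace --path /mnt/filespace/pve \
    --content images,backup,rootdir,iso,vztmpl \
    --nodes pve1,pve2,pve3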



Once implemented, you manage your virtual machines, containers, highly available clusters, storage, and networks with an integrated, easy-to-use web interface or via the CLI: schedule backup tasks, perform recoveries, and launch VMs and container templates directly from your Filespace.
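
For example, a one-off backup can be written straight to the Filespace-backed storage with vzdump, targeting the storage ID from the example above (the VMID 100 here is hypothetical):

vzdump 100 --storage Filespace --mode snapshot --compress zstd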

Visit the Proxmox VE wiki for more detailed information on installation, configuration, management, and the unique features and functions of PVE.

