The Docker plugin for LucidLink makes it easy to mount LucidLink Filespaces in Docker containers. Containers simply request the volume by name, and no matter which host they move to, they still see the same data, because it lives in your object storage bucket.


Deploying LucidLink Docker Volume Plugin

  1. I'll be using Ubuntu 16.04 LTS and will install Docker from its official repository by adding an apt repository source.
    # add Docker's official GPG key and apt repository, then install Docker CE
    apt update
    apt install -y curl software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    apt update && apt install docker-ce
    

    You may find that you also need to run apt upgrade, depending on when you last updated your system. Docker Engine volume plugin support requires Docker 18.03.1 or higher; you can confirm your version with the sketch after these steps.

  2. Next we will deploy the alpha version of the LucidLink Filespaces Docker Volume Plugin. This will require network [host], mount [/var/lib/docker/plugins], device [/dev/fuse], and CAP_SYS_ADMIN privileges; Docker will prompt you to grant them during installation. The total download size should be ~52 MB. You can confirm the plugin is installed and enabled with the sketch after these steps.
    docker plugin install lucidlinkcorp/docker-volume-lucid:alpha
    
  3. Now it's time to create a LucidLink Filespace. If this is your first time using LucidLink, you have to sign up and choose a domain for your account. After that you can provision your first Filespace. Select the 'Create a new filespace' option, at which point you can choose between 'LucidLink Storage' and 'Your Storage Provider'. I'm choosing my own storage provider, then selecting AWS and a region, and clicking 'Create'. The portal will then spend some time spinning up your Filespace instance.
  4. Don't forget that this filespace needs to be initialized. To do this we will download the LucidLink Client and use lucid init-s3.

    # install the LucidLink client
    wget https://s3.amazonaws.com/lucidlink-builds/latest/lin64/lucid_1.12.1666_amd64.deb
    dpkg -i lucid_1.12.1666_amd64.deb
    # pull in any dependencies the .deb install left unresolved
    apt install --fix-broken
    # start the LucidLink daemon in the background
    lucid daemon &
    # initialize the LucidLink Filespace against your own S3 credentials
    lucid init-s3 --fs filespace.huttenga --password 123 --https --accesskey <accesskey> --secretkey <secretkey> --region <region> --provider AWS
    # stop the LucidLink daemon
    lucid exit
    

    Be sure to use your own Filespace name and replace the password, access key, and secret key with your own values. Initialization only has to happen once per LucidLink Filespace and ensures that you are the only one who knows the password. If done correctly, the initialization should show something like this.

    [Screenshot: output of a successful Filespace initialization]

  5. Now we can create and use Docker volumes backed by the Filespace we initialized. The same volume can be accessed from any container, running on any Docker host, on any platform, anywhere; the sharing sketch after these steps shows this with two separate containers.

    docker volume create -d lucidlinkcorp/docker-volume-lucid:alpha filespace.huttenga -o password=123
    docker run -it --mount type=volume,src=filespace.huttenga,target=/mnt/filespace.huttenga nginx:latest /bin/bash
    

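First, a quick sanity-check sketch for steps 1 and 2, using only standard Docker CLI commands: it prints the running Docker Engine version (which should report 18.03.1 or higher) and confirms the LucidLink plugin is installed and enabled.

    # check the Docker Engine version against the 18.03.1 requirement
    docker version --format '{{.Server.Version}}'
    # list installed plugins and confirm the LucidLink plugin shows as enabled
    docker plugin ls
    docker plugin inspect lucidlinkcorp/docker-volume-lucid:alpha --format '{{.Enabled}}'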

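And here is a minimal sketch of the sharing claim from step 5: one container writes a file into the Filespace-backed volume and a second, completely separate container reads it back. I'm using alpine:latest purely as a small shell image, and hello.txt is just an illustrative file name; a container on another Docker host attached to the same Filespace would see the same file.

    # write a file into the Filespace from one container...
    docker run --rm --mount type=volume,src=filespace.huttenga,target=/mnt/filespace.huttenga alpine:latest \
        sh -c 'echo "hello from container one" > /mnt/filespace.huttenga/hello.txt'
    # ...and read it back from a second container
    docker run --rm --mount type=volume,src=filespace.huttenga,target=/mnt/filespace.huttenga alpine:latest \
        cat /mnt/filespace.huttenga/hello.txt
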
This was a short overview of how to use the LucidLink Docker Volume Plugin, which, remember, is still in alpha and not for production use. That said, I can imagine LucidLink being pretty useful in development workflows. In fact, that was the initial reason LucidLink was written: to make it easier to share code bases across clouds, whether with developers or with continuous integration tools.

Source: https://huttenga.net/blog/2019/01/lucidlink-docker-volume-plugin-for-persistent-storage