storage volumes question #2

Open
phatpaul opened this issue Dec 14, 2020 · 6 comments

@phatpaul

Thanks for sharing your setup. I'm trying to set up something similar on 4 AtomicPis (x86). I have a RAID5 NAS (CIFS or NFS) which I want to use to hold all the data and the DB.

How did you set up your volumes?

@WhiteBahamut
Owner

oh man, forgot I even had this up here 👍
I used simple local path mapping: all my Pis had an NFS share mounted at /mnt/volumes, with a subfolder per app, e.g. /mnt/volumes/nextcloud/data. I had some trouble with that, and I would suggest going directly to NFS volumes within the compose file (see here):

volumes:
  nextcloud-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.1,rw                  # NFS server address plus mount options
      device: ":/mnt/volumes/nextcloud/data"  # exported path on that server
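
A named volume like that is then referenced from the service like any other volume. A minimal sketch (the container path /var/www/html/data is the official nextcloud image's data directory; adjust for the image you actually use):

services:
  nextcloud:
    image: nextcloud
    volumes:
      - nextcloud-data:/var/www/html/data   # the named NFS volume defined above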

Hope it helps.

@cravas

cravas commented Mar 28, 2021

Hello all
I would really appreciate some tips/pointers to clear up some doubts I have, which are somewhat similar to the question above:

I already have a running docker stack with NC/MariaDB/Redis/Swag on a Pi4-armhf (soon to be aarch64).
I'm building a small cluster of 3 more Pi4s to act as docker swarm workers, with the existing Pi (currently standalone) becoming the manager.

I've read a lot of info on how to create a swarm and its services, but none of it explains (at least, I don't see it anywhere) how to use an already-written docker-compose.yml.

What I would like to know is:
Is it just a matter of running "docker swarm init" and then adding the workers to it?
Or does it require more commands, like "docker service scale"?
Or is it just a matter of adding "replicas=3" (4?!?) to the services in the yml?

And the volumes?
My NC data is attached locally on the Pi4 manager (via USB external HDD). Will it be replicated to the other workers? Or will only the service definitions (configs) be replicated?

Sorry for all the questions but, I'm really at a blank about this.

Thank you for reading and for all input you may share

docker-compose.yml:

version: "2.1"
services:
  nextcloud:
    image: linuxserver/nextcloud
    container_name: nextcloud
    environment:
      - PUID=1000
      - PGID=100
      - TZ=Europe/Lisbon
    volumes:
      - /srv/dev-disk-by-label-DATA/appdata/nextcloud/config:/config
      - /srv/dev-disk-by-label-DATA/appdata/nextcloud/data:/data
    depends_on:
      - mariadb
    ports: # uncomment this and the next line if you want to bypass the proxy
      - 450:443
    restart: unless-stopped
  mariadb:
    image: webhippie/mariadb:latest
    container_name: mariadb
    environment:
      - MARIADB_ROOT_PASSWORD=******
      - MARIADB_DATABASE=nextcloud
      - MARIADB_USERNAME=*****
      - MARIADB_PASSWORD=*****
    volumes:
      - /srv/dev-disk-by-label-DATA/appdata/mariadb_10.5/mysql:/var/lib/mysql
      - /srv/dev-disk-by-label-DATA/appdata/mariadb_10.5/conf.d:/etc/mysql/conf.d
      - /srv/dev-disk-by-label-DATA/appdata/mariadb_10.5/backup:/var/lib/backup
    restart: unless-stopped
  swag:
    image: linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=100
      - TZ=Europe/Lisbon
      - URL=*******
      - SUBDOMAINS=*****
      - VALIDATION=******
      - EMAIL=@*
    volumes:
      - /srv/dev-disk-by-label-DATA/appdata/swag:/config
    ports:
      - 444:443
      - 81:80
    restart: unless-stopped
  redis:
    image: redis
    container_name: redis
    hostname: redis
    volumes:
      - /srv/dev-disk-by-label-DATA/appdata/redis:/data
    restart: unless-stopped
  motioneye:
    image: ccrisan/motioneye:master-armhf
    container_name: motioneye
    volumes:
      - /etc/localtime:/etc/localtime:ro # Timezone config / do not change
      - /srv/dev-disk-by-label-DATA/appdata/motioneye:/etc/motioneye # Config storage
      - /srv/dev-disk-by-label-DATA/media/motioneye:/var/lib/motioneye # File storage
    devices:
      - /dev/video0
    ports:
      - 8765:8765
      - 58001:58001
    hostname: motioneye
    restart: unless-stopped

@WhiteBahamut
Owner

Hi,

just init your swarm with docker swarm init. Once done, a join command will be displayed. Copy it and execute it on the worker nodes you want to join to this swarm (you can print the worker token again later on the manager with docker swarm join-token worker).
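
A minimal sketch of that sequence (the IP address and token are placeholders):

# On the manager:
docker swarm init --advertise-addr 192.168.1.10

# It prints a join command; run it on each worker, e.g.:
docker swarm join --token SWMTKN-1-<token> 192.168.1.10:2377

# To print the worker join command again later, on the manager:
docker swarm join-token worker
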
The swarm will automatically take care of failover in case a node goes down. If you want to scale, you can specify it in the deploy section of the compose service (see the sketch below) or use a UI like portainer. Scale nextcloud and not the db (unless you know how to set up a cluster with your db). If you want to scale nextcloud you also need to set your redis server as an env var on the service (at least in the official image).
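
For reference, a minimal sketch of such a deploy section (the replica count is just an example):

services:
  nextcloud:
    deploy:
      replicas: 3            # number of nextcloud tasks spread across the swarm
      restart_policy:
        condition: any
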
Your data is not replicated by the swarm. In your setup I suggest making your manager (the one with the USB drive) also an nfs-server and sharing the USB folder with all other nodes (the Ubuntu wiki has good explanations of nfs-server, nfs-client and so on). Mount the shared USB folder on all devices at the same directory (e.g. /srv/dev-disk-by-label-DATA/). So if nextcloud runs on worker1 it will look at /srv/dev-disk-by-label-DATA/... on that node, and when nextcloud is started on worker2 it will also look at /srv/dev-disk-by-label-DATA/ for the data.
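
A rough sketch of that NFS setup, assuming the manager is at 192.168.1.10 and the workers sit on the same /24 subnet (addresses, paths, and export options are illustrative):

# On the manager: install the server and export the USB folder
sudo apt install nfs-kernel-server
echo "/srv/dev-disk-by-label-DATA 192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On each worker: mount the export at the same path
sudo apt install nfs-common
sudo mkdir -p /srv/dev-disk-by-label-DATA
sudo mount -t nfs 192.168.1.10:/srv/dev-disk-by-label-DATA /srv/dev-disk-by-label-DATA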

@cravas

cravas commented Mar 28, 2021

> Just init your swarm with docker swarm init. Once done, a join command will be displayed. Copy it and execute it on the worker nodes you want to join to this swarm (you can view the token on the manager with docker swarm join-token worker).
> The swarm will automatically take care of failover in case a node goes down.

This was what I was thinking (and had read), but I didn't know it was so easy/straightforward.

> If you want to scale, you can specify it in the deploy section of the compose service or use a UI like portainer

My idea is to spread the load (PHP calls) away from the standalone Pi. Is that the same thing?
Since I launch the stack via the docker-compose CLI, is it doable through the CLI, or is it easier with Portainer?
I don't really like Portainer, but perhaps it's time to learn more about it :)

> Scale nextcloud and not the db (unless you know how to set up a cluster with your db)

Copy that. I will leave it for another time (more reading/learning to do).

> If you want to scale nextcloud you also need to set your redis server as an env var on the service (at least in the official image).

My NC config.php is already configured, per the NC docs, to use Redis as 'memcache.distributed': https://docs.nextcloud.com/server/20/admin_manual/configuration_server/caching_configuration.html?highlight=memcache#id2
Is that enough?

> Your data is not replicated by the swarm. In your setup I suggest making your manager (the one with the USB drive) also an nfs-server and sharing the USB folder with all other nodes...

OK, here I got lost: I assumed that since the DATA lives on the manager, the workers would know about it when they joined the swarm.
From what I understand, they will also need access to the folder, correct?

I use OMV on the manager (where all the shares live); maybe I can use it on the workers too and create a remote mount to the manager's DATA folder.
I didn't really want to install more "stuff" other than RaspiLite and docker, but if it makes things simpler, I'll go for it.

@WhiteBahamut
Owner

In a normal/personal use case you might not even need to scale nextcloud. You should be able to scale via the CLI. But just to make sure: the docker-compose CLI is not the same as swarm. You can use compose files to deploy your stack, but the commands are different (docker-compose up vs docker stack deploy -c compose.yml mystack).
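
Side by side, assuming the same compose file (the stack name mystack is arbitrary):

# Single node, plain docker-compose:
docker-compose up -d

# Swarm mode, run on the manager:
docker stack deploy -c compose.yml mystack

# Scale an already-deployed service from the CLI:
docker service scale mystack_nextcloud=3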

The redis env var is also used for PHP session sharing between nodes; at least I had problems with a scaled NC without the env var for redis. If you scale NC, your replicas might end up on different nodes in your swarm.
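
For the official nextcloud image this is the documented REDIS_HOST variable; the linuxserver image in the compose file above may instead need Redis configured in config.php, so verify for your image. A hedged sketch:

services:
  nextcloud:
    image: nextcloud
    environment:
      - REDIS_HOST=redis   # hostname of the redis service; enables shared sessions and locking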

DATA is on your manager node, and docker swarm will not replicate data. Docker swarm just ensures that the state you deployed with your compose configuration (services, networks, volumes) is reached; your compose file merely has a pointer to where the data should be.
In your case you tell it to look at a local path for DATA, so that local path must exist on all nodes. That is what I used NFS for.
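
To make that mount survive reboots on each worker, a typical /etc/fstab entry could look like this (address and path as in the earlier sketch):

192.168.1.10:/srv/dev-disk-by-label-DATA  /srv/dev-disk-by-label-DATA  nfs  defaults,_netdev  0  0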

@cravas

cravas commented Mar 28, 2021

Thank you for the explanation.

Since I'm still configuring everything, I'll build a test scenario with dummy data to get a better view of it running live.
Only after I'm confident it is running properly will I migrate.

;)
