storage volumes question #2
Comments
oh man, forgot I even had this up here 👍

```yaml
volumes:
  nextcloud-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.1,rw
      device: ":/mnt/volumes/nextcloud/data"
```

Hope it helps.
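For context, a named volume defined that way is consumed by a service in the usual way. A minimal sketch, assuming a service name, image tag, and mount path that are not from the original compose file:

```yaml
services:
  nextcloud:
    image: nextcloud:stable                   # image tag is an assumption
    volumes:
      - nextcloud-data:/var/www/html/data     # typical data dir of the official image
```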
Hello all. I already have a running docker stack with NC/MariaDB/Redis/Swag on a Pi4-armhf (soon to be aarch64). I've read a lot of info on how to create a swarm for services, but none of it explains (at least, I don't see it anywhere) how to use an already-written docker-compose.yml. What I would like to know is: Or is it just a matter of adding "replicas=3" (4?!?) to the yml services? And the volumes? Sorry for all the questions, but I'm really at a blank about this. Thank you for reading and for any input you may share. docker-compose.yml:
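As a hedged sketch (not the poster's compose file, just the general shape of the question being asked), replicas in a version 3 compose file live under the swarm-only deploy: key; the service name here is illustrative:

```yaml
version: "3.8"
services:
  app:
    image: nextcloud:stable
    deploy:
      replicas: 3   # deploy: is the swarm-mode section; classic docker-compose largely ignores it
```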
Hi, just init your swarm with
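The reply is truncated here; a general sketch of the usual workflow, under the assumption that it refers to the standard swarm commands (addresses, token, and stack name are placeholders):

```bash
# on the manager (e.g. the Pi4)
docker swarm init --advertise-addr 192.168.1.10

# on each worker, using the join token printed by the init command
docker swarm join --token <worker-token> 192.168.1.10:2377

# then deploy the existing compose file as a stack
docker stack deploy -c docker-compose.yml nextcloud
```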
This was what I was thinking (and had read), but I didn't know it was so easy/linear.
My idea is to spread the load (PHP calls) away from the standalone Pi. Is it the same?
Copy that. Will leave it to another time (more reading/learning to do).
My NC config.php is already configured per the NC docs to use Redis as 'memcache.distributed': https://docs.nextcloud.com/server/20/admin_manual/configuration_server/caching_configuration.html?highlight=memcache#id2
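For reference, the relevant part of the linked caching documentation boils down to roughly the following config.php excerpt; the Redis hostname is an assumption about the service name used in the stack:

```php
// config/config.php (excerpt) -- per the caching docs linked above
'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
    'host' => 'redis',   // assumed to match the Redis service name in the stack
    'port' => 6379,
],
```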
Ok, here I lost it: I assumed that since the DATA lives on the manager, the workers would know about it when they join the swarm. I use OMV on the manager (where all the shares live); maybe I can use it on the workers and create a remote mount to that manager DATA folder.
In a normal/personal use case you might not even need to scale Nextcloud. You should be able to scale via the CLI. But just to make sure: the docker-compose CLI is not the same as swarm. You can use compose files to deploy your stack, but they are different commands (docker-compose up vs docker stack deploy -c compose.yml mystack). The Redis env var is also used for PHP session sharing between nodes; at least I had problems with scaled NC without the env var for Redis. If you scale NC, your replicas might end up on different nodes in your swarm. DATA is on your manager node. Docker swarm will not replicate data; it just ensures the state you deployed with your compose configuration (services, networks, volumes) is reached. Your compose file just has a pointer to where the data should be.
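A short sketch of the swarm-side commands being contrasted with docker-compose here; the stack and service names are illustrative:

```bash
# deploy the compose file as a swarm stack (not "docker-compose up")
docker stack deploy -c docker-compose.yml mystack

# scale a single service afterwards from the CLI
docker service scale mystack_nextcloud=3   # "mystack_nextcloud" is illustrative
```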
Thank you for the explanation. Since I'm still configuring everything, I'll set up a testing scenario with pseudo data to see if I get a better view of it running live. ;)
Thanks for sharing your setup. I'm trying to set up something similar on 4 AtomicPis (x86). I have a RAID5 NAS (CIFS or NFS) which I want to hold all the data and the db.
How did you set up your volumes?
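Since the NAS speaks CIFS as well as NFS, the same local-driver pattern from the first comment can point at a CIFS share instead. A sketch only; the NAS address, share path, and credentials are placeholders:

```yaml
volumes:
  nextcloud-data:
    driver: local
    driver_opts:
      type: cifs
      o: "username=<user>,password=<pass>,vers=3.0,rw"   # credentials are placeholders
      device: "//192.168.1.2/nextcloud"                  # NAS address/share are placeholders
```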