Support persistent data when restarting ACS in Docker Compose deployment #649
Comments
See the helm deployment for a long-running setup with volumes that allows restarting. The docker compose deployment is meant for local development and is tested only with docker compose. Until now there was not enough demand for a setup like you suggest, with instructions to use the same compose files via
> docker stack deploy --compose-file=docker-compose.yml acs
which is currently rejected with:
Compose file contains unsupported options: mem_limit: Set resource limits using deploy.resources
We're happy to accept PRs though, with CI tests :)
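For anyone attempting the stack deployment, a rough sketch of what a swarm-compatible override could look like; the service name and memory value below are placeholders, not values taken from the project's compose file:

```yaml
# docker-compose.swarm.yml - sketch only; the service name and limit are illustrative
version: "3.7"
services:
  alfresco:
    # `mem_limit` is not supported by `docker stack deploy`;
    # the swarm equivalent lives under deploy.resources.limits
    deploy:
      resources:
        limits:
          memory: 1900m
```

Recent Docker versions accept --compose-file more than once on docker stack deploy, but the existing mem_limit entries would still need to be replaced by deploy.resources sections in a swarm-specific copy of the compose file.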
For persistence with docker swarm and compose files, you just need to create volume mappings for the things you want to persist. If you are running a swarm across multiple nodes (VMs or physical hosts), you also have to connect each node to a shared file system such as an NFS or SMB share and store the persistent data there, because the files and locations mapped in the compose file may be accessed by a service running on any node in the swarm.
We have a fully functional 7.1.0.1 swarm cluster running on 6 Ubuntu 20.04 VMs, with persistent data shared from an NFS server. We map the alf_data file repository, the PostgreSQL database directory, the SOLR index data directory, and some custom configuration files such as our alfresco-global.properties, SSL certificates, custom model files, etc.
So while Kubernetes does some really nice orchestration for you and is easy to deploy using the Alfresco-provided documentation if you are on AWS-hosted services, if you are not in a position to take advantage of that, swarm is a good starting point.
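As a rough illustration of the mapping described above; the service names, container paths, NFS address, and export paths here are generic placeholders, not our actual configuration:

```yaml
# Sketch only: adapt service names, container paths and NFS details to your setup.
version: "3.7"
services:
  alfresco:
    volumes:
      - alf-data:/usr/local/tomcat/alf_data           # dir.root / file repository
  postgres:
    volumes:
      - db-data:/var/lib/postgresql/data              # PostgreSQL data directory
  solr6:
    volumes:
      - solr-data:/opt/alfresco-search-services/data  # SOLR index data

volumes:
  alf-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.50,rw,nfsvers=4"
      device: ":/export/alfresco/alf_data"
  db-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.50,rw,nfsvers=4"
      device: ":/export/alfresco/postgres"
  solr-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.50,rw,nfsvers=4"
      device: ":/export/alfresco/solr"
```

With NFS-backed named volumes like these, the data stays on the file server, so it does not matter which swarm node a service lands on after a restart.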
Thanks @torgerss, that's great to learn. I'm not too familiar with swarm/stack, but would it make sense for you to create a PR with an extra compose file with your settings to add to the CLI, some notes, and ideally a GH workflow to test this scenario? If you can start one, we can try to help there and maintain it for upcoming versions.
Thanks @torgerss for sharing the experience. I also had a look into the .properties file of alfresco and it is empty at runtime, so I assume the configuration is passed in a different way. If it is somehow possible for you to share your compose file with us, that would be very helpful, and we could test whether this configuration runs for us as well. Thanks a lot for the support.
Hello everyone,
for evaluation purposes, I wanted to test your images to deploy Alfresco via Docker.
Since we use Docker Stack in production, we would also like to use Alfresco with stack.
After a few setup hurdles, it basically works quite well.
However, we have a massive problem when restarting the services.
The containers start, but an error is shown at the root URL.
In addition, Share and all other components are no longer usable.
Posts like the one below describe that either the alf_data directory (dir.root) needs to be mounted correctly or the database needs to be rebuilt:
https://hub.alfresco.com/t5/alfresco-content-services-forum/ensure-that-the-dir-root-property-c-alfres-4-alf-data-is/td-p/244923
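My understanding from that post is that the content store under dir.root (the alf_data directory) and the database have to survive a restart together, roughly as in the sketch below; the service names and container paths are only my assumption based on the official images, and this is the kind of mapping I have been trying to get working:

```yaml
# Sketch of what I am trying to achieve - service names and container
# paths are assumptions based on the official images, not a working config.
services:
  alfresco:
    volumes:
      # dir.root points at the alf_data directory inside the container
      - acs-data:/usr/local/tomcat/alf_data
  postgres:
    volumes:
      # the database has to persist together with alf_data,
      # otherwise the dir.root error from the linked post appears
      - db-data:/var/lib/postgresql/data

volumes:
  acs-data:
  db-data:
```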
But I have little influence on how the data is created. What I don't understand is that the properties file is created by the image, yet it is empty. On the initial start Alfresco works; after a restart it no longer does.
I have tried several things, but no matter how I configure Alfresco, the problem persists. I have tried compose, no proxy, bindings, etc. At the latest it fails on the next reboot, except when I use compose with down & up; but then the restart command does not work either.
Therefore my request: how do you manage to keep the data after a restart of the container or the system? Is there a way to work around this error so that Alfresco can be restarted? In my attempts I tested with the Community version. For Docker stack I adjusted the configuration a bit. If you have any questions, please do not hesitate to contact me.
Many greetings
Eric