
Persist docker volume on swarm

Ask Time:2019-04-27T04:59:00         Author:Marco


So I'm building a swarm of Elasticsearch nodes, and ideally I would like to see two things happen:

  1. Each node saves all its data to a folder on the host.
  2. Even if the stack is destroyed, once a new container is initialized it should be able to pick up where the previous one left off, using the same volume.

This is what I'm doing:

docker volume create --opt type=none --opt device=/mnt/data --opt o=bind --name=elastic-data
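For reference, I believe the same bind-backed volume can also be declared directly in the compose file instead of being created beforehand (a sketch, assuming `/mnt/data` exists on every node the task can be scheduled on):

```yaml
volumes:
  elastic-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/data
```

With this form the `external: true` flag is dropped, and in swarm mode the volume is created locally on whichever node ends up running the task.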

docker-compose.yml

version: '3'
services:
  elastic-node1:
    image: amazon/opendistro-for-elasticsearch:0.8.0
    environment:
      - cluster.name=elastic-cluster
      - bootstrap.memory_lock=false
      - "ES_JAVA_OPTS=-Xms32g -Xmx32g"
      - opendistro_security.ssl.http.enabled=false
      - discovery.zen.minimum_master_nodes=1
    volumes:
      - elastic-data:/mnt/data
    ports:
      - 9200:9200
      - 9600:9600
      - 2212:2212  
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elastic-net
    deploy:
      mode: replicated
      replicas: 1

volumes:
  elastic-data:
    external: true
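One thing I also wondered about (this is just a guess on my part): the official Elasticsearch images keep their indices under `/usr/share/elasticsearch/data` inside the container, so should the volume perhaps target that path instead of `/mnt/data`? Something like:

```yaml
    volumes:
      # hypothetical: mount the volume over the container's data directory
      - elastic-data:/usr/share/elasticsearch/data
```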

And then I would start the stack, post some data, remove the stack and start it again, but the data is not being retained.

docker stack deploy --compose-file docker-compose.yml opendistrostack
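Roughly, this is the cycle I'm testing with (a sketch; the index name and document are made up for illustration, and the sleeps are just generous waits for the node to come up):

```shell
#!/bin/sh
# Deploy the stack and wait for Elasticsearch to start
docker stack deploy --compose-file docker-compose.yml opendistrostack
sleep 60

# Index a test document (index name "test" is arbitrary)
curl -XPOST 'http://localhost:9200/test/_doc/1' \
     -H 'Content-Type: application/json' -d '{"hello": "world"}'

# Tear the stack down and bring it back up
docker stack rm opendistrostack
sleep 30
docker stack deploy --compose-file docker-compose.yml opendistrostack
sleep 60

# The document is gone after the redeploy, even though the volume still exists
curl 'http://localhost:9200/test/_doc/1'
docker volume ls | grep elastic-data
```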

I'm a little bit confused about volumes, and I haven't been able to find good documentation with a detailed explanation for each use case. Could you point me in the right direction?

Thanks.

Author: Marco, reproduced under the CC 4.0 BY-SA copyright license with a link to the original source and this disclaimer.
Link to original article:https://stackoverflow.com/questions/55874871/persist-docker-volume-on-swarm