Manage GlusterFS pool.
salt.states.glusterfs.add_volume_bricks(name, bricks)

    Add brick(s) to an existing volume

    myvolume:
      glusterfs.add_volume_bricks:
        - bricks:
          - host1:/srv/gluster/drive1
          - host2:/srv/gluster/drive2

    Replicated Volume:
      glusterfs.add_volume_bricks:
        - name: volume2
        - bricks:
          - host1:/srv/gluster/drive2
          - host2:/srv/gluster/drive3
salt.states.glusterfs.max_op_version(name)

    New in version 2019.2.0.

    Ensure the cluster's op-version is raised to the maximum op-version supported by all nodes

    myvolume:
      glusterfs.max_op_version:
        - name: volume1
salt.states.glusterfs.op_version(name, version)

    New in version 2019.2.0.

    Ensure the cluster's op-version is set to the specified version

    myvolume:
      glusterfs.op_version:
        - name: volume1
        - version: 30707
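Both op-version states manage the cluster-wide `cluster.op-version` option. For orientation, the roughly equivalent manual gluster CLI commands (run on any peer; shown as an illustrative sketch, not part of these states) are:

```shell
# query the highest op-version every node in the cluster can support
gluster volume get all cluster.max-op-version

# pin the cluster to a specific op-version (what op_version enforces)
gluster volume set all cluster.op-version 30707
```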
salt.states.glusterfs.peered(name)

    Check if node is peered.

    peer-cluster:
      glusterfs.peered:
        - name: two

    peer-clusters:
      glusterfs.peered:
        - names:
          - one
          - two
          - three
          - four
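For reference, `peered` enforces what the following manual gluster CLI commands would do by hand (an illustrative sketch; the hostname `two` matches the example above):

```shell
# probe a node into the trusted storage pool
gluster peer probe two

# verify the peer state afterwards
gluster peer status
```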
salt.states.glusterfs.started(name)

    Check if volume has been started

    mycluster:
      glusterfs.started: []
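In practice `started` is usually ordered after the state that creates the volume. A minimal sketch, assuming a `myvolume` volume_present state is defined elsewhere in the same SLS file:

```yaml
start-myvolume:
  glusterfs.started:
    - name: myvolume
    - require:
      - glusterfs: myvolume
```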
salt.states.glusterfs.volume_present(name, bricks, stripe=False, replica=False, device_vg=False, transport='tcp', start=False, force=False, arbiter=False)

    Ensure that the volume exists

    arbiter
        If True, use every third brick as arbiter (metadata only)

        New in version 2019.2.0.
    myvolume:
      glusterfs.volume_present:
        - bricks:
          - host1:/srv/gluster/drive1
          - host2:/srv/gluster/drive2

    Replicated Volume:
      glusterfs.volume_present:
        - name: volume2
        - bricks:
          - host1:/srv/gluster/drive2
          - host2:/srv/gluster/drive3
        - replica: 2
        - start: True

    Replicated Volume with arbiter brick:
      glusterfs.volume_present:
        - name: volume3
        - bricks:
          - host1:/srv/gluster/drive2
          - host2:/srv/gluster/drive3
          - host3:/srv/gluster/drive4
        - replica: 3
        - arbiter: True
        - start: True
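The states in this module compose into a typical provisioning flow: peer the nodes, create the volume, then start it. The following sketch (hostnames and brick paths are illustrative) uses `require` to enforce that ordering:

```yaml
peer-nodes:
  glusterfs.peered:
    - names:
      - host1
      - host2

myvolume:
  glusterfs.volume_present:
    - bricks:
      - host1:/srv/gluster/drive1
      - host2:/srv/gluster/drive1
    - replica: 2
    - start: True
    - require:
      - glusterfs: peer-nodes
```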