Deploying overcloud Ceph nodes via instack-virt-setup requires some additional repos to be installed on the VIRTHOST (32 GB) and on the INSTACK VM,
as well as exporting DIB_YUM_REPO_CONF, referencing the delorean Newton "current-passed-ci" trunk repos and CentOS-Ceph-Jewel.repo, in the stack user's shell on the INSTACK VM before building the overcloud images for deployment.
I am aware of the official TripleO Quickstart release 1.0.0 for RDO Newton; however, I still want to make sure that instack-virt-setup and the instack-undercloud package can still perform the following successfully:
$ sudo yum install -y python-tripleoclient
$ openstack undercloud install
$ source stackrc
$ openstack overcloud image build --all
TripleO QuickStart becomes pretty silent after you log into the undercloud, which it builds by uploading a prebuilt undercloud.qcow2 into the libvirt pool; it still requires running all the commands below, plus some others mentioned later in this post:
$ openstack overcloud image upload
$ openstack baremetal import instackenv.json
$ openstack baremetal configure boot
$ openstack baremetal introspection bulk start
Follow http://lxer.com/module/newswire/view/234586/index.html to set up the instack VM and configure the "centos7-newton/current-passed-ci" based delorean repos on VIRTHOST and INSTACK. After logging into the "instack VM" (the undercloud VM), create a 4 GB swap file and restart the VM:
[root@instack ~]# dd if=/dev/zero of=/swapfile bs=1024 count=4194304
4194304+0 records in
4194304+0 records out
4294967296 bytes (4.3 GB) copied, 6.13213 s, 700 MB/s
[root@instack ~]# mkswap /swapfile
Setting up swapspace version 1, size = 4194300 KiB
no label, UUID=5d32541b-09f1-4fdd-a4a8-fd284c358255
[root@instack ~]# chmod 600 /swapfile
[root@instack ~]# swapon /swapfile
[root@instack ~]# echo "/swapfile swap swap defaults 0 0" >> /etc/fstab
Restart and log in again.
=======================
VIRTHOST - configuration
=======================
Create user stack
useradd stack
echo "stack:stack" | chpasswd
echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
sudo chmod 0440 /etc/sudoers.d/stack
su - stack
************************************************
Create stack's .bashrc && relogin as stack
************************************************
export NODE_DIST=centos7
export NODE_DISK=45
export UNDERCLOUD_NODE_DISK=35
export NODE_CPU=1
# KSM is enabled
export NODE_MEM=6700
export NODE_COUNT=5
export UNDERCLOUD_NODE_CPU=4
export UNDERCLOUD_NODE_MEM=8000
export FS_TYPE=ext4
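A quick worst-case memory calculation explains the "# KSM is enabled" comment above: 5 x 6700 MB for the overcloud nodes plus 8000 MB for the undercloud commits 41500 MB (about 40.5 GB) of guest RAM, so kernel same-page merging is what keeps the VIRTHOST from swapping. KSM can be verified via sysfs (a sketch, assuming default CentOS 7 paths):
cat /sys/kernel/mm/ksm/run              # 1 => KSM is active
cat /sys/kernel/mm/ksm/pages_sharing    # non-zero once guest pages are being merged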
*****************************************
Set up Newton DLRN repos
*****************************************
sudo yum -y install yum-plugin-priorities
sudo curl -o /etc/yum.repos.d/delorean-newton.repo http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-newton-tested/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps-newton.repo https://trunk.rdoproject.org/centos7-newton/delorean-deps.repo
sudo yum -y install --enablerepo=extras centos-release-ceph-jewel
sudo sed -i -e 's%gpgcheck=.*%gpgcheck=0%' /etc/yum.repos.d/CentOS-Ceph-Jewel.repo
$ sudo yum -y update
$ sudo yum install -y instack-undercloud
$ instack-virt-setup
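Once instack-virt-setup finishes, the libvirt domains it created are visible on the VIRTHOST (a quick check; the exact names of the instack and baremetal domains may vary by release):
sudo virsh list --all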
***********************
INSTACK VM SETUP
***********************
sudo yum -y install yum-plugin-priorities
sudo curl -o /etc/yum.repos.d/delorean-newton.repo http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-newton-tested/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps-newton.repo https://trunk.rdoproject.org/centos7-newton/delorean-deps.repo
############################################################
sudo yum -y upgrade mariadb-libs    (in case of CentOS 7.3)
############################################################
sudo yum -y install --enablerepo=extras centos-release-ceph-jewel
sudo sed -i -e 's%gpgcheck=.*%gpgcheck=0%' /etc/yum.repos.d/CentOS-Ceph-Jewel.repo
[root@instack ~]# su - stack
[stack@instack ~]$ cat .bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
export NODE_DIST=centos7
export USE_DELOREAN_TRUNK=1
export DELOREAN_TRUNK_REPO="http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-newton-tested/"
export DELOREAN_REPO_FILE="delorean.repo"
export DIB_YUM_REPO_CONF=/etc/yum.repos.d/delorean*
# User specific aliases and functions
RELOGIN as stack => root => stack (so that the new .bashrc takes effect)
[stack@instack ~]$ export DIB_YUM_REPO_CONF="$DIB_YUM_REPO_CONF /etc/yum.repos.d/CentOS-Ceph-Jewel.repo"
[stack@instack ~]$ echo $DIB_YUM_REPO_CONF
/etc/yum.repos.d/delorean-deps-newton.repo /etc/yum.repos.d/delorean-newton.repo /etc/yum.repos.d/CentOS-Ceph-Jewel.repo
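If any file listed in DIB_YUM_REPO_CONF is missing, the overcloud images can end up without the Newton/Ceph packages, so a quick existence check before running "openstack overcloud image build" is cheap insurance (a defensive sketch, not in the original post):
for f in $DIB_YUM_REPO_CONF ; do [ -f "$f" ] || echo "missing repo file: $f" ; done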
$ sudo yum install -y python-tripleoclient
$ openstack undercloud install
$ source stackrc
$ openstack overcloud image build --all
$ openstack overcloud image upload
$ openstack baremetal import instackenv.json
$ openstack baremetal configure boot
$ openstack baremetal introspection bulk start
If this command hangs, start from scratch and follow http://tripleo.org/advanced_deployment/introspect_single_node.html. It might take a long time, but it resolves the issue.
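Progress of the bulk introspection can also be polled from a second shell (a convenience check with the Newton-era client; the exact output format varies by release):
$ openstack baremetal introspection bulk status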
$ neutron subnet-list
$ neutron subnet-update 1b7d82e5-0bf1-4ba5-8008-4aa402598065 --dns-nameserver 192.168.122.1
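The subnet UUID above is specific to this deployment; it can be picked up dynamically instead of pasted by hand (a sketch, assuming the undercloud has a single ctlplane subnet):
$ SUBNET_ID=$(neutron subnet-list -f value -c id | head -1)
$ neutron subnet-update $SUBNET_ID --dns-nameserver 192.168.122.1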
********************************************************************************************
Next step:
$ sudo vi /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-mon.yaml
Update the line
ceph::profile::params::osd_pool_default_size: 1
replacing the default value "3". This change is acceptable only in a virtual environment: with osd_pool_default_size set to 1, you keep only one copy of each object. As a general rule, you should run your cluster with more than one OSD and a pool size greater than one replica. So with 48 GB RAM on the VIRTHOST, the better setting is osd_pool_default_size = 3 (at least 2).
********************************************************************************************
outputs:
  role_data:
    description: Role data for the Ceph Monitor service.
    value:
      service_name: ceph_mon
      monitoring_subscription: {get_param: MonitoringSubscriptionCephMon}
      config_settings:
        map_merge:
          - get_attr: [CephBase, role_data, config_settings]
          - ceph::profile::params::ms_bind_ipv6: {get_param: CephIPv6}
            ceph::profile::params::mon_key: {get_param: CephMonKey}
            ceph::profile::params::osd_pool_default_pg_num: 32
            ceph::profile::params::osd_pool_default_pgp_num: 32
            ceph::profile::params::osd_pool_default_size: 1    # <== instead of "3"
            # repeat returns items in a list, so we need to map_merge twice
            tripleo::profile::base::ceph::mon::ceph_pools:
              map_merge:
                - map_merge:
                    repeat:
                      for_each:
                        <%pool%>:
                          - {get_param: CinderRbdPoolName}
                          - {get_param: CinderBackupRbdPoolName}
                          - {get_param: NovaRbdPoolName}
                          - {get_param: GlanceRbdPoolName}
                          - {get_param: GnocchiRbdPoolName}
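Instead of patching the template that ships with the package, the same hiera key can be overridden from an extra environment file passed with one more -e on the deploy command below (a sketch, not what this walkthrough does; ceph-pool-size.yaml is a hypothetical name):
cat > ~/ceph-pool-size.yaml <<'EOF'
# Override the Ceph pool replica count without editing ceph-mon.yaml in place
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_pool_default_size: 1
EOF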
******************************
Set up Network isolation
******************************
[stack@instack ~]$ sudo vi /etc/sysconfig/network-scripts/ifcfg-vlan10
DEVICE=vlan10
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSIntPort
BOOTPROTO=static
IPADDR=10.0.0.1
NETMASK=255.255.255.0
OVS_BRIDGE=br-ctlplane
OVS_OPTIONS="tag=10"
[stack@instack ~]$ sudo ifup vlan10
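Whether the tagged internal port actually landed on br-ctlplane can be confirmed right away (a quick check):
[stack@instack ~]$ sudo ovs-vsctl show | grep -A 3 vlan10    # expect Port "vlan10" with tag: 10 under Bridge br-ctlplane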
*********************************************
192.168.122.134 is IP of "instack VM"
*********************************************
[stack@instack ~]$ cat network_env.yaml
{
  "parameter_defaults": {
    "ControlPlaneDefaultRoute": "192.0.2.1",
    "ControlPlaneSubnetCidr": "24",
    "DnsServers": [
      "192.168.122.134"
    ],
    "EC2MetadataIp": "192.0.2.1",
    "ExternalAllocationPools": [
      {
        "end": "10.0.0.250",
        "start": "10.0.0.4"
      }
    ],
    "ExternalNetCidr": "10.0.0.1/24",
    "NeutronExternalNetworkBridge": ""
  }
}
[stack@undercloud ~]$ sudo iptables -A BOOTSTACK_MASQ -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE -t nat
[stack@undercloud ~]$ sudo touch -f /usr/share/openstack-tripleo-heat-templates/puppet/post.yaml
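The MASQUERADE rule can be verified right after adding it (a check; BOOTSTACK_MASQ is the NAT chain created by the undercloud install):
[stack@undercloud ~]$ sudo iptables -t nat -L BOOTSTACK_MASQ -n -v --line-numbers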
**********************************
Script overcloud-deploy.sh
**********************************
#!/bin/bash -x
source /home/stack/stackrc
openstack overcloud deploy \
  --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 \
  --libvirt-type qemu \
  --ntp-server pool.ntp.org \
  --templates /usr/share/openstack-tripleo-heat-templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e $HOME/network_env.yaml
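A typical way to run the script while keeping a log of the (long) deploy (a suggested invocation, not from the original post):
chmod +x overcloud-deploy.sh
./overcloud-deploy.sh 2>&1 | tee overcloud-deploy.log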
[stack@instack ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| c87707d2-127e-489b-831b-f744dd3cf783 | overcloud-cephstorage-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.7  |
| dc65f365-23e8-4f1d-888a-5eceb6892ca8 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.16 |
| 356e2b13-07b8-4723-a055-64abb03d1f76 | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.8  |
| ed7bd0d8-c941-40da-a0c0-c33e24c56da6 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.12 |
| 361b990f-4709-416f-8e33-4ab91ec45613 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.13 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
[stack@instack ~]$ ssh heat-admin@192.0.2.16
Last login: Mon Nov 14 12:04:57 2016 from 192.0.2.1
[heat-admin@overcloud-controller-0 ~]$ sudo su -
Last login: Mon Nov 14 12:05:04 UTC 2016 on pts/0
[root@overcloud-controller-0 ~]# pcs status
Cluster name: tripleo_cluster
Last updated: Mon Nov 14 13:20:24 2016 Last change: Mon Nov 14 12:01:54 2016 by root via cibadmin on overcloud-controller-0
Stack: corosync
Current DC: overcloud-controller-2 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
3 nodes and 19 resources configured
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Full list of resources:
Clone Set: haproxy-clone [haproxy]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: galera-master [galera]
Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
ip-10.0.0.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.16.2.13 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.2.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
Clone Set: rabbitmq-clone [rabbitmq]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: redis-master [redis]
Masters: [ overcloud-controller-1 ]
Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
ip-192.0.2.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.16.3.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.1.9 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0
PCSD Status:
overcloud-controller-0: Online
overcloud-controller-1: Online
overcloud-controller-2: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@overcloud-controller-0 ~]# ceph status
    cluster b21584d4-aa5b-11e6-bf47-525400121514
     health HEALTH_OK
     monmap e1: 3 mons at {overcloud-controller-0=172.16.1.6:6789/0,overcloud-controller-1=172.16.1.5:6789/0,overcloud-controller-2=172.16.1.11:6789/0}
            election epoch 6, quorum 0,1,2 overcloud-controller-1,overcloud-controller-0,overcloud-controller-2
     osdmap e20: 1 osds: 1 up, 1 in
            flags sortbitwise
      pgmap v269: 224 pgs, 6 pools, 2419 MB data, 1106 objects
            10719 MB used, 36365 MB / 47084 MB avail
                 224 active+clean
[root@overcloud-controller-0 ~]# ceph osd df tree
ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS TYPE NAME
-1 0.04489        - 47084M 10719M 36365M 22.77 1.00   0 root default
-2 0.04489        - 47084M 10719M 36365M 22.77 1.00   0     host overcloud-cephstorage-0
 0 0.04489  1.00000 47084M 10719M 36365M 22.77 1.00 224         osd.0
             TOTAL 47084M 10719M 36365M 22.77
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
[root@overcloud-controller-0 ~]# ceph osd df plain
ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
 0 0.04489  1.00000 47084M 10719M 36365M 22.77 1.00 224
             TOTAL 47084M 10719M 36365M 22.77
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
[root@overcloud-controller-0 ~]# glance image-list
+--------------------------------------+-------------+
| ID                                   | Name        |
+--------------------------------------+-------------+
| faa02ab1-2587-427e-a148-3994d3154065 | VF24Cloud   |
| fd959a88-0a2c-4fd1-b643-d40487b8ea6b | XenialCloud |
+--------------------------------------+-------------+
[root@overcloud-controller-0 ~]# rbd -p images ls
faa02ab1-2587-427e-a148-3994d3154065
fd959a88-0a2c-4fd1-b643-d40487b8ea6b
[root@overcloud-controller-0 ~]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name         | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| e14cdb4d-e413-452c-95cf-abc5646cd0ea | in-use | XenialVolume | 7    | -           | true     | 1264533d-2c14-4126-bbd8-38775a9df1d5 |
| fde55b71-ccf0-4b19-b744-6f656fd3d2da | in-use | vf24volume   | 7    | -           | true     | 22361b1a-77a6-43a2-bf5d-329efe1279dc |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
[root@overcloud-controller-0 ~]# rbd -p volumes ls
volume-e14cdb4d-e413-452c-95cf-abc5646cd0ea
volume-fde55b71-ccf0-4b19-b744-6f656fd3d2da
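Note the naming convention that makes the two listings line up: each Cinder volume is stored as an RBD image called volume-<cinder-id> in the volumes pool, just as each Glance image is stored under its bare image ID in the images pool. A quick cross-check (a sketch that relies on the cinder list table layout above):
[root@overcloud-controller-0 ~]# cinder list | awk '$4 == "in-use" {print "volume-" $2}'    # expected RBD names ...
[root@overcloud-controller-0 ~]# rbd -p volumes ls                                          # ... should match these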
[root@overcloud-controller-0 ~]# ceph health detail --format=json-pretty
{
    "health": {
        "health_services": [
            {
                "mons": [
                    {
                        "name": "overcloud-controller-1",
                        "kb_total": 48214840,
                        "kb_used": 8089300,
                        "kb_avail": 40125540,
                        "avail_percent": 83,
                        "last_updated": "2016-11-14 14:25:11.326142",
                        "store_stats": {
                            "bytes_total": 15784212,
                            "bytes_sst": 14735620,
                            "bytes_log": 983040,
                            "bytes_misc": 65552,
                            "last_updated": "0.000000"
                        },
                        "health": "HEALTH_OK"
                    },
                    {
                        "name": "overcloud-controller-0",
                        "kb_total": 48214840,
                        "kb_used": 8601580,
                        "kb_avail": 39613260,
                        "avail_percent": 82,
                        "last_updated": "2016-11-14 14:25:18.765405",
                        "store_stats": {
                            "bytes_total": 15784221,
                            "bytes_sst": 14735629,
                            "bytes_log": 983040,
                            "bytes_misc": 65552,
                            "last_updated": "0.000000"
                        },
                        "health": "HEALTH_OK"
                    },
                    {
                        "name": "overcloud-controller-2",
                        "kb_total": 48214840,
                        "kb_used": 8088904,
                        "kb_avail": 40125936,
                        "avail_percent": 83,
                        "last_updated": "2016-11-14 14:25:15.004109",
                        "store_stats": {
                            "bytes_total": 15784221,
                            "bytes_sst": 14735629,
                            "bytes_log": 983040,
                            "bytes_misc": 65552,
                            "last_updated": "0.000000"
                        },
                        "health": "HEALTH_OK"
                    }
                ]
            }
        ]
    },
    "timechecks": {
        "epoch": 6,
        "round": 68,
        "round_status": "finished",
        "mons": [
            {
                "name": "overcloud-controller-1",
                "skew": 0.000000,
                "latency": 0.000000,
                "health": "HEALTH_OK"
            },
            {
                "name": "overcloud-controller-0",
                "skew": 0.000000,
                "latency": 0.003400,
                "health": "HEALTH_OK"
            },
            {
                "name": "overcloud-controller-2",
                "skew": 0.000000,
                "latency": 0.005235,
                "health": "HEALTH_OK"
            }
        ]
    },
    "summary": [],
    "overall_status": "HEALTH_OK",
    "detail": []
}
[root@overcloud-controller-0 ~]# ceph mon_status
{"name":"overcloud-controller-2","rank":2,"state":"peon","election_epoch":6,"quorum":[0,1,2],"outside_quorum":[],"extra_probe_peers":["172.16.1.5:6789\/0","172.16.1.6:6789\/0"],"sync_provider":[],"monmap":{"epoch":1,"fsid":"b21584d4-aa5b-11e6-bf47-525400121514","modified":"2016-11-14 11:38:10.770478","created":"2016-11-14 11:38:10.770478","mons":[{"rank":0,"name":"overcloud-controller-1","addr":"172.16.1.5:6789\/0"},{"rank":1,"name":"overcloud-controller-0","addr":"172.16.1.6:6789\/0"},{"rank":2,"name":"overcloud-controller-2","addr":"172.16.1.11:6789\/0"}]}}
[root@overcloud-controller-0 ~]# ceph quorum_status
{"election_epoch":6,"quorum":[0,1,2],"quorum_names":["overcloud-controller-1","overcloud-controller-0","overcloud-controller-2"],"quorum_leader_name":"overcloud-controller-1","monmap":{"epoch":1,"fsid":"b21584d4-aa5b-11e6-bf47-525400121514","modified":"2016-11-14 11:38:10.770478","created":"2016-11-14 11:38:10.770478","mons":[{"rank":0,"name":"overcloud-controller-1","addr":"172.16.1.5:6789\/0"},{"rank":1,"name":"overcloud-controller-0","addr":"172.16.1.6:6789\/0"},{"rank":2,"name":"overcloud-controller-2","addr":"172.16.1.11:6789\/0"}]}}