Tuesday, October 27, 2015

VRRP four nodes setup on RDO Liberty (CentOS 7.1)

UPDATE 10/28/2015

    I clearly understand that, following an advanced RH technology such as RDO Manager, I am supposed to set up 1 VM for the undercloud, 3 VMs for the HA overcloud Controller, and 2 VMs for overcloud Compute Nodes; so 6 VMs should be installed on a desktop box. One person wrote (on the RDO mailing list) that the only problem is 32 GB of RAM and that he is ready to go with an i7 4770 Haswell CPU. Running the sample below (just 4 VMs) on a 32 GB desktop with an i7 4790, I was experiencing performance issues not due to memory swap, but due to the 4-core limitation of any i7 Haswell CPU. I am forced to run packstack for POC tasks due to the insufficient power of desktop Haswell CPUs. Actually, a CPU like the Intel® Xeon® Processor E5-2690 (8 cores, 16 threads) would allow testing virtual configs of RDO Manager.

END UPDATE

   The sample below demonstrates uninterrupted access, provided via an HA Neutron router, to cloud VMs running on the Compute node, while the two installed Network Nodes swap MASTER and BACKUP roles (as members of a keepalived pair).

Since Lxer's moderators flagged my earlier use of "uninterrupted access" with a "(sic)", I have to note that some downtime (several minutes) obviously always occurs, required for the BACKUP box to come to the MASTER state.

    The following is a brief instruction for a 4 node deployment test (Controller && 2xNetwork && Compute) on RDO Liberty (CentOS 7.1), which was performed on a Fedora 21 host with KVM/Libvirt Hypervisor (32 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Four VMs (4 GB RAM, 4 VCPUS) have been set up:
the Controller VM with one VNIC (management subnet), the 2x Network Node VMs with three VNICs each (management, vteps, external subnets), and the Compute Node VM with two VNICs (management, vteps subnets).

Setup :-

192.169.142.127 - Controller Node
192.169.142.147,192.169.142.157 - Network Nodes
192.169.142.137 - Compute Node


*******************************************
Three Libvirt networks created
*******************************************

# cat openstackvms.xml

<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>


# cat external.xml

<network>
   <name>external</name>
   <uuid>d1e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='172.24.4.225' netmask='255.255.255.240'>
     <dhcp>
       <range start='172.24.4.226' end='172.24.4.238' />
     </dhcp>
   </ip>
 </network>

# cat vteps.xml

<network>
   <name>vteps</name>
   <uuid>d2e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr3' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.1' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>
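
The networks can then be defined and started with virsh, e.g. (a minimal sketch; same pattern for each of the three XML files above):

# virsh net-define openstackvms.xml
# virsh net-autostart openstackvms.xml
# virsh net-start openstackvms.xml

Repeat for external.xml and vteps.xml, then verify: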


# virsh net-list

 Name                 State      Autostart     Persistent
--------------------------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 external             active     yes           yes
 vteps                active     yes           yes


*********************************************************************************************************
1. The first Libvirt subnet "openstackvms" serves as the management network.
All 4 VMs are attached to this subnet.
*********************************************************************************************************
2. The second Libvirt subnet "external" simulates the external network.
Both Network Nodes are attached to "external"; later on, their "eth2" interfaces (which belong to "external") are converted into OVS ports of br-ex on each Network Node. Via bridge virbr2 (172.24.4.225) this Libvirt subnet provides VMs running on the Compute Node with access to the Internet, since it matches the external network 172.24.4.224/28 created by the packstack installation.
*********************************************************************************************************
3. The third Libvirt subnet "vteps" simulates the VTEP endpoints. The Network and Compute Nodes are attached to this subnet.
*********************************************************************************************************

***************************************
Answer file (answer4Node.txt)
***************************************
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_SAHARA_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147,192.169.142.157

CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_USE_SUBNETS=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_SAHARA_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-127.ip.secureserver.net
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-127.ip.secureserver.net
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_DB_PURGE_ENABLE=True
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_API_VERSION=v2.0
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://192.169.142.127
CONFIG_KEYSTONE_LDAP_USER_DN=
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
CONFIG_KEYSTONE_LDAP_SUFFIX=
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
CONFIG_KEYSTONE_LDAP_USER_FILTER=
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_DB_PURGE_ENABLE=True
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES=
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
CONFIG_MANILA_BACKEND=generic
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
CONFIG_MANILA_NETAPP_LOGIN=admin
CONFIG_MANILA_NETAPP_PASSWORD=
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_MANILA_NETAPP_SERVER_PORT=443
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
CONFIG_MANILA_NETAPP_VSERVER=
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
CONFIG_MANILA_NETWORK_TYPE=neutron
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
CONFIG_MANILA_GLUSTERFS_SERVERS=
CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=
CONFIG_MANILA_GLUSTERFS_TARGET=
CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=
CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster
CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
CONFIG_NOVA_DB_PURGE_ENABLE=True
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager
CONFIG_VNC_SSL_CERT=
CONFIG_VNC_SSL_KEY=
CONFIG_NOVA_COMPUTE_PRIVIF=
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_VPNAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=33cade531a764c858e4e6c22488f379f
CONFIG_HORIZON_SSL_CERT=
CONFIG_HORIZON_SSL_KEY=
CONFIG_HORIZON_SSL_CACERT=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=09e304c52d714220
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_OVS_BRIDGE=n
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_REDIS_MASTER_HOST=192.169.142.127
CONFIG_REDIS_PORT=6379
CONFIG_REDIS_HA=n
CONFIG_REDIS_SLAVE_HOSTS=
CONFIG_REDIS_SENTINEL_HOSTS=
CONFIG_REDIS_SENTINEL_CONTACT_HOST=
CONFIG_REDIS_SENTINEL_PORT=26379
CONFIG_REDIS_SENTINEL_QUORUM=2
CONFIG_REDIS_MASTER_NAME=mymaster
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_NOVA_USER=trove
CONFIG_TROVE_NOVA_TENANT=services
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER
CONFIG_NAGIOS_PW=02f168ee8edd44e4
**************************************
At this point run on Controller:-
**************************************
 # yum -y  install centos-release-openstack-liberty
 # yum -y  install openstack-packstack
 # packstack --answer-file=./answer4Node.txt
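
Once packstack completes, a quick sanity check on the Controller might look as follows (a sketch; packstack drops keystonerc_admin into /root):

 # source keystonerc_admin
 # nova service-list        # nova-compute should be up on 192.169.142.137
 # neutron agent-list       # L3/DHCP/OVS agents should be alive on both Network Nodes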

***********************************************************
Upon completion on Network node 192.169.142.147
***********************************************************
[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.229"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***********************************************************
Upon completion on Network node 192.169.142.157
***********************************************************
[root@ip-192-169-142-157 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.230"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-157 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next steps to be performed on both Network Nodes :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart
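
After the restart it is worth verifying that br-ex picked up the external IP and that eth2 became its OVS port (a sketch, to be run on each Network Node):

# ip addr show br-ex | grep "inet "
# ovs-vsctl list-ports br-ex       # should list eth2
# ping -c 2 172.24.4.225           # the virbr2 gateway on the host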

*****************************************************
On each Network Node
*****************************************************
# systemctl start keepalived
# systemctl enable keepalived
****************************************************************************
On Controller and both Network Nodes
Update /etc/neutron/neutron.conf as follows
****************************************************************************
[DEFAULT]
router_distributed = False
l3_ha = True
max_l3_agents_per_router = 2
dhcp_agents_per_network = 2
*****************************************************************************

Restart all nodes.
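
If a full reboot is not desired, restarting the Neutron services should be sufficient (a sketch; service names as shipped with RDO Liberty):

On the Controller:
# systemctl restart neutron-server

On both Network Nodes:
# systemctl restart neutron-l3-agent neutron-dhcp-agent neutron-openvswitch-agent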


******************************************************************
Creating HA Neutron Router belonging to tenant demo
******************************************************************
[root@ip-192-169-142-127 ~(keystone_admin)]# python
Python 2.7.5 (default, Jun 24 2015, 00:41:19)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from keystoneclient.v2_0 import client
>>> token = '3ad2de159f9649afb0c342ba57e637d9'
>>> endpoint = 'http://192.169.142.127:35357/v2.0'
>>> keystone = client.Client(token=token, endpoint=endpoint)
>>> keystone.tenants.list()
[<Tenant {u'enabled': True, u'description': u'Tenant for the openstack services', u'name': u'services', u'id': u'20d1f633cb384e07b9019cb01ee9f02c'}>, <Tenant {u'enabled': True, u'description': u'admin tenant', u'name': u'admin', u'id': u'cce9a541723a4c26b70b746bab051f6c'}>, <Tenant {u'enabled': True, u'description': u'default tenant', u'name': u'demo', u'id': u'd9d06a467fb54b6e9612cbb1a245c370'}>]
>>>

# neutron router-create --ha True --tenant_id  d9d06a467fb54b6e9612cbb1a245c370 RouterHA

Attach demo_network and the external network to RouterHA, for example as sketched below.
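
A minimal sketch of this step, assuming the packstack-provisioned external network is named public and the demo tenant subnet is named private_subnet (substitute whatever names neutron net-list and neutron subnet-list actually report):

# neutron router-gateway-set RouterHA public
# neutron router-interface-add RouterHA private_subnet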


[root@ip-192-169-142-127 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterHA
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| id                                   | host                                   | admin_state_up | alive | ha_state |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| 1e8aec09-e4a4-473a-91c7-9771e0499b1c | ip-192-169-142-157.ip.secureserver.net | True           | :-)   | active   |
| 33b5ec51-33b6-49ee-b5bf-1c66c283b818 | ip-192-169-142-147.ip.secureserver.net | True           | :-)   | standby  |
+--------------------------------------+----------------------------------------+----------------+-------+----------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list | grep "147"
| 30c38f80-4dee-4144-a2aa-a088629f33fb | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
| 33b5ec51-33b6-49ee-b5bf-1c66c283b818 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 8390e450-c5ff-4697-aff3-7cfd66873055 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| d01a0e08-31ab-41d9-bf4b-11888d82bc41 | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list | grep "157"
| 1e8aec09-e4a4-473a-91c7-9771e0499b1c | L3 agent           | ip-192-169-142-157.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 84ce6181-1eaa-445b-8f14-e865c3658bad | DHCP agent         | ip-192-169-142-157.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| bf54ed7a-e478-4e0f-b38a-612cc89af26c | Open vSwitch agent | ip-192-169-142-157.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| f1a1d7fc-6cc2-44c0-9254-367d9dcbb74c | Metadata agent     | ip-192-169-142-157.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-show RouterHA
+-----------------------+------------------------------------------------------------------------------+
| Field                 | Value                                                                        |
+-----------------------+------------------------------------------------------------------------------+
| admin_state_up        | True                                                                         |
| distributed           | False                                                                        |
| external_gateway_info | {"network_id": "b87a1cdf-8635-424b-b986-347aa1b2d4a7", "enable_snat": true,  |
|                       | "external_fixed_ips": [{"subnet_id": "65472e1a-f6ff-4549-b7e8-ab2010b88c69", |
|                       | "ip_address": "172.24.4.227"}]}                                              |
| ha                    | True                                                                         |
| id                    | a4bdf550-76a5-4069-9d03-075b8668f3c5                                         |
| name                  | RouterHA                                                                     |
| routes                |                                                                              |
| status                | ACTIVE                                                                       |
| tenant_id             | d9d06a467fb54b6e9612cbb1a245c370                                             |
+-----------------------+------------------------------------------------------------------------------+

Verify the VRRP advertisements coming from the master node's HA interface IP address on the corresponding network interface:
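
For example (a sketch; the router id comes from neutron router-show above, and the ha-xxxxxxxx-xx interface name can be taken from ovs-vsctl show on the node; VRRP is IP protocol 112):

# ip netns exec qrouter-a4bdf550-76a5-4069-9d03-075b8668f3c5 tcpdump -n -i ha-3c63186b-f7 ip proto vrrp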


Failover test sequence (screenshots):

- Verification of the Neutron services status on each of the Network Nodes.
- Running VMs.
- Connectivity verification.
- Current MASTER is 192.169.142.157.
- MASTER 192.169.142.157 stopped; 192.169.142.147 changes state from BACKUP to MASTER.
- Connectivity verification to 172.24.4.231.
- 192.169.142.157 brought up again.
- 192.169.142.157 goes to MASTER state again after 192.169.142.147 is shut down.

**************************************
Network node 192.169.142.147
**************************************
[root@ip-192-169-142-147 ~]# ovs-vsctl show
5b798479-567a-4d14-bbb7-d014e001307c
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a00009d"
            Interface "vxlan-0a00009d"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.157"}

        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}

        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port "tap9b85b5b7-4c"
            tag: 2
            Interface "tap9b85b5b7-4c"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qr-299a4e77-af"
            tag: 2
            Interface "qr-299a4e77-af"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "ha-3c63186b-f7"
            tag: 1
            Interface "ha-3c63186b-f7"
                type: internal
    Bridge br-ex
        Port "qg-c88a6f64-88"
            Interface "qg-c88a6f64-88"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    ovs_version: "2.4.0"
**************************************
Network node 192.169.142.157
**************************************
[root@ip-192-169-142-157 ~]# ovs-vsctl show
15fa30fd-6900-4de7-ac1b-69760ccdfa4f
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"
        Port "qg-c88a6f64-88"
            Interface "qg-c88a6f64-88"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.157", out_key=flow, remote_ip="10.0.0.137"}

        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000093"
            Interface "vxlan-0a000093"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.157", out_key=flow, remote_ip="10.0.0.147"}

    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "ha-083e9c72-69"
            tag: 2
            Interface "ha-083e9c72-69"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qr-299a4e77-af"
            tag: 1
            Interface "qr-299a4e77-af"
                type: internal
    ovs_version: "2.4.0"


Sunday, October 25, 2015

Quick VRRP verification on RDO Liberty (CentOS 7.1)

 The sample below demonstrates uninterrupted access, provided via an HA Neutron router, to a cloud VM running on the second Compute node, while the Controller and the (L3-router enabled) first Compute node swap MASTER and BACKUP roles (as members of a keepalived pair).
Convert the DVR configuration built in RDO Liberty DVR Neutron workflow on CentOS 7.1 in the same way as was done in [1].

Setup configuration
- Controller node: Nova, Keystone, Cinder, Glance,
  Neutron (using Open vSwitch plugin && VXLAN)
- (2x) Compute node: Nova (nova-compute),
  Neutron (openvswitch-agent, l3-agent, metadata-agent)
*****************************************************
On Controller and first Compute Node
*****************************************************
# yum install keepalived
*************************************************************************
Stop and disable neutron-l3-agent on Second Compute Node
Update /etc/neutron/neutron.conf as follows
**************************************************************************
[DEFAULT]
router_distributed = False
l3_ha = True
max_l3_agents_per_router = 2
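
The stop/disable step mentioned in the header above, as a minimal sketch (run on the second Compute Node):

# systemctl stop neutron-l3-agent
# systemctl disable neutron-l3-agent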

*****************************************************************
Switch agent_mode to legacy on all nodes
Update /etc/neutron/plugins/ml2/openvswitch_agent.ini
*****************************************************************
[agent]
enable_distributed_routing = False

Restart all nodes (or at least the Neutron services, as shown earlier).

************************************************************
Create HA router belonging to tenant demo
*************************************************************
[root@ip-192-169-142-127 ~(keystone_admin)]# python
Python 2.7.5 (default, Jun 24 2015, 00:41:19)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from keystoneclient.v2_0 import client
>>> token = '3ad2de159f9649afb0c342ba57e637d9'
>>> endpoint = 'http://192.169.142.127:35357/v2.0'
>>> keystone = client.Client(token=token, endpoint=endpoint)
>>> keystone.tenants.list()
[<Tenant {u'enabled': True, u'description': u'default tenant', u'name': u'demo', u'id': u'0d166b0ff5fb40a2bf6453e81b27962e'}>, <Tenant {u'enabled': True, u'description': u'admin tenant', u'name': u'admin', u'id': u'21e6a247384f4208a70983d852562cc7'}>, <Tenant {u'enabled': True, u'description': u'Tenant for the openstack services', u'name': u'services', u'id': u'ea97cf808f664f7f8d8810ab164de9ec'}>]
>>>

# neutron router-create --ha True --tenant_id  0d166b0ff5fb40a2bf6453e81b27962e RouterHA

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-list
+--------------------------------------+----------+------------------------------------------------------------------------------+-------------+------+
| id                                   | name     | external_gateway_info                                                        | distributed | ha   |
+--------------------------------------+----------+------------------------------------------------------------------------------+-------------+------+
| 3d4a0d41-5838-49bd-b691-ecc9946d6e19 | RouterHA | {"network_id": "1b202547-e1de-4c35-86a9-3119d6844f88", "enable_snat": true,  | False       | True |
|                                      |          | "external_fixed_ips": [{"subnet_id": "e6473e85-5a4c-4eea-a42b-3a63def678c5", |             |      |
|                                      |          | "ip_address": "192.169.142.159"}]}                                           |             |      |
+--------------------------------------+----------+------------------------------------------------------------------------------+-------------+------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-show RouterHA
+-----------------------+------------------------------------------------------------------------------+
| Field                 | Value                                                                        |
+-----------------------+------------------------------------------------------------------------------+
| admin_state_up        | True                                                                         |
| distributed           | False                                                                        |
| external_gateway_info | {"network_id": "1b202547-e1de-4c35-86a9-3119d6844f88", "enable_snat": true,  |
|                       | "external_fixed_ips": [{"subnet_id": "e6473e85-5a4c-4eea-a42b-3a63def678c5", |
|                       | "ip_address": "192.169.142.159"}]}                                           |
| ha                    | True                                                                         |
| id                    | 3d4a0d41-5838-49bd-b691-ecc9946d6e19                                         |
| name                  | RouterHA                                                                     |
| routes                |                                                                              |
| status                | ACTIVE                                                                       |
| tenant_id             | 0d166b0ff5fb40a2bf6453e81b27962e                                             |
+-----------------------+------------------------------------------------------------------------------+

Attach the public && private networks to RouterHA (see the sketch after the listing below).


# neutron net-list

+--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
| id                                   | name                                               | subnets                                               |
+--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
| 1b202547-e1de-4c35-86a9-3119d6844f88 | public                                             | e6473e85-5a4c-4eea-a42b-3a63def678c5 192.169.142.0/24 |
| 596eb520-da47-41a7-bfc1-8ace58d7ee98 | HA network tenant 0d166b0ff5fb40a2bf6453e81b27962e | c7d12fde-47f4-4744-bc88-78a4a7e91755 169.254.192.0/18 |
| 267c9192-29e2-41e2-8db4-826a6155dec9 | demo_network                                       | 89704ab3-5535-4c87-800e-39255a0a11d9 50.0.0.0/24      |
+--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
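
Given the ids above, the attach step might look like this (a sketch; router-interface-add accepts the subnet id of demo_network):

# neutron router-gateway-set RouterHA public
# neutron router-interface-add RouterHA 89704ab3-5535-4c87-800e-39255a0a11d9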

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-port-list RouterHA
+--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name                                            | mac_address       | fixed_ips                                                                              |
+--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+
| 0a823561-8ce6-4c7d-8943-525e74f61210 |                                                 | fa:16:3e:14:ca:12 | {"subnet_id": "e6473e85-5a4c-4eea-a42b-3a63def678c5", "ip_address": "192.169.142.159"} |
| 1981fd35-3025-45ff-a6e5-ab5bc7d8af3e | HA port tenant 0d166b0ff5fb40a2bf6453e81b27962e | fa:16:3e:b8:d6:14 | {"subnet_id": "c7d12fde-47f4-4744-bc88-78a4a7e91755", "ip_address": "169.254.192.2"}   |
| 4b4ac14c-a3a9-4fc0-9c3a-36d0ae1f4b11 | HA port tenant 0d166b0ff5fb40a2bf6453e81b27962e | fa:16:3e:c5:b2:4b | {"subnet_id": "c7d12fde-47f4-4744-bc88-78a4a7e91755", "ip_address": "169.254.192.1"}   |
| 6d989cb9-dfc8-4e08-8629-3c1186268511 |                                                 | fa:16:3e:cf:e2:a0 | {"subnet_id": "89704ab3-5535-4c87-800e-39255a0a11d9", "ip_address": "50.0.0.1"}        |
+--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+

***************************************************************************
Start up configuration. Compute Node 1 is in MASTER STATE
***************************************************************************
Pinging running VM (FIP is 192.169.142.153 ) 

[boris@fedora21wks01 Downloads]$ ping 192.169.142.153
PING 192.169.142.153 (192.169.142.153) 56(84) bytes of data.
64 bytes from 192.169.142.153: icmp_seq=2 ttl=63 time=0.608 ms
64 bytes from 192.169.142.153: icmp_seq=3 ttl=63 time=0.402 ms
64 bytes from 192.169.142.153: icmp_seq=4 ttl=63 time=0.452 ms



*************************************************************************
Compute Node 1 shutdown. Controller went to MASTER STATE
*************************************************************************
Pinging running VM (FIP is 192.169.142.153)

[boris@fedora21wks01 Downloads]$ ping 192.169.142.153
PING 192.169.142.153 (192.169.142.153) 56(84) bytes of data.
64 bytes from 192.169.142.153: icmp_seq=10 ttl=63 time=0.568 ms
64 bytes from 192.169.142.153: icmp_seq=12 ttl=63 time=0.724 ms
64 bytes from 192.169.142.153: icmp_seq=13 ttl=63 time=0.448 ms


[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-3d4a0d41-5838-49bd-b691-ecc9946d6e19 ip a | grep "inet "
    inet 127.0.0.1/8 scope host lo
    inet 169.254.192.2/18 brd 169.254.255.255 scope global ha-1981fd35-30
    inet 169.254.0.1/24 scope global ha-1981fd35-30
    inet 50.0.0.1/24 scope global qr-6d989cb9-df
    inet 192.169.142.153/32 scope global qg-0a823561-8c
    inet 192.169.142.159/24 scope global qg-0a823561-8c

*******************************************
Compute Node 1 brought up again
*******************************************

*******************************************************************
Controller (192.169.142.127) has been rebooted
*******************************************************************

**************************************************************************
Now Compute Node 1 goes to MASTER STATE again
**************************************************************************
[root@ip-192-169-142-147 ~]# systemctl restart neutron-l3-agent

[boris@fedora21wks01 Downloads]$ ping 192.169.142.153
PING 192.169.142.153 (192.169.142.153) 56(84) bytes of data.
64 bytes from 192.169.142.153: icmp_seq=22 ttl=63 time=0.640 ms
64 bytes from 192.169.142.153: icmp_seq=23 ttl=63 time=0.553 ms
64 bytes from 192.169.142.153: icmp_seq=24 ttl=63 time=0.516 ms
 

On Controller :-

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-3d4a0d41-5838-49bd-b691-ecc9946d6e19 ip a |grep "inet "
    inet 127.0.0.1/8 scope host lo
    inet 169.254.192.2/18 brd 169.254.255.255 scope global ha-1981fd35-30
[root@ip-192-169-142-127 ~(keystone_admin)]# ssh 192.169.142.147
Last login: Sun Oct 25 12:50:43 2015

On Compute :-

[root@ip-192-169-142-147 ~]# ip netns exec qrouter-3d4a0d41-5838-49bd-b691-ecc9946d6e19 ip a |grep "inet "
    inet 127.0.0.1/8 scope host lo
    inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-4b4ac14c-a3
    inet 169.254.0.1/24 scope global ha-4b4ac14c-a3
    inet 50.0.0.1/24 scope global qr-6d989cb9-df
    inet 192.169.142.153/32 scope global qg-0a823561-8c
    inet 192.169.142.159/24 scope global qg-0a823561-8c

*********************************
Keepalived status :-
*********************************


Generated keepalived.conf:
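
The L3 agent writes the generated configuration under its state path; assuming the default state_path of /var/lib/neutron, it can be inspected with (a sketch):

# cat /var/lib/neutron/ha_confs/3d4a0d41-5838-49bd-b691-ecc9946d6e19/keepalived.conf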

  

Thursday, October 22, 2015

RDO Liberty / Mitaka Set up for three Nodes (Controller+Network+Compute) ML2&OVS&VXLAN on CentOS 7.2

Update 11/02/2017

                                              It's hard to know what the right thing is.
                                             Once you know it's hard not to do it.
                                                 Harry Fertig (Kingsley,The Confession film 1999)


Currently "Views count" hit 5873 Please, be aware of in oncoming Newton RDO release you are supposed to see following output. Yes, it does work and in fact drops packstack as tool for production deployments ( storage.pp is missing ). No matter do you clearly understand what you are doing or don't.

[root@Server72CentOS templates]# pwd
/usr/lib/python2.7/site-packages/packstack/puppet/templates

[root@Server72CentOS templates]# ls -l
total 16
-rw-r--r--. 1 root root 2204 Aug  9 12:33 compute.pp
-rw-r--r--. 1 root root 5918 Aug  9 12:33 controller.pp
-rw-r--r--. 1 root root 1516 Aug  9 12:33 network.pp 

Further reading makes sense for RDO Mitaka; otherwise you may quit at this point and enjoy TripleO deployments on bare metal, which are supposed to make you really happy, specifically when you attempt a VLAN or DVR overcloud setup. I believe that in the meantime RH is actually forcing people to

shoot sparrows with a cannon,

presuming that customers (RDO community members) cannot responsibly decide on their own when switching to TripleO (TripleO QuickStart) makes sense, since it really provides huge benefits (a PCS/Corosync HA Controller cluster, automated deployment of Ceph cluster nodes, i.e. invoking python-tripleoclient, which in turn performs the overcloud Heat stack deployment from the pre-required undercloud node), and when a simple Controller + N*Compute + Storage cluster might be painlessly deployed by packstack, with no IPMI requirements for the boxes on the landscape.

Per https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface 

The Intelligent Platform Management Interface (IPMI) is a set of computer interface specifications for an autonomous computer subsystem that provides management and monitoring capabilities independently of the host system's CPU, firmware (BIOS or UEFI) and operating system. IPMI defines a set of interfaces used by system administrators for out-of-band management of computer systems and monitoring of their operation. For example, IPMI provides a way to manage a computer that may be powered off or otherwise unresponsive by using a network connection to the hardware rather than to an operating system or login shell.  

See also :- TripleO Installer, Production Ready?

http://alesnosek.com/blog/2017/01/15/tripleo-installer-production-ready/
  
END UPDATE

 

UPDATE 06/01/2016


 Currently "Views count" hit 2522. Been written on 10/22/2015 ( just after RDO
 Liberty release ) this post obviously stays out of all my other writings.

 Another post RDO Liberty DVR Neutron workflow on CentOS 7.2
http://dbaxps.blogspot.com/2015/10/rdo-liberty-rc-dvr-deployment.html
hit "Views count" 2127 been written on 10/11/2015.

 Actually, I see only one important point: this post clearly explains how the external OVS bridge br-ex and the corresponding port eth2 are supposed to be configured on a Network or Controller/Network Node to provide inbound/outbound connectivity.

I have to note that in the meantime packstack does support ML2&OVS&VLAN deployments on RDO Mitaka and RDO Liberty. On Mitaka the following block is responsible for the correct VLAN configuration on Compute Nodes, assuming that eth1 carries the Controller/Network && Compute Node VLAN vm/data connectivity:

CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan,flat
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:100:200
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=
CONFIG_NEUTRON_ML2_VXLAN_GROUP=
CONFIG_NEUTRON_ML2_VNI_RANGES=
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_ML2_SUPPORTED_PCI_VENDOR_DEVS=['15b3:1004', '8086:10ca']
CONFIG_NEUTRON_ML2_SRIOV_AGENT_REQUIRED=n
CONFIG_NEUTRON_ML2_SRIOV_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1
CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-eth1


Regarding TripleO pros and cons see:
http://alesnosek.com/blog/2016/03/27/tripleo-installer-the-good/
I share the official opinion that TripleO is to become the core tool for production deployments. The core issue for RDO packstack in production is its inability to set up a 3-node HA Controller cluster; this problem is finally supposed to be solved via TripleO.

 END UPDATE 


UPDATE 01/23/2016


Adding new Compute Node to Cluster && Getting EXCLUDE_SERVERS to work

See https://bugzilla.redhat.com/show_bug.cgi?id=1254389
Issue already seems to be fixed for RDO Liberty.
Check the installed version of openstack-packstack, then download
openstack-packstack-7.0.0-0.7.dev1661.gaf13b7e.el7.src.rpm

I was able to rebuild the mentioned src.rpm with the patch
https://review.openstack.org/#/c/257033/

[root@ip-192-169-142-127 SPECS]# rpmbuild -bb ./openstack-packstack.spec
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.4vcKlK
Patch #0 (0001-Add-symlink-to-support-hiera-3.0.patch):
patching file packstack/modules/ospluginutils.py
Patch #1 (0002-Do-not-enable-EPEL-when-installing-RDO.patch):
patching file packstack/plugins/prescript_000.py
Hunk #1 succeeded at 1106 (offset 14 lines).
Patch #2 (003-Fix-exclude-server.patch):
patching file docs/packstack.rst
Hunk #1 succeeded at 838 (offset -19 lines).
patching file packstack/plugins/neutron_350.py
Hunk #2 succeeded at 515 (offset -59 lines).
Hunk #3 succeeded at 611 (offset -59 lines).

Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.WJHEIM
running build
running build_py
creating build
creating build/lib
creating build/lib/tests
. . . . . . . 

[root@ip-192-169-142-127 noarch]# cat install.sh
sudo yum install openstack-packstack-7.0.0-1.7.dev1661.gaf13b7e.el7.centos.noarch.rpm \
openstack-packstack-doc-7.0.0-1.7.dev1661.gaf13b7e.el7.centos.noarch.rpm \
openstack-packstack-puppet-7.0.0-1.7.dev1661.gaf13b7e.el7.centos.noarch.rpm

Afterwards you should be able to cleanly add a new node to the cluster, using CONFIG_NEUTRON_OVS_TUNNEL_SUBNETS as a comma-separated list of subnets (for example, 192.168.10.0/24,192.168.11.0/24) used for sending tunneling packets.

Works for me. The first deployment was done with
     CONFIG_NEUTRON_OVS_TUNNEL_SUBNETS=12.0.0.0/24

Adding Compute Node 192.169.142.137

EXCLUDE_SERVERS=192.169.142.127,192.169.142.157
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.157,192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.127
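
Then rerun packstack against the updated answer file (a sketch; assuming it was saved as answer3Node.txt). Hosts listed in EXCLUDE_SERVERS are skipped by the installer, so the existing nodes remain untouched:

# packstack --answer-file=./answer3Node.txt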

END UPDATE


As advertised officially:
 "In addition to the comprehensive OpenStack services, libraries and clients, this release also provides Packstack, a simple installer for proof-of-concept installations, as small as a single all-in-one box, and RDO Manager, an OpenStack deployment and management tool for production environments based on the OpenStack TripleO project."

  In the posting below I intend to test packstack on Liberty performing a classic three node deployment. If packstack succeeds, then post-installation actions like VRRP or DVR setups might be committed as well. One of the real problems for packstack is HA Controller(s) setup. Here RDO Manager is supposed to gain a significant advantage, replacing a lot of manual configuration with a comprehensive CLI.




   The following is a brief instruction for a three node deployment test (Controller && Network && Compute) for RDO Liberty, which was performed on a Fedora 22 host with KVM/Libvirt Hypervisor (16 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 4 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, vteps, external subnets), and the Compute Node VM with two VNICs (management, vteps subnets).

SELINUX stays in enforcing mode.

I avoid using the default libvirt subnet 192.168.122.0/24 for any purposes related to the VMs serving as RDO Liberty nodes; for some reason it causes network congestion when forwarding packets to the Internet and vice versa.

Three Libvirt networks created

# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>


[root@vfedora22wks ~]# cat public.xml
<network>
   <name>public</name>
   <uuid>d1e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='172.24.4.225' netmask='255.255.255.240'>
     <dhcp>
       <range start='172.24.4.226' end='172.24.4.238' />
     </dhcp>
  </ip>
 </network>


[root@vfedora22wks ~]# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d2e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr3' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.1' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>

# virsh net-list
 Name                 State      Autostart     Persistent
--------------------------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 public               active     yes           yes
 vteps                active     yes           yes
*********************************************************************************
1. The first Libvirt subnet "openstackvms" serves as the management network.
All 3 VMs are attached to this subnet.
**********************************************************************************
2. The second Libvirt subnet "public" simulates the external network. The Network Node is attached to "public"; later on, its "eth2" interface (which belongs to "public") is converted into an OVS port of br-ex on the Network Node. Via bridge virbr2 (172.24.4.225) this Libvirt subnet provides VMs running on the Compute Node with access to the Internet, since it matches the external network 172.24.4.224/28 created by the packstack installation.

***********************************************************************************
3. The third Libvirt subnet "vteps" simulates the VTEP endpoints. The Network and Compute Node VMs are attached to this subnet.
***********************************************************************************

*********************
Answer-file :-
*********************
[root@ip-192-169-142-127 ~(keystone_admin)]# cat answer-fileRHTest.txt
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_SAHARA_INSTALL=n
CONFIG_HEAT_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=n
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147

CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_USE_SUBNETS=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_SAHARA_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-127.ip.secureserver.net
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-127.ip.secureserver.net
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_DB_PURGE_ENABLE=True
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_API_VERSION=v2.0
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://192.169.142.127
CONFIG_KEYSTONE_LDAP_USER_DN=
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
CONFIG_KEYSTONE_LDAP_SUFFIX=
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
CONFIG_KEYSTONE_LDAP_USER_FILTER=
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_DB_PURGE_ENABLE=True
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES=
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
CONFIG_MANILA_BACKEND=generic
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
CONFIG_MANILA_NETAPP_LOGIN=admin
CONFIG_MANILA_NETAPP_PASSWORD=
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_MANILA_NETAPP_SERVER_PORT=443
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
CONFIG_MANILA_NETAPP_VSERVER=
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
CONFIG_MANILA_NETWORK_TYPE=neutron
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
CONFIG_MANILA_GLUSTERFS_SERVERS=
CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=
CONFIG_MANILA_GLUSTERFS_TARGET=
CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=
CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster
CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
CONFIG_NOVA_DB_PURGE_ENABLE=True
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager
CONFIG_VNC_SSL_CERT=
CONFIG_VNC_SSL_KEY=
CONFIG_NOVA_COMPUTE_PRIVIF=
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_VPNAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=a25b5ece9db24e2aba8d3a2b4d908ca5
CONFIG_HORIZON_SSL_CERT=
CONFIG_HORIZON_SSL_KEY=
CONFIG_HORIZON_SSL_CACERT=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=9dc9936819ac4e90
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_OVS_BRIDGE=n
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_REDIS_MASTER_HOST=192.169.142.127
CONFIG_REDIS_PORT=6379
CONFIG_REDIS_HA=n
CONFIG_REDIS_SLAVE_HOSTS=
CONFIG_REDIS_SENTINEL_HOSTS=
CONFIG_REDIS_SENTINEL_CONTACT_HOST=
CONFIG_REDIS_SENTINEL_PORT=26379
CONFIG_REDIS_SENTINEL_QUORUM=2
CONFIG_REDIS_MASTER_NAME=mymaster
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_NOVA_USER=trove
CONFIG_TROVE_NOVA_TENANT=services
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER
CONFIG_NAGIOS_PW=PW_PLACEHOLDER

**************************************
At this point run on Controller:-
**************************************
Keep SELINUX=enforcing (RDO Liberty is supposed to handle this).

 # yum -y  install centos-release-openstack-liberty
 # yum -y  install openstack-packstack
 # packstack --answer-file=./answer3Node.txt
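
The answer file above was generated first and then edited for the four-node
layout; a minimal sketch of that step (the key names are standard packstack
answer-file keys, the host values are the ones from the Setup at the top of
this post):

 # packstack --gen-answer-file=answer3Node.txt
 # vi answer3Node.txt
    CONFIG_CONTROLLER_HOST=192.169.142.127
    CONFIG_NETWORK_HOSTS=192.169.142.147,192.169.142.157
    CONFIG_COMPUTE_HOSTS=192.169.142.137

One detail the answer file does not cover: HA (VRRP) routers are controlled
by the neutron server option l3_ha. Setting l3_ha = True in the [DEFAULT]
section of /etc/neutron/neutron.conf on the Controller (and restarting
neutron-server) makes newly created routers HA by default.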

**********************************************************************************
Upon packstack completion, create the following files on each Network Node;
they are designed to match the external network created by the installer
**********************************************************************************
[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.232"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next steps to be performed on the Network Node :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

  The OVS port should be eth2 (the third Ethernet interface on the Network Node).
  In a real deployment the Libvirt bridge virbr2 plays the role of your router to
  the external network. The OVS bridge br-ex should have an IP belonging to the
  external network.
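
A quick sanity check of the resulting wiring on the Network Node (standard
iproute2/OVS commands; the addresses are the ones assigned above):

 # ovs-vsctl list-ports br-ex      # eth2 should be listed among the ports
 # ip addr show br-ex              # should carry 172.24.4.232/28
 # ping -c 2 172.24.4.225          # the virbr2 gateway should answer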

*******************
On Controller :-
*******************

Verify that the Keystone admin API (port 35357) is served by httpd: on Liberty
keystone runs as a WSGI application under Apache rather than as a standalone
eventlet service.
[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -lntp |  grep 35357
tcp6       0      0 :::35357                :::*                    LISTEN      7047/httpd
       
[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 7047
root      7047     1  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
keystone  7089  7047  0 11:22 ?        00:00:07 keystone-admin  -DFOREGROUND
keystone  7090  7047  0 11:22 ?        00:00:02 keystone-main   -DFOREGROUND
apache    7092  7047  0 11:22 ?        00:00:04 /usr/sbin/httpd -DFOREGROUND
apache    7093  7047  0 11:22 ?        00:00:04 /usr/sbin/httpd -DFOREGROUND
apache    7094  7047  0 11:22 ?        00:00:03 /usr/sbin/httpd -DFOREGROUND
apache    7095  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7096  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7097  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7098  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7099  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7100  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7101  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7102  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
root     28963 17739  0 12:51 pts/1    00:00:00 grep --color=auto 7047
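
The same can be exercised end-to-end; a minimal check, assuming the
keystonerc_admin file that packstack writes in root's home directory:

 # source keystonerc_admin
 # openstack token issue

A token coming back confirms that the WSGI processes above are answering.
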
********************
On Network Node
********************

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| 217fb0f5-8dd1-4361-aae7-cc9a7d18d6e4 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 5dabfc17-db64-470c-9f01-8d2297d155f3 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 5e3c6e2f-3f6d-4ede-b058-bc1b317d4ee1 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| f0f02931-e7e6-4b01-8b87-46224cb71e6d | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| f16a5d9d-55e6-47c3-b509-ca445d05d34d | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
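
Since the point of this setup is a router spanning both Network Nodes, it is
worth checking which L3 agents host a given router; a sketch, assuming an HA
router created via "neutron router-create --ha True RouterHA" (RouterHA is a
placeholder name, substitute your router's name or ID):

 # neutron l3-agent-list-hosting-router RouterHA

For an HA router the output should list the L3 agents on both Network Nodes,
one in ha_state "active" (keepalived MASTER) and the other "standby" (BACKUP).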

[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
9221d1c1-008a-464a-ac26-1e0340407714
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"
        Port "qg-1deeaf96-e8"
            Interface "qg-1deeaf96-e8"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "qr-1909e3bb-fd"
            tag: 2
            Interface "qr-1909e3bb-fd"
                type: internal
        Port "tapfdf24cad-f8"
            tag: 2
            Interface "tapfdf24cad-f8"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    ovs_version: "2.4.0"
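
Note that the VXLAN tunnel port name encodes the remote VTEP address in hex:
0a000089 is 10.0.0.137, i.e. the Compute Node on the vtep's subnet. A quick
conversion check:

 # printf '%d.%d.%d.%d\n' 0x0a 0x00 0x00 0x89
 10.0.0.137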

[root@ip-192-169-142-147 ~(keystone_admin)]# dmesg | grep promisc
[    2.233302] device ovs-system entered promiscuous mode
[    2.273206] device br-int entered promiscuous mode
[    2.274981] device qr-838ad1f3-7d entered promiscuous mode
[    2.276333] device tap0f21eab4-db entered promiscuous mode
[    2.312740] device br-tun entered promiscuous mode
[    2.314509] device qg-2b712b60-d0 entered promiscuous mode
[    2.315921] device br-ex entered promiscuous mode
[    2.316022] device eth2 entered promiscuous mode
[   10.704329] device qr-838ad1f3-7d left promiscuous mode
[   10.729045] device tap0f21eab4-db left promiscuous mode
[   10.761844] device qg-2b712b60-d0 left promiscuous mode
[  224.746399] device eth2 left promiscuous mode
[  232.173791] device eth2 entered promiscuous mode
[  232.978909] device tap0f21eab4-db entered promiscuous mode
[  233.690854] device qr-838ad1f3-7d entered promiscuous mode
[  233.895213] device qg-2b712b60-d0 entered promiscuous mode
[ 1253.611501] device qr-838ad1f3-7d left promiscuous mode
[ 1254.017129] device qg-2b712b60-d0 left promiscuous mode
[ 1404.697825] device tapfdf24cad-f8 entered promiscuous mode
[ 1421.812107] device qr-1909e3bb-fd entered promiscuous mode
[ 1422.045593] device qg-1deeaf96-e8 entered promiscuous mode
[ 6111.042488] device tap0f21eab4-db left promiscuous mode



[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ip route
default via 172.24.4.225 dev qg-1deeaf96-e8
50.0.0.0/24 dev qr-1909e3bb-fd  proto kernel  scope link  src 50.0.0.1
172.24.4.224/28 dev qg-1deeaf96-e8  proto kernel  scope link  src 172.24.4.227 
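
Outbound reachability from inside the router namespace can be spot-checked
against the default gateway shown in the route above:

 # ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ping -c 2 172.24.4.225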


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-1deeaf96-e8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.24.4.227  netmask 255.255.255.240  broadcast 172.24.4.239
        inet6 fe80::f816:3eff:fe93:12de  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:93:12:de  txqueuelen 0  (Ethernet)
        RX packets 864432  bytes 1185656986 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 382639  bytes 29347929 (27.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-1909e3bb-fd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:feae:d1e0  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:ae:d1:e0  txqueuelen 0  (Ethernet)
        RX packets 382969  bytes 29386380 (28.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 864601  bytes 1185686714 (1.1 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qdhcp-153edd99-9152-49ad-a445-7280aa9df187 ip route
default via 50.0.0.1 dev tapfdf24cad-f8
50.0.0.0/24 dev tapfdf24cad-f8  proto kernel  scope link  src 50.0.0.10 


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qdhcp-153edd99-9152-49ad-a445-7280aa9df187 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapfdf24cad-f8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 50.0.0.10  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:fe98:c66  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:98:0c:66  txqueuelen 0  (Ethernet)
        RX packets 63  bytes 6445 (6.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 2508 (2.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
16: qr-1909e3bb-fd: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:ae:d1:e0 brd ff:ff:ff:ff:ff:ff
    inet 50.0.0.1/24 brd 50.0.0.255 scope global qr-1909e3bb-fd
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feae:d1e0/64 scope link
       valid_lft forever preferred_lft forever
17: qg-1deeaf96-e8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:93:12:de brd ff:ff:ff:ff:ff:ff
    inet 172.24.4.227/28 brd 172.24.4.239 scope global qg-1deeaf96-e8
       valid_lft forever preferred_lft forever
    inet 172.24.4.229/32 brd 172.24.4.229 scope global qg-1deeaf96-e8
       valid_lft forever preferred_lft forever
    inet 172.24.4.230/32 brd 172.24.4.230 scope global qg-1deeaf96-e8
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe93:12de/64 scope link
       valid_lft forever preferred_lft forever
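
For an HA (VRRP) router, the current keepalived role of each Network Node can
also be read from the state file maintained by neutron-l3-agent; a sketch,
assuming the default state_path and with <router-id> standing for the HA
router's ID:

 # cat /var/lib/neutron/ha_confs/<router-id>/state
 master

The node whose file reads "master" currently holds the VRRP MASTER role; on
the peer Network Node the same file reads "backup", and the contents swap when
keepalived fails over.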