Sunday, October 11, 2015

RDO Liberty DVR Neutron workflow on CentOS 7.2

UPDATE 10/23/2015
   Post updated for final RDO Liberty Release
END UPDATE

Per http://specs.openstack.org/openstack/neutron-specs/specs/juno/neutron-ovs-dvr.html
DVR is supposed to address the following problems that the traditional 3 Node
deployment schema has :-

Problem 1: Intra VM traffic flows through the Network Node
In this case even traffic between VMs that belong to the same tenant
but sit on different subnets has to hit the Network Node to get routed
between the subnets. This affects performance.

Problem 2: VMs with FloatingIP also receive and send packets
through the Network Node routers.
FloatingIP (DNAT) translation is done at the Network Node, and the
external network gateway port is available only at the Network Node.
So any traffic from a VM that is intended for the External Network
has to go through the Network Node.

In this case the Network Node becomes a single point of failure,
and the traffic load on it will be heavy.
This affects performance and scalability.


Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance,
  Neutron (using Open vSwitch plugin && VXLAN)

- (2x) Compute node: Nova (nova-compute),
  Neutron (openvswitch-agent, l3-agent, metadata-agent)


Three CentOS 7.1 VMs (4 GB RAM, 4 VCPU, 2 VNICs) have been built for testing
on a Fedora 22 KVM hypervisor. Two libvirt subnets were used: "openstackvms", emulating the External && Mgmt networks 192.169.142.0/24 with gateway virbr1 (192.169.142.1), and "vteps" 10.0.0.0/24, supporting the VXLAN tunnels between the Controller and Compute Nodes.

# cat openstackvms.xml

<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>

# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.2' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>

# virsh net-define openstackvms.xml
# virsh net-start  openstackvms
# virsh net-autostart  openstackvms

The second libvirt subnet may be defined and started the same way.
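For instance, assuming the vteps.xml file shown above:

# virsh net-define vteps.xml
# virsh net-start  vteps
# virsh net-autostart  vteps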


ip-192-169-142-127.ip.secureserver.net - Controller/Network Node
ip-192-169-142-137.ip.secureserver.net - Compute Node
ip-192-169-142-147.ip.secureserver.net - Compute Node

**************************************
At this point run on Controller:-
**************************************
 # yum -y  install centos-release-openstack-liberty
 # yum -y  install openstack-packstack
 # packstack --answer-file=./answer3Node.txt
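Note: the initial version of the answer file may be generated by packstack itself and then edited to match the values below:

 # packstack --gen-answer-file=./answer3Node.txt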

**********************
Answer File :-
*********************
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_SAHARA_INSTALL=n
CONFIG_HEAT_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=n
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.147
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_USE_SUBNETS=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_SAHARA_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-127.ip.secureserver.net
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-127.ip.secureserver.net
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_DB_PURGE_ENABLE=True
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_API_VERSION=v2.0
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://192.169.142.127
CONFIG_KEYSTONE_LDAP_USER_DN=
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
CONFIG_KEYSTONE_LDAP_SUFFIX=
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
CONFIG_KEYSTONE_LDAP_USER_FILTER=
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_DB_PURGE_ENABLE=True
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES=
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
CONFIG_MANILA_BACKEND=generic
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
CONFIG_MANILA_NETAPP_LOGIN=admin
CONFIG_MANILA_NETAPP_PASSWORD=
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_MANILA_NETAPP_SERVER_PORT=443
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
CONFIG_MANILA_NETAPP_VSERVER=
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
CONFIG_MANILA_NETWORK_TYPE=neutron
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
CONFIG_MANILA_GLUSTERFS_SERVERS=
CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=
CONFIG_MANILA_GLUSTERFS_TARGET=
CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=
CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster
CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
CONFIG_NOVA_DB_PURGE_ENABLE=True
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager
CONFIG_VNC_SSL_CERT=
CONFIG_VNC_SSL_KEY=
CONFIG_NOVA_COMPUTE_PRIVIF=
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_VPNAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=a25b5ece9db24e2aba8d3a2b4d908ca5
CONFIG_HORIZON_SSL_CERT=
CONFIG_HORIZON_SSL_KEY=
CONFIG_HORIZON_SSL_CACERT=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=7dff3f6090f74445
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_OVS_BRIDGE=n
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_REDIS_MASTER_HOST=192.169.142.127
CONFIG_REDIS_PORT=6379
CONFIG_REDIS_HA=n
CONFIG_REDIS_SLAVE_HOSTS=
CONFIG_REDIS_SENTINEL_HOSTS=
CONFIG_REDIS_SENTINEL_CONTACT_HOST=
CONFIG_REDIS_SENTINEL_PORT=26379
CONFIG_REDIS_SENTINEL_QUORUM=2
CONFIG_REDIS_MASTER_NAME=mymaster
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_NOVA_USER=trove
CONFIG_TROVE_NOVA_TENANT=services
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER
CONFIG_NAGIOS_PW=02f168ee8edd44e4


********************************************************
On the Controller (X=2) and Compute nodes (X=3,4) update :-
********************************************************
# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
***************************
Then run the script
***************************
#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot the node

**********************************
General information   ( [3] )
**********************************
Enabling l2pop :-

On the Neutron API node, in the conf file you pass
to the Neutron service (plugin.ini/ml2_conf.ini):
[ml2]
mechanism_drivers = openvswitch,l2population

On each compute node, in the conf file you pass
to the OVS agent (plugin.ini/ml2_conf.ini):
[agent]
l2_population = True

Enable the ARP responder:
On each compute node, in the conf file
you pass to the OVS agent (plugin.ini/ml2_conf.ini):
[agent]
arp_responder = True
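The same settings may also be applied non-interactively; a minimal sketch, assuming the crudini utility (available in the RDO repos) and the default config paths:

# On the Neutron API node
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch,l2population
# On each compute node
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini agent l2_population True
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini agent arp_responder True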

*****************************************
On Controller update neutron.conf
*****************************************
router_distributed = True
dvr_base_mac = fa:16:3f:00:00:00
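For instance, again via crudini, followed by a neutron-server restart to pick up the changes:

crudini --set /etc/neutron/neutron.conf DEFAULT router_distributed True
crudini --set /etc/neutron/neutron.conf DEFAULT dvr_base_mac fa:16:3f:00:00:00
systemctl restart neutron-server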

 [root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = False
agent_mode = dvr_snat
[AGENT]

*********************************
On each Compute Node
*********************************

[root@ip-192-169-142-147 neutron]# cat l3_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
agent_mode = dvr
[AGENT]


 [root@ip-192-169-142-147 neutron]# cat metadata_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
debug = False
auth_url = http://192.169.142.127:5000/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.169.142.127
nova_metadata_port = 8775
nova_metadata_protocol = http
metadata_proxy_shared_secret =a965cd23ed2f4502
metadata_workers =4
metadata_backlog = 4096
cache_url = memory://?default_ttl=5
[AGENT]

[root@ip-192-169-142-147 ml2]# pwd
/etc/neutron/plugins/ml2

[root@ip-192-169-142-147 ml2]# cat ml2_conf.ini | grep -v ^$ | grep -v ^#
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[ml2_type_geneve]
[securitygroup]
enable_security_group = True
# On Compute nodes
[agent]
l2_population=True

********************************************************************************
Please be advised that a command like ( [ 2 ] ) :-
# rsync -av root@192.169.142.127:/etc/neutron/plugins/ml2 /etc/neutron/plugins
run on Liberty Compute Node 192.169.142.147 will overwrite the file
/etc/neutron/plugins/ml2/openvswitch_agent.ini
So, after this command local_ip should be set back to its initial value.
********************************************************************************
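For instance, on 192.169.142.147, whose VTEP address is 10.0.0.147 (see below):

crudini --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 10.0.0.147
systemctl restart neutron-openvswitch-agent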
 [root@ip-192-169-142-147 ml2]# cat openvswitch_agent.ini | grep -v ^#|grep -v ^$
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = True
arp_responder = True
enable_distributed_routing = True
drop_flows_on_start=False
[securitygroup]
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

**********************************************************************************
On each Compute node neutron-l3-agent and neutron-metadata-agent are
supposed to be started via the script below
**********************************************************************************
 #!/bin/bash -x
 yum install  openstack-neutron-ml2  -y ;
 systemctl start neutron-l3-agent ;
 systemctl start neutron-metadata-agent ;
 systemctl restart neutron-openvswitch-agent ;
 systemctl enable neutron-l3-agent ;
 systemctl enable neutron-metadata-agent
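
Afterwards, agent registration may be verified from the Controller:

 # neutron agent-list

Each Compute Node should now report an L3 agent and a Metadata agent in addition to the Open vSwitch agent.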


[root@ip-192-169-142-147 ~]# systemctl | grep openstack
openstack-ceilometer-compute.service                                                loaded active running   OpenStack ceilometer compute agent
openstack-nova-compute.service                                                      loaded active running   OpenStack Nova Compute Server

[root@ip-192-169-142-147 ~]# systemctl | grep neutron
neutron-l3-agent.service                                                            loaded active running   OpenStack Neutron Layer 3 Agent
neutron-metadata-agent.service                                                      loaded active running   OpenStack Neutron Metadata Agent
neutron-openvswitch-agent.service                                                   loaded active running   OpenStack Neutron Open vSwitch Agent
neutron-ovs-cleanup.service                                                         loaded active exited    OpenStack Neutron Open vSwitch Cleanup Utility

******************************************************************************************************** 
When a floating IP gets assigned to a VM, what actually happens ( [1] ) :-
The same explanation may be found in ( [4] ), only the style is not step-by-step; in particular it contains a detailed description of the reverse network flow and the ARP Proxy functionality.
********************************************************************************************************

1. The fip-<netid> namespace is created on the local compute node (if it does not already exist).
2. A new port rfp-<portid> gets created on the qrouter-<routerid> namespace (if it does not already exist).
3. The rfp port on the qrouter namespace is assigned the associated floating IP address.
4. The fpr port on the fip namespace gets created and linked via a point-to-point network to the rfp port of the qrouter namespace.
5. The fip namespace gateway port fg-<portid> is assigned an additional address
   from the public network range to set up the ARP proxy point.
6. The fg-<portid> is configured as a Proxy ARP.
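
Step 6 may be verified on the compute node; a sketch, with the actual namespace and port IDs substituted from `ip netns`:

# ip netns exec fip-<netid> cat /proc/sys/net/ipv4/conf/fg-<portid>/proxy_arp

A value of 1 means proxy ARP is enabled on the port.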

***************************************
Network flow itself  ( [1] ):
***************************************

1. The VM initiating transmission sends a packet via the default gateway,
   and br-int forwards the traffic to the local DVR gateway port (qr-<portid>).
2. DVR routes the packet using the routing table to the rfp-<portid> port.
3. A NAT rule is applied to the packet, replacing the VM's source IP with
   the assigned floating IP; then it gets sent through the rfp-<portid> port,
   which connects to the fip namespace via the point-to-point network
   169.254.31.28/31.
4. The packet is received on the fpr-<portid> port in the fip namespace
   and then routed outside through the fg-<portid> port.
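
The outbound flow may be watched on the external leg of the fip namespace; a sketch, again with the actual IDs substituted:

# ip netns exec fip-<netid> tcpdump -n -i fg-<portid>

Outbound packets should show the floating IP as the source address.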



*********************************************************
In the case of this particular deployment :-
*********************************************************

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron net-list
+--------------------------------------+--------------+-------------------------------------------------------+
| id                                   | name         | subnets                                               |
+--------------------------------------+--------------+-------------------------------------------------------+
| 1b202547-e1de-4c35-86a9-3119d6844f88 | public       | e6473e85-5a4c-4eea-a42b-3a63def678c5 192.169.142.0/24 |
| 267c9192-29e2-41e2-8db4-826a6155dec9 | demo_network | 89704ab3-5535-4c87-800e-39255a0a11d9 50.0.0.0/24      |
+--------------------------------------+--------------+-------------------------------------------------------+


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns
fip-1b202547-e1de-4c35-86a9-3119d6844f88
qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf

[root@ip-192-169-142-147 ~]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf ip rule
0:    from all lookup local
32766:    from all lookup main
32767:    from all lookup default
57480:    from 50.0.0.15 lookup 16
57481:    from 50.0.0.13 lookup 16
838860801:    from 50.0.0.1/24 lookup 838860801

[root@ip-192-169-142-147 ~]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf ip route show table 16
default via 169.254.31.29 dev rfp-51ed47a7-3
 


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf ip route
50.0.0.0/24 dev qr-b0a8a232-ab  proto kernel  scope link  src 50.0.0.1
169.254.31.28/31 dev rfp-51ed47a7-3  proto kernel  scope link  src 169.254.31.28

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf iptables-save -t nat | grep "^-A"|grep l3-agent

-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A neutron-l3-agent-OUTPUT -d 192.169.142.153/32 -j DNAT --to-destination 50.0.0.13
-A neutron-l3-agent-OUTPUT -d 192.169.142.156/32 -j DNAT --to-destination 50.0.0.15

-A neutron-l3-agent-POSTROUTING ! -i rfp-51ed47a7-3 ! -o rfp-51ed47a7-3 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 192.169.142.153/32 -j DNAT --to-destination 50.0.0.13
-A neutron-l3-agent-PREROUTING -d 192.169.142.156/32 -j DNAT --to-destination 50.0.0.15

-A neutron-l3-agent-float-snat -s 50.0.0.13/32 -j SNAT --to-source 192.169.142.153
-A neutron-l3-agent-float-snat -s 50.0.0.15/32 -j SNAT --to-source 192.169.142.156
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
 


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec fip-1b202547-e1de-4c35-86a9-3119d6844f88 ip route

default via 192.169.142.1 dev fg-58e0cabf-07
169.254.31.28/31 dev fpr-51ed47a7-3  proto kernel  scope link  src 169.254.31.29
192.169.142.0/24 dev fg-58e0cabf-07  proto kernel  scope link  src 192.169.142.154
192.169.142.153 via 169.254.31.28 dev fpr-51ed47a7-3
192.169.142.156 via 169.254.31.28 dev fpr-51ed47a7-3 

   [root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-b0a8a232-ab: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:fe23:586c  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:23:58:6c  txqueuelen 0  (Ethernet)
        RX packets 88594  bytes 6742614 (6.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 173961  bytes 234594118 (223.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

rfp-51ed47a7-3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.31.28  netmask 255.255.255.254  broadcast 0.0.0.0
        inet6 fe80::282e:4bff:fe52:3bca  prefixlen 64  scopeid 0x20<link>
        ether 2a:2e:4b:52:3b:ca  txqueuelen 1000  (Ethernet)
        RX packets 173514  bytes 234542852 (223.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 87837  bytes 6670792 (6.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
fe2f4449-82fc-45e9-8827-6c6d9c8cc92d
    Bridge br-int
        fail_mode: secure
        Port "qr-b0a8a232-ab"
            tag: 1
            Interface "qr-b0a8a232-ab"

                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo19855b4d-3b"
            tag: 1
            Interface "qvo19855b4d-3b"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port "qvobd487c99-41"
            tag: 1
            Interface "qvobd487c99-41"
    Bridge br-ex
        Port "fg-58e0cabf-07"
            Interface "fg-58e0cabf-07"

                type: internal
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a00007f"
            Interface "vxlan-0a00007f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.127"}
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.4.0"
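
Note the tunnel port names: the OVS agent encodes the remote VTEP address in hex, so they can be decoded (or predicted) with a one-liner:

# printf 'vxlan-%02x%02x%02x%02x\n' 10 0 0 127
vxlan-0a00007f

i.e. vxlan-0a00007f terminates at 10.0.0.127 (Controller) and vxlan-0a000089 at 10.0.0.137 (the second Compute Node).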

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec fip-1b202547-e1de-4c35-86a9-3119d6844f88 ifconfig
fg-58e0cabf-07: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.169.142.154  netmask 255.255.255.0  broadcast 192.169.142.255
        inet6 fe80::f816:3eff:fe15:efff  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:15:ef:ff  txqueuelen 0  (Ethernet)
        RX packets 173587  bytes 234547834 (223.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 87751  bytes 6665500 (6.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

fpr-51ed47a7-3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.31.29  netmask 255.255.255.254  broadcast 0.0.0.0
        inet6 fe80::a805:e5ff:fe38:3bb1  prefixlen 64  scopeid 0x20<link>
        ether aa:05:e5:38:3b:b1  txqueuelen 1000  (Ethernet)
        RX packets 87841  bytes 6671008 (6.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 173518  bytes 234543068 (223.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

**********************************************************************************
I've described North-South network traffic in detail because it is my major concern. Regarding East-West traffic via distributed routers, see ( [4] ) and ( [1] ).
**********************************************************************************
**************
On Controller
**************

[ Screenshot ]

*******************************************
Creating distributed router via CLI
*******************************************
[root@ip-192-169-142-127 ~(keystone_admin)]# python
Python 2.7.5 (default, Jun 24 2015, 00:41:19)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from keystoneclient.v2_0 import client
>>> token = '3ad2de159f9649afb0c342ba57e637d9'
>>> endpoint = 'http://192.169.142.127:35357/v2.0'
>>> keystone = client.Client(token=token, endpoint=endpoint)
>>> keystone.tenants.list()
[<Tenant {u'enabled': True, u'description': u'admin tenant', u'name': u'admin', u'id': u'1bcab59d22c4493890d7b8d497a430dc'}>, <Tenant {u'enabled': True, u'description': u'default tenant', u'name': u'demo', u'id': u'42960b3ee2e94939b309d02733681bce'}>, <Tenant {u'enabled': True, u'description': u'Tenant for the openstack services', u'name': u'services', u'id': u'65d9dd9f947d4454ad5ddb7bc6472e68'}>]
>>>
[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-create --distributed True \
> --tenant_id 42960b3ee2e94939b309d02733681bce RouterDM
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | True                                 |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | ee5cc4cf-10ad-4248-9544-5b0c057ab1ac |
| name                  | RouterDM                             |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 42960b3ee2e94939b309d02733681bce     |
+-----------------------+--------------------------------------+
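
To make the router functional it still needs an external gateway and a tenant subnet interface; a sketch, assuming the external network is named public as above (the subnet ID is deployment-specific):

# neutron router-gateway-set RouterDM public
# neutron router-interface-add RouterDM <private-subnet-id>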


[root@ip-192-169-142-137 ~]# ip netns
fip-d8803504-93dd-4604-b3ed-d6bce93a29b7
qrouter-ee5cc4cf-10ad-4248-9544-5b0c057ab1ac

[root@ip-192-169-142-137 ~]# ip netns exec qrouter-ee5cc4cf-10ad-4248-9544-5b0c057ab1ac ip rule
0:    from all lookup local
32766:    from all lookup main
32767:    from all lookup default
57481:    from 110.0.0.12 lookup 16
1845493761:    from 110.0.0.1/24 lookup 1845493761

[root@ip-192-169-142-137 ~]# ip netns exec qrouter-ee5cc4cf-10ad-4248-9544-5b0c057ab1ac ip route show table 16
default via 169.254.31.29 dev rfp-ee5cc4cf-1

[root@ip-192-169-142-137 ~]# ip netns exec qrouter-ee5cc4cf-10ad-4248-9544-5b0c057ab1ac ip route
110.0.0.0/24 dev qr-b574ba74-2d  proto kernel  scope link  src 110.0.0.1
169.254.31.28/31 dev rfp-ee5cc4cf-1  proto kernel  scope link  src 169.254.31.28

[root@ip-192-169-142-137 ~]# ip netns exec fip-d8803504-93dd-4604-b3ed-d6bce93a29b7 ip route
default via 192.169.142.1 dev fg-d0767175-09
169.254.31.28/31 dev fpr-ee5cc4cf-1  proto kernel  scope link  src 169.254.31.29
192.169.142.0/24 dev fg-d0767175-09  proto kernel  scope link  src 192.169.142.153
192.169.142.157 via 169.254.31.28 dev fpr-ee5cc4cf-1

[root@ip-192-169-142-137 ~]# . keystonerc_demo

[root@ip-192-169-142-137 ~(keystone_demo)]# nova list
+--------------------------------------+------------+--------+------------+-------------+-----------------------------------------+
| ID                                   | Name       | Status | Task State | Power State | Networks                                |
+--------------------------------------+------------+--------+------------+-------------+-----------------------------------------+
| 6ae40cd7-e836-47b6-a57a-1ae42c9ab1ae | VF22Devs01 | ACTIVE | -          | Running     | demo_network=70.0.0.16, 192.169.142.154 |
| 8964118e-d7c2-440c-923b-3cbf50151b00 | VF22Devs07 | ACTIVE | -          | Running     | private=110.0.0.12, 192.169.142.157     |
+--------------------------------------+------------+--------+------------+-------------+-----------------------------------------+

[root@ip-192-169-142-137 ~(keystone_demo)]# ip netns exec qrouter-ee5cc4cf-10ad-4248-9544-5b0c057ab1ac  iptables-save -t nat | grep "^-A"|grep l3-agent

-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A neutron-l3-agent-OUTPUT -d 192.169.142.157/32 -j DNAT --to-destination 110.0.0.12
-A neutron-l3-agent-POSTROUTING ! -i rfp-ee5cc4cf-1 ! -o rfp-ee5cc4cf-1 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 192.169.142.157/32 -j DNAT --to-destination 110.0.0.12
-A neutron-l3-agent-float-snat -s 110.0.0.12/32 -j SNAT --to-source 192.169.142.157

-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat

*****************************************************************
12/17/2015 Reproduced on RDO Liberty per Dave's  request
******************************************************************


   Cloud VM VF23Devs01 downloading via the fip-namespace; `iftop -i eth0` is
   running on the 192.169.142.137 Compute Node console
  


Comments:

  1. hi, your guide was very helpful in getting started, but I ran into several issues:
    1) packstack initial install fails if NetworkManager is not disabled prior to the packstack run. In your walkthrough you disable it afterwards.
    2) the packstack run failed on compute nodes with a minimal CentOS installation, complaining about openstack-selinux. I had to yum install packstack on the compute nodes in the same way you show for the controller.
    3) you show editing ml2.ini and the agent configs on the nodes, then you show the installation of neutron - the files will only be there when neutron is installed first
    4) at the end of the blog you stop clearly mentioning which side the changes need to be made on :-( I had to follow the IPs/hostnames. No offence, I know it's free.

    Finally, after all is said and done, I have the neutron l3 and metadata agents running on each node, and I can create instances and assign internal/floating IPs. But, unfortunately, DVR is not working :-( netns shows that the network namespace is still created on the controller node.. :-(
    In my case the Controller is running compute as well; would that be considered a candidate for the same "compute" node ml2 changes as the other nodes?

    Any suggestions on where to check why DVR is not working?

    Thanks!

  2. I suggest you install Server with GUI during CentOS 7.1 setup.
    Your packstack run has nothing in common with the DVR setup.
    It should not fail on Compute Nodes. If it does, run on all nodes
    # yum -y install centos-release-openstack-liberty
    Just bring up a simple RDO Controller/Network + 2xCompute cleanly;
    when done, contact me via LinkedIn. This post was on the RH
    RDO planet news wire. Later on RH closed RDO Planet to
    public access.

  3. DVR setup requires only `yum install openstack-neutron-ml2`.
    l3_agent.ini gets updated, metadata_agent.ini copied from the
    Controller, openvswitch_agent.ini gets updated. You add just
    the neutron-l3-agent and neutron-metadata-agent services.

  4. Dave,
    Reproduced from scratch just now (snapshot added to the post
    per Dave's request)
    [root@ip-192-169-142-127 ~(keystone_admin)]# date
    Thu Dec 17 21:30:50 MSK 2015
    [root@ip-192-169-142-127 ~(keystone_admin)]# ip netns
    qdhcp-bbf8b663-2c96-434c-9a00-1990686ae0c9
    snat-75353bb3-3308-4d99-9537-9bac5edca898
    qrouter-75353bb3-3308-4d99-9537-9bac5edca898
    qdhcp-0d1cf1bb-3fc7-44a1-8195-fd12411daaf2
    qrouter-24406d62-ed3a-4778-a4b0-8e089df8c665

    [root@ip-192-169-142-147 ~]# date
    Thu Dec 17 21:31:20 MSK 2015
    [root@ip-192-169-142-147 ~]# ip netns
    fip-0091543e-07e3-4133-918a-be2638a54c28
    qrouter-75353bb3-3308-4d99-9537-9bac5edca898

    [root@ip-192-169-142-137 ~]# date
    Thu Dec 17 21:38:19 MSK 2015
    [root@ip-192-169-142-137 ~]# ip netns
    fip-0091543e-07e3-4133-918a-be2638a54c28
    qrouter-75353bb3-3308-4d99-9537-9bac5edca898

  5. Hi,
    See the picture at the bottom of the blog. It has been uploaded for Dave, just to make him trust what I am posting.
    There were problems during the packstack deployment. Yes, RH is
    working hard (several new bugs have been caught in packstack), but DVR itself is in good shape, no problems.
