Saturday, May 31, 2025

Why a cluster of 3-node controllers is important in the OpenStack cloud

This posting is just a brief extract from https://docs.redhat.com/en/documentation/red_hat_openstack_platform/10/single/understanding_red_hat_openstack_platform_high_availability/index

What follows is, in part, a comment on the posts of Zen@yandex.com authors who write enthusiastically about investing in cloud deployments, citing the ability to run virtual machines in the cloud as its major advantage. The fault-tolerance advantage of an OpenStack cloud over a traditional client-server Unix/Linux architecture seems to be completely ignored in those writings.

A three-node controller cluster is important in OpenStack because it provides high availability and fault tolerance for control plane services. With three nodes, the cluster can tolerate the failure of a single node without disrupting cloud functionality, according to Red Hat documentation.

This ensures that important services such as authentication, image storage, and networking remain operational even if one controller node experiences problems.

Here's a more detailed explanation:

High Availability (HA):

With three nodes, the cluster can maintain a quorum (majority vote) even if one node fails. This ensures that the remaining two nodes can continue to manage the cloud infrastructure.

Fault Tolerance:

If one controller node fails, the other two can take over its workload, preventing service disruption and data loss.

Redundancy:

The three nodes act as backups for each other, meaning if one node fails, another can immediately take over its responsibilities.

Quorum:

An OpenStack high availability setup relies on quorum (majority voting) to ensure that the cluster can make decisions and maintain consistency even if some nodes are down or unavailable. A three-node cluster is the smallest configuration that can both form a quorum and survive the loss of a node.
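As a quick illustration of the standard majority-quorum arithmetic, quorum = floor(N/2) + 1 (generic reasoning, not a quote from the Red Hat document):

     # quorum size and tolerated node failures for an N-node cluster
     N=3
     QUORUM=$(( N / 2 + 1 ))       # 2 of 3 nodes must agree
     TOLERATED=$(( N - QUORUM ))   # 1 node may fail
     echo "N=$N quorum=$QUORUM tolerated_failures=$TOLERATED"

By the same arithmetic, a two-node cluster needs both nodes for quorum and therefore tolerates no failures, which is why three nodes is the practical minimum.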

Production Environments:

The Red Hat OpenStack Platform requires three controller nodes for a production-grade environment to ensure high availability and reliability. In short, a three-node controller cluster is essential for a reliable, fault-tolerant OpenStack cloud, especially in production environments where downtime is unacceptable.

This is the real reason to invest seriously in cloud deployments. Merely running virtual machines on compute nodes has nothing to do with the fault tolerance of a cloud deployment.

Red Hat documentation is openly available. The following is taken from "Understanding Red Hat OpenStack Platform High Availability":

Most of the high availability (HA) coverage in this document pertains to controller nodes. The primary HA technologies used on Red Hat OpenStack Platform controller nodes are:

Pacemaker: By configuring virtual IP addresses, services, and other functions as resources in a cluster, Pacemaker ensures that a specific set of OpenStack cluster resources are up and available. When a service or an entire node in a cluster fails, Pacemaker can restart the service, remove the node from the cluster, or reboot the node. Requests to most of these services are made through HAProxy.
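On a deployed controller node, the cluster state is typically inspected with the standard Pacemaker tooling (these are stock pcs/crm commands; the names of the managed resources vary per deployment):

     $ sudo pcs status       # nodes, quorum state, and resource placement

     $ sudo crm_mon -1       # one-shot snapshot of the cluster monitor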

HAProxy: When you configure more than one controller node with a director in Red Hat OpenStack Platform, HAProxy is configured on those nodes to load balance traffic to some of the OpenStack services running on those nodes.

Galera: Red Hat OpenStack Platform uses the MariaDB Galera cluster to manage database replication.

Highly available services in OpenStack operate in one of two modes:

Active/active: In this mode, the same service is started on multiple controller nodes using Pacemaker; traffic can then either be distributed among the nodes running the requested service by HAProxy or directed to a specific controller via a single IP address. In some cases, HAProxy distributes traffic among active/active services on a round-robin basis. Performance can be improved by adding more controller nodes.

Active/passive: Services that are not capable or reliable enough to operate in active/active mode operate in active/passive mode. This means that only one instance of the service is active at a time. For Galera, HAProxy uses stick-table settings to ensure that incoming connections are directed to a single backend service. Galera master-master mode can become blocked when services access the same data from multiple Galera nodes at the same time.
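As a rough sketch of the stick-table approach described above, an HAProxy section for Galera looks something like the following (the directives are standard HAProxy configuration; the node names and addresses are hypothetical, not taken from the document):

     listen mysql
         bind 192.168.24.10:3306
         # pin all incoming connections to one backend at a time
         stick on dst
         stick-table type ip size 1000
         # a single active Galera node; the others take over only on failure
         server controller-0 192.168.24.11:3306 check
         server controller-1 192.168.24.12:3306 backup check
         server controller-2 192.168.24.13:3306 backup check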

As you begin exploring the high availability services described in this document, keep in mind that "the director system (called the undercloud) itself runs OpenStack. The purpose of the undercloud is to create and maintain the systems that will become your OpenStack running environment." The environment you create from the undercloud is called the overcloud. To get into your overcloud, this document asks you to log into your undercloud, then select which overcloud node you want to explore.
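In practice, per the director (TripleO) workflow that document describes, getting from the undercloud to an overcloud controller looks roughly like this (the IP address is hypothetical):

     $ source ~/stackrc                # load undercloud credentials as the stack user

     $ openstack server list           # list overcloud nodes and their ctlplane IPs

     $ ssh heat-admin@192.168.24.15    # log into the chosen controller node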

Monday, May 19, 2025

Setup KDE Plasma 6.5 Dev on openSUSE Tumbleweed

The first step is to enable the required repositories: run the following script as root and reboot the instance. The versions of some of the installed packages were verified via these commands:

$ sudo zypper search -s 'plasma*desktop*'

$ sudo zypper search -s 'plasma*workspace*'

localhost:/home/boris # cat instRepo.sh
# add the KDE Unstable repos at priority 75, then run a distribution
# upgrade allowing the vendor change to obs://build.opensuse.org/KDE:Unstable
zypper ar -fp 75 https://download.opensuse.org/repositories/KDE:/Unstable:/Qt/openSUSE_Tumbleweed/ KDE:Unstable:Qt
zypper ar -fp 75 https://download.opensuse.org/repositories/KDE:/Unstable:/Frameworks/openSUSE_Factory/ KDE:Unstable:Frameworks
zypper ar -fp 75 https://download.opensuse.org/repositories/KDE:/Unstable:/Applications/KDE_Unstable_Frameworks_openSUSE_Factory/ KDE:Unstable:Applications
zypper ar -fp 75 https://download.opensuse.org/repositories/KDE:/Unstable:/Extra/KDE_Unstable_Frameworks_openSUSE_Factory/ KDE:Unstable:Extra
zypper -v dup --allow-vendor-change

.   .   .   .   .   .

The following 9 patterns are going to change vendor:
 kde                20240311-2.4 -> 20231206-142.3  openSUSE -> obs://build.opensuse.org/KDE:Unstable
 kde_games          20240311-2.4 -> 20231206-142.3  openSUSE -> obs://build.opensuse.org/KDE:Unstable
 kde_internet       20240311-2.4 -> 20231206-142.3  openSUSE -> obs://build.opensuse.org/KDE:Unstable
 kde_multimedia     20240311-2.4 -> 20231206-142.3  openSUSE -> obs://build.opensuse.org/KDE:Unstable
 kde_office         20240311-2.4 -> 20231206-142.3  openSUSE -> obs://build.opensuse.org/KDE:Unstable
 kde_pim            20240311-2.4 -> 20231206-142.3  openSUSE -> obs://build.opensuse.org/KDE:Unstable
 kde_utilities      20240311-2.4 -> 20231206-142.3  openSUSE -> obs://build.opensuse.org/KDE:Unstable
 kde_utilities_opt  20240311-2.4 -> 20231206-142.3  openSUSE -> obs://build.opensuse.org/KDE:Unstable
 kde_yast           20240311-2.4 -> 20231206-142.3  openSUSE -> obs://build.opensuse.org/KDE:Unstable

The following 7 NEW packages are going to be installed:
 libKPim6AkonadiAgentWidgetBase6  25.07.70git.20250519T185713~6aa21297-ku.12.1
 libKPim6AkonadiCalendarCore6     25.07.70git.20250518T014117~d41da6f-ku.6.1
 libkwin-x11-6                    6.3.80git.20250519T183312~72d6ff38-ku.45.1
 libQt5OpenGL5                    5.15.16+kde130-ku.1.35
 libQt6Location6                  6.9.0-1.1
 phonon4qt5-backend-gstreamer     4.11.60git24~2bfadef-ku.106.13
 qt6-location                     6.9.0-1.1

495 packages to upgrade, 100 to downgrade, 7 new, 595  to change vendor.

Package download size:   339.9 MiB

Package install size change:
             |     873.1 MiB  required by packages that will be installed
   56.1 MiB  |  -  817.0 MiB  released by packages that will be removed

Backend:  classic_rpmtrans
Continue? [y/n/v/...? shows all options] (y): y

Log into the updated Tumbleweed instance and invoke "Info system".
 Verify enabled repos via `zypper repos`
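Since the script above registered the repos at priority 75, it may also be worth listing them with their priorities (a standard zypper option):

     $ zypper lr -p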
Verify the enabled repos via YaST
Obtain the versions of some of the installed Plasma packages and the repositories they belong to
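These reports were most likely produced by version searches of the kind shown earlier (the -s flag prints versions and source repositories):

     $ sudo zypper search -s 'plasma6-desktop'

     $ sudo zypper search -s 'plasma6-workspace'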
The reports above show:

plasma6-desktop    | package    | 6.3.5-1.1                                   | x86_64 | repo-oss
plasma6-desktop    | srcpackage | 6.3.80git.20250520T080957~18b56dbb-ku.101.1 | noarch | KDE:Unstable:Frameworks
plasma6-workspace  | package    | 6.3.5-1.1                                   | x86_64 | repo-oss
plasma6-workspace  | srcpackage | 6.3.80git.20250520T111850~3d998e62-ku.101.1 | noarch | KDE:Unstable:Frameworks




Thursday, May 1, 2025

Setup firewalld and KVM on openSUSE Leap 16 Beta manually

UPDATE as of 05/10/2025

Per https://www.theregister.com/2025/05/09/opensuse_ditches_deepin/ 

You beta, you beta, you bet

Aside from the retreat on the Eastern front, where is openSUSE going next? Well, there is a roadmap to give the general direction. We looked at the plans for Leap 16 at the start of last year, and now the beta is here.

It sounds startlingly different from the openSUSE of old. The announcement delivers several shocks. It says it is "expected to be Wayland-only," although "some Xorg remnants remain for now." That will dramatically cut the range of desktops for a start – and eliminate most of this vulture's favorites.

But there's more sad news. "The traditional YaST stack is retired." Instead, users will get the Red Hat-backed Cockpit for web-based server management, and the new Myrlyn graphical package manager as a replacement for YaST's software tool.

====================================================

Finally, I decided to skip the deployment templates for the KVM host and the Cockpit web console on the openSUSE Leap 16 Beta KVM guest. Due to the absence of YaST on the openSUSE Leap 16 Beta guest, I simply issued the following sequence of CLI directives, which let me successfully set up a Linux bridge via the Cockpit web console.

     $ sudo zypper ref

     $ sudo zypper update

     $ sudo zypper install firewalld

     $ sudo systemctl enable firewalld

     $ sudo systemctl start firewalld

     $ sudo systemctl status firewalld

     $ ls -l /usr/lib/firewalld/zones/

     $ sudo firewall-cmd --get-zones

     $ sudo grep -i DefaultZone /etc/firewalld/firewalld.conf

     $ sudo firewall-cmd --get-active-zones

     $ sudo firewall-cmd --zone=public --add-port=9090/tcp --permanent

     $ sudo firewall-cmd --zone=public --add-port=22/tcp --permanent
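Note (standard firewalld behavior, not part of the original sequence): rules added with --permanent do not take effect in the running firewall until it is reloaded (or, as below, until reboot); to apply them immediately:

     $ sudo firewall-cmd --reload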

     $ sudo zypper install libvirt virt-manager

     $ sudo systemctl start libvirtd.service

     $ sudo systemctl enable libvirtd.service

     $ sudo systemctl status libvirtd.service

     $ sudo usermod -aG libvirt boris

     $ grep libvirt /etc/group

           libvirt:x:108:boris

     $ sudo reboot

     $ sudo zypper install net-tools-deprecated

     $ sudo zypper install cockpit cockpit-machines

     $ sudo systemctl enable cockpit.socket

     $ sudo systemctl start cockpit.socket

     $ sudo netstat -antp | grep 9090

      Launched the Cockpit web console and brought up bridge0 attached to enp1s0
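For reference, the same bridge can also be created from the CLI with NetworkManager's nmcli; a minimal sketch, assuming enp1s0 is the uplink and the connection names are arbitrary:

     $ sudo nmcli con add type bridge ifname bridge0 con-name bridge0

     $ sudo nmcli con add type bridge-slave ifname enp1s0 master bridge0

     $ sudo nmcli con up bridge0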
Troubleshooting the status of port 9090 via `sudo netstat -antp | grep 9090`
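On systems without the deprecated net-tools package, the equivalent check can be done with ss from iproute2 (an alternative, not what was run here):

     $ sudo ss -antp | grep 9090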
F42 running as an L2 KVM guest inside the openSUSE Leap 16 Beta virtual machine