1. Controller HAProxy cluster
For the controller cluster we will use one active and two standby servers behind HAProxy. Each service, such as queuing and MySQL, has its own clustering technique:
MySQL: Galera
2. Hypervisor choices for compute nodes
KVM/QEMU, ESX, Xen, LXC, Docker
3. Storage
3.1 Object storage
Object storage allows a user to store data in the form of objects by using RESTful HTTP APIs. An object is roughly the counterpart of a file in a traditional NAS or SAN system. Let's take a closer look at how object storage differs from traditional storage (a short API sketch follows this list):
• Objects are stored in a flat and vast namespace. Unlike a traditional storage system, they do not preserve any specific structure or a particular hierarchy.
• The stored objects are not directly user friendly; they are meant to be consumed by applications through the API rather than browsed like files.
• Object Storage Devices (OSDs) are accessed through an API such as REST or SOAP; they cannot be accessed through file protocols such as NFS, SMB, or CIFS.
• Object storage is not suitable for high-performance requirements or for structured data that changes frequently, such as databases.
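To make the RESTful access model concrete, here is a minimal sketch that stores and retrieves an object over plain HTTP against a Swift-compatible endpoint. The storage URL, container name, and token are placeholders, not values from this deployment:

import requests

# Placeholder values -- substitute a real Swift endpoint and Keystone token.
STORAGE_URL = "http://swift.example.com:8080/v1/AUTH_tenant"
TOKEN = "example-token"
headers = {"X-Auth-Token": TOKEN}

# Create a container, then upload an object into it (PUT is idempotent).
requests.put(f"{STORAGE_URL}/backups", headers=headers)
requests.put(f"{STORAGE_URL}/backups/report.txt",
             headers=headers,
             data=b"quarterly report contents")

# Retrieve the object again; note the flat namespace: container/object only.
resp = requests.get(f"{STORAGE_URL}/backups/report.txt", headers=headers)
print(resp.status_code, resp.content)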
3.2 Block storage
Types of volumes
• Thin provisioning: In this, the volume is virtually provisioned and can be allocated as needed.
• Thick provisioning: Here, the volume is allocated during the volume creation and is fully provisioned.
• Deduplicated provisioning: Here, the volumes are virtually provisioned and made deduplication-aware. Volumes are stored on the backing devices (for example, VNX arrays) more efficiently by eliminating duplicated segments in the incoming data and storing only the unique ones.
• Compressed provisioning: Here, the volumes are virtually provisioned and made compression-aware. The block storage devices gain capacity and use space more efficiently by freeing up valuable storage with low performance overhead. Unlike deduplicated provisioning, compressed provisioning applies to all the volumes. (A short provisioning sketch follows this list.)
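As an illustration of how a provisioning mode is usually selected in OpenStack, the sketch below creates a Cinder volume type carrying a provisioning:type extra spec (honored by backends such as VNX; other drivers use different spec names) and then a volume of that type, using the openstacksdk. The cloud name, type name, and sizes are assumptions:

import openstack

# 'mycloud' is a placeholder entry in clouds.yaml.
conn = openstack.connect(cloud="mycloud")

# Create a volume type whose extra spec asks the backend for thin provisioning.
# Assumption: the configured backend understands 'provisioning:type'.
thin_type = conn.block_storage.create_type(
    name="thin-volumes",
    extra_specs={"provisioning:type": "thin"},
)

# Create a 10 GB volume of that type; capacity is allocated on demand.
volume = conn.block_storage.create_volume(
    name="app-data",
    size=10,
    volume_type=thin_type.name,
)
print(volume.id, volume.status)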
4. Network
Types of network traffic
• Management
• API
• External
• Guest
4.1 Management network
The management network, also referred to as the internal network in some distributions, is used for internal communication between hosts for services such as the messaging service and database service. All hosts will communicate with each other over this network.
4.2 API network
The API network is used to expose OpenStack APIs to users of the cloud and services within the cloud. Endpoint addresses for services, such as Keystone, Neutron, Glance, and Horizon, are procured from the API network.
4.3 External network
An external network provides Neutron routers with network access. Once a router has been configured and attached to the external network, the network becomes the source of floating IP addresses for instances and other network resources. IP addresses in an external network are expected to be routable and reachable by clients on a corporate network or the Internet.
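A minimal sketch of how an external network acts as the floating IP pool, using the openstacksdk; the cloud entry and the network name "ext-net" are placeholders:

import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud name

# Look up the external (provider) network; 'ext-net' is a placeholder name.
ext_net = conn.network.find_network("ext-net")

# Allocate a floating IP out of the external network's address range.
fip = conn.network.create_ip(floating_network_id=ext_net.id)
print(fip.floating_ip_address)

# The floating IP can later be associated with an instance port, for example:
# conn.network.update_ip(fip, port_id=some_port.id)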
4.4 Guest network
The guest network is a network dedicated to instance traffic. Options for guest networks include local networks restricted to a particular node, flat or VLAN-tagged networks, or virtual overlay networks made possible with GRE or VXLAN encapsulation.
Neutron Components:
Network: A network is an isolated layer 2 broadcast domain. Typically reserved for the tenants that created them, networks could be shared among tenants if configured accordingly. The network is the core entity of the Neutron API. Subnets and ports must always be associated with a network.
Subnet: A subnet is an IPv4 or IPv6 address block from which IP addresses can be assigned to virtual machine instances. Each subnet must have a CIDR and must be associated with a network. Multiple subnets can be associated with a single network and can be noncontiguous. A DHCP allocation range can be set for a subnet that limits the addresses provided to instances.
Port: A port in Neutron represents a virtual switch port on a logical virtual switch. Virtual machine interfaces are mapped to Neutron ports, and the ports define both the MAC address and the IP address to be assigned to the interfaces plugged into them. Neutron port definitions are stored in the Neutron database, which is then used by the respective plugin agent to build and connect the virtual switching infrastructure.
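The relationship between the three resources above can be sketched with the openstacksdk as follows; the resource names, CIDR, and allocation range are placeholders:

import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud name

# A network is the layer 2 broadcast domain everything else hangs off.
net = conn.network.create_network(name="tenant-net")

# A subnet attaches an address block (and optional DHCP range) to the network.
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="tenant-subnet",
    ip_version=4,
    cidr="192.168.10.0/24",
    allocation_pools=[{"start": "192.168.10.10", "end": "192.168.10.200"}],
)

# A port is a virtual switch port on the network; its MAC and fixed IP are
# what an instance interface plugged into it will receive.
port = conn.network.create_port(network_id=net.id, name="vm1-port")
print(port.mac_address, port.fixed_ips)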
Network types supported by Neutron
• Local
• Flat
• VLAN
• VXLAN
• GRE
A local network is one that is isolated from other networks and nodes. Instances connected to a local network may communicate with other instances in the same network on the same compute node but may be unable to communicate with instances in the same network that reside on another host. Because of this designed limitation, local networks are recommended for testing purposes only.
In a flat network, no VLAN tagging or other network segregation takes place. In some configurations, instances can reside in the same network as the host machines. VLAN networks are networks that utilize 802.1q tagging to segregate network traffic. Instances in the same VLAN are considered part of the same network and are in the same layer 2 broadcast domain. InterVLAN routing, or routing between VLANs, is only possible through the use of a router.
A VXLAN network uses a unique segmentation ID, called VNI, to differentiate traffic from other VXLAN networks. Traffic from one instance to another is encapsulated by the host using the VNI and sent over an existing layer 3 network using UDP, where it is decapsulated and forwarded to the instance. The use of
VXLAN to encapsulate packets over an existing network is meant to solve limitations of VLANs and physical switching infrastructures.
A GRE network is similar to a VXLAN network in that traffic from one instance to another is encapsulated and sent over an existing layer 3 network. A unique segmentation ID is used to differentiate traffic from other GRE networks. Rather than using UDP as the transport mechanism, GRE traffic uses IP protocol 47.
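When an administrator needs to pick one of these network types explicitly, the provider attributes are used at network creation time. A hedged sketch with the openstacksdk, in which the admin cloud entry, names, physical network label, and segmentation IDs are assumptions:

import openstack

conn = openstack.connect(cloud="mycloud-admin")  # placeholder admin cloud

# A VLAN network: traffic is tagged with 802.1q ID 120 on physnet1.
vlan_net = conn.network.create_network(
    name="vlan120",
    provider_network_type="vlan",
    provider_physical_network="physnet1",
    provider_segmentation_id=120,
)

# A VXLAN network: the VNI (73 here) differentiates it from other overlays.
vxlan_net = conn.network.create_network(
    name="overlay-73",
    provider_network_type="vxlan",
    provider_segmentation_id=73,
)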
Virtual Ethernet, or veth, cables are virtual interfaces that mimic network patch cables. An Ethernet frame sent to one end of a veth cable is received by the other end, much like a real network patch cable. Neutron also makes use of veth cables to make connections between various network resources, including namespaces and bridges.
Open vSwitch has a built-in port type that mimics the behavior of a Linux veth cable, but it is optimized for use with OVS bridges. When connecting two Open vSwitch bridges, a port on each switch is reserved as a patch port. Patch ports are configured with a peer name that corresponds to the patch port on the other switch.
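Both connection styles can be reproduced by hand. The sketch below wraps the usual ip and ovs-vsctl commands in Python; the bridge and interface names are arbitrary, and the commands require root privileges on a host with Open vSwitch installed:

import subprocess

def run(cmd):
    """Run a shell command and fail loudly if it returns non-zero."""
    subprocess.run(cmd, shell=True, check=True)

# A veth pair: frames entering veth0 come out of veth1 and vice versa.
run("ip link add veth0 type veth peer name veth1")

# Two OVS bridges joined by patch ports instead of a veth cable.
run("ovs-vsctl add-br br-a")
run("ovs-vsctl add-br br-b")
run("ovs-vsctl add-port br-a patch-to-b "
    "-- set interface patch-to-b type=patch options:peer=patch-to-a")
run("ovs-vsctl add-port br-b patch-to-a "
    "-- set interface patch-to-a type=patch options:peer=patch-to-b")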
5. HA in OpenStack
HA levels in OpenStack
• L1: This includes physical hosts, network and storage devices, and hypervisors
• L2: This includes OpenStack services, including compute, network, and storage controllers, as well as databases and message queuing systems
• L3: This includes the virtual machines running on hosts that are managed by OpenStack services
• L4: This includes applications running in the virtual machines themselves
> MySQL high availability through Galera active/active multimaster deployment and Keepalived
> RabbitMQ active-active high availability using mirrored queues and HAProxy for load balancing
> The OpenStack API services, including nova-scheduler and glance-registry, running on the cloud controller nodes in an active/passive model using Pacemaker and Corosync
> Neutron agents using Pacemaker
5.1 MySQL HA
• Master/slave replication
• MMM replication
• MySQL shared storage
• Block-level replication (DRBD)
• MySQL Galera multimaster replication (certification-based replication); see the connection sketch after this list
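Whichever replication scheme is chosen, OpenStack services typically reach the database through a single virtual IP fronted by HAProxy and Keepalived rather than an individual node. A minimal PyMySQL sketch, in which the VIP, credentials, and database name are placeholders:

import pymysql

# The VIP is owned by Keepalived and forwarded by HAProxy to a healthy
# Galera node; the application never addresses an individual database host.
conn = pymysql.connect(
    host="10.0.0.100",   # placeholder controller VIP
    user="nova",         # placeholder service account
    password="secret",
    database="nova",
    connect_timeout=5,
)
with conn.cursor() as cur:
    # Galera exposes its membership count as a status variable.
    cur.execute("SHOW STATUS LIKE 'wsrep_cluster_size'")
    print(cur.fetchone())
conn.close()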
5.2 HA in the queue (active/active)
• RabbitMQ clustering
• RabbitMQ mirrored queues (see the client sketch after this list)
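A hedged sketch of how a client benefits from the active/active setup: mirroring itself is enabled by a policy on the brokers, and the client simply lists every cluster node so it can fail over. Hosts, credentials, and the queue name are placeholders:

import pika

# Mirroring is applied on the brokers themselves, for example:
#   rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
credentials = pika.PlainCredentials("openstack", "secret")  # placeholders
nodes = [
    pika.ConnectionParameters(host=h, credentials=credentials)
    for h in ("controller1", "controller2", "controller3")
]

# pika tries the parameters in order, so losing one broker is transparent.
connection = pika.BlockingConnection(nodes)
channel = connection.channel()
channel.queue_declare(queue="notifications.info", durable=True)
channel.basic_publish(exchange="", routing_key="notifications.info",
                      body=b"hello from an HA client")
connection.close()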
5.3 HA for the OpenStack controller/network services with Pacemaker and Corosync