How KubeMQ customers build scalable messaging platforms with Kubernetes Operators
This article published on Red Hat marketplace explains how KubeMQ customers build scalable messaging platforms with Kubernetes Operators.
Highlights from the article:
“What exactly are Operators, and how do they help manage these stateful applications? Let’s take a look at Operators in detail, how they work within Kubernetes, and how the KubeMQ messaging platform uses Operators to help you build complex and scalable messaging services with minimal coding and overhead.”
“This allows for better performance, scalability, and resilience. One key to success with this approach is utilizing Operators as the deployment and management tool for KubeMQ. The Operator deploys the clusters and ensures that the various KubeMQ bridges, sources, and targets are configured correctly for each cluster. This extends to how KubeMQ was written utilizing Go. This makes KubeMQ fast and helps hook KubeMQ into native Kubernetes data models, events, and APIs, making it less complicated to manage the state of the clusters. It also allows for easier configuration validation.”
“Deploying as an operator also helps KubeMQ keep overhead to a minimum. For example, a large financial company with high volumes of real-time messages for price quotes, transactions, and client funding leverages KubeMQ to decrease the number of servers previously required to fulfill their needs.”
“The KubeMQ Operator also helps track state—a key reason for leveraging KubeMQ for reliable cross/hybrid cloud deployments. First, this state can validate that the desired capacity and configuration are in place for each cluster. Comparing the desired state in the CR against the existing state in Kubernetes allows the Operator to ensure that failures are caught and addressed, capacity is added as required, and the various bridges, sources, and targets are configured”
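The reconcile loop described above starts from a custom resource that declares the desired state. A hypothetical sketch of such a KubeMQ cluster CR is shown below; the field names and API group are illustrative assumptions, not the authoritative schema:

```yaml
# Hypothetical KubeMQ cluster custom resource. The Operator compares
# this desired state against the live state in Kubernetes and adds
# capacity or repairs failures until they match.
apiVersion: core.k8s.kubemq.io/v1beta1   # illustrative API group/version
kind: KubemqCluster
metadata:
  name: kubemq-cluster
  namespace: kubemq
spec:
  replicas: 3        # desired capacity the Operator reconciles toward
```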
How to Automatically Setup Micro-Services Communication: Examples and Demo
This tutorial shows how to connect microservices to each other automatically in minutes. We will demonstrate how a KubeMQ cluster can connect three essential services – Redis Cache, PostgreSQL Database, and Elastic Database – with two other microservice applications, in just a few configuration settings and without writing code.
To achieve this, KubeMQ Connectors are used to enable access for these two types of applications:
1. An application that sends requests to the three services and gets back responses using the KubeMQ query pattern.
2. An application that uses KubeMQ as an API gateway to communicate via a REST call to the three services.
First things first: using the Kubernetes command-line tool (kubectl), check that all services are deployed and working. Here’s how you do it.
–> Type “kubectl get pods -A”. You should see that PostgreSQL and Redis are running.
–> For Elastic, type “kubectl get sts -A”. You should see Elasticsearch listed under “logging”.
Deploying KubeMQ Cluster and Connectors to Kubernetes Cluster
The fastest way to deploy KubeMQ is to download it from the KubeMQ quick start page. After deploying KubeMQ, use the kubemqctl CLI tool to verify the following:
–> To check the readiness of your cluster, type “kubemqctl g c”.
–> To check whether the connectors are running, type “kubemqctl g con”.
Adding Integrations to the 3 Services
After deploying the KubeMQ components, the next step is to add the integrations to the services. This is done using the kubemqctl CLI’s automated building and management tools.
1. Type “kubemqctl m”, execute it, and select the manager option “Manage KubeMQ Integrations”.
2. For the next steps, type or select the following options:
i. Manage Integrations Option: Add Integration
ii. Cluster Destination: kubemq/kubemq-cluster
iii. Add Integration Type: Target Integration for kubemq/kubemq-cluster
iv. Unique Integration Name: redis
v. Select Target Kind: cache.redis
vi. Redis url: redis://redis-svc.redis:6379
vii. KubeMQ Source Kind: kubemq.query (this means you will be able to send and receive data for Redis)
viii. KubeMQ grpc endpoint address: kubemq-cluster-grpc.kubemq:50000
ix. Query Channel: redis
3. Proceed with the default values, but do not add middlewares to the binding when prompted. Save your integration.
A notification acknowledging Redis Integration has been successfully added will appear.
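The wizard steps above effectively produce a connector binding. A hypothetical Targets binding config reflecting the values entered is sketched below; the file layout is illustrative, not the authoritative connector schema:

```yaml
# Illustrative sketch of the Redis Target binding created by the wizard:
# a kubemq.query source on the "redis" channel routed to a Redis target.
bindings:
  - name: redis
    sources:
      kind: kubemq.query
      properties:
        address: kubemq-cluster-grpc.kubemq:50000
        channel: redis
    targets:
      kind: cache.redis
      properties:
        url: redis://redis-svc.redis:6379
```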
Repeat the same procedure for both PostgreSQL and Elastic databases.
Testing the 3 Integrations With an Application
Once all 3 integrations are using KubeMQ query for sending and receiving, the next step is to test the integrations with an application connected to KubeMQ. The application is designed to send and receive requests for the 3 integrations (Redis, PostgreSQL, and Elastic).
For Redis, the application performs the following requests:
1. Set – sending Redis data with a random key
2. Get – getting the data back
3. Delete – deleting the key from Redis
For PostgreSQL, the requests to be run are as follows:
1. Transaction – several queries run one after another, inserting rows of data into the table. If the table doesn’t exist, a new table is created for the data.
2. Query – retrieve data from the table
For Elastic, the 6 requests to be run are as follows:
The first 3 requests manage indexes:
1. Check index existence – check whether the log index exists in the Elastic database
2. Delete index – if it exists, delete it
3. Create index – recreate the same log index
The next 3 requests manage documents in Elasticsearch:
1. Set – set and save a document in Elastic
2. Get – retrieve the document
3. Delete – delete the document
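To make the request/response flow concrete, here is a minimal Python sketch that builds the kind of query payload the demo application might send for the three Redis operations. The payload schema (a “method”/“key” metadata object plus base64-encoded data) is an assumption for illustration, not the authoritative KubeMQ Targets format:

```python
import base64
import json


def build_redis_request(method: str, key: str, value=None) -> str:
    """Build an illustrative KubeMQ query payload for the Redis target.

    NOTE: the schema used here (metadata with "method"/"key", base64
    "data" field) is an assumption for demonstration purposes only.
    """
    payload = {"metadata": {"method": method, "key": key}}
    if value is not None:
        # Request body is carried as base64-encoded JSON.
        payload["data"] = base64.b64encode(json.dumps(value).encode()).decode()
    return json.dumps(payload)


# Simulate the three Redis requests from the demo application:
set_req = build_redis_request("set", "some-random-key", {"price": 42.5})
get_req = build_redis_request("get", "some-random-key")
del_req = build_redis_request("delete", "some-random-key")
```

The same pattern extends to the PostgreSQL (transaction/query) and Elastic (index/document) requests by swapping the metadata fields.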
To summarize our progress thus far, this application simulates the requests and responses we can perform with the Redis Cache, PostgreSQL Database, and Elastic Database integrations. The next step is to add an HTTP source to allow other services to communicate with Redis, PostgreSQL, and Elastic.
Adding A Source Integration
Use kubemqctl CLI:
1. Type “kubemqctl m”, execute it, and select the manager option “Manage KubeMQ Integrations”.
2. Under “Integration Type”, select “Source Integration for kubemq/kubemq-cluster”.
3. Both the unique integration name and the source kind are “http”.
4. When prompted for supported methods, use the POST method.
5. Use dynamic mapping for HTTP requests. This means that every request will be forwarded to a specific channel in KubeMQ.
6. For the next steps, type or select the following options:
a. KubeMQ Target Kind: kubemq.query
b. KubeMQ grpc endpoint address: kubemq-cluster-grpc.kubemq:50000
c. Channel Mapping Mode: Dynamic
7. Proceed with the default values, but do not add middlewares to the binding when prompted. Save your integration.
The new integration can now receive requests on port 8080 and send data to the KubeMQ query target.
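With dynamic channel mapping, a reasonable reading is that the target channel is taken from the request path, so a POST to /redis routes the body to the “redis” query channel. The following Python sketch builds (but does not send) such a request; the address, path-to-channel convention, and body shape are illustrative assumptions:

```python
import json
import urllib.request


def build_gateway_request(channel: str, body: dict) -> urllib.request.Request:
    """Build (but do not send) a POST to the HTTP source integration.

    Assumption for illustration: with dynamic channel mapping, the
    query channel is derived from the URL path, so POSTing to
    /redis routes the body to the "redis" channel.
    """
    url = f"http://localhost:8080/{channel}"  # hypothetical service address
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example: route a Redis "get" request through the HTTP gateway.
req = build_gateway_request("redis", {"metadata": {"method": "get", "key": "k"}})
```

Sending the request (urllib.request.urlopen(req)) would require the HTTP source to be reachable, so the sketch stops at constructing it.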
Testing The HTTP Service Integration
To test the integration, a procedure similar to the application testing is performed. The HTTP service communicates with the KubeMQ cluster through the Source integration we set up (listening on port 8080). From the KubeMQ cluster, the messages are routed to the 3 integrations (Redis, PostgreSQL, and Elastic), the same as in the previous testing. After running the HTTP API service, you can immediately see that the requests were processed successfully, just as described in the application testing section.
This tutorial shows how the KubeMQ CLI tool is used to easily create and manage communication between microservices, with automatic network creation functionality. Using the CLI tool, a basic connected microservices backend was set up with a few configuration commands, saving major coding work and time. A smooth integration is a click away for a long list of over 100 external services – popular AWS, GCP, and Azure services, as well as independent DB, cache, messaging, and storage services – supported by the KubeMQ Sources and Targets connectors.
This article describes how to migrate an existing service that uses an external RabbitMQ server to Kubernetes using KubeMQ, while preserving the ability to communicate with other legacy services connected to the RabbitMQ server. For this purpose, we will use the KubeMQ Targets and Sources connectors to convert messages between RabbitMQ and KubeMQ as follows:
The KubeMQ Sources connector will consume messages from the RabbitMQ topic named “Ping”, convert them to the KubeMQ events format, and send them to the events.rabbit.ping channel on the KubeMQ cluster.
The KubeMQ Targets connector subscribes to the KubeMQ “Pong” events channel events.rabbit.pong, converts the messages to RabbitMQ format, and publishes them to the Pong topic.
How It Works
We will use 2 applications here. The first is the ping application, the legacy service: it sends ping messages to the RabbitMQ ping topic and consumes messages from the pong topic. The second, the application migrated to Kubernetes, subscribes to the events.rabbit.ping channel and sends back pong responses on the events.rabbit.pong channel.
Before building the integrations, you must deploy a KubeMQ cluster. Typing “kubemqctl g c” from the kubemqctl console shows that we have already deployed the KubeMQ cluster in the kubemq namespace with 3 pods.
We will start by adding the integrations needed for the migration. After typing the “kubemqctl m” command, we select “Manage KubeMQ Integrations”.
Next, we will add 2 integrations. The first is a source integration that connects to the RabbitMQ server and forwards all the messages to KubeMQ. The second is a target integration that listens for messages coming from KubeMQ and sends them to the RabbitMQ server.
Setting KubeMQ Source Integration
To add the KubeMQ Source integration, follow this procedure:
1. Select “Source Integration”.
2. Set a unique name such as “rabbit.ping”.
3. Under “Source Kind”, select “messaging.rabbitmq”.
4. Enter the connection string of the external RabbitMQ.
5. When prompted to set dynamic channel mapping, decline.
6. Set the queue name as “ping”.
7. For Target Kind, select “kubemq.events”.
8. Set your channel mapping mode to “implicit”, and when prompted to use “events.rabbit.ping” as the events channel, accept.
9. When prompted to add middlewares, decline, and then save your settings.
Setting KubeMQ Target Integration
To add the KubeMQ Target integration, follow this procedure:
1. Set your integration name as “rabbit.pong”.
2. Select your target kind as “messaging.rabbitmq”.
3. Your connection string is the same as the one you used previously.
4. For KubeMQ Source Kind, select “kubemq.events”. This will allow you to receive replies coming from the pong application.
5. Set the events channel as “events.rabbit.pong”. When prompted to add middlewares, decline, just as you did earlier.
6. Save all your settings.
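Taken together, the two wizard runs amount to a pair of connector bindings. A hypothetical sketch of the resulting config is shown below; the layout and property names are illustrative assumptions, and the RabbitMQ connection string placeholder should be replaced with your own value:

```yaml
# Illustrative sketch of the two bindings created above.
bindings:
  - name: rabbit.ping          # Sources side: RabbitMQ -> KubeMQ
    sources:
      kind: messaging.rabbitmq
      properties:
        url: "<rabbitmq-connection-string>"
        queue: ping
    targets:
      kind: kubemq.events
      properties:
        address: kubemq-cluster-grpc.kubemq:50000
        channel: events.rabbit.ping
  - name: rabbit.pong          # Targets side: KubeMQ -> RabbitMQ
    sources:
      kind: kubemq.events
      properties:
        address: kubemq-cluster-grpc.kubemq:50000
        channel: events.rabbit.pong
    targets:
      kind: messaging.rabbitmq
      properties:
        url: "<rabbitmq-connection-string>"
        queue: pong            # assumed destination for Pong replies
```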
Now you have two integrations: Source and Target. The KubeMQ Source integration connects to the RabbitMQ server, receives ping messages from RabbitMQ, and converts them to events messages in KubeMQ. The KubeMQ Target integration listens to the events channel (events.rabbit.pong) and sends the messages to the RabbitMQ server.
Running the Legacy Service Ping and the Migrated Service Pong
After we have configured the Sources and Targets connectors, we can test the applications. The first is the legacy service Ping, connected only to RabbitMQ, sending and receiving messages on RabbitMQ topics. The second is the Pong application migrated to Kubernetes, which is linked directly to the KubeMQ cluster. Run the RabbitMQ Ping sender app. It should immediately start connecting to RabbitMQ; however, no response should be received because the Pong app is not yet live. Now run the Pong service app. You should see that Pong connects directly to the KubeMQ cluster, and there is back-and-forth communication between the Ping and Pong applications.
We have demonstrated how to migrate a RabbitMQ-connected service from a legacy environment to Kubernetes using KubeMQ platform components. This type of migration can be easily configured using the kubemqctl automatic connector creation functionality, saving developers and DevOps teams precious time and cost in their migration to Kubernetes.
KubeMQ Control Center is Upgraded with Automatic Network Creation Functionalities
We are proud to announce the upgrade of the control center, used to easily create and manage message-based connectivity across multiple Kubernetes deployments, with automatic network creation functionality. The automatic network creation functionality is being launched during KubeCon NA 2020, where visitors are invited to get a firsthand impression of its efficiency and ease of use in live demonstrations at the KubeMQ booth.
The automatic network creation is a CLI that easily and transparently creates and connects KubeMQ connectors and bridges, eliminating the hurdle of setting parameters and editing configuration files. Using the control center’s automatic functionality, the network architecture is managed and elements are added or modified, all from a single, easy-to-use interface. Smooth integration is a click away for a long list of external services such as DB, cache, messaging, and storage. KubeMQ cluster creation and management are available, including quick cluster duplication and editing.
The KubeMQ platform is a Kubernetes-native, enterprise-grade message broker and message queue with ready-to-use connectors, bridges, and a control center. KubeMQ dramatically simplifies the deployment and management of the messaging system. With support for all messaging patterns, KubeMQ is a one-stop shop for building fast and efficient microservices architectures.
The KubeMQ platform enables microservices from multiple environments to communicate with each other and build one hybrid infrastructure solution across multi-cloud, on-premises, and the edge. It enables enterprises to gradually migrate their monolithic or microservices infrastructure to a hybrid cloud solution safely, seamlessly, and without service disruption.
KubeMQ components are:
KubeMQ server supports all messaging patterns such as Queue, Stream, Pub/Sub, and RPC.
KubeMQ Bridges provide a perfect way to bridge, replicate, or aggregate messaging between Kubernetes clusters across cloud environments.
KubeMQ targets enable instant connection between microservices and a rich set of cloud and external services.
KubeMQ sources support gradual migration from a monolithic environment with legacy messaging systems to an advanced Kubernetes hybrid solution.
And the KubeMQ Control Center, which enables developers to easily create and manage multiple Kubernetes infrastructure deployments.
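To illustrate the Bridges component from the list above, here is a hypothetical binding that replicates an events channel from one cluster to another; the kinds, property names, and addresses are assumptions for illustration, not the authoritative Bridges schema:

```yaml
# Illustrative Bridges binding: replicate the "orders" events channel
# from cluster-a to cluster-b (names and addresses are hypothetical).
bindings:
  - name: cluster-a-to-cluster-b
    sources:
      kind: source.events
      connections:
        - address: kubemq-cluster-grpc.cluster-a:50000
          channel: orders
    targets:
      kind: target.events
      connections:
        - address: kubemq-cluster-grpc.cluster-b:50000
          channel: orders
```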
The need for Kubernetes Native Messaging Platform in Hybrid Cloud Environment
A fast-growing trend in IT infrastructure today, hybrid clouds are becoming increasingly popular among enterprise organizations worldwide. Used by major market leaders to connect their on-premises infrastructure, private cloud services, and third-party public clouds into one flexible and efficient superstructure, hybrid clouds are a more efficient structure for running an organization’s applications and workloads. These hybrid cloud deployments ensure that organizations meet their technical and business objectives effectively and with significantly better cost-efficiency than is possible with a public or private cloud alone.
Managing Messaging Connectivity in Hybrid Clouds
A critical component of any hybrid cloud system is the messaging connectivity across the applications and data contained within the system; hence the importance of managing that connectivity. Whatever the hybrid cloud strategy, connectivity is key to ensuring that every component of the hybrid system works seamlessly together.
To achieve this, modern messaging platforms need to both provide complete transparency of the hybrid cloud system and support integration at the microservice level. When building such an advanced hybrid cloud infrastructure, some microservices are utilized in one environment while others are utilized in the other to enjoy the best of both environments.
Building such an environment efficiently requires a Kubernetes native messaging platform. In this blog post, we will be discussing the need and advantages of building hybrid cloud infrastructure using an innovative Kubernetes native messaging platform and how it works.
Building a Kubernetes-based Solution in a Hybrid Cloud Environment
The most common concerns over hybrid deployments that we have come across deal with complexity and risk. In most enterprises, there is a need to come to grips with the management and operation of both on-premises and cloud environments, making sure the environments are always in sync, and doing so with security in mind. Building a hybrid cloud infrastructure creates the challenge of managing communication complexity in a stable, reliable, and scalable way. There is, however, a ray of hope: a unified Kubernetes-native messaging platform across the environments with multi-cluster support.
Such a Kubernetes-native messaging platform is developed for this specific environment with support for all messaging patterns, and therefore simplifies messaging creation and maintenance regardless of where you run applications. This ensures that organizations using such a messaging platform benefit not just from enterprise-grade Kubernetes solutions that support hybrid cloud deployments, but also from the native ability of those solutions to instantly connect microservices to a rich set of cloud and external services. This makes it possible for enterprise developers to easily create and manage multiple Kubernetes deployments from the messaging platform’s control center.
Portability Between All Deployments
A Kubernetes native messaging platform provides the perfect means to bridge, replicate, or aggregate Kubernetes clusters, providing an abundant set of connectors to instantly connect the various available microservices with cloud web and external services within the clusters across cloud environments, on-premises deployments, and the edge.
How to Successfully Migrate from On-premises to a Hybrid Deployment
Modern messaging platforms should enable enterprises to gradually migrate their current IT infrastructure on the fly to a hybrid cloud solution easily and without service disruption. They should also provide multi-cluster support, allowing seamless communication between on-premises microservices and cloud microservices and enabling two different Kubernetes environments to work together as one solution. This also ensures that enterprises can gradually transfer services from the on-premises environment to the cloud and vice versa safely, transparently, and without delay. Furthermore, the messaging platform supports the gradual migration of microservices from a monolithic environment with legacy messaging systems to an advanced Kubernetes hybrid solution using the messaging platform’s source and target connectors.
KubeMQ is now available through the Red Hat Marketplace
KubeMQ, the Kubernetes message queue and message broker, is now available through Red Hat Marketplace, a newly launched open cloud marketplace that makes it easier to discover and access certified software for container-based environments across the hybrid cloud. Through the marketplace, customers can take advantage of robust maintenance, support, and professional services, as well as streamlined billing, contracting, and simplified governance.
KubeMQ is a Kubernetes-native, enterprise-grade message broker and message queue: scalable, highly available, and secure. As a Red Hat Marketplace partner, we give customers easier access to deploy KubeMQ using our certified OpenShift Operator in one click. KubeMQ adds to the ease of use offered by Red Hat Marketplace by providing a Kubernetes-native message broker and message queue that supports all messaging patterns, helping enterprises build stable microservices solutions that can be easily scaled and managed.
For companies building cloud-native infrastructure and applications, Red Hat Marketplace is an essential destination to unlock the value of cloud investments. Red Hat Marketplace enables enterprises to more easily manage workloads in hybrid environments. Deploying our cloud-agnostic KubeMQ message queue provides the technology that enables hybrid clouds to connect and interact smoothly and transparently, allowing workloads to alternate between cloud providers as capacity and costs change.
Built in partnership by Red Hat and IBM, Red Hat Marketplace is designed to meet the unique needs of developers, procurement teams, and IT leaders through simplified and streamlined access to popular enterprise software. All solutions available through the marketplace have been tested and certified for Red Hat OpenShift Container Platform, the industry’s most comprehensive enterprise Kubernetes platform, allowing them to run anywhere OpenShift runs.
KubeMQ, Kubernetes Message Queue Broker, at OSCONF – An Open Source Conference
KubeMQ will be presented by Suman Chakraborty of SAP Labs in “Message broker implementation in Kubernetes using KubeMQ” at OSCONF on Saturday, April 25th. The full-day event will focus on development and implementation in Kubernetes environments, using leading products such as Redis, Rancher, and Traefik.
The event is free and hosted online; feel free to join.
KubeMQ Achieves Red Hat OpenShift Operator Certification, Automating Software Installation and Maintenance Across the Hybrid Cloud
Today, KubeMQ, the company behind the Kubernetes message queue and message broker product, announced that KubeMQ’s Kubernetes Operator has achieved Red Hat OpenShift Operator Certification. As part of this partner ecosystem, OpenShift Operator Certification offers customers and independent software vendors (ISVs) greater confidence when building their next-generation software projects on Red Hat OpenShift, the most comprehensive enterprise Kubernetes and containers application platform. As a Red Hat OpenShift Certified Operator, customers gain easier access to deploy KubeMQ in one click via the Operator catalog section on Red Hat OpenShift, helping to support KubeMQ adoption in the enterprise as the standard message queue, message broker, and stream for microservices architectures, containers, and Kubernetes.
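On OpenShift, a certified Operator is typically installed through the Operator Lifecycle Manager via a Subscription resource. The sketch below uses the standard Subscription schema, but the package name, channel, and catalog source shown for KubeMQ are assumptions for illustration:

```yaml
# Sketch of an OLM Subscription installing the certified KubeMQ
# Operator from the OpenShift catalog (package/channel names are
# illustrative assumptions).
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubemq-operator
  namespace: kubemq
spec:
  channel: stable
  name: kubemq-operator
  source: certified-operators
  sourceNamespace: openshift-marketplace
```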
Benefits of using KubeMQ
KubeMQ is an enterprise-grade Kubernetes message queue and broker: scalable, highly available, and secure. It helps enterprises build stable microservices solutions that can be easily scaled, and enables additional microservices to be quickly developed and added to the solution.
• Kubernetes Native: An innovative and modern message queue and message broker in a lightweight container, developed to run in Kubernetes, certified in the CNCF landscape, and natively connected to the cloud-native ecosystem.
• Ease of use: Simple deployment in Kubernetes, generally in less than 1 minute. Developer friendly, with simple-to-use SDKs and elimination of many developer- and DevOps-centered challenges such as defining exchanges, brokers, channels, routes, and predefined topics.
• Enterprise-grade assurance: Enterprises have access to the KubeMQ Operator from the Red Hat Container and Operator Catalog, a marketplace overseen and managed by Red Hat, with the assurance of Red Hat certification and enterprise support.
• All messaging patterns: KubeMQ is available with all messaging patterns such as Queue, Stream, Pub/Sub, and RPC. KubeMQ supports diversified messaging patterns, enabling flexibility in creating different Kubernetes messaging use cases.
KubeMQ’s Operator on Red Hat OpenShift
Kubernetes is a leading orchestration platform for organizations to build their microservices solution. Kubernetes Operators are designed to help DevOps engineers deploy and manage Kubernetes applications in a more effective and automated way. As well, enterprises are selecting Red Hat OpenShift as their production Kubernetes platform for enhanced reliability backed by Red Hat’s support and expertise. “We are proud to deliver a Red Hat OpenShift Certified Operator. It is an important milestone for KubeMQ as it contributes to earning industry recognition as a qualified enterprise solution. The KubeMQ Operator will provide enterprises with simple and robust access to our Kubernetes native message queue,” said Gil Eyal, KubeMQ’s CEO.
“We are pleased that KubeMQ’s Kubernetes Operator has achieved Red Hat OpenShift Operator Certification and is now part of the Red Hat Partner Connect ecosystem,” said Julio Tapia, senior director, Cloud Platforms ecosystem, Red Hat. “Kubernetes Operators are appealing because they help encode the human operational logic normally required to manage services running as a Kubernetes-native application and aim to make day-to-day operations easier. By providing Operators on Red Hat OpenShift, users can begin experiencing the next level of benefits from a Kubernetes-native infrastructure, with services designed to ‘just work’ across the cloud where Kubernetes runs.”
Challenges when moving From Monolith to Microservices
Many organizations are considering migrating from their current monolith architecture to a microservices architecture by breaking the service into microservices, containerizing them (usually with Docker), and deploying them to Kubernetes. Naturally, a big segment is companies whose core technology is Windows-based, using the .NET Framework as the main development technology together with MSMQ for messaging and queueing services. These companies usually generate their revenues from the current monolith technology, making the migration process a risky change for their business and revenues. Hence, these companies look for failure points to avoid and for ways to mitigate the risk of downtime during the migration process. Moreover, they are looking to continue developing their apps by adding new features in Kubernetes while keeping the old monolith available until the migration is complete.
MSMQ is a major hurdle for the migration process
MSMQ is a very popular message queue system among companies with .NET Framework services. When starting the migration journey to Kubernetes, the services are commonly containerized by moving from the old .NET Framework to .NET Core, as .NET Core is supported by Linux-based containers. However, since MSMQ is a proprietary Windows Server-based messaging solution that is not supported by Linux-based containers, a big showstopper is created for the company’s containerization and migration into Kubernetes.
Safe and gradual migration from an MSMQ-based architecture to Kubernetes
KubeMQ is an enterprise-grade Kubernetes message queue and message broker that solves the MSMQ migration issue. It is designed to enable enterprise companies to gradually break their monolith system into small containerized microservices and migrate on the fly to Kubernetes, seamlessly and without service disruption. KubeMQ provides a .NET services bridge from Windows Servers running legacy MSMQ-based services to KubeMQ (deployed in Kubernetes), allowing seamless bi-directional transposing of messages between the services in the legacy monolith environment and the microservices-based deployment in the Kubernetes environment. The bridge, installed in the legacy environment, “listens” to MSMQ on behalf of the relevant services in the Kubernetes environment, ensuring the designated messages are transferred from the monolith to Kubernetes and vice versa.
Demo for migrating MSMQ based app to Kubernetes
The following demo demonstrates how an enterprise running monolith financial software can gradually migrate its on-premises Windows and MSMQ service-based architecture into microservice containers running in Kubernetes on the Azure cloud. In the demo, we break parts off the monolith and migrate them to AKS while both remain synced using the KubeMQ bridge to MSMQ and the KubeMQ message queue. The demo is a financial application with a flow of real-time quotes (live currency exchange rates) reflected to end users in a frontend web client.
Adding KubeMQ and migrating services
Set up AKS, deploy KubeMQ, and install the KubeMQ bridge to MSMQ.
In Azure, we created a Kubernetes cluster using AKS. We then deployed a KubeMQ cluster into Kubernetes with one command line using the KubeMQ CLI tool (kubemqctl – link: https://github.com/kubemq.io/kubemqctl). In the on-premises Windows environment, we installed the KubeMQ bridge and connected it between MSMQ and KubeMQ. Both environments are now connected as described in the scheme below; messages can go from one environment to the other, creating perfect synchronization between them. We are now ready to migrate apps from the on-premises environment to AKS.
Migrating services on the fly
In the next step, we containerized the API service in a Linux Docker container using .NET Core; the frontend was containerized as well, and both were deployed to AKS and connected to KubeMQ. The services can communicate with each other in AKS as well as with the on-premises Windows services via the KubeMQ bridge to MSMQ. In this mode, both environments work together as one, and more and more services can be gradually migrated from the monolith to Kubernetes until the migration is complete.
You can view the MSMQ-KubeMQ interoperability demo and the code in GitHub.