KubeMQ use case by Ericsson for 5G Edge architecture
In a recently published article, “Enhancing service mobility in the 5G edge cloud and beyond,” researchers from Ericsson recommend KubeMQ as the enabler for building a Kubernetes-native, multi-edge cloud in which latency is a critical factor.
“Like a cloud that follows you wherever you go, edge cloud services need to match the mobility of terminals to deliver latency-critical 5G and future 6G. Part of this challenge may require moving the terminal’s state into the new edge cloud. In this blog post, we show how this can be achieved in a cloud-native way using Kubernetes-based edge clouds.”
Read the full article “Enhancing service mobility in the 5G edge cloud and beyond” on the Ericsson website.
Introducing KubeMQ Dashboard – a web interface for monitoring and controlling KubeMQ.
The KubeMQ Dashboard web interface enables users to experience the power of KubeMQ messaging patterns without writing code. It allows users to monitor and handle all message traffic and view the status of all KubeMQ nodes.
Main features of the KubeMQ Dashboard web interface:
Dashboard: Get a clear picture of KubeMQ message traffic in one view.
Messaging view (Queues/PubSub/Commands & Queries): Manage and control messages easily by drilling down to each messaging pattern.
Send messages instantly: Using the KubeMQ Dashboard, you can send messages through a simple interface. This allows you to get familiar with messaging patterns and their capabilities without having to write a line of code.
How KubeMQ customers build scalable messaging platforms with Kubernetes Operators
This article, published on Red Hat Marketplace, explains how KubeMQ customers build scalable messaging platforms with Kubernetes Operators.
Highlights from the article:
“What exactly are Operators, and how do they help manage these stateful applications? Let’s take a look at Operators in detail, how they work within Kubernetes, and how the KubeMQ messaging platform uses Operators to help you build complex and scalable messaging services with minimal coding and overhead.”
“This allows for better performance, scalability, and resilience. One key to success with this approach is utilizing Operators as the deployment and management tool for KubeMQ. The Operator deploys the clusters and ensures that the various KubeMQ bridges, sources, and targets are configured correctly for each cluster. This extends to how KubeMQ was written utilizing Go. This makes KubeMQ fast and helps hook KubeMQ into native Kubernetes data models, events, and APIs, making it less complicated to manage the state of the clusters. It also allows for easier configuration validation.”
“Deploying as an operator also helps KubeMQ keep overhead to a minimum. For example, a large financial company with high volumes of real-time messages for price quotes, transactions, and client funding leverages KubeMQ to decrease the number of servers previously required to fulfill their needs.”
“The KubeMQ Operator also helps track state—a key reason for leveraging KubeMQ for reliable cross/hybrid cloud deployments. First, this state can validate that the desired capacity and configuration are in place for each cluster. Comparing the desired state in the CR against the existing state in Kubernetes allows the Operator to ensure that failures are caught and addressed, capacity is added as required, and the various bridges, sources, and targets are configured”
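The desired-versus-actual state comparison the article describes is the standard Operator reconcile loop. The sketch below is a toy illustration of that pattern in Python, not KubeMQ's actual Operator code: the CR spec and the observed cluster state are modeled as plain dicts, and the loop computes the corrective actions needed to close the gap.

```python
# Toy reconcile loop: compare the desired state declared in a CR
# against the observed cluster state and compute corrective actions.
# Illustration of the Operator pattern only, not KubeMQ's code.

def reconcile(desired: dict, observed: dict) -> list[str]:
    actions = []
    # Scale the cluster if replica counts diverge.
    if observed.get("replicas") != desired["replicas"]:
        actions.append(f"scale cluster to {desired['replicas']} replicas")
    # Recreate any bridges, sources, or targets that are missing.
    for kind in ("bridges", "sources", "targets"):
        missing = set(desired.get(kind, [])) - set(observed.get(kind, []))
        for name in sorted(missing):
            actions.append(f"create {kind[:-1]} {name}")
    return actions

desired = {"replicas": 3, "targets": ["redis", "postgres"]}
observed = {"replicas": 2, "targets": ["redis"]}
print(reconcile(desired, observed))
# ['scale cluster to 3 replicas', 'create target postgres']
```

A real Operator runs this comparison continuously against the Kubernetes API, which is what lets it catch failures and restore capacity automatically.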
How to Automatically Set Up Microservices Communication: Examples and Demo
This tutorial shows how to connect microservices to each other automatically in minutes. We will demonstrate how a KubeMQ cluster can connect three essential services – Redis Cache, PostgreSQL Database, and Elastic Database – with two other microservice applications, in just a few configuration settings and without writing any code.
To achieve this, KubeMQ Connectors are used to enable access for these two types of applications:
1. An application that sends requests to the three services and gets back responses using the KubeMQ query pattern.
2. An application that uses KubeMQ as an API gateway to communicate via a REST call to the three services.
First things first: using the Kubernetes command-line tool (kubectl), check that all services are deployed and working. Here’s how:
–> In your terminal, type “kubectl get pods -A”. You should see that PostgreSQL and Redis are running.
–> For Elastic, type “kubectl get sts -A”. You should see Elasticsearch running in the “logging” namespace.
Deploying KubeMQ Cluster and Connectors to Kubernetes Cluster
The fastest way to deploy KubeMQ is by downloading it from the KubeMQ quick start page. After deploying KubeMQ, use the kubemqctl CLI tool to verify the following:
–> To check the readiness of your cluster, type “kubemqctl g c.”
–> To check whether the connectors are running, type “kubemqctl g con”.
Adding Integrations to the 3 Services
After deploying the KubeMQ components, the next step is to add the integrations to the services. This is done using the kubemqctl CLI’s automated build and management tools.
1. Type “kubemqctl m”, execute, and select “Manage KubeMQ Integrations” as your manager option.
2. For the next steps, type or select the following options:
i. Manage Integrations Option: Add Integration
ii. Cluster Destination: kubemq/kubemq-cluster
iii. Add Integration Type: Target Integration for kubemq/kubemq-cluster
iv. Unique Integration Name: redis
v. Select Target Kind: cache.redis
vi. Redis url: redis://redis-svc.redis:6379
vii. KubeMQ Source Kind: kubemq.query (this lets you send and receive data for Redis)
viii. KubeMQ grpc endpoint address: kubemq-cluster-grpc.kubemq:50000
ix. Query Channel: redis
3. Accept the default values, but do not add middlewares to the binding when prompted. Save your integration progress.
A notification acknowledging that the Redis integration has been successfully added will appear.
Repeat the same procedure for both PostgreSQL and Elastic databases.
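To visualize what the three completed integrations amount to, here is the same information as a Python data structure. This is only an illustrative view: the real connector configuration is generated by kubemqctl, and the PostgreSQL and Elastic kind names and URLs below are placeholders, since only the Redis values appear in the steps above.

```python
# Illustrative summary of the three target integrations as data.
# Only the Redis values come from the tutorial; the PostgreSQL and
# Elastic kind names and URLs are hypothetical placeholders, and the
# field names are simplified, not the real connector config schema.

bindings = [
    {
        "name": name,
        "source": {"kind": "kubemq.query",
                   "address": "kubemq-cluster-grpc.kubemq:50000",
                   "channel": name},
        "target": {"kind": kind, "url": url},
    }
    for name, kind, url in [
        ("redis", "cache.redis", "redis://redis-svc.redis:6379"),
        ("postgres", "stores.postgres", "postgres://postgres.postgres:5432"),  # placeholder
        ("elastic", "stores.elastic", "http://elasticsearch.logging:9200"),    # placeholder
    ]
]

for b in bindings:
    print(b["name"], "->", b["target"]["kind"])
```

The common shape is the point: each integration binds one query channel to one external service, which is why repeating the procedure per service is enough.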
Testing the 3 Integrations With an Application
Once all three integrations are using the KubeMQ query pattern for sending and receiving, the next step is to test them with an application connected to KubeMQ. The application is designed to send and receive requests for the three integrations (Redis, PostgreSQL, and Elastic).
For Redis, the application performs the following requests:
1. Set – Sending Redis data with a random key
2. Get – Getting back the data
3. Delete – Deleting the key from Redis
For PostgreSQL, the requests to be run are as follows:
1. Transaction – Several queries run one after another, inserting rows of data into the table. If the table doesn’t exist, a new table is created for the data to be added.
2. Query – Retrieve data from the table
For Elastic, the six requests to be run are as follows:
The first three requests manage indexes:
1. Check Index existence – check whether the log index exists in the Elastic database
2. Delete Index – if it exists, delete it
3. Create Index – recreate the same log index
The next three requests manage documents in Elasticsearch:
1. Set – Set and save the document to Elastic
2. Get – Retrieve the document
3. Delete – Delete the document
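To make the request/response flow concrete, the following is a toy, in-process simulation of the query pattern against a cache: a handler answers Set/Get/Delete requests the way the Redis integration would. It is a local illustration only; the message shape is invented for the example and it does not use the KubeMQ SDK or Redis.

```python
import uuid

# Toy in-process "cache target" answering query-pattern requests,
# mimicking the Set/Get/Delete flow of the Redis integration.
# Local illustration only; the request/response dict shapes are
# invented here and are not KubeMQ's wire format.

cache: dict[str, str] = {}

def handle_query(request: dict) -> dict:
    method, key = request["method"], request["key"]
    if method == "set":
        cache[key] = request["value"]
        return {"ok": True}
    if method == "get":
        return {"ok": key in cache, "value": cache.get(key)}
    if method == "delete":
        return {"ok": cache.pop(key, None) is not None}
    return {"ok": False, "error": f"unknown method {method}"}

key = str(uuid.uuid4())  # random key, as in the demo application
print(handle_query({"method": "set", "key": key, "value": "some-data"}))
print(handle_query({"method": "get", "key": key}))
print(handle_query({"method": "delete", "key": key}))
```

The PostgreSQL and Elastic integrations follow the same request/response shape; only the target behind the channel changes.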
To summarize our progress thus far, this application simulates the requests and responses we can perform with the Redis Cache, PostgreSQL Database, and Elastic Database integrations. The next step is to add an HTTP source to allow other services to communicate with Redis, PostgreSQL, and Elastic.
Adding A Source Integration
Use the kubemqctl CLI:
1. Type “kubemqctl m”, execute, and select “Manage KubeMQ Integrations” as your manager option.
2. Under “Integration Type”, select “Source Integration for kubemq/kubemq-cluster”.
3. Both unique integration name and source kind are “http”.
4. When prompted for supported methods, select the POST method.
5. Use dynamic mapping for HTTP requests. This means that every request will be forwarded to a specific channel in KubeMQ.
6. For the next steps, type or select the following options:
a. KubeMQ Target Kind: kubemq.query
b. KubeMQ grpc endpoint address: kubemq-cluster-grpc.kubemq:50000
c. Channel Mapping Mode: Dynamic
7. Accept the default values, but do not add middlewares to the binding when prompted. Save your integration progress.
The new integration can now receive requests on port 8080 and send data to the KubeMQ query target.
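Dynamic mapping means the connector derives the KubeMQ channel from the incoming HTTP request rather than from a fixed setting. A minimal sketch of such a routing rule, assuming for illustration that the last URL path segment names the channel (the real connector has its own scheme):

```python
from urllib.parse import urlparse

# Toy dynamic channel mapping: derive the KubeMQ query channel from
# the HTTP request path. The "last path segment" rule is an assumed
# convention for illustration, not the connector's actual scheme.

def channel_for(path: str) -> str:
    segments = [s for s in urlparse(path).path.split("/") if s]
    if not segments:
        raise ValueError("no channel in path")
    return segments[-1]

print(channel_for("/redis"))            # redis
print(channel_for("/query/postgres"))   # postgres
```

With a rule like this, one HTTP source can front all three integrations: a POST to a Redis-mapped path lands on the redis query channel, and likewise for PostgreSQL and Elastic.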
Testing The HTTP Service Integration
To test the integration, we follow a procedure similar to the application testing. The HTTP service communicates with the KubeMQ cluster through the “Source” integration we set up (listening on port 8080). From the KubeMQ cluster, the messages are processed by the three integrations (Redis, PostgreSQL, and Elastic), the same as in the previous testing. After running the HTTP API service, you can immediately see that the requests were processed successfully, just as described in the application testing section.
This tutorial showed how the KubeMQ CLI tool is used to easily create and manage communication between microservices, with automatic network creation functionality. Using the CLI tool, a basic microservices-connected backend was set up with a few configuration commands, saving significant coding work and time. Smooth integration is a click away for a long list of over 100 external services, such as popular AWS, GCP, and Azure services, as well as independent database, cache, messaging, and storage services supported by the KubeMQ Sources and Targets connectors.
This article describes how to migrate an existing service that uses an external RabbitMQ server to Kubernetes with KubeMQ, while preserving the ability to communicate with other legacy services connected to the RabbitMQ server. For this purpose, we will use the KubeMQ Targets and Sources connectors to convert messages between RabbitMQ and KubeMQ as follows:
The KubeMQ Sources connector consumes messages from the RabbitMQ topic named “Ping”, converts them to KubeMQ events format, and sends them to the events.rabbit.ping channel on the KubeMQ cluster.
The KubeMQ Targets connector subscribes to the KubeMQ “Pong” events channel, events.rabbit.pong. It converts each message to RabbitMQ format and publishes it to the Pong topic.
How It Works
We will use two applications here. The first one is the ping application, the legacy service: it sends ping messages to the RabbitMQ ping topic and consumes messages from the pong topic. The second application, the one migrated to Kubernetes, subscribes to the events.rabbit.ping channel and sends pong responses back on the events.rabbit.pong channel.
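The flow above can be simulated locally: a “Sources” function converts a RabbitMQ-style message into a KubeMQ event on events.rabbit.ping, the migrated pong service answers on events.rabbit.pong, and a “Targets” function converts that event back into a RabbitMQ message on the pong topic. This is a toy in-memory model of the conversion; the message shapes are simplified placeholders, not the connectors’ real wire formats.

```python
# In-memory simulation of the RabbitMQ <-> KubeMQ bridge described
# above. Message shapes are simplified placeholders, not the real
# formats used by the KubeMQ Sources/Targets connectors.

def source_connector(rabbit_msg: dict) -> dict:
    # RabbitMQ "ping" topic -> KubeMQ events channel
    return {"channel": "events.rabbit.ping", "body": rabbit_msg["body"]}

def pong_service(event: dict) -> dict:
    # Migrated service: subscribes to events.rabbit.ping, replies on pong
    assert event["channel"] == "events.rabbit.ping"
    return {"channel": "events.rabbit.pong", "body": "pong"}

def target_connector(event: dict) -> dict:
    # KubeMQ events.rabbit.pong -> RabbitMQ "pong" topic
    return {"topic": "pong", "body": event["body"]}

reply = target_connector(pong_service(source_connector(
    {"topic": "ping", "body": "ping"})))
print(reply)  # {'topic': 'pong', 'body': 'pong'}
```

The key point is that neither application changes: the legacy ping service still only sees RabbitMQ topics, and the migrated pong service only sees KubeMQ channels.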
Before building the integrations, you must deploy a KubeMQ cluster. Typing “kubemqctl g c” in the kubemqctl console shows that we have already deployed the KubeMQ cluster with 3 pods in the kubemq namespace.
We will start by adding the integrations needed for the migration. After typing the “kubemqctl m” command, we will select “Manage KubeMQ Integrations”.
Next, we will add two integrations. The first is a source integration that connects to the RabbitMQ server and forwards all messages to KubeMQ. The second is a target integration that listens for messages coming from KubeMQ and sends them to the RabbitMQ server.
Setting KubeMQ Source Integration
To add the KubeMQ Source integration, follow this procedure:
1. Select “Source Integration”.
2. Set a unique name, such as “rabbit.ping”.
3. Under “Source Kind,” select “messaging.rabbitmq”.
4. Enter the connection string of the external RabbitMQ.
5. When prompted to set dynamic channel mapping, decline.
6. Set the queue name as “ping”.
7. For Target Kind, select “kubemq.events”.
8. Set the channel mapping mode to “implicit,” and when prompted to use “events.rabbit.ping” as the events channel, accept.
9. When prompted to add middlewares, decline, and then save your settings.
Setting KubeMQ Target Integration
To add the KubeMQ Target integration, follow this procedure:
1. Set your integration name as “rabbit.pong”.
2. Select “messaging.rabbitmq” as your target kind.
3. The connection string is the same one you used previously.
4. For KubeMQ Source Kind, select “kubemq.events”. This allows you to receive replies coming from the pong application.
5. Set the events channel as “events.rabbit.pong”. When prompted to add middlewares, decline, as before.
6. Save all your settings.
Now you have two integrations: Source and Target. The KubeMQ Source integration connects to the RabbitMQ server, receives ping messages, and converts them to events messages in KubeMQ. The KubeMQ Target integration listens to the events channel (events.rabbit.pong), converts the messages back to RabbitMQ format, and sends them to the RabbitMQ server.
Running the Legacy Service Ping and the Migrated Service Pong
After configuring the Sources and Targets connectors, we can test the applications. The first one is the legacy Ping service, which is connected only to RabbitMQ, sending and receiving messages on RabbitMQ topics. The second one is the Pong application migrated to Kubernetes, which is directly linked to the KubeMQ cluster. Run the RabbitMQ Ping Sender app; it should immediately connect to RabbitMQ. However, no response will be received, because the pong app is not yet live. Now run the Pong Service app. You should see the pong app connecting directly to the KubeMQ cluster, and back-and-forth communication between the Ping and Pong applications.
We have demonstrated how to migrate a RabbitMQ-connected service from a legacy environment to Kubernetes using KubeMQ platform components. This type of migration can be easily configured using the kubemqctl automatic connector-creation functionality, saving developers and DevOps teams precious time and cost on their migration journey to Kubernetes.
KubeMQ Control Center is Upgraded with Automatic Network Creation Functionalities
We are proud to announce the upgrade of the control center, used to easily create and manage message-based connectivity across multiple Kubernetes deployments, with automatic network creation functionality. The automatic network creation functionality is being launched during KubeCon NA 2020, where visitors are invited to get a firsthand impression of its efficiency and ease of use in live demonstrations at the KubeMQ booth.
The automatic network creation is a CLI that easily and transparently creates and connects KubeMQ connectors and bridges, eliminating the hurdle of setting parameters and editing configuration files. Using the control center’s automatic functionality, the network architecture is managed and elements are added or modified, all from a single, easy-to-use interface. Smooth integration is a click away for a long list of external services such as databases, caches, messaging, and storage. KubeMQ cluster creation and management are also available, including quick cluster duplication and editing.
The KubeMQ platform is a Kubernetes-native, enterprise-grade message broker and message queue with ready-to-use connectors, bridges, and a control center. KubeMQ dramatically simplifies the deployment and management of the messaging system. With support for all messaging patterns, KubeMQ is a one-stop shop for building fast and efficient microservices architectures.
The KubeMQ platform enables microservices from multiple environments to communicate with each other and build one hybrid infrastructure solution across multi-cloud, on-premises, and edge environments. It enables enterprises to gradually migrate their monolithic or microservices infrastructure to a hybrid cloud solution safely, seamlessly, and without service disruption.
KubeMQ components are:
KubeMQ server supports all messaging patterns, such as Queue, Stream, Pub/Sub, and RPC.
KubeMQ bridges provide a perfect way to bridge, replicate, or aggregate messaging between Kubernetes clusters across cloud environments.
KubeMQ targets enable instant connection between microservices and a rich set of cloud and external services.
KubeMQ sources support gradual migration from a monolithic environment with legacy messaging systems to an advanced Kubernetes hybrid solution.
KubeMQ control center enables developers to easily create and manage multiple Kubernetes infrastructure deployments.
The need for Kubernetes Native Messaging Platform in Hybrid Cloud Environment
A fast-growing trend in IT infrastructure today, Hybrid Clouds are becoming increasingly popular among enterprise organizations worldwide. Used by major market leaders to connect their on-premises infrastructure, private cloud services, and third-party public clouds into one flexible and efficient superstructure, Hybrid Clouds are a more efficient structure for running an organization’s applications and workloads. These Hybrid Cloud deployments ensure that organizations meet their technical and business objectives effectively, and with significantly better cost-efficiency than a public or private cloud alone.
Managing Messaging Connectivity in Hybrid Clouds
A critical component of any Hybrid Cloud system is the messaging connectivity across the applications and data contained within the system, hence the importance of managing that connectivity. Whatever the Hybrid Cloud strategy, connectivity is key to ensuring that every component of the hybrid system works seamlessly together.
To achieve this, modern messaging platforms need to both provide complete transparency of the hybrid cloud system and support integration at the microservice level. When building such an advanced hybrid cloud infrastructure, some microservices are utilized in one environment while others are utilized in the other to enjoy the best of both environments.
Building such an environment efficiently requires a Kubernetes native messaging platform. In this blog post, we will be discussing the need and advantages of building hybrid cloud infrastructure using an innovative Kubernetes native messaging platform and how it works.
Building a Kubernetes-based Solution in a Hybrid Cloud Environment
The most common concerns over hybrid deployments we have come across deal with complexity and risk. In most enterprises, there is a need to come to grips with the management and operation of both on-premises and cloud environments, keeping the environments always in sync, and doing so with security in mind. Building a hybrid cloud infrastructure creates the challenge of managing communication complexity in a stable, reliable, and scalable way. There is, however, a ray of hope, and it can be found in a unified Kubernetes-native messaging platform across environments with multi-cluster support.
Such a Kubernetes-native messaging platform was developed for this specific environment with support for all messaging patterns, and therefore simplifies messaging creation and maintenance regardless of where you run applications. Organizations using such a messaging platform benefit not just from enterprise-grade Kubernetes solutions that support hybrid cloud, but also from the platform’s native ability to instantly connect microservices to a rich set of cloud and external services. This makes it possible for enterprise developers to easily create and manage multiple Kubernetes deployments from the messaging platform’s control center.
Portability Between All Deployments
A Kubernetes native messaging platform provides the perfect means to bridge, replicate, or aggregate Kubernetes clusters, providing an abundant set of connectors to instantly connect the various available microservices with cloud web and external services within the clusters across cloud environments, on-premises deployments, and the edge.
How to Successfully Migrate from On-premises to a Hybrid Deployment
Modern messaging platforms should enable enterprises to gradually migrate their current IT infrastructure on the fly to a hybrid cloud solution, easily and without service disruption. They should also provide multi-cluster support, allowing on-prem and cloud microservices to communicate seamlessly, so that two different Kubernetes environments work together as one solution. This also ensures that enterprises can gradually transfer services from the on-premises environment to the cloud, and vice versa, safely, transparently, and without delay. Furthermore, the messaging platform supports the gradual migration of microservices from a monolithic environment with legacy messaging systems to an advanced Kubernetes hybrid solution, using the platform’s source and target connectors.
KubeMQ is now available through the Red Hat Marketplace
KubeMQ, the Kubernetes message queue and message broker, is now available through Red Hat Marketplace, a newly launched open cloud marketplace that makes it easier to discover and access certified software for container-based environments across the hybrid cloud. Through the marketplace, customers can take advantage of robust maintenance, support, and professional services, as well as streamlined billing, contracting, and simplified governance.
KubeMQ is the Kubernetes-native, enterprise-grade message broker and message queue: scalable, highly available, and secure. As a Red Hat Marketplace partner, we give customers easier access to deploy KubeMQ using our certified OpenShift Operator in one click. KubeMQ adds to the ease of use offered by Red Hat Marketplace by providing a Kubernetes-native message broker and message queue that supports all messaging patterns, helping enterprises build stable microservices solutions that can be easily scaled and managed.
For companies building cloud-native infrastructure and applications, Red Hat Marketplace is an essential destination to unlock the value of cloud investments. Red Hat Marketplace enables enterprises to more easily manage workloads in hybrid environments. Deploying our cloud-agnostic KubeMQ message queue provides the technology that enables hybrid clouds to smoothly and transparently connect and interact, allowing workloads to alternate between cloud providers when capacity and costs change.
Built in partnership by Red Hat and IBM, Red Hat Marketplace is designed to meet the unique needs of developers, procurement teams, and IT leaders through simplified and streamlined access to popular enterprise software. All solutions available through the marketplace have been tested and certified for Red Hat OpenShift Container Platform, the industry’s most comprehensive enterprise Kubernetes platform, allowing them to run anywhere OpenShift runs.