Mesh multicloud deployment¶
This page explains how to deploy a service mesh in a multicloud environment.
Prerequisites¶
Cluster requirements¶
- Cluster type and version: confirm the type and version of each cluster to ensure that the service mesh installed later can run in it.
- Reliable IP: the control plane cluster must provide a reliable IP through which the data plane clusters can reach the control plane.
- Authorization: each cluster that joins the mesh must provide a remote secret with sufficient permissions, so that Mspider can install components in the cluster and the Istio control plane can access the API servers of the other clusters.
Multi-cluster regional planning¶
In Istio, Regions, Zones, and SubZones are concepts used to maintain service availability in multi-cluster deployments. Specifically:
- Region represents a large area, usually the data center region of a cloud provider; in Kubernetes, the node label `topology.kubernetes.io/region` determines a node's region.
- Zone represents a smaller area, usually a sub-zone within a data center; in Kubernetes, the node label `topology.kubernetes.io/zone` determines a node's zone.
- SubZone is an even smaller area, representing a subdivision of a Zone. This concept does not exist in Kubernetes, so Istio introduces the custom node label `topology.istio.io/subzone` to define sub-zones.
The role of these concepts is mainly to help Istio manage the availability of services between different regions. For example, in a multi-cluster deployment, if a service fails in Zone A, Istio can be configured to automatically divert service traffic to Zone B to ensure service availability.
The configuration method is to add the corresponding Label to each node of the cluster:
Area | Label |
---|---|
Region | `topology.kubernetes.io/region` |
Zone | `topology.kubernetes.io/zone` |
SubZone | `topology.istio.io/subzone` |
To add a label, find the corresponding cluster in the container management platform and configure the label on the appropriate nodes.
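The labels can also be applied from the command line; a minimal sketch, assuming a node named `node-1` and the region/zone values from the planning below:

```shell
# Label a node with its region, zone, and subzone
# (node name and values are placeholders -- adjust to your environment)
kubectl label node node-1 \
  topology.kubernetes.io/region=region1 \
  topology.kubernetes.io/zone=zone1 \
  topology.istio.io/subzone=subzone1
```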
Mesh planning¶
Mesh basic information¶
- Mesh ID
- Mesh version
- Mesh clusters
Network Planning¶
Confirm the network connectivity between the clusters and configure accordingly. For multi-network mode, this mainly involves:
- Planning network IDs
- Deploying and configuring east-west gateways
- Exposing the mesh control plane to the other worker clusters
There are two network modes:
- Single network mode: determine whether the Pod networks of the clusters can reach each other directly. If Pods can communicate directly, the clusters share a single network; note that if the Pod CIDRs conflict, you must use multi-network mode instead.
- Multi-network mode: if the networks between the clusters are disconnected, assign a network ID to each cluster, install east-west gateways in the clusters in different network areas, and apply the related configuration. The specific steps are in the chapter [Installation and configuration of mesh components in different network modes](#_21).
Planning tables¶
The cluster and mesh plans described above are summarized in the tables below for reference.
Cluster Planning¶
Cluster Name | Cluster Type | Cluster Pod Subnet (podSubnet) | Pod Network Communication Relationship | Cluster Node & Network Area | Cluster Version | Master IP |
---|---|---|---|---|---|---|
mdemo-cluster1 | standard k8s | "10.201.0.0/16" | - | master: region1/zone1/subzone1 | 1.25 | 10.64.30.130 |
mdemo-cluster2 | standard k8s | "10.202.0.0/16" | - | master: region1/zone2/subzone2 | 1.25 | 10.6.230.5 |
mdemo-cluster3 | standard k8s | "10.100.0.0/16" | - | master: region1/zone3/subzone3 | 1.25 | 10.64.30.131 |
Mesh planning¶
Configuration item | Value |
---|---|
Mesh ID | mdemo-mesh |
Mesh mode | Managed mode |
Network mode | Multi-network mode (east-west gateways required; plan network IDs) |
Mesh version | 1.15.3-mspider |
Managed cluster | mdemo-cluster1 |
Worker clusters | mdemo-cluster1, mdemo-cluster2, mdemo-cluster3 |
Network Planning¶
As the table above shows, there is no network connectivity between the clusters, so the mesh runs in multi-network mode, and the following configuration needs to be planned:
Cluster name | Cluster mesh role | Cluster network identifier (network ID) | hosted-istiod LB IP | eastwest LB IP | ingress LB IP |
---|---|---|---|---|---|
mdemo-cluster1 | managed cluster, worker cluster | network-c1 | 10.64.30.71 | 10.64.30.73 | 10.64.30.72 |
mdemo-cluster2 | worker cluster | network-c2 | - | 10.6.136.29 | - |
mdemo-cluster3 | worker cluster | network-c3 | - | 10.64.30.77 | - |
Cluster access and component preparation¶
Users need to prepare clusters that meet the requirements. A cluster can be newly created (the cluster creation capability of the container management platform can be used) or an existing one.
However, every cluster used by the mesh must be connected to the container management platform.
Access to the cluster¶
If the cluster is not created through the container management platform, such as an existing cluster, or a cluster created through a custom method (like kubeadm or Kind cluster), you need to connect the cluster to the container management platform.
Confirm observability components (optional)¶
Observability is a key capability of the mesh, and its key component is Insight Agent; if you need mesh observability, install this component.
Clusters created through the container management platform have the Insight Agent component installed by default.
For clusters created in other ways, find the `Helm application` for the cluster in the container management interface and select `insight-agent` to install it.
Mesh deployment¶
Create a mesh through the service mesh module, and add the planned clusters to it.
Create mesh¶
First, on the mesh management page, click `Create Mesh`.
The specific parameters for creating the mesh are shown in the figure:
- Select a managed mesh: in a multicloud environment, only the managed mesh mode can manage multiple clusters
- Enter a unique mesh name
- Select a mesh version that meets the requirements of the environment prepared above
- Select the cluster where the managed control plane will reside
- Load balancing IP: required to expose the control plane Istiod; it needs to be prepared in advance
- Image registry: in a private cloud, upload the images required by the mesh to your own registry; in a public cloud, `release.daocloud.io/mspider` is recommended
After creation starts, wait for the mesh status to change from `Creating` to `Running`.
Expose the hosted mesh control plane (Istiod)¶
Confirm the hosted mesh control plane service¶
After ensuring that the mesh status is normal, check whether the Services under `istio-system` in the control plane cluster `mdemo-cluster1` have been bound to a LoadBalancer IP.
If no LoadBalancer IP has been assigned to the service `istiod-mdemo-mesh-hosted-lb` that hosts the mesh control plane, additional processing is required.
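One way to check is to list the service on the control plane cluster; a sketch using the service name from this demo's planning:

```shell
kubectl get svc -n istio-system istiod-mdemo-mesh-hosted-lb
# If the EXTERNAL-IP column shows <pending>, no LoadBalancer IP
# has been assigned yet and additional processing is required
```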
Assign an EXTERNAL-IP¶
Different environments apply for or allocate LoadBalancer IPs in different ways. In a public cloud environment in particular, the LoadBalancer IP must be created according to the method provided by the cloud vendor.
This demo uses MetalLB to assign an IP to the LoadBalancer Service; for deployment and configuration, refer to the [MetalLB installation and configuration](#metallb) section.
After deploying MetalLB, confirm the hosted mesh control plane service again.
Verify that the hosted control plane Istiod `EXTERNAL-IP` is reachable¶
Verify the hosted control plane Istiod from outside the managed cluster. This practice uses curl for verification; if a 400 error is returned, the network path can be considered open:
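A hedged sketch of this check, run from a node or Pod in a worker cluster (the EXTERNAL-IP is from this demo's planning; the port is an assumption, adjust it to the port the service actually exposes):

```shell
# An HTTP 400-class response (rather than a timeout) indicates
# that the network path to the hosted Istiod is open
curl -sk -o /dev/null -w '%{http_code}\n' https://10.64.30.72:443/
```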
Confirm and configure the hosted control plane Istiod parameters¶

1. Get the hosted mesh control plane service `EXTERNAL-IP`

    In the control plane cluster `mdemo-cluster1` of the mesh `mdemo-mesh`, confirm that the hosted control plane service `istiod-mdemo-mesh-hosted-lb` has been allocated a LoadBalancer IP and record it. In this example, the `EXTERNAL-IP` of `istiod-mdemo-mesh-hosted-lb` is `10.64.30.72`.

2. Manually configure the hosted control plane Istiod parameters

    First, enter the global control plane cluster `kpanda-global-cluster` on the container management platform (if you cannot confirm which cluster this is, ask the person in charge or see Query the global service cluster) -> in the `Custom Resources` module, search for the resource `GlobalMesh` -> find the mesh `mdemo-mesh` under `mspider-system` -> edit the YAML:

    - Add the `istio.custom_params.values.global.remotePilotAddress` parameter to the `.spec.ownerConfig.controlPlaneParams` field in the YAML;
    - Its value is the `EXTERNAL-IP` of `istiod-mdemo-mesh-hosted-lb` noted above: `10.64.30.72`.
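After the edit, the relevant part of the `GlobalMesh` YAML would look roughly like this (a sketch inferred from the parameter path above; surrounding fields omitted):

```yaml
spec:
  ownerConfig:
    controlPlaneParams:
      istio.custom_params.values.global.remotePilotAddress: 10.64.30.72
```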
Add worker clusters¶
Add clusters in the service mesh GUI.

1. After the mesh control plane is created successfully, select the corresponding mesh and enter the mesh management page -> `Cluster Management` -> `Add Cluster`;
2. Select the desired worker clusters and wait for the mesh components to finish installing in them;
3. During the access process, the cluster status changes from `Accessing` to `Accessed`.
Check whether the multicloud control plane is normal¶
Since the Pod networks of the worker clusters differ from that of the mesh control plane cluster, the control plane Istiod must be exposed to them as described in Expose the hosted mesh control plane (Istiod) above.
To verify that the Istio components of a worker cluster run normally, check whether `istio-ingressgateway` in the `istio-system` namespace of the worker cluster is running:
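A sketch of this check with kubectl against the worker cluster (the `app=istio-ingressgateway` label follows Istio's default installation):

```shell
kubectl get pods -n istio-system -l app=istio-ingressgateway
# All listed pods should be Running and Ready before proceeding
```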
Install and configure mesh components in different network modes¶
This part consists of two steps:
- Configure a `network ID` for every worker cluster
- Install east-west gateways in all clusters whose networks cannot reach each other
A question first: why is an east-west gateway needed? Since the Pod networks of the worker clusters cannot reach each other directly, services would also fail when communicating across clusters. Istio's solution is the east-west gateway: when the target service is on a different network, its traffic is forwarded to the east-west gateway of the target cluster, which parses the request and forwards it to the real target service.
With the principle of the east-west gateway understood, a new question arises: how does Istio distinguish which network a service is on? Istio requires the user to define the `network ID` when installing Istio in each worker cluster, which is why the first step exists.
Manually configure the `network ID` for the worker clusters¶
Because the worker cluster networks differ, a `network ID` must be configured manually for each worker cluster. If, in your actual environment, the Pod networks of the clusters can reach each other directly, they can be configured with the same `network ID`.
Configure the `network ID` as follows:
1. First, enter the global control plane cluster `kpanda-global-cluster` (if you cannot confirm which cluster this is, ask the person in charge or see Query the global service cluster)
2. Search for the resource `MeshCluster` in the `Custom Resources` module
3. Find the worker clusters added to the mesh under the `mspider-system` namespace. In this case they are `mdemo-cluster2` and `mdemo-cluster3`
4. Take `mdemo-cluster2` as an example and edit its YAML:
    - Find the `.spec.meshedParams[].params` field and add the network ID to the parameter list
    - Notes on the parameter list:
        - Confirm that `global.meshID: mdemo-mesh` matches the mesh ID
        - Confirm that the cluster role `global.clusterRole: HOSTED_WORKLOAD_CLUSTER` indicates a worker cluster
    - Add the parameter `istio.custom_params.values.global.network` with the value from the original planning table: `network-c2`

Repeat the above steps to add a network ID for all worker clusters (`mdemo-cluster1`, `mdemo-cluster2`, `mdemo-cluster3`).
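For `mdemo-cluster2`, the edited section of the `MeshCluster` YAML would look roughly like this (a sketch; any structure beyond the documented parameter keys is illustrative):

```yaml
spec:
  meshedParams:
    - params:
        global.meshID: mdemo-mesh
        global.clusterRole: HOSTED_WORKLOAD_CLUSTER
        # Network ID from the planning table
        istio.custom_params.values.global.network: network-c2
```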
Label the `istio-system` namespace of each worker cluster with the network ID¶
Enter the container management platform, enter each worker cluster (`mdemo-cluster1`, `mdemo-cluster2`, `mdemo-cluster3`), and add the network label to its namespaces.
- Label key: `topology.istio.io/network`
- Label value: `${CLUSTER_NET}` (the cluster's network ID)
Taking `mdemo-cluster3` as an example: find `Namespaces`, select `istio-system`, then `Modify Labels`.
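The same label can be applied with kubectl; for `mdemo-cluster3`, the network ID from the planning table is `network-c3`:

```shell
kubectl label namespace istio-system topology.istio.io/network=network-c3
```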
Manually install East-West Gateway¶
Create a gateway instance¶
After confirming that all Istio components in the worker cluster are in place, start installing the east-west gateway.
The east-west gateway is installed through an `IstioOperator` resource in the worker cluster.
Note
Be sure to adjust the parameters according to the `network ID` of the current cluster.
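A representative east-west gateway `IstioOperator`, following the standard Istio multi-network example (values shown for `network-c2`; adjust the network ID per cluster):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: eastwest
  namespace: istio-system
spec:
  profile: empty
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: network-c2
        enabled: true
        k8s:
          env:
            # Restrict the gateway to its own network's view of endpoints
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: network-c2
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
  values:
    global:
      network: network-c2
```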
Create it as follows:
1. On the container management platform, enter the corresponding worker cluster's `Custom Resources` module and search for `IstioOperator`
2. Select the `istio-system` namespace
3. Click `Create YAML`
Create East-West Gateway Gateway resource¶
Create a rule in the mesh's Gateway Rules
:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: cross-network-gateway
  namespace: istio-system
spec:
  selector:
    istio: eastwestgateway
  servers:
    - hosts:
        - "*.local"
      port:
        name: tls
        number: 15443
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH
```
Set the mesh global network configuration¶
After installing the east-west gateways and their Gateway rules, the east-west gateway configuration must be declared to the mesh for all clusters. Enter the global control plane cluster `kpanda-global-cluster` on the container management platform (if you cannot confirm which cluster this is, ask the person in charge or see Query the global service cluster) -> search for the resource `GlobalMesh` in the `Custom Resources` section -> find the mesh `mdemo-mesh` in `mspider-system` -> edit the YAML.
Add the following series of `istio.custom_params.values.global.meshNetworks` parameters to the `.spec.ownerConfig.controlPlaneParams` field in the YAML:
```yaml
# !! Both lines of configuration are required for each network
# Format: istio.custom_params.values.global.meshNetworks.${CLUSTER_NET_ID}.gateways[0].address
#         istio.custom_params.values.global.meshNetworks.${CLUSTER_NET_ID}.gateways[0].port
istio.custom_params.values.global.meshNetworks.network-c1.gateways[0].address: 10.64.30.73 # cluster1 east-west gateway address
istio.custom_params.values.global.meshNetworks.network-c1.gateways[0].port: '15443' # cluster1 east-west gateway port
istio.custom_params.values.global.meshNetworks.network-c2.gateways[0].address: 10.6.136.29 # cluster2 east-west gateway address
istio.custom_params.values.global.meshNetworks.network-c2.gateways[0].port: '15443' # cluster2 east-west gateway port
istio.custom_params.values.global.meshNetworks.network-c3.gateways[0].address: 10.64.30.77 # cluster3 east-west gateway address
istio.custom_params.values.global.meshNetworks.network-c3.gateways[0].port: '15443' # cluster3 east-west gateway port
```
Demo application and verification of network connectivity¶
Deploy the demo¶
There are two applications: helloworld and sleep (both are test applications provided by Istio).
Cluster deployment:
Cluster | helloworld version | sleep |
---|---|---|
mdemo-cluster1 | VERSION=vc1 | |
mdemo-cluster2 | VERSION=vc2 | |
mdemo-cluster3 | VERSION=vc3 |
Deploy the demo with the container management platform¶
It is recommended to use the container management platform to create the corresponding workloads: find the corresponding cluster on the platform and enter the console to perform the following operations.
The following takes deploying helloworld vc1 on mdemo-cluster1 as an example.
Points to note for each of the clusters:
- Image addresses:
    - helloworld: `docker.m.daocloud.io/istio/examples-helloworld-v1`
    - sleep: `curlimages/curl`
- Add the corresponding labels to the helloworld workload:
    - `app: helloworld`
    - `version: ${VERSION}`
- Add the corresponding version environment variable to the helloworld workload:
    - `SERVICE_VERSION: ${VERSION}`
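As a YAML equivalent of the console steps above, a sketch of the helloworld vc1 workload for `mdemo-cluster1` (the Deployment name and namespace are assumptions; port 5000 follows Istio's helloworld sample):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-vc1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
      version: vc1
  template:
    metadata:
      labels:
        app: helloworld
        version: vc1
    spec:
      containers:
        - name: helloworld
          image: docker.m.daocloud.io/istio/examples-helloworld-v1
          env:
            # Lets this cluster's instance report its own version
            - name: SERVICE_VERSION
              value: vc1
          ports:
            - containerPort: 5000
```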
Deploy the demo from the command line¶
The configuration files needed during the deployment are:
Verify the demo cluster network¶
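One hedged way to verify is to call helloworld from a sleep Pod and observe that responses come back from all three versions (the workload/Service names and port follow Istio's samples; the namespace is assumed to be the one the demos were deployed into):

```shell
# Repeat the request; with multi-network traffic working, the versions
# vc1, vc2, and vc3 from the three clusters should all appear
for i in $(seq 1 10); do
  kubectl exec deploy/sleep -- curl -sS helloworld:5000/hello
done
```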
Expand¶
Other ways to create a cluster¶
Create a cluster through container management¶
There are many ways to create a cluster. Using the cluster creation function of container management is recommended, but you can choose other methods; the other solutions provided in this article are described in [Other ways to create a cluster](#_26).
You can flexibly select the components to install in the cluster; note that mesh observability must rely on `insight-agent`.
If the cluster needs more advanced configuration, it can be added in this step.
Creating the cluster takes about 30 minutes.
Create a cluster with kubeadm¶
```shell
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
    --apiserver-advertise-address=10.64.30.131 \
    --service-cidr=10.111.0.0/16 \
    --pod-network-cidr=10.100.0.0/16 \
    --cri-socket /var/run/cri-dockerd.sock
```
Create a kind cluster¶
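A minimal kind configuration sketch whose `podSubnet` matches `mdemo-cluster3` in the planning table (the cluster name and file name are assumptions):

```yaml
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  podSubnet: "10.100.0.0/16"
nodes:
  - role: control-plane
```

Create the cluster with `kind create cluster --name mdemo-cluster3 --config kind-config.yaml`.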
Metallb installation configuration¶
Demo cluster metallb network pool planning record¶
Cluster name | IP pool | IP allocation |
---|---|---|
mdemo-cluster1 | 10.64.30.71-10.64.30.73 | - |
mdemo-cluster2 | 10.6.136.25-10.6.136.29 | - |
mdemo-cluster3 | 10.64.30.75-10.64.30.77 | - |
Install¶
Container management platform Helm installation¶
It is recommended to use `Helm application` -> `Helm templates` in the container management platform: find metallb and install it.
Manual installation¶
See MetalLB official documentation.
Note: if the cluster's CNI is Calico, you need to disable Calico's BGP mode, otherwise it will interfere with the normal operation of MetalLB.
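With MetalLB v0.13+ CRDs, the address pool for `mdemo-cluster1` from the planning table could be declared as follows (the pool name matches the `first-pool` annotation used in the next step; L2 mode is an assumption):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.64.30.71-10.64.30.73
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - first-pool
```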
Add the specified IP to the corresponding service¶
```shell
kubectl annotate service -n istio-system istiod-mdemo-mesh-hosted-lb metallb.universe.tf/address-pool='first-pool'
```
Verify¶
Query the global service cluster¶
On the cluster list page of container management, search for Cluster Role: `Global Service Cluster`.