Kubernetes on Alibaba Cloud (ACK): Full Deployment Guide


The “Kubernetes is Kubernetes everywhere” mantra is an outright lie. I learned this the hard way.

We see it constantly in our consulting work. An engineering team successfully runs a massive EKS environment in AWS. They get a mandate to expand into the APAC region to capture new market share. The lead architect assumes they can just lift-and-shift their Terraform state over to Alibaba Cloud, swap out a few provider strings, and call it a day.

Six weeks later, everything is on fire. Their cluster is randomly dropping packets during peak hours. Their cloud bill is somehow double what they forecasted. Their deployment pipelines are timing out because of aggressive rate limits in Asia. The team is burned out, and management is asking why the expansion is failing.

I have spent years rescuing and rebuilding massive-scale Kubernetes environments across AWS, GCP, and Alibaba Cloud. When it comes to Alibaba Cloud Container Service for Kubernetes (ACK), treating it like a vanilla upstream deployment is the fastest way to fail.

ACK is the exact orchestration engine that powers Alibaba’s own Singles’ Day—an e-commerce event that handles peaks of tens of millions of queries per second. But to unlock that kind of scale, you have to embrace its proprietary, kernel-level integrations. If you aren’t leveraging the Terway container network interface for networking and Elastic Container Instance for serverless bursting, you are overpaying, underperforming, and frankly, doing it wrong.

We’ve mapped out what actually works through years of trial, error, and painful outages. Here’s my definitive roadmap to architecting, deploying, and optimizing ACK for high-stakes production environments. I’m including the scars and lessons learned from the trenches so you don’t have to repeat our mistakes.

Want to skip the trial and error? Expansion into APAC requires specialized infrastructure. Book an ACK Architecture Strategy Call with our Alibaba Cloud experts to ensure your deployment is built for scale from Day One.


1. Decoding Alibaba Cloud ACK Architecture

Stop fighting the platform. If you try to force vanilla Kubernetes paradigms onto Alibaba, the underlying distributed operating system will punish you. Standard Kubernetes interfaces are heavily optimized here. You need to understand how they interact with the physical network before you write a single line of YAML.

1.1. The Day Zero Sanity Check

Before touching Terraform, always verify what legacy clusters are lurking in your target region.

1.1.1. Auditing Your Existing VPC State

I’ve seen overlapping CIDR blocks from forgotten dev clusters silently take down production deployments because nobody bothered to check the Virtual Private Cloud (VPC) state. Someone spun up a test cluster six months ago, forgot to tear down the network, and now your production routing tables are poisoned.

Run this before you plan your IP subnets. It takes ten seconds and saves you ten hours of debugging.

Bash

# List all ACK clusters and their states in Singapore. 
# Do this to ensure you aren't about to overlap routing tables.
aliyun cs GET /api/v1/clusters --region ap-southeast-1

1.2. ACK Editions: Make the Right Call

Selecting the cluster type is an irreversible decision. You cannot upgrade a Standard cluster to a Pro cluster later without executing a full migration of your workloads. Here is my firm stance on the editions Alibaba offers.

1.2.1. Why We Reject Dedicated Clusters

With ACK Dedicated, you manage the master nodes. Avoid this.

Unless you are in a highly regulated sector like defense or central banking where compliance auditors explicitly mandate custom API server flags or raw etcd access, it’s an operational nightmare you simply do not need. Managing etcd quorum across availability zones is a full-time job. Backing up the control plane is tedious. Let Alibaba handle the control plane so you can focus on shipping features.

1.2.2. The Case for ACK Managed Pro

ACK Managed (Standard) is fine for dev or staging, but high availability is strictly best-effort. If you run production here, you are accepting downtime. The API server will occasionally restart during Alibaba’s backend maintenance windows. You won’t get an alert, and you won’t get a refund.

ACK Managed (Pro) is the only acceptable tier for production. The reality is that it provides a financially backed 99.95% SLA and automated etcd scaling. When your cluster hits 1,000+ pods, a standard etcd deployment will choke on the state updates. ACK Pro handles this dynamically, shifting your control plane resources behind the scenes so your kubectl commands don’t start timing out. Pay the extra few bucks. It is worth your sanity.

1.3. Network Architecture: Terway vs. Flannel (The Hill We Die On)

The most critical architectural choice you will make is your Container Network Interface. If you get this wrong, you will hit an invisible ceiling on your application’s performance.

1.3.1. The Flannel Overlay Bottleneck

In my experience, using Flannel (a VXLAN overlay) on Alibaba Cloud is a massive anti-pattern for high-throughput microservices.

The encapsulation and decapsulation overhead required to pack pod traffic into UDP packets will artificially bottleneck your nodes. It literally eats CPU cycles just to move bytes around. I once saw a cluster where 18% of the node’s CPU was entirely consumed by network overlay processing. That is wasted money.

1.3.2. Terway’s Direct VPC Routing

ACK’s proprietary Terway interface bypasses overlay networking entirely.

It integrates directly with the Alibaba VPC. It assigns primary or secondary IP addresses from Elastic Network Interfaces directly to your Pods.

Look at the real-world difference on a standard compute node:

Real-World Benchmark: standard 8 vCPU, 32GB instance

| Metric | Flannel (VXLAN) | Terway (ENI Multi-IP) | Production Impact |
| --- | --- | --- | --- |
| Max Throughput | ~2.5 – 3 Gbps | ~10 – 15 Gbps | 4x higher throughput. Mandatory for Kafka/Redis clusters. |
| P99 Latency | ~0.8ms – 1.2ms | ~0.1ms – 0.2ms | ~80% reduction. Cures mysterious microservice timeouts. |
| CPU Overhead | ~12% – 18% | < 2% | Reclaims ~15% of your compute budget for actual workloads. |

1.3.3. The Brutal Trade-off: IP Exhaustion

Terway is vastly superior, but it eats VPC IPs for breakfast. Every single pod gets a real, routable IP on your VPC.

If your networking team hands you a /24 subnet (256 IPs), and you scale to just 7 worker nodes, each reserving IPs for 40 pods, you have already blown past the subnet (7 × 40 = 280). Your cluster will hit IP exhaustion within hours: node scaling fails silently, pods sit in a Pending state indefinitely, and you will be tearing your hair out staring at Kubernetes event logs.

The rule is non-negotiable: Demand a /16 for your pod virtual switches. If your InfoSec team pushes back, explain that in Terway, Pod IPs are first-class VPC citizens. Do the math upfront: calculate your maximum nodes multiplied by maximum pods per node, and add a 20% buffer for rolling updates.
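That capacity math is worth scripting into your planning checklist. A minimal sketch in shell, with illustrative node and pod counts (swap in your own ceilings):

```shell
# Illustrative capacity check: does the pod vSwitch hold enough IPs for Terway?
MAX_NODES=50            # assumption: autoscaler ceiling for the node pool
MAX_PODS_PER_NODE=40    # assumption: pod density per node
SUBNET_PREFIX=16        # the /16 we demand for pod vSwitches

# Maximum pods, plus the 20% buffer for rolling updates
REQUIRED=$(( MAX_NODES * MAX_PODS_PER_NODE * 120 / 100 ))

# Usable IPs in the subnet (Alibaba reserves a few addresses per vSwitch)
AVAILABLE=$(( (1 << (32 - SUBNET_PREFIX)) - 4 ))

echo "Required pod IPs: $REQUIRED"
echo "Available in /$SUBNET_PREFIX: $AVAILABLE"
if [ "$REQUIRED" -gt "$AVAILABLE" ]; then
  echo "FAIL: subnet too small for Terway"
else
  echo "OK: headroom of $(( AVAILABLE - REQUIRED )) IPs"
fi
```

Run it against every node pool before you file the subnet request, not after the first Pending pod.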


2. Step-by-Step Deployment Guide: Infrastructure as Code

Manual UI clicks in the Alibaba Cloud console are a recipe for 2 AM configuration drift nightmares. The console changes frequently, and default parameters shift without warning.

We deploy strictly via Terraform to ensure idempotency and repeatable disaster recovery. If a region goes down, I want to be able to spin the entire infrastructure back up in a different zone with one command.

2.1. Network Infrastructure Planning

As mentioned, over-provision your virtual switch CIDR blocks. You also need to provision a NAT Gateway immediately.

2.1.1. CIDR Math for Production

Getting the network topology right on day one is crucial.

Terraform

variable "region" { default = "ap-southeast-1" }
provider "alicloud" { region = var.region }

# The VPC needs a massive CIDR to feed Terway's IP hunger.
# A /8 provides roughly 16 million IPs. We won't use them all, 
# but it prevents routing collisions down the line.
resource "alicloud_vpc" "ack_vpc" {
  vpc_name   = "production-ack-vpc"
  cidr_block = "10.0.0.0/8" 
}

# Do not skimp on this subnet size. A /16 gives you 65,536 IPs.
# This ensures you never wake up to an IP exhaustion alert.
resource "alicloud_vswitch" "ack_vsw_a" {
  vswitch_name = "ack-vswitch-aza"
  vpc_id       = alicloud_vpc.ack_vpc.id
  cidr_block   = "10.1.0.0/16" 
  zone_id      = "${var.region}a"
}

2.1.2. The Mandatory NAT Gateway

If you don’t build this, your pods will have no route to the public internet to pull external images or hit third-party APIs. They will just sit there and time out. I’ve watched engineers spend three days debugging application code when the issue was simply a missing NAT route.

Terraform

# You will need a NAT Gateway for outbound traffic.
resource "alicloud_nat_gateway" "nat" {
  vpc_id           = alicloud_vpc.ack_vpc.id
  nat_gateway_name = "ack-outbound-nat"
  payment_type     = "PayAsYouGo"
  vswitch_id       = alicloud_vswitch.ack_vsw_a.id
  nat_type         = "Enhanced"
}

2.2. The ACK Pro Cluster & Node Pool

Here is where the magic happens. Your cluster configuration dictates your scalability limits.

2.2.1. IPVS Over Iptables

Notice the proxy mode setting in the configuration below. Do not use iptables. Just don’t.

I have watched iptables ruleset evaluations literally melt node CPUs when a cluster scales past 5,000 services. Kubernetes iptables mode executes sequential rules for every single packet, meaning performance degrades in O(n) time. IPVS uses highly efficient hash tables in the Linux kernel and routes at wire speed (O(1) time) regardless of how many services you have.

Terraform

resource "alicloud_cs_managed_kubernetes" "k8s" {
  name                 = "prod-cluster-01"
  cluster_spec         = "ack.pro.small"
  # Always pin your version. Do not let Terraform float to latest.
  version              = "1.28.3-aliyun.1"
  
  worker_vswitch_ids   = [alicloud_vswitch.ack_vsw_a.id]
  pod_vswitch_ids      = [alicloud_vswitch.ack_vsw_a.id]
  
  # Reuse the NAT Gateway we created above instead of letting ACK provision a new one
  new_nat_gateway      = false 
  
  proxy_mode           = "ipvs" # Mandatory for scale
  
  # Service CIDR must NOT overlap with your VPC 10.0.0.0/8
  service_cidr         = "172.16.0.0/16" 
  
  # Load the essential Alibaba Add-ons
  addons { name = "terway-eniip" }
  addons { name = "csi-plugin" }
  addons { name = "csi-provisioner" }
  
  # Enable deletion protection. Terraform destroy accidents happen.
  deletion_protection = true
}

2.2.2. Node Pool Optimization

Never use the default cluster workers created during cluster initialization. Always attach a dedicated Worker Node Pool. This decouples your compute layer from your cluster state.

Terraform

# Attach a Worker Node Pool. 
resource "alicloud_cs_kubernetes_node_pool" "default" {
  cluster_id                    = alicloud_cs_managed_kubernetes.k8s.id
  node_pool_name                = "standard-workers"
  vswitch_ids                   = [alicloud_vswitch.ack_vsw_a.id]
  # 7th Gen instances offer the best network PPS performance
  instance_types                = ["ecs.g7.2xlarge"]
  system_disk_category          = "cloud_essd"
  system_disk_performance_level = "PL0" # See our storage rant below
  system_disk_size              = 100
  desired_size                  = 3
}

2.3. Modern Ingress Configuration

How traffic gets into your cluster is just as important as how it behaves inside.

2.3.1. Ditching Classic Load Balancers

Legacy Classic Load Balancers belong in a museum. They lack modern features, they scale poorly, and debugging them is miserable. Depending on your traffic type, you need to use Alibaba’s modern load balancers.

For Layer 4 (TCP/UDP), strictly use the Network Load Balancer (NLB). For Layer 7 (HTTP/HTTPS), use the Application Load Balancer (ALB).

2.3.2. Direct Routing with NLB

Because we are using Terway, we can do something incredible: Direct Routing.

Instead of traffic hitting the Load Balancer, routing to a random Node Port, and then forwarding it to the Pod (which adds an extra network hop and ruins source IP visibility), we tell the load balancer to route traffic directly to the Pod’s VPC IP.

YAML

apiVersion: v1
kind: Service
metadata:
  name: high-throughput-service
  annotations:
    # Let the cloud controller fully manage the listeners it creates
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-name: "prod-nlb"
    
    # NLB is zonal: replace these placeholders with your zone and vSwitch IDs
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-zone-maps: "ap-southeast-1a:vsw-xxxxxxxx"
    
    # Terway Magic: Bypasses NodePort and routes directly to the Pod IP
    service.beta.kubernetes.io/backend-type: "eni" 
spec:
  type: LoadBalancer
  # Provisions a modern Network Load Balancer (NLB) instead of a legacy CLB
  loadBalancerClass: "alibabacloud.com/nlb"
  selector:
    app: backend-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

We Build Optimized Infrastructure

Navigating Alibaba Cloud’s unique quirks—from VPC IP exhaustion to complex cross-border networking—requires specialized experience. If your engineering team is expanding into APAC and needs a production-ready environment without the steep, expensive learning curve, we can help.

👉 Explore our Managed Alibaba Cloud Infrastructure Services to see how we accelerate global deployments.


3. Production Best Practices & Cost Optimization

You can rack up a massive bill on Alibaba Cloud very quickly if you treat it like a traditional data center. I review client architectures every week, and I consistently see the same two places where teams bleed money: idle compute and over-provisioned storage.

3.1. The Serverless Virtual Node Strategy

Traditional Kubernetes node scaling is inefficient for unpredictable workloads.

3.1.1. Identifying Idle Compute

We routinely see clients over-provisioning massive compute node pools just to handle sporadic background jobs, scheduled tasks, or CI/CD runners. They pay for 100% of the compute 24/7, even though the nodes sit at 15% utilization for most of the day.

Stop paying for idle compute.

3.1.2. Executing the ECI Hand-off

Alibaba has a feature called Virtual Nodes, backed by Elastic Container Instances. By applying a single label to your deployment, ACK intercepts the scheduling request, bypasses your worker nodes entirely, and spins the pod up on serverless infrastructure.

You pay exactly for the vCPU and Memory allocated, down to the second the task runs. When the pod dies, the billing stops immediately.

YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker
spec:
  template:
    metadata:
      labels:
        # This one label offloads this specific workload entirely to serverless compute
        alibabacloud.com/eci: "true"
    spec:
      containers:
      - name: worker
        image: my-heavy-worker:v2
        resources:
          # You will be billed exactly for 2 vCPU and 4Gi memory per second
          requests:
            cpu: "2"
            memory: "4Gi"

A quick word of warning here: watch out for cold starts. If your Docker image is 3GB, the serverless engine has to pull that over the network before the pod starts. Use Alibaba’s Image Cache feature to pre-warm massive images onto the infrastructure to drop boot times from 40 seconds to 5 seconds.

3.2. Demystifying Enhanced SSD Tiers

Storage on Alibaba Cloud is not just standard SSD versus HDD. They use Enhanced SSD (ESSD), which comes in strict performance tiers based on IOPS and throughput.

3.2.1. The Default Storage Trap

I once audited a fintech client who was spending $15,000 a month solely on block storage. I looked at their cluster state and realized an engineer had copy-pasted an outdated tutorial and blindly deployed PL1 disks for every single worker node system disk and stateless logging volume.

This is burning money for zero performance gain.

3.2.2. Enforcing PL0 Policies

My rule of thumb is simple: Force PL0 as the default StorageClass for all generic worker nodes and stateless application logs.

PL0 caps at 10,000 IOPS. This is plenty for standard Kubernetes nodes running stateless microservices. It saves you up to 50% compared to PL1.

Reserve PL1 (which caps at 50,000 IOPS) strictly for stateful sets that actually need heavy disk I/O, like PostgreSQL databases, Kafka brokers, or Redis caches. If you think you need PL2 or PL3 (up to 1,000,000 IOPS), you probably shouldn’t be running that specific monolithic workload inside Kubernetes anyway.
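That rule of thumb is mechanical enough to encode. A small helper based on the IOPS caps above (the PL0 and PL1 caps come from this section; the 100,000 IOPS cap for PL2 is an assumption worth verifying against current Alibaba Cloud docs):

```shell
# Map a workload's required IOPS to the cheapest sufficient ESSD tier.
# PL0 = 10k and PL1 = 50k per the text; PL2 = 100k is our assumption.
essd_tier() {
  iops=$1
  if   [ "$iops" -le 10000 ];  then echo "PL0"
  elif [ "$iops" -le 50000 ];  then echo "PL1"
  elif [ "$iops" -le 100000 ]; then echo "PL2"
  else echo "PL3 (reconsider running this workload in Kubernetes)"
  fi
}

essd_tier 3000    # stateless microservice logs
essd_tier 45000   # busy PostgreSQL
```

Wire something like this into your PR review bot so nobody can request a PL1 volume without a stated IOPS number.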

Enforce the budget-saver via your default StorageClass:

YAML

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-disk-essd-pl0-default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: diskplugin.csi.alibabacloud.com
parameters:
  type: cloud_essd
  performanceLevel: PL0 # The budget-saver
reclaimPolicy: Retain
allowVolumeExpansion: true

4. Real-World Architecture: Surviving Flash Sales

Let’s talk about extreme scale.

Consider an e-commerce client of ours operating in Southeast Asia. Their baseline traffic is a comfortable 5,000 requests per second. But during regional flash sales, traffic violently spikes to 100,000 requests per second for exactly four hours, and then drops back down just as quickly.

4.1. The Hard Lesson on Autoscaling

You can’t rely on standard Kubernetes tooling for this kind of event.

4.1.1. Why Reactive HPA Fails

Standard Kubernetes Horizontal Pod Autoscaling relies on reactive metrics. It waits for the monitoring system to scrape CPU usage, calculates the average, and then triggers a scale event.

During a violent traffic spike, this loop is too slow. By the time the autoscaler registers the load, your existing pods are already CPU-throttled, throwing gateway errors, or getting killed for memory limits. By the time the cluster requests new virtual machines, boots them, installs the kubelet, and joins them to the cluster, the flash sale is half over and you’ve lost millions in revenue.

You cannot react to a flash sale. You must predict it.

4.2. The Production Fix: Predictive Pre-warming

We implemented Alibaba’s Advanced Horizontal Pod Autoscaler in tandem with serverless bursting to solve this problem permanently.

4.2.1. Implementing AHPA

This system uses machine learning to analyze historical metric patterns like daily or weekly seasonality and pre-warms pods before the spike hits.

YAML

apiVersion: autoscaling.alibabacloud.com/v1beta1
kind: AdvancedHorizontalPodAutoscaler
metadata:
  name: ahpa-ecommerce-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ecommerce-api
  # Baseline state
  minReplicas: 50
  # Flash sale state
  maxReplicas: 15000
  prediction:
    quantiles:
    - 95
    # The lifesaver: Scales up infrastructure 30 minutes BEFORE the predicted spike
    scaleUpForward: 1800 

4.2.2. The Timeline of a Scale Event

Here is how it plays out in reality:

  1. T-Minus 30 Minutes: At 11:30 PM, the system realizes a spike is historically imminent at midnight.
  2. The Request: It requests 10,000 new pods.
  3. The Overflow: Because the standard worker nodes are full, the Virtual Kubelet intercepts the pending pods.
  4. The Serverless Burst: It overflows the traffic into serverless instances, deploying 10,000 pods directly into the VPC in under two minutes.
  5. The Wind Down: At 4:00 AM, the sale ends. The system scales down, and the instances are terminated.

The client pays roughly ~$1,500 for the exact seconds of burst compute used. If they had pre-provisioned dedicated hardware for the whole month just to be safe, the bill would have been over $180,000.
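A back-of-the-envelope check shows why the burst number lands where it does. Everything here is an assumption for illustration: 10,000 pods sized at 1 vCPU / 2 GiB, and per-second rates that approximate public ECI list prices (verify against current Alibaba Cloud pricing before budgeting):

```shell
# Rough flash-sale burst cost. All inputs are illustrative assumptions.
PODS=10000; VCPU=1; MEM_GIB=2; HOURS=4
VCPU_RATE=0.0000077   # USD per vCPU-second (assumed)
MEM_RATE=0.00000096   # USD per GiB-second  (assumed)

COST=$(awk -v p=$PODS -v c=$VCPU -v m=$MEM_GIB -v h=$HOURS \
          -v cr=$VCPU_RATE -v mr=$MEM_RATE \
          'BEGIN { printf "%.0f", p * h * 3600 * (c*cr + m*mr) }')
echo "Estimated burst cost: \$$COST"
```

With these inputs the estimate lands in the same ballpark as the ~$1,500 figure above; the exact number moves with pod sizing and regional rates.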


Need Help Implementing Predictive Scaling?

Configuring advanced autoscaling, virtual nodes, and serverless compute to work seamlessly together takes precision tuning. You have to balance image caching, VPC quotas, and database connection pooling.

We help high-growth SaaS and e-commerce brands automate their scaling to survive massive traffic spikes while actively slashing compute costs.

👉 Let’s optimize your cloud spend and bulletproof your scaling.


5. Failure Cases: What Will Wake You Up at 3 AM

These are the operational landmines that catch every Western team off guard when moving to Alibaba Cloud. Print this out and tape it to your monitor. I have lost weekends to every single one of these issues.

5.1. The SNAT Port Blackhole

This is the most common scaling failure we see, and it is infuriating to debug if you don’t know what you are looking for.

5.1.1. The 64k Connection Limit

When you scale out thousands of pods, and they all try to reach a third-party payment gateway API on the public internet, they must route out through your VPC’s NAT Gateway.

Here is the math: A single Elastic IP on a NAT Gateway has a hard maximum of 64,000 concurrent SNAT ports. If 10,000 pods all make 7 outbound connections at the exact same time, you hit the 64k limit.
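Plugging the numbers from that scenario into a few lines of shell makes the failure obvious before it happens:

```shell
# SNAT port math for the scenario above.
PODS=10000
CONNS_PER_POD=7
PORTS_PER_EIP=64000   # hard cap of concurrent SNAT ports per Elastic IP

NEEDED=$(( PODS * CONNS_PER_POD ))                          # concurrent ports required
EIPS=$(( (NEEDED + PORTS_PER_EIP - 1) / PORTS_PER_EIP ))    # ceiling division

echo "Concurrent SNAT ports needed: $NEEDED"
echo "Minimum EIPs to bind:         $EIPS"
```

Two EIPs is the bare mathematical minimum for this load; binding three buys you headroom for retry storms.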

What happens? The NAT gateway silently drops the packets. Your applications will log mysterious connection timeouts. You will assume your code is broken. You will waste days debugging your application logic when the problem is pure network physics.

5.1.2. Pooling Elastic IPs

The fix is architectural. Bind multiple Elastic IPs to your NAT Gateway and aggregate them into a pool.

Terraform

resource "alicloud_eip_address" "snat_ips" {
  count     = 3 # Triples your outbound concurrent connection limit to 192,000
  bandwidth = 100
}

# EIPs do nothing until they are bound to the NAT Gateway itself
resource "alicloud_eip_association" "snat_assoc" {
  count         = 3
  allocation_id = alicloud_eip_address.snat_ips[count.index].id
  instance_id   = alicloud_nat_gateway.nat.id
  instance_type = "Nat"
}

# Bind them all to the same VSwitch routing rule
resource "alicloud_snat_entry" "ack_snat" {
  snat_table_id     = alicloud_nat_gateway.nat.snat_table_ids
  source_vswitch_id = alicloud_vswitch.ack_vsw_a.id
  snat_ip           = join(",", alicloud_eip_address.snat_ips[*].ip_address) 
}

5.2. CoreDNS CPU Throttling

This is the silent killer of microservice performance.

5.2.1. The Microservice Multiplier Effect

In heavy microservice architectures, a frontend app making one external API call might trigger 15 downstream internal calls. Every single one of those internal calls requires a DNS lookup.

A default-sized CoreDNS deployment starts to buckle somewhere around 10,000 DNS queries per second, hitting severe CPU throttling under the load. When CoreDNS throttles, internal DNS resolution jumps from 1 millisecond to 5 seconds. Your entire microservice chain will start failing health checks, cascading into a massive, cluster-wide outage.
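The multiplier math is worth doing explicitly. A worst-case sketch using the baseline traffic figure from the flash-sale example, assuming no client-side DNS caching and one lookup per internal call:

```shell
# Estimate internal DNS load from the microservice multiplier effect.
# ASSUMPTION: no client-side caching; one lookup per downstream call.
EXTERNAL_RPS=5000        # baseline traffic (from the flash-sale example)
FANOUT=15                # downstream internal calls per external request

DNS_QPS=$(( EXTERNAL_RPS * FANOUT ))
echo "Worst-case internal DNS QPS: $DNS_QPS"
```

At 5,000 external requests per second you are already an order of magnitude past the ~10,000 QPS ceiling, before the flash sale even starts.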

5.2.2. Deploying NodeLocal DNSCache

Do not wait for an outage to fix this. Deploy NodeLocal DNSCache immediately upon cluster creation.

It runs a lightweight DNS cache on every single worker node as a daemonset. Pods query their local node’s cache directly. This bypasses the standard service routing and keeps internal DNS resolution strictly under 1 millisecond, shielding your CoreDNS deployment from overwhelming traffic spikes.

5.3. Rate Limits in Asia

If you rely on public Docker Hub images like standard Nginx or Node.js alpine builds for your base images, you are playing Russian Roulette with your cluster availability.

5.3.1. The Public Registry Trap

Pulling images from public registries in APAC regions is notoriously slow. Speeds often drop below 500 KB/s. Worse, public registries enforce strict pull rate limits based on IP addresses.

If a node crashes and your cluster auto-scales a replacement node, that new node needs to pull 30 microservice images immediately to get back to a healthy state. If you get rate-limited by the public registry, your pods stay in a failed state for hours. Your outage just went from a 2-minute self-healing blip to a catastrophic failure requiring manual intervention.
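The arithmetic makes the recovery pain concrete. Assuming an average image size of 300 MB (an illustrative figure; the 30-image count and 500 KB/s throttled speed come from the scenario above):

```shell
# How long does one throttled replacement node take to become healthy?
IMAGE_MB=300          # assumption: average microservice image size
IMAGES=30             # images the replacement node must pull (from above)
THROTTLED_KBPS=500    # observed public-registry speed in APAC (from above)

TOTAL_KB=$(( IMAGES * IMAGE_MB * 1024 ))
RECOVERY_SECS=$(( TOTAL_KB / THROTTLED_KBPS ))
echo "Node recovery time: $(( RECOVERY_SECS / 60 )) minutes"
```

That works out to roughly five hours of degraded capacity from a single node replacement, which is exactly how a self-healing blip becomes an incident bridge.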

5.3.2. Enforcing VPC-Internal Pulls

Never deploy directly from public registries in production.

We mandate mirroring all images to Alibaba Cloud Container Registry. More importantly, you must configure your nodes to pull strictly via internal VPC endpoints. This guarantees unmetered, public-internet-free, wire-speed image pulls directly across the Alibaba backbone network.

Bash

# How your CI/CD pipeline should push images. 
# Notice the '-vpc' in the domain. This ensures traffic stays off the public internet.
docker tag my-app:v1 registry-vpc.ap-southeast-1.aliyuncs.com/my-project/my-app:v1
docker push registry-vpc.ap-southeast-1.aliyuncs.com/my-project/my-app:v1

5.4. Overlooking Roles for Service Accounts

Security is often an afterthought during a migration, but getting it wrong on Alibaba Cloud leaves your entire infrastructure exposed.

5.4.1. The Danger of Static Keys

In standard, rushed setups, developers often inject long-lived AccessKeys directly into Kubernetes Secrets or Pod environment variables. They do this so their apps can upload files to object storage or query databases.

This is reckless. If a single pod is compromised via an application vulnerability, the attacker has your raw cloud credentials. They can pull your data, spin up crypto-miners, or delete your backups.

5.4.2. Mapping Roles to Pods

Stop using raw AccessKeys. ACK supports RAM Roles for Service Accounts.

You map a Kubernetes ServiceAccount to an Alibaba Cloud Role. The Pod assumes the identity dynamically and fetches short-lived tokens from the metadata server.

YAML

apiVersion: v1
kind: ServiceAccount
metadata:
  name: oss-uploader-sa
  namespace: production
  annotations:
    # The pod seamlessly inherits this highly restricted IAM role
    pod-identity.alibabacloud.com/role-name: "RoleForUploads_Production"

If the pod is breached, the token expires in 15 minutes, and it only has permissions to touch one specific bucket. You stop the blast radius cold.


Conclusion: Stop Guessing on Alibaba Cloud

Deploying Kubernetes on Alibaba Cloud isn’t just about spinning up worker nodes and writing YAML files. It requires a fundamental shift in how you treat the cloud provider.

You must treat the VPC network, the storage tiers, and the serverless fabric as native extensions of your cluster, rather than generic external resources. The moment you start fighting the Alibaba distributed operating system is the moment your deployment starts failing.

By flattening your network with Terway, utilizing serverless instances for ruthless cost-efficiency, mitigating the NAT and DNS traps before they happen, and automating your architectural guardrails via Terraform, ACK transitions from a generic orchestration layer into a bulletproof engine for global scale.

You can take the time to build it right the first time, or you can rebuild it in a panic during a massive, revenue-impacting outage. The choice is yours.

Don’t leave your APAC infrastructure to chance. Whether you are currently migrating workloads, struggling with unexplained latency, or looking to aggressively slash your cloud bill, our team of Alibaba Cloud specialists is ready to step in.

👉 Contact us today for a comprehensive ACK Architecture Review and let’s build your enterprise cloud.


Read more: 👉 CI/CD Pipelines on Alibaba Cloud: Complete DevOps Workflow

Read more: 👉 Serverless on Alibaba Cloud (Function Compute): Use Cases & Guide

