products
Effortless Kubernetes Journey: Meet Arcfra Kubernetes Engine
2025-01-16
Arcfra Team

Arcfra Enterprise Cloud Platform (AECP) is a full-stack, software-defined infrastructure that provides high-performance, stable support for both virtualized and containerized applications. Within AECP, Arcfra Kubernetes Engine (AKE) delivers the Kubernetes service: it not only enables businesses to build production-grade Kubernetes clusters with an out-of-the-box experience, but also meets diverse application demands through unified management of VM-based and physical machine-based Kubernetes clusters.

For a quick view of AKE, please check out our recent short video.

Addressing the Challenges

Various infrastructure management challenges emerge as I&O teams are increasingly involved in building and operating Kubernetes clusters.

Complexities of Kubernetes Management: Deploying, configuring, tuning, and scaling Kubernetes clusters usually involve multiple steps and command-line work, making it difficult for I&O teams to master these operations quickly and manage clusters efficiently.

Lack of Production-Grade Storage and Network Support:

  • Centralized storage is widely used in production systems, but its configuration model does not meet the agile delivery requirements of Kubernetes.
  • The network architecture, security policies, load balancing, and application release mechanisms of Kubernetes are relatively complex and challenging to integrate with the security systems of existing production systems.

Difficulty of Managing Kubernetes Across Multiple Computing Resources: When business applications demand extreme performance (e.g., AI scenarios), users may prefer deploying Kubernetes clusters directly on physical machines. However, these physical machine-based Kubernetes clusters often have their Control Plane nodes hosted independently on a vendor’s virtualization platform, causing fragmentation in O&M and configuration.

AKE: Making Multi-Environment Kubernetes Building and Management a Breeze

Based on Arcfra Cloud Operating System (ACOS) clusters with x86_64 architecture, AKE can automatically create virtual machines to build multiple highly available Kubernetes clusters. By integrating leading industry products such as Arcfra Virtualization Engine (AVE), Arcfra Block Storage (ABS), and Arcfra Network Service (ANS), AKE helps enterprise IT operations teams easily deploy and manage production-grade Kubernetes clusters.

  • IT operations teams can perform full-lifecycle management of all Kubernetes clusters through a unified GUI, including rapid creation, deletion, scaling, and upgrading, as well as manage resources like workloads, services, and networks.
  • Physical machines can be used directly as worker nodes for workload clusters, and VM-based and physical machine-based clusters can be managed simultaneously.
  • Two built-in CSI add-ons allow users to directly use production-ready distributed storage from ACOS clusters (see the storage sketch after this list).
  • Users can optionally use the CNI add-on and external load balancer based on ANS.
  • Users can also choose from a variety of pre-integrated open-source add-ons, such as CNI, load balancer, Ingress Controller, monitoring, and logging, as well as other software from the Kubernetes ecosystem.
  • As for container registry setup, users can either create a container registry within the ACOS cluster and configure it for use by AKE Kubernetes clusters, or configure a third-party container registry of their own.
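
To make the storage integration above more concrete, here is a minimal sketch of how a workload might request a persistent volume from the CSI-provided storage class, using the official Kubernetes Python client. The StorageClass name, namespace, and size below are placeholders rather than AKE-defined values.

```python
# Minimal sketch: request a persistent volume from the CSI-provided storage
# class with the official Kubernetes Python client (pip install kubernetes).
# The StorageClass name "ake-abs" is a placeholder -- use the name exposed by
# your AKE cluster's CSI add-on.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the AKE workload cluster

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "ake-abs",  # hypothetical class name
        "resources": {"requests": {"storage": "20Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

Any pod that mounts this claim receives a volume backed by the distributed storage pool of the underlying ACOS cluster.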

Key Features

#1 Easy to use

  • It helps you create Kubernetes workload clusters in minutes with just a few simple steps. You can manage the entire lifecycle of all clusters, including creation, upgrades, scaling, and deletion, through a single, fully graphical management interface.
  • You can configure Kubernetes core component parameters, GPU/vGPU resource parameters, and trusted container image repositories for workload clusters through a graphical interface.
  • It supports the use of virtual machines or physical machines as workload cluster nodes and achieves unified graphical management.
  • It supports multi-dimensional monitoring of cluster status, as well as viewing cluster logs and events.
  • It supports full lifecycle management of Kubernetes resources such as Deployments, Pods, and Persistent Volumes through a graphical interface (see the API sketch after this list).
  • It pre-integrates and automates the installation of open-source software such as Calico, MetalLB, Contour, Prometheus, and EFK.
  • It supports parameter configuration of pre-integrated components through a graphical interface.
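
Because the workload clusters AKE builds are standard Kubernetes, the same resources shown in the GUI can also be driven through the ordinary Kubernetes API. As a minimal sketch (the names and image are illustrative and not part of AKE), a Deployment could be created with the official Python client like this:

```python
# Minimal sketch: manage a Deployment through the standard Kubernetes API.
# Names and image are illustrative only.
from kubernetes import client, config

config.load_kube_config()

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [{
                    "name": "web",
                    "image": "nginx:1.27",
                    "ports": [{"containerPort": 80}],
                }]
            },
        },
    },
}

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```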

#2 Production-ready

  • The built-in Arcfra production-grade distributed storage CSI add-on provides stable, high-performance persistent volumes for stateful applications.
  • It supports VM and container flat network and unified policy management via Arcfra Network Service and its Container Network Interface.
  • It ensures cluster high availability by placing nodes on different host machines using VM placement groups.
  • When the host machine of a cluster’s virtual machine node fails, the virtual machine automatically restarts on a healthy host machine.
  • It supports automatic or manual rapid replacement of failed Kubernetes nodes.
  • It supports cluster rolling upgrades and rollback on failure to ensure business continuity (see the sketch after this list).
  • It automatically triggers horizontal node scale-out/in when cluster resources fail to meet application deployment demands.
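
To illustrate the rolling-upgrade point, application owners can pair AKE's node maintenance with a standard PodDisruptionBudget so that a minimum number of replicas stays available while nodes are drained. The sketch below assumes a hypothetical Deployment labeled app=web:

```python
# Minimal sketch: a PodDisruptionBudget keeps at least one replica of the
# (hypothetical) "web" Deployment running while nodes are drained during a
# rolling upgrade or replaced after a failure.
from kubernetes import client, config

config.load_kube_config()

pdb = {
    "apiVersion": "policy/v1",
    "kind": "PodDisruptionBudget",
    "metadata": {"name": "web-pdb"},
    "spec": {
        "minAvailable": 1,
        "selector": {"matchLabels": {"app": "web"}},
    },
}

client.PolicyV1Api().create_namespaced_pod_disruption_budget(
    namespace="default", body=pdb
)
```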

#3 Multi-environment compatibility

  • AKE supports coexistence of physical and virtual machine Kubernetes clusters, allowing users to choose the appropriate cluster type and size based on business needs, with unified management.
  • It also supports GPU passthrough and virtualization including vGPU, MIG, and MPS for sharing GPU resources across VMs and containers, enhancing AI workload efficiency and resource utilization.
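
As a rough sketch of how a container consumes a GPU that has been exposed to a cluster node (whether via passthrough, vGPU, or MIG): the pod simply requests it as an extended resource. The resource name assumes the standard NVIDIA device plugin, and the image tag is illustrative only.

```python
# Minimal sketch: a pod requesting one GPU exposed to a cluster node.
# "nvidia.com/gpu" assumes the standard NVIDIA device plugin; adjust to
# whatever resource name your cluster advertises. The image is illustrative.
from kubernetes import client, config

config.load_kube_config()

gpu_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "cuda-test"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "cuda",
            "image": "nvidia/cuda:12.4.1-base-ubuntu22.04",
            "command": ["nvidia-smi"],
            "resources": {"limits": {"nvidia.com/gpu": 1}},
        }],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=gpu_pod)
```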

#4 No vendor lock-in

  • AKE supports multiple standard Kubernetes versions, allowing users to choose as needed to build clusters.
  • It also allows users to use other open-source software from the CNCF ecosystem, without vendor lock-in.

Use Cases

#1 Automatically create Kubernetes clusters using virtual machines

AKE reduces repetitive work across multiple cluster deployments by automating and standardizing node configuration, deployment, parameter configuration, and other operations.

It also performs automated, standardized full-lifecycle management of Kubernetes clusters, including rapid scaling, automatic replacement of faulty nodes, and upgrade/rollback operations.

#2 Support virtualized and containerized applications simultaneously on limited hardware resources

AKE enables users to centrally manage foundational resources, improving interconnectivity between virtualized and containerized applications and increasing resource utilization. Typical scenarios include:

  • Due to budget constraints, new servers cannot be purchased in the short term.
  • Due to limited space, the data center is unable to accommodate additional hardware.
  • Hardware resources must be distributed across various locations due to the geographical dispersion of end users.
  • Containerized and virtualized applications are tightly interdependent and require direct intercommunication.

#3 Provide independent Kubernetes clusters for “multiple tenants”

Users can use AKE to create multiple Kubernetes clusters of different scales, versions, configurations, and purposes on the same infrastructure resource pool, serving different projects, different departments within the enterprise, and containerized applications developed by third-party ISVs.

#4 Build different types of Kubernetes clusters

Business demands for computing performance and management flexibility vary, so users need to optimize resource utilization while balancing performance and cost requirements.

Users can opt to use existing physical machines to provide computing resources for Kubernetes clusters, reducing initial costs.

#5 Build lightweight AI infrastructure with AKE

Since most high-performance computing applications require GPUs, high-performance storage, and networking services, users prefer to run HPC applications in containers. However, some components may still rely on VMs and need to use GPUs and CPUs together.

With AKE, enterprise users can build a lightweight AI infrastructure that enables rapid provisioning of container runtime environments with both CPU and GPU resources on VMs and physical machines, meeting diverse compute needs while enhancing resource utilization.
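
As a minimal, hypothetical sketch of scheduling in such a mixed environment: if the operator labels the physical GPU nodes (the label below is an assumption, not an AKE convention), AI workloads can be steered onto them with an ordinary nodeSelector while CPU-only services keep running on VM-based nodes.

```python
# Minimal sketch: steer an AI workload onto physical GPU nodes in a mixed
# cluster using an ordinary nodeSelector. The label "node-type=baremetal-gpu"
# is hypothetical -- apply whatever label scheme fits your environment, e.g.
#   kubectl label node <node> node-type=baremetal-gpu
# The image is illustrative only.
from kubernetes import client, config

config.load_kube_config()

training_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "train-job"},
    "spec": {
        "restartPolicy": "Never",
        "nodeSelector": {"node-type": "baremetal-gpu"},  # hypothetical label
        "containers": [{
            "name": "trainer",
            "image": "pytorch/pytorch:2.4.0-cuda12.4-cudnn9-runtime",
            "command": ["python", "-c",
                        "import torch; print(torch.cuda.is_available())"],
            "resources": {"limits": {"nvidia.com/gpu": 1}},
        }],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=training_pod)
```

The same approach works in reverse: latency-tolerant or CPU-only services can be pinned to VM-based worker nodes with their own label.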

For more information on AKE, please visit our website.

About Arcfra

Arcfra is an IT innovator that simplifies on-premises enterprise cloud infrastructure with its full-stack, software-defined platform. In the cloud and AI era, we help enterprises effortlessly build robust on-premises cloud infrastructure from bare metal, offering computing, storage, networking, security, backup, disaster recovery, Kubernetes service, and more in one stack. Our streamlined design supports both virtual machines and containers, ensuring a future-proof infrastructure.

For more information, please visit www.arcfra.com.