
Architectures

APIM Deployment in DEV/STG/PRD Environments

In this architecture, the APIM platform is deployed on a Kubernetes cluster and segmented by namespaces and roles. The system is designed to run independently across multiple environments such as Development, Staging, and Production.

Ingress Layer and External/Internal Routing

At the top layer, the system utilizes two Load Balancers (LBs):

Internal LB (Admin Access) - provides access to:

  • APIM Console (apim-admin.company.com)
  • IAM Console (tenant-admin.company.com)
  • Developers Portal for admins (developers-admin.company.com)
  • Auth Console (auth-admin.company.com)

Internet-facing LB (User Access) - provides external access to:

  • Open API services (api.company.com)
  • Public Developer Portal (developers.company.com)

All traffic passes through a centralized Ingress Controller, which terminates TLS and forwards requests over HTTP to internal services based on host and path rules.
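
The host-based routing described above can be sketched as a simple lookup from the Host header to an internal service. The hostnames come from the domains listed in this section; the namespace/service targets are illustrative assumptions, not the actual cluster values.

```python
# Hypothetical Host-header -> internal service map for the ingress layer.
# Domains are from this document; service names are assumed for illustration.
ROUTES = {
    "apim-admin.company.com": "apim/apim-console",
    "tenant-admin.company.com": "apim/tenant-manager-console",
    "developers-admin.company.com": "apim/developer-portal-backend",
    "auth-admin.company.com": "apim/auth-console",
    "api.company.com": "user-namespace/api-gateway",
    "developers.company.com": "apim/developer-portal-frontend",
}

def route(host: str) -> str:
    """Return the internal service for a Host header; fail for unknown hosts."""
    try:
        return ROUTES[host.lower()]
    except KeyError:
        raise LookupError(f"no ingress rule for host {host!r}")
```

In the real cluster this mapping lives in Ingress rules, with TLS already terminated at the controller before the lookup happens.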

Node Group: Management

This group hosts all core components required to manage tenants, projects, gateways, APIs, and policies.

Key Namespaces:

namespace: apim

Core Components:

  • Tenant Manager (IAM) - handles identity and access management for system users and tenant organizations
  • Tenant Manager Console - UI frontend for tenant admins (built with Vue.js)
  • API Management Console BFF - Backend-for-Frontend coordinating UI and service interactions (Vue.js & Node.js)
  • Gateway Manager - controls gateway provisioning and association with projects
  • Policy Manager - manages inbound/outbound policy definitions such as IP filtering, authentication, and logging
  • Developers Portal (Frontend & Backend) - interfaces for API users to browse and test published APIs
  • Analysis Manager - handles real-time API usage analysis and reporting (connected with FluentBit)

Persistent Databases:

  • Tenant Manager DB (PostgreSQL)
  • APIM DB Master/Slave (MariaDB)
  • PVC configured for data durability and redundancy.

Node Group: User-Node-Group

This group handles runtime API traffic, routing user API calls to backend microservices.

Namespaces:

namespace: user-namespace

Components:

  • API Gateway - Kong-based gateway handling ingress API requests
  • API Gateway DB - PostgreSQL store for runtime gateway configuration and state
  • In-Memory DB (Master/Slaves) - used for token/session storage (likely Redis or similar)
  • Microservices - the actual backend services receiving routed API traffic

API Gateway receives requests from external users and performs:

  • Policy execution (auth, IP filtering, etc.)
  • Routing to appropriate microservice
  • Returning responses back via the ingress
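
The three gateway responsibilities above can be sketched as one request-handling function: inbound policies run first, then the request is routed. The allowed network, token set, and service map below are hypothetical placeholders for what Kong plugins and route configuration would provide in practice.

```python
# Illustrative sketch of the request path through the Kong-based gateway.
# All policy values and service names are assumptions for this example.
from ipaddress import ip_address, ip_network

ALLOWED_NETS = [ip_network("10.0.0.0/8")]      # assumed internal range
VALID_TOKENS = {"dev-token"}                   # stand-in for real auth
SERVICES = {"/orders": "orders-service", "/users": "users-service"}

def handle(client_ip: str, token: str, path: str) -> str:
    # Inbound policy: IP filtering
    if not any(ip_address(client_ip) in net for net in ALLOWED_NETS):
        return "403 Forbidden (IP filter)"
    # Inbound policy: authentication
    if token not in VALID_TOKENS:
        return "401 Unauthorized"
    # Routing to the matching microservice
    for prefix, service in SERVICES.items():
        if path.startswith(prefix):
            return f"200 OK from {service}"
    return "404 Not Found"
```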

Node Group: Monitoring

  • Logging System - powered by Elasticsearch, used for collecting structured API logs
  • Monitoring System - powered by Prometheus, collects metrics for system health and alerting

Logging and monitoring components are integrated with FluentBit and API Gateway logs, enabling:

  • Real-time API traffic insights
  • Custom metric visualization
  • Alerting via Slack/Email channels
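
As a rough sketch of this pipeline, the gateway emits structured JSON log records that FluentBit can forward to Elasticsearch, while alerting reduces to threshold checks over collected metrics. The field names and the 5% error-rate threshold below are illustrative assumptions, not the actual schema or alert rules.

```python
# Sketch of a structured access-log record and a trivial alert condition
# of the kind the monitoring stack evaluates. Fields are assumed.
import json

def access_log(method: str, path: str, status: int, latency_ms: float) -> str:
    """Serialize one API call as a structured JSON log line."""
    return json.dumps({
        "method": method, "path": path,
        "status": status, "latency_ms": latency_ms,
    })

def error_rate_alert(statuses: list[int], threshold: float = 0.05) -> bool:
    """True when the share of 5xx responses exceeds the threshold."""
    errors = sum(1 for s in statuses if s >= 500)
    return errors / len(statuses) > threshold
```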

System Communication Flow

  1. Admins access the system via internal domains through the Ingress Controller.
  2. Users call Open APIs and Developer Portal via external domains, which route to the Kong Gateway.
  3. Kong enforces API policies (inbound/outbound) and routes to respective microservices.
  4. Logs and metrics from all components are streamed to the monitoring and logging stack.

Integrated APIM Deployment with Cloud & Third-Party Services

This architecture shows how the APIM system integrates with external infrastructure such as AWS and with logging/monitoring services such as CloudWatch, Datadog, or Firehose.

How It Works:
  • External users access the system through a public domain, which is routed via AWS API Gateway through a VPC Link to the internal APIM gateway.
  • Private domains and Route53 are used to route requests internally to the Kubernetes cluster where APIM services reside.
  • Once requests reach the Kong Gateway, inbound policies are enforced (authentication, header injection, etc.), and then traffic is routed to backend services.
  • Responses pass through outbound policies (e.g., data masking, logging), and are returned to the client.
  • All request/response logs and metrics are forwarded to CloudWatch, Datadog, or Firehose via integrated exporters.
  • Swagger-based spec registration is used to expose or update APIs dynamically through the Developer Portal.
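
A minimal sketch of one outbound policy mentioned above, data masking, assuming a configured list of sensitive field names per API (the field names here are invented for illustration):

```python
# Outbound policy sketch: mask sensitive fields in a response payload
# before it is returned to the client. Field names are assumptions.
SENSITIVE_FIELDS = {"ssn", "card_number", "password"}

def mask_outbound(payload: dict) -> dict:
    """Return a copy of the payload with sensitive values replaced."""
    return {
        k: "***" if k in SENSITIVE_FIELDS else v
        for k, v in payload.items()
    }
```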

This architecture supports secure, scalable, and observable API management across organizational boundaries. It ensures API governance while allowing seamless extension to cloud-native services.

Internal Deployment for Development Environment

This version reflects an internal-only setup of the APIM platform for development use. It emphasizes security and closed access during API testing or service development.

How It Works:
  • All traffic flows internally, through private DNS and ALB, into the cluster.
  • Internal developers access the APIM Console, Developer Portal, and IAM via predefined internal subdomains.
  • API traffic from development frontend applications is sent to the Kong Gateway, where all configured policies are applied.
  • Backend microservices (hosted in the bo namespace) respond to requests routed through the gateway.
  • The entire stack is separated by namespace for maintainability and role separation:
    • apim contains configuration and control logic.
    • microservices contains runtime services and business logic.

This architecture allows safe API development and testing without any exposure to public networks. It is optimal for validating services, applying policies, and verifying access control before staging or production rollout.

Dev-Only Internal Flow Model

This architecture presents a detailed internal traffic flow within a development environment, focusing on network boundaries and isolation.

How It Works:
  • Internal applications and developers interact with the APIM Console or Developer Portal through private domains and NLB/ALB routing.
  • Requests from the frontend are routed to the Kong Gateway, where runtime policies such as authentication, rate limiting, and transformation are enforced.
  • Gateway routes requests to backend microservices hosted in the same cluster or via service mesh (if applicable).
  • API usage, logs, and traffic statistics are sent to internal observability tools like Datadog, ensuring visibility during dev operations.
  • There is no public-facing access point in this environment - all components, including the API Gateway, are strictly internal.
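
Of the runtime policies listed above, rate limiting is commonly implemented as a token bucket. The sketch below is illustrative only; in this architecture the Kong Gateway's own rate-limiting configuration would be used, with capacity and refill rate set per consumer or per route.

```python
# Token-bucket rate limiter sketch. Capacity and refill rate are
# illustrative; a logical clock is passed in to keep the example testable.
class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = 0.0  # logical clock, in seconds

    def allow(self, now: float) -> bool:
        """Admit one request if a token is available at time `now`."""
        # Refill based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```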

This setup ensures a secure, isolated pipeline for developing and testing APIs while retaining full monitoring and governance capability. It allows dev teams to simulate production-like API behavior without external exposure.

Component Description and Resource

This table outlines the CPU, memory, and storage resources assigned to each component in the APIM Control Plane. It helps infrastructure and DevOps teams plan and provision Kubernetes clusters accurately and efficiently.

Instance | Description | Kind | Replicas | CPU (cores) | CPU Total (cores) | Memory (Mi) | Memory Total (Mi) | Storage (GB) | Storage Total (GB)
deploy-apim-analysis-manager | Analysis Manager | Deployment | 1 | 0.5 | 0.5 | 1024 | 1024 | 0 | 0
deploy-apim-bff | APIM Console BFF | Deployment | 1 | 0.5 | 0.5 | 512 | 512 | 0 | 0
deploy-apim-gateway-manager | Gateway Manager | Deployment | 1 | 0.5 | 0.5 | 768 | 768 | 0 | 0
deploy-apim-tenant-manager | Tenant Manager (IAM) | Deployment | 1 | 0.5 | 0.5 | 768 | 768 | 0 | 0
deploy-apim-tenant-manager-console | Tenant Manager Console | Deployment | 1 | 0.2 | 0.2 | 512 | 512 | 0 | 0
deploy-apim-policy-manager | Policy Manager | Deployment | 1 | 0.2 | 0.2 | 512 | 512 | 0 | 0
deploy-apim-developer-portal-backend | Developer Portal Backend | Deployment | 1 | 0.2 | 0.2 | 512 | 512 | 20 | 20
deploy-apim-developer-portal-frontend | Developer Portal Frontend | Deployment | 1 | 0.2 | 0.2 | 64 | 64 | 0 | 0
deploy-apim-mariadb-master | APIM DB (MariaDB Master) | StatefulSet | 1 | 0.5 | 0.5 | 512 | 512 | 10 | 10
deploy-apim-mariadb-slave | APIM DB (MariaDB Slave) | StatefulSet | 1 | 0.2 | 0.2 | 256 | 256 | 0 | 0
statefulset-apim-tenant-manager-postgresql | IAM DB (PostgreSQL) | StatefulSet | 1 | 0.5 | 0.5 | 256 | 256 | 10 | 10
Total | | | | | 4 | | 5760 | | 40
Total Control Plane Resources:
  • CPU: 4 cores
  • Memory: 5760 Mi (≈ 6 GiB)
  • Storage: 40 GB
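
The replica-scaling note in this section can be made concrete: each component's total is its per-replica request multiplied by its replica count. The two rows below are taken from the table above; scaling the BFF to 3 replicas is an invented example, not a recommendation.

```python
# Capacity sketch: per-component totals = per-replica request x replicas.
# Rows are from the resource table; the 3-replica scaling is illustrative.
COMPONENTS = {
    # name: (replicas, cpu_cores, memory_mi, storage_gb)
    "deploy-apim-gateway-manager": (1, 0.5, 768, 0),
    "deploy-apim-bff": (1, 0.5, 512, 0),
}

def totals(components: dict) -> tuple:
    """Sum (CPU cores, memory Mi, storage GB) across all replicas."""
    cpu = sum(r * c for r, c, _, _ in components.values())
    mem = sum(r * m for r, _, m, _ in components.values())
    sto = sum(r * s for r, _, _, s in components.values())
    return cpu, mem, sto

# Scaling the BFF to 3 replicas triples its CPU/memory share;
# storage is unaffected for these stateless Deployments.
scaled = dict(COMPONENTS)
scaled["deploy-apim-bff"] = (3, 0.5, 512, 0)
```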

Notes:

  • CPU and memory totals increase with the replica scaling policy
  • Logging/Monitoring storage scales with traffic volume
  • One public and one private APIM deployment are supported