
Mizar Arktos Integration Release 2022 0130 Test Plan


Infrastructure setups that need to be covered (launch sketch after the list):

  1. Kube-up: scale-out 2x2
  2. Kube-up: scale-out 1x1
  3. Arktos-up
  4. Arktos-up with 3 workers
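
A minimal launch sketch, assuming the standard entry points in the Arktos tree (script names should be verified against the branch under test):

```sh
# Local single-node Arktos cluster (assumed entry point: hack/arktos-up.sh).
./hack/arktos-up.sh

# Scale-out and multi-worker deployments are driven by the kube-up scripts;
# partition layouts (2x2, 1x1) and worker counts are configured through
# environment-specific variables not shown here.
```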

Test Success Standard

Provisioning

  1. `kubectl get` shows bouncers, VPCs, subnets, and dividers in Provisioned status within a reasonable time frame (check sketch after this list)
    1. 2 min for VPC0?
    2. 30 s for a new tenant?
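
A minimal check sketch for the criteria above, assuming the Mizar CRDs expose a status printer column (the column position is an assumption):

```sh
# List the Mizar networking objects and their status.
kubectl get vpcs,subnets,dividers,bouncers

# Poll until no object is left in a non-Provisioned state (assumes the status
# is the last printed column; adjust to the actual CRD printer columns).
until ! kubectl get vpcs,subnets,dividers,bouncers --no-headers \
    | awk '$NF != "Provisioned"' | grep -q .; do
  sleep 5
done
```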

IP assignment tolerance

  1. If using docker.io/mizarnet/testpod:latest with no pre-downloaded image, the pod should start in < 1 min (?)
  2. If using a whoami or nginx image, the pod should start in 2 s (timing sketch below)
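
A timing sketch for the tolerances above; pod names are placeholders:

```sh
# Time a cold start with the Mizar test image (no pre-downloaded image).
kubectl run testpod --image=docker.io/mizarnet/testpod:latest --restart=Never
time kubectl wait --for=condition=Ready pod/testpod --timeout=60s

# Time a start with a small, commonly cached image against the 2 s target.
kubectl run web --image=nginx --restart=Never
time kubectl wait --for=condition=Ready pod/web --timeout=10s
```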

Test cases

Initial Provisioning

  1. VPC0 is created automatically
  2. The default system tenant VPC is created automatically
  3. Creating a new tenant provisions its VPC successfully (sketch below)
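
A sketch of the new-tenant case (item 3); the Tenant manifest shape is an assumption based on the Arktos multi-tenancy API and should be confirmed against the Arktos docs:

```sh
# Create a new tenant (apiVersion/kind are assumptions; verify with Arktos docs).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Tenant
metadata:
  name: test-tenant
EOF

# The tenant's VPC should appear and reach Provisioned within the ~30 s budget.
kubectl get vpcs
```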

IP assignment for pod/services

The following test cases need to be done both in the system tenant and in a newly created tenant:

  1. A pod created in the system tenant w/o a Mizar annotation can be put into the Running state with an IP from the default system tenant VPC
  2. A service created w/o a Mizar annotation can have its endpoint created with an IP from the default tenant VPC
  3. Manually create another VPC in the system tenant (annotation sketch after this list):
    1. A pod/service created with the Mizar annotation gets an IP from the manually created VPC
    2. A pod/service created with a Mizar annotation referencing a VPC that does not exist should not start correctly. Removing the VPC annotation should assign the pod/service an IP from the default VPC (does this work with Mizar? - test, take it as it is)
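
A hedged sketch of the annotated pod in item 3; the `mizar.com/vpc` and `mizar.com/subnet` annotation keys are assumptions to be checked against the Mizar documentation, and the VPC/subnet names are placeholders:

```sh
# Pod pinned to the manually created VPC via Mizar annotations
# (annotation keys and vpc/subnet names are assumptions/placeholders).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotated-pod
  annotations:
    mizar.com/vpc: my-vpc
    mizar.com/subnet: my-subnet
spec:
  containers:
  - name: test
    image: docker.io/mizarnet/testpod:latest
EOF

# Confirm the assigned IP falls inside the manually created VPC's range.
kubectl get pod annotated-pod -o wide
```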

Test Connectivity

The following test cases need to be done both in the system tenant and in a newly created tenant:

  1. Multiple pods sharing the same VPC can connect to each other; consider using different names for pods created in different tenants to ensure ping requests reach the correct pod
  2. A pod sharing the same VPC as a service can connect to the service (connectivity sketch below)
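
A connectivity sketch for both cases; pod and service names are placeholders, and the test image is assumed to include ping and curl:

```sh
# Pod-to-pod: ping pod-b from pod-a inside the same VPC.
POD_B_IP=$(kubectl get pod pod-b -o jsonpath='{.status.podIP}')
kubectl exec pod-a -- ping -c 3 "$POD_B_IP"

# Pod-to-service: reach a service in the same VPC by its cluster IP.
SVC_IP=$(kubectl get svc my-svc -o jsonpath='{.spec.clusterIP}')
kubectl exec pod-a -- curl -s "http://$SVC_IP"
```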

Test Isolation

VPCs within the same tenant

  1. Pods not in the same VPC cannot connect to each other
  2. Services not in the same VPC cannot connect to each other
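
A negative-check sketch for the isolation cases; the same pattern applies to the cross-tenant cases below (names are placeholders):

```sh
# The cross-VPC ping is expected to FAIL; success means isolation is broken.
CROSS_IP=$(kubectl get pod pod-in-vpc2 -o jsonpath='{.status.podIP}')
if kubectl exec pod-in-vpc1 -- ping -c 3 -W 2 "$CROSS_IP"; then
  echo "FAIL: pods in different VPCs can reach each other"
fi
```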

System vs. non-system tenant

  1. Pods not in the same tenant cannot connect to each other
  2. Services not in the same tenant cannot connect to each other

Two non-system tenants

  1. Pods not in the same tenant cannot connect to each other
  2. Services not in the same tenant cannot connect to each other

Initial test for performance

  1. Create multiple pods (10, 100, 1000 (with 30 workers)) from a single deployment and check whether all pods reach the Running state; if so, measure how long it takes (timing sketch below)
  2. Create multiple tenants (1, 10, 100) and measure how long the initial provisioning takes to complete
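
A timing sketch for the deployment case (item 1); the image choice is arbitrary and the replica count follows the list above:

```sh
# Create a deployment, scale it up, and time the full rollout.
kubectl create deployment perf-test --image=nginx
kubectl scale deployment perf-test --replicas=100
time kubectl rollout status deployment/perf-test
```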

Questions

  1. Do we need to support VPC update in 130, i.e. switching a pod/service from one VPC to another within the same tenant? (I don't think we need to support this case in 130, but removing a mistakenly assigned VPC annotation should put the pod/service back onto the default VPC.)