container_lifecycle
Verify the correct behaviour of gNOI.Containerz when operating containers.
This step only applies if the reference implementation of containerz is being tested.
Start by pulling the reference implementation:
$ git clone git@github.com:openconfig/containerz.git
Then cd into the containerz directory and build containerz:
$ cd containerz
$ go build .
Finally start containerz:
$ ./containerz start
You should see the following output:
$ ./containerz start
I0620 12:02:57.408496 3615908 janitor.go:33] janitor-starting
I0620 12:02:57.408547 3615908 janitor.go:36] janitor-started
I0620 12:02:57.408583 3615908 server.go:167] server-start
I0620 12:02:57.408595 3615908 server.go:170] Starting up on Containerz server, listening on: [::]:19999
I0620 12:02:57.408608 3615908 server.go:171] server-ready
The test container is available in the feature profile repository under internal/cntrsrv.
Start by entering that directory and running the following commands:
$ cd internal/cntrsrv
$ go mod vendor
$ CGO_ENABLED=0 go build .
$ docker build -f build/Dockerfile.local -t cntrsrv:latest .
At this point you will have built a container image for the test container.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
cntrsrv latest 8d786a6eebc8 3 minutes ago 21.4MB
Now export the container image to a tarball.
$ docker save -o /tmp/cntrsrv.tar cntrsrv:latest
$ docker rmi cntrsrv:latest
This is the tarball that will be used during tests.
Using the gnoi.Containerz API (reference implementation to be available in openconfig/containerz), deploy a container to the DUT. Using gnoi.Containerz, start the container.
The container should expose a simple health API. The test succeeds if it is possible to connect to the container via the gRPC API to determine its health.
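For illustration only, a minimal Go sketch of that health check is shown below. It assumes the started container serves the standard gRPC health-checking service (grpc_health_v1) on a reachable address; the address used here is a placeholder and the actual cntrsrv API and port may differ.

// healthcheck.go: sketch of the CNTR-1.1 success criterion, assuming the
// container serves the standard gRPC health-checking service (grpc_health_v1).
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Placeholder address: wherever the started container's gRPC port is reachable.
	conn, err := grpc.Dial("dut:60061", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// The step passes if the container answers the health RPC with SERVING.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		log.Fatalf("health check failed: %v", err)
	}
	log.Printf("container health: %v", resp.GetStatus())
}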
Using the container started as part of CNTR-1.1, retrieve the logs from the container and ensure non-zero contents are returned when using gnoi.Containerz.Log.
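A possible shape for this check, sketched with the generated gNOI Containerz Go stubs, is shown below. The containerz address, the instance name cntr-test, and the LogRequest/LogResponse field names (InstanceName, Msg) are assumptions rather than confirmed API details.

// logs.go: sketch of retrieving container logs via gnoi.Containerz.Log and
// checking that non-zero contents come back. Field names are assumptions.
package main

import (
	"context"
	"io"
	"log"

	cpb "github.com/openconfig/gnoi/containerz"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Placeholder address of the containerz service on the DUT.
	conn, err := grpc.Dial("dut:19999", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cli := cpb.NewContainerzClient(conn)

	// "cntr-test" is a placeholder for the instance name used when starting the container.
	stream, err := cli.Log(context.Background(), &cpb.LogRequest{InstanceName: "cntr-test"})
	if err != nil {
		log.Fatalf("Log RPC: %v", err)
	}

	lines := 0
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatalf("recv: %v", err)
		}
		lines++
		log.Printf("log: %s", resp.GetMsg())
	}
	if lines == 0 {
		log.Fatal("expected non-zero log contents")
	}
}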
Using the container started as part of CNTR-1.1, validate that the container is included in the listed set of containers when calling gnoi.Containerz.List.
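A sketch of that listing check follows, again using the generated Go stubs and assuming ListContainer is a server-streaming RPC; responses are printed whole to avoid assuming individual field names.

// list.go: sketch of verifying the running container appears in
// gnoi.Containerz.ListContainer output.
package main

import (
	"context"
	"io"
	"log"

	cpb "github.com/openconfig/gnoi/containerz"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Placeholder address of the containerz service on the DUT.
	conn, err := grpc.Dial("dut:19999", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	stream, err := cpb.NewContainerzClient(conn).ListContainer(context.Background(), &cpb.ListContainerRequest{})
	if err != nil {
		log.Fatalf("ListContainer RPC: %v", err)
	}
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatalf("recv: %v", err)
		}
		// Each response describes one container; the test checks that the
		// container started in CNTR-1.1 is present.
		log.Printf("container: %v", resp)
	}
}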
Using the container started as part of CNTR-1.2, validate that the container can be stopped, and is subsequently no longer listed in the gnoi.Containerz.List API.
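A sketch of the stop-and-verify flow is below; the StopContainerRequest field names (InstanceName, Force) and the instance name are assumptions.

// stop.go: sketch of stopping the container and then re-checking the container list.
package main

import (
	"context"
	"log"

	cpb "github.com/openconfig/gnoi/containerz"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Placeholder address of the containerz service on the DUT.
	conn, err := grpc.Dial("dut:19999", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
	cli := cpb.NewContainerzClient(conn)

	// "cntr-test" is a placeholder instance name; field names are assumptions.
	if _, err := cli.StopContainer(context.Background(), &cpb.StopContainerRequest{InstanceName: "cntr-test", Force: true}); err != nil {
		log.Fatalf("StopContainer RPC: %v", err)
	}
	// After stopping, a ListContainer pass (as in the previous sketch) should no
	// longer report the instance.
	log.Println("container stopped; re-run the ListContainer check to confirm removal")
}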
The yaml below defines the RPCs intended to be covered by this test.
rpcs:
  gnoi:
    containerz.Containerz.Deploy:
    containerz.Containerz.StartContainer:
    containerz.Containerz.StopContainer:
    containerz.Containerz.Log:
    containerz.Containerz.ListContainer: