How-to build, test, debug#
Prepare the development environment#
Development and packaging of SCL components for distribution is
done with the nix package manager.
Make sure to:
- install nix, and
- load the kvm kernel module, which is required for the integration tests. When working inside another hypervisor, support for nested virtualization must be enabled.
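You can quickly check both prerequisites from a terminal before continuing; the commands below are read-only checks:
# verify that nix is available
nix --version
# verify that the kvm module (together with kvm_intel or kvm_amd) is loaded
lsmod | grep kvm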
The two most important nix commands used in the following are nix-shell
and nix-build. nix-shell brings up a reproducible shell environment that
provides all tools required for development, such as the Rust and Go
compilers in the right versions. nix-build is used to transform the SCL
code base into distributable artifacts (binaries, documentation) or to
run tests and additional QA checks.
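For day-to-day development it is usually sufficient to enter the shell once and use the pinned toolchains inside it; a minimal sketch (the exact tools provided depend on the repository's shell definition):
# enter the reproducible development shell defined in the repository
nix-shell
# inside the shell, the pinned compilers are on PATH, e.g.:
rustc --version
go version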
Build documentation#
Use the following command to generate and serve the rendered documentation:
nix-build -A docs.rendered -o result-docs-rendered && \
nix-shell -p python3 --run 'python3 -m http.server --directory result-docs-rendered 8001'
The documentation can now be viewed at http://localhost:8001.
Build SCL binaries#
The most relevant targets for development are listed in the following table.
| Build Target | Command(s) |
|---|---|
| SCL service binaries | nix-build -A packages.scl-management |
| SCL terraform provider | nix-build -A packages.tf-provider |
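nix-build writes its output to the nix store and leaves a result symlink in the current directory; the bin sub-directory shown below is an assumption about the package layout:
# build the SCL service binaries and inspect the produced artifacts
nix-build -A packages.scl-management
ls -l result/bin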
Run unit tests and linters#
| Check | Command(s) |
|---|---|
| Unit tests | nix-build -A checks.scl-management.test |
| Unit test coverage | nix-build -A checks.scl-management.coverage |
| Linters | nix-build -A checks.scl-management.fmt |
| Clippy | nix-build -A checks.scl-management.clippy |
Note: Go-related unit tests are executed automatically when building the SCL terraform provider.
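For quicker feedback while developing, the same tests can usually be run directly inside the development shell; the working directories are not spelled out here and depend on the repository layout:
# run the Rust unit tests without going through nix-build
nix-shell --run 'cargo test'
# run the Go unit tests of the terraform provider (adjust the package path as needed)
nix-shell --run 'go test ./...'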
Integration tests#
Run integration tests#
Run nix-build -A checks.integrationTests to execute all integration tests.
Individual integration tests can be run as follows:
| Integration Test Suite | Command(s) |
|---|---|
| Multi-node, tested via sclctl | nix-build -A checks.integrationTests.multinode.sclctlVmLifecycle |
| Multi-node, tested via terraform | nix-build -A checks.integrationTests.multinode.terraformVmLifecycle |
| Testing Infrastructure Management config | nix-build -A checks.integrationTests.imApiFilter |
| Testing SCL-API Certificate Revocation List support | nix-build -A checks.integrationTests.sclApiCrl |
| Testing ComputeAPI | nix-build -A checks.integrationTests.computeApi |
| Testing L2-Net-API | nix-build -A checks.integrationTests.l2NetApi |
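If an integration test fails, it can be useful to keep its temporary build directory for inspection; nix-build supports this with the --keep-failed (-K) flag and prints the path of the kept directory:
# keep the build directory of a failing test for post-mortem inspection
nix-build -K -A checks.integrationTests.multinode.sclctlVmLifecycle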
In multi-node tests, clients talk to the Infrastructure Management API (realized by an OpenAPI Proxy instance that forwards traffic to the SCL API) and use mocked access tokens obtained from the JWT Dispenser (also part of the OpenAPI Proxy repository). The services are spread across multiple nodes:
| Node / Machine Name | Services / Role |
|---|---|
| computeNode0 | Runs OpenApiProxy, L3-Net-Controller, ComputeApi & L2NetApi |
| computeNode1 | Runs OpenApiProxy, ComputeApi & L2NetApi |
| sclApiNode | Runs SCL API, Scheduler-, L2-Net- & Vm-Controller |
| imApiNode | Runs OpenAPI Proxy, JWT Dispenser |
| clientNode | sclctl, terraform |
Debug with integration test environment#
The aforementioned integration tests and their environments can be debugged interactively, allowing you to freely interact with the services and their underlying systems.
Such environments can be created by appending .driverInteractive to any integration
test command from the table above. Afterwards, the session can be started by
running ./result/bin/nixos-test-driver.
If, for example, we want to investigate checks.integrationTests.multinode.terraformVmLifecycle, we run the following commands:
nix-build -A checks.integrationTests.multinode.terraformVmLifecycle.driverInteractive
./result/bin/nixos-test-driver
This will give you an interactive Python shell.
You can execute the same commands as used in the Python-based test scripts in the
integration-test sub-directory. Note that no expansion of enclosed nix expressions
(i.e., the ${...} syntax) is performed.
Interaction with the nodes is possible via Python machine objects. The number and names of
the machines (such as singlenode or sclApiNode) depend on the test environment. See
the previous section for details.
To start all virtual machines, run start_all(). It is advised to do this first.
You can get a shell (without prompt!) in a given $MACHINE via $MACHINE.shell_interact().
But be aware that terminating this shell will permanently break the connection
to the VM for the rest of the test run. If an interactive shell is needed, it is often better
to connect to the test VMs externally instead, e.g., by logging in as root in the VM window
(in a graphical environment), opening a serial console, or connecting to the VMs via SSH.
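Typical commands at the interactive Python prompt look like the following sketch; the machine names come from the multi-node tests above, while the inspected systemd target and command are merely illustrative assumptions:
start_all()                                      # boot all VMs of the test network
sclApiNode.wait_for_unit("multi-user.target")    # wait until the node has finished booting
print(sclApiNode.succeed("systemctl --failed"))  # run a command on the node and print its output
clientNode.shell_interact()                      # interactive shell without prompt (see the caveat above)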
Please also refer to the corresponding Nix tutorial.
Manual testing with (virtual) machines#
1. Install the system module on a physical device#
2. Use sclctl to create, read, update and delete resources:#
We now interact with the SCL via the terminal from the host:
# SSH into the scl-vm or singlenode-vm and simply hit enter when asked for a password
# (don't check and store the SSH host keys of the VM as they change with each new build)
ssh -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null" -p 2222 root@127.0.0.1
# Register a node
sclctl node create localhost --nic-api https://localhost:9009 --node-api https://localhost:9009 --vcpu 4 --ram 4096
# Create a SC
sclctl sc create sc01
# Create a router to forward ports
sclctl router create sc01 router01 --external-ip 192.168.168.2 --internal-ip 192.168.10.1 --internal-ip-netmask 255.255.255.0 --forward-tcp 2222:192.168.10.42:22
Before we create a VM, we first prepare two cloud-init configuration files
(either on your host or in the launched singlenode-vm) that we later pass to sclctl.

user-data.yaml:

#cloud-config
system_info:
  default_user:
    name: scl
    password: scl
chpasswd: { expire: False }
ssh_pwauth: True

network-config.yaml:

version: 2
ethernets:
  id0:
    match:
      name: "enp*"
    addresses:
      - 192.168.10.42/24
    gateway4: 192.168.10.1
If you created user-data.yaml and network-config.yaml on your host, you can copy the files into
the singlenode-vm like this:
scp -P 2222 user-data.yaml network-config.yaml root@127.0.0.1:/root
Now back inside the scl-vm or singlenode-vm, we can create the guest VM and SSH into it:
# Create a VM with a local volume
sclctl vm create sc01 vm01 \
--vcpu 1 --ram 1024 \
--boot-volume size=4096,image=http://localhost:9999/ubuntu-22.04-server-cloudimg-amd64.raw \
--network-device-name tapvm01 \
--cloud-init-user-data user-data.yaml --cloud-init-network-config network-config.yaml
# DEPRECATED: Create a VM with a referenced volume (only works in single node setups)
sclctl volume create sc01 vol01 --size 4096 --url http://localhost:9999/ubuntu-22.04-server-cloudimg-amd64.raw
sclctl vm create sc01 vm01 \
--vcpu 1 --ram 1024 \
--boot-volume name=vol01 \
--network-device-name tapvm01 \
--cloud-init-user-data user-data.yaml --cloud-init-network-config network-config.yaml
# Wait a moment and verify that vm01 reached the status "running"
sclctl vm show sc01 vm01
# SSH into VM vm01 (using the port mapping from router01), the password is "scl" as specified in `user-data.yaml`
ssh -p 2222 scl@192.168.168.2
With this, you have successfully launched a VM via the SCL!
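To clean up afterwards, the created resources can be removed again in reverse order. The delete sub-commands below are hypothetical, mirroring the create commands above; check sclctl --help for the actual syntax:
# hypothetical clean-up sequence
sclctl vm delete sc01 vm01
sclctl router delete sc01 router01
sclctl sc delete sc01
sclctl node delete localhost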
All steps in one copy-pastable code block
sclctl node create localhost --nic-api https://localhost:9009 --node-api https://localhost:9009 --vcpu 4 --ram 4096
sclctl sc create sc01
sclctl router create sc01 router01 --external-ip 192.168.168.2 --internal-ip 192.168.10.1 --internal-ip-netmask 255.255.255.0 --forward-tcp 2222:192.168.10.42:22
cat <<EOT > user-data.yaml
#cloud-config
system_info:
  default_user:
    name: scl
    password: scl
chpasswd: { expire: False }
ssh_pwauth: True
EOT
cat <<EOT > network-config.yaml
version: 2
ethernets:
  id0:
    match:
      name: "enp*"
    addresses:
      - 192.168.10.42/24
    gateway4: 192.168.10.1
EOT
sclctl vm create sc01 vm01 \
--vcpu 1 --ram 1024 \
--boot-volume size=4096,image=http://localhost:9999/ubuntu-22.04-server-cloudimg-amd64.raw \
--network-device-name tapvm01 \
--cloud-init-user-data user-data.yaml --cloud-init-network-config network-config.yaml
sclctl vm show sc01 vm01
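Once vm01 reports the status "running", you can again SSH into it from the host via the port forwarding of router01, exactly as in the step-by-step walkthrough above:
# password is "scl" as set in user-data.yaml
ssh -p 2222 scl@192.168.168.2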