Slice Guest Telemetry and Benchmark v1¶
Purpose¶
Define the next telemetry and benchmark model for `gpu_slice` allocations.
The current product behavior is intentionally conservative:
- bare-metal allocations currently normalize metrics from host-side collectors, with Netdata present only as the fallback/operator path during the transition;
- slice allocations do not expose host Netdata to tenants, because host GPU telemetry is misleading once devices are bound to `vfio-pci`;
- slice allocations therefore show an explicit "guest telemetry is not enabled yet" gap instead of pretending host values are tenant values.
This document turns that gap into an explicit design.
Non-Goals¶
This document does not propose:
- direct tenant access to guest Netdata;
- a generic observability platform redesign;
- fake or synthesized slice metrics;
- bypassing node-agent to reach guest VMs directly from the API.
Source Model¶
Allocation metrics need an explicit source contract.
Bare Metal¶
- transition target: `host_local_probe`
- current rollout state: prefer the platform-owned host probe and fall back to host Netdata only while rollout is incomplete
- `Open Netdata` remains an admin/operator tool, not a tenant telemetry dependency
GPU Slice¶
- `slice_guest`: guest telemetry must be collected through node-agent using the controlled management path to the VM private IP
- host Netdata remains useful for node health, bridge state, and operator-only diagnostics, but not for tenant GPU utilization
Unavailable¶
- `unavailable`: used when neither host nor guest telemetry can provide a truthful answer
- responses should carry an explicit reason instead of falling back to host GPU values for slice allocations
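The source rules above can be sketched as a small resolver. This is an illustrative sketch only: the names `MetricsSource` and `resolve_source` are hypothetical, and the reason string is a placeholder, not the real API wording.

```python
from dataclasses import dataclass
from typing import Optional

# Source values from the contract; host_local_probe is the bare-metal
# transition target, host_netdata the transitional fallback.
VALID_SOURCES = ("host_local_probe", "host_netdata", "slice_guest", "unavailable")

@dataclass
class MetricsSource:
    source: str                   # one of VALID_SOURCES
    reason: Optional[str] = None  # required when source == "unavailable"

def resolve_source(alloc_type: str, guest_reachable: bool, host_probe_ok: bool) -> MetricsSource:
    """Pick a truthful telemetry source; never fall back to host GPU values for slices."""
    if alloc_type == "gpu_slice":
        if guest_reachable:
            return MetricsSource("slice_guest")
        # Explicit gap instead of misleading host GPU values.
        return MetricsSource("unavailable", reason="guest telemetry path not reachable")
    if host_probe_ok:
        return MetricsSource("host_local_probe")
    return MetricsSource("host_netdata")  # transitional fallback only
```

The key property is that a `gpu_slice` allocation can never resolve to a host-backed source, only to `slice_guest` or an explained `unavailable`.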
Why Not Direct Guest Netdata¶
Guest Netdata may still be installed inside the slice VM as an implementation detail, but it should not become the public tenant surface.
Reasons:
- it expands tenant VM network exposure unnecessarily;
- it forces the platform proxy to solve another app-auth/session problem for every slice guest;
- it makes telemetry depend on guest networking policy instead of node-agent's controlled management path;
- it weakens the product boundary between "tenant workload surface" and "platform-managed telemetry surface".
The preferred product shape is:
- node-agent collects guest telemetry;
- API normalizes it into the allocation metrics contract;
- UI renders it as allocation telemetry without exposing the guest dashboard directly.
Recommended First Implementation¶
1. Reuse the Existing Guest Access Path¶
Node-agent already:
- waits for guest SSH on the slice VM private IP;
- checks guest readiness via the managed SSH key;
- captures a post-boot `performance` probe in node task output.
That same control path should be used for telemetry.
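Reusing that control path for telemetry could look like the following sketch, assuming node-agent shells out to `ssh` with its managed key. The key path, guest user, and query field list are assumptions, not the real node-agent configuration.

```python
def guest_telemetry_cmd(guest_ip: str, key_path: str = "/var/lib/node-agent/guest_key") -> list[str]:
    """Build a read-only ssh invocation reusing the existing readiness path.

    Both the key path default and the `ubuntu` guest user are placeholders.
    """
    remote = (
        "nvidia-smi "
        "--query-gpu=utilization.gpu,utilization.memory,power.draw,temperature.gpu "
        "--format=csv,noheader,nounits"
    )
    return [
        "ssh",
        "-i", key_path,
        "-o", "BatchMode=yes",                  # never prompt; fail fast like the readiness check
        "-o", "StrictHostKeyChecking=accept-new",
        f"ubuntu@{guest_ip}",                   # guest user is an assumption
        remote,
    ]
```

Because the command is read-only and rides the same managed key and private IP as the readiness check, it adds no new tenant-facing network exposure.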
2. Add Guest Telemetry Collection in Node-Agent¶
For `gpu_slice` allocations, node-agent should collect:
- `nvidia-smi` GPU utilization and memory utilization;
- device-level health/power/temperature when available;
- optional guest CPU and memory values if we want tenant-visible OS metrics to reflect the VM rather than the host.
The first implementation can be pull-based and read-only. It does not need a long-running in-guest agent before the product contract is proven.
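A pull-based first pass only needs to parse the CSV rows `nvidia-smi --format=csv,noheader,nounits` emits. A minimal sketch, assuming the field order matches the `--query-gpu` list the collector uses (field names in the output dicts are illustrative):

```python
def parse_gpu_csv(output: str) -> list[dict]:
    """Parse `nvidia-smi ... --format=csv,noheader,nounits` rows into device dicts.

    Expected column order (an assumption of this sketch):
    utilization.gpu, utilization.memory, power.draw, temperature.gpu
    """
    devices = []
    for line in output.strip().splitlines():
        util, mem_util, power, temp = [field.strip() for field in line.split(",")]
        devices.append({
            "gpu_util_pct": float(util),
            "mem_util_pct": float(mem_util),
            "power_w": float(power),
            "temp_c": float(temp),
        })
    return devices
```

Keeping the parser this dumb is deliberate: any row that does not match the queried shape raises instead of being guessed into a device, which matches the "no fabricated metrics" rule.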
3. Keep Host Netdata for Operator-Only Node Health¶
Host Netdata still owns:
- bridge and host interface telemetry;
- BF3/tmfifo visibility;
- host pressure, storage, and service health;
- bare-metal GPU visibility on nodes not in slice mode.
This operator health path should stay separate from tenant allocation metrics.
4. Extend the Allocation Metrics Contract¶
The allocation metrics APIs should add an explicit source indicator, for example:
- `host_netdata`
- `slice_guest`
- `unavailable`
The UI should use this to explain why Netdata is available for one allocation class but not another.
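For illustration, two response shapes the extended contract could produce; only the `source` field and its values come from this document, the other field names and ids are assumptions:

```python
# Hypothetical payload for a healthy slice allocation.
slice_metrics = {
    "allocation_id": "alloc-123",          # illustrative id
    "source": "slice_guest",               # host_netdata | slice_guest | unavailable
    "gpu": {"devices": 1, "util_pct": 87.0},
}

# Hypothetical payload when no truthful source exists: an explicit reason,
# never host GPU values dressed up as tenant values.
unavailable_metrics = {
    "allocation_id": "alloc-456",
    "source": "unavailable",
    "reason": "guest telemetry path not reachable",
    "gpu": None,
}
```

The UI can branch on `source` alone: render metrics for `slice_guest`/`host_netdata`, and render the `reason` string for `unavailable`.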
Benchmark Model¶
Benchmark evidence should be captured from the same product-controlled paths used for lifecycle and telemetry, not from ad hoc manual shell sessions.
Baseline¶
Keep the node-agent performance probe as the cheap default capture:
- guest boot/readiness timings;
- `nvidia-smi` availability and latency;
- GPU count;
- RDMA device count;
- root disk identity;
- vCPU and memory sizing;
- Docker/NVIDIA runtime presence.
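The cheap-capture idea can be sketched as follows; the record keys mirror the bullet list above but are assumptions about the real node task output schema, and the GPU count is left for a real query:

```python
import shutil
import time

def baseline_probe() -> dict:
    """Cheap default capture: command availability plus latency, no GPU required."""
    t0 = time.monotonic()
    smi_path = shutil.which("nvidia-smi")   # availability check, not a device query
    return {
        "nvidia_smi_available": smi_path is not None,
        "nvidia_smi_probe_ms": round((time.monotonic() - t0) * 1000, 2),
        # Filled in by an actual `nvidia-smi` query in a real probe.
        "gpu_count": None,
    }
```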
BM vs VM Comparison¶
Add a repo-owned capture harness for repeatable evidence:
- same host/guest commands;
- JSON output saved under `dist/benchmarks/`;
- optional read-only `fio` probe;
- optional `ib_write_bw` probe when infra supplies a peer.
This creates a durable comparison artifact for:
- hugepages on/off;
- guest driver profile changes;
- host/guest network tuning changes;
- BM versus slice VM performance deltas.
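The artifact-writing side of such a harness is small; a sketch, assuming one timestamped JSON file per comparison run (the filename pattern and field names are illustrative):

```python
import json
import time
from pathlib import Path

def save_benchmark(results: dict, out_dir: str = "dist/benchmarks") -> Path:
    """Persist one comparison run as a durable, diffable JSON artifact."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / f"capture-{int(time.time())}.json"
    # sort_keys makes artifacts from different runs easy to diff.
    path.write_text(json.dumps(results, indent=2, sort_keys=True))
    return path
```

A run comparing hugepages on/off, for example, would call this twice with the toggle recorded in the payload, leaving two artifacts to diff later.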
Current Recommendation¶
- Do not expose guest Netdata directly.
- Build slice guest metrics through node-agent first.
- Treat host Netdata as operator-only on slice nodes.
- Use the benchmark harness plus the node-agent `performance` probe to validate BM versus VM behavior before publishing performance expectations.
Product Boundary Update¶
Netdata should not remain a tenant-facing product surface.
Recommended boundary:
- allocation pages render first-party GPUaaS metrics only;
- `Open Netdata` is removed from user allocation pages;
- operator-facing Netdata remains allowed in admin surfaces such as `/admin/ops` today and can later move behind tighter `/admin/nodes` flows;
- the allocation metrics contract stays stable even if the underlying BM collector changes away from Netdata later.
This matches the intended cloud-style model:
- VM/allocation pages show selected product telemetry;
- deeper infra tooling is separate and operator-owned;
- slice and bare metal converge on the same allocation telemetry UX even when their collection backends differ.
Current Bare-Metal Collector Caveats¶
Today, bare-metal allocation metrics are normalized from host Netdata. That gives a consistent API surface, but the richness of BM detail still depends on what each host exports.
Observed current behavior:
- one node may show aggregate GPU metrics without per-device GPU rows;
- one node may show IB/fabric interfaces with `down`/empty throughput while another shows partial speed/utilization data;
- this is expected with the current collector, because per-device GPU tables require Netdata `nvidia_smi.gpu_gpu-*` charts, and fabric tables require the corresponding `net.*`, `net_speed.*`, `net_operstate.*`, and carrier charts to exist for those interfaces.
Implication:
- BM allocation metrics are already product-shaped;
- but BM detail is not yet fully uniform across hosts;
- and that inconsistency should be treated as a host collector/config quality problem, not as the desired long-term tenant telemetry architecture.
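The chart requirements above can be checked mechanically. A sketch, assuming the JSON shape of Netdata's `/api/v1/charts` endpoint (a `charts` object keyed by chart id); the function name and gap messages are illustrative:

```python
REQUIRED_GPU_PREFIX = "nvidia_smi.gpu_gpu-"
FABRIC_PREFIXES = ("net.", "net_speed.", "net_operstate.")

def host_export_gaps(charts_payload: dict) -> list[str]:
    """Explain missing BM detail on a host from its Netdata charts listing."""
    ids = charts_payload.get("charts", {}).keys()
    gaps = []
    if not any(i.startswith(REQUIRED_GPU_PREFIX) for i in ids):
        gaps.append("no per-device GPU charts (aggregate-only GPU view)")
    for prefix in FABRIC_PREFIXES:
        if not any(i.startswith(prefix) for i in ids):
            gaps.append(f"missing {prefix}* charts (fabric table incomplete)")
    return gaps
```

Run against each BM host, this turns "detail is not uniform" into a concrete per-host config punch list.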
Initial Utility Validation Notes¶
The first platform-owned telemetry utility was validated over SSH before any API rollout.
Healthy NVIDIA Bare Metal¶
Reference host:
- `j27u15`
- Tailscale `100.99.13.72`
Observed:
- direct `nvidia-smi` returned all `8` H200 devices with stable memory, power, temperature, and ECC values;
- the utility returned the same `8` devices and the same aggregate shape the allocation metrics API expects;
- the utility also returned IB/fabric inventory such as `ibp26s0` with correct link speed metadata.
NVIDIA Slice Guest¶
Reference guest:
- slice VM on `j22u15`
- reached through the existing managed SSH path via the host private bridge
Observed:
- the utility returned a clean single-device `NVIDIA H200` view;
- only guest-visible NICs were returned, not host bridge or unrelated node interfaces;
- this confirms the guest-scoped collector model gives the product surface we want for slices without exposing host Netdata.
CPU-Only Node¶
Reference host:
- local kind VM `192.168.1.171`
Observed:
- the utility returned CPU, memory, pressure, and network inventory;
- GPU capabilities were correctly reported as unavailable;
- this confirms the schema degrades cleanly on non-GPU nodes.
Broken NVIDIA Host Behavior¶
Reference host:
- `j22u15`
- Tailscale `100.103.10.83`
Observed:
- direct `nvidia-smi` failed on the host with: `Failed to initialize NVML: No supported GPUs were found`;
- the utility now reports GPU telemetry as unavailable and records the error in `metrics.last_error`;
- this is the desired behavior because it avoids fabricating fake GPU devices from stderr text.
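That failure handling can be sketched as follows; the function name and the exact record layout around `last_error` are assumptions, only the behavior (report unavailable, preserve stderr, never parse error text into devices) comes from the validation above:

```python
def gpu_metrics_from_probe(returncode: int, stdout: str, stderr: str) -> dict:
    """Turn a raw `nvidia-smi` probe result into the GPU portion of a metrics record."""
    if returncode != 0:
        # NVML failures land here: no devices, explicit last_error.
        return {
            "gpu": {"available": False, "devices": []},
            "last_error": stderr.strip() or "nvidia-smi failed",
        }
    return {
        "gpu": {"available": True, "devices": stdout.strip().splitlines()},
        "last_error": None,
    }
```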
Comparison Against Netdata¶
Observed on j27u15:
- Netdata exposes the expected `nvidia_smi.gpu_gpu-*` charts and IB charts;
- the utility matches the important product-facing values: GPU device count, VRAM, power, temperature, ECC, and fabric inventory;
- Netdata still has richer chart families for deep operator debugging, which is acceptable because the long-term product boundary is to keep Netdata admin-only.
Observed on j22u15:
- Netdata is old and inconsistent;
- host `nvidia-smi` itself is broken;
- this confirms we should not make tenant telemetry depend on host Netdata quality.