var startServices …
var stopServices …
var busyboxImage …
var agnhostImage …

const defaultNodeAllocatableCgroup …
const defaultPodResourcesPath …
const defaultPodResourcesTimeout …
const defaultPodResourcesMaxSize …
const cpuManagerStateFile …
const memoryManagerStateFile …

var kubeletHealthCheckURL …
var containerRuntimeUnitName …
var kubeletCfg …

func getNodeSummary(ctx context.Context) (*stats.Summary, error) { … }

func getV1alpha1NodeDevices(ctx context.Context) (*kubeletpodresourcesv1alpha1.ListPodResourcesResponse, error) { … }

func getV1NodeDevices(ctx context.Context) (*kubeletpodresourcesv1.ListPodResourcesResponse, error) { … }

// Returns the current KubeletConfiguration
func getCurrentKubeletConfig(ctx context.Context) (*kubeletconfig.KubeletConfiguration, error) { … }

func addAfterEachForCleaningUpPods(f *framework.Framework) { … }

// Must be called within a Context. Allows the function to modify the KubeletConfiguration during the BeforeEach of the context.
// The change is reverted in the AfterEach of the context.
func tempSetCurrentKubeletConfig(f *framework.Framework, updateFunction func(ctx context.Context, initialConfig *kubeletconfig.KubeletConfiguration)) { … }
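// Example (illustrative sketch, not part of this file): the typical call site for
// tempSetCurrentKubeletConfig in a serial e2e_node test. The suite name, framework
// name, and the CPUManagerPolicy value are assumptions chosen for illustration.
//
//	var _ = ginkgo.Describe("CPU Manager [Serial]", func() {
//		f := framework.NewDefaultFramework("cpu-manager-test")
//		ginkgo.Context("with static policy", func() {
//			tempSetCurrentKubeletConfig(f, func(ctx context.Context, initialConfig *kubeletconfig.KubeletConfiguration) {
//				// Applied in the Context's BeforeEach; reverted in its AfterEach.
//				initialConfig.CPUManagerPolicy = "static"
//			})
//			ginkgo.It("runs against the updated kubelet configuration", func(ctx context.Context) {
//				// test body
//			})
//		})
//	})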
func updateKubeletConfig(ctx context.Context, f *framework.Framework, kubeletConfig *kubeletconfig.KubeletConfiguration, deleteStateFiles bool) { … }

func waitForKubeletToStart(ctx context.Context, f *framework.Framework) { … }

func deleteStateFile(stateFileName string) { … }

// listNamespaceEvents lists the events in the given namespace.
func listNamespaceEvents(ctx context.Context, c clientset.Interface, ns string) error { … }

func logPodEvents(ctx context.Context, f *framework.Framework) { … }

func logNodeEvents(ctx context.Context, f *framework.Framework) { … }

func getLocalNode(ctx context.Context, f *framework.Framework) *v1.Node { … }

// getLocalTestNode fetches the node object describing the local worker node set up by the e2e_node infra, along with its ready state.
// getLocalTestNode is a variant of `getLocalNode` which reports the node readiness state but does not require the node to be ready,
// letting the caller decide. The check is intentionally done like `getLocalNode` does.
// Note that `getLocalNode` implicitly aborts the test (as in ginkgo.Expect) if the worker node is not ready.
func getLocalTestNode(ctx context.Context, f *framework.Framework) (*v1.Node, bool) { … }

// logKubeletLatencyMetrics logs KubeletLatencyMetrics computed from the Prometheus
// metrics exposed on the current node and identified by the metricNames.
// The Kubelet subsystem prefix is automatically prepended to these metric names.
func logKubeletLatencyMetrics(ctx context.Context, metricNames ...string) { … }

// getCRIClient connects to CRI and returns the CRI runtime service client and image service client.
func getCRIClient() (internalapi.RuntimeService, internalapi.ImageManagerService, error) { … }

// findKubeletServiceName searches for the kubelet unit name among the services known to systemd.
// If the `running` parameter is true, the search is restricted to currently running services;
// otherwise, stopped, failed, and exited (non-running in general) services are considered as well.
// TODO: Find a uniform way to deal with systemctl/initctl/service operations. #34494
func findKubeletServiceName(running bool) string { … }

func findContainerRuntimeServiceName() (string, error) { … }

type containerRuntimeUnitOp …

const startContainerRuntimeUnitOp …
const stopContainerRuntimeUnitOp …

func performContainerRuntimeUnitOp(op containerRuntimeUnitOp) error { … }

func stopContainerRuntime() error { … }

func startContainerRuntime() error { … }

// restartKubelet restarts the current kubelet service.
// The "current" kubelet service is the instance managed by the current e2e_node test run.
// If `running` is true, restarts only if the current kubelet is actually running. In some cases,
// the kubelet may have exited or been stopped, typically because it was intentionally stopped
// earlier during a test, or, sometimes, because it just crashed.
// Warning: the "current" kubelet is poorly defined. The "current" kubelet is assumed to be the most
// recent kubelet service unit; IOW there is no unique ID we use to explicitly bind a kubelet
// instance to a test run.
func restartKubelet(running bool) { … }

// stopKubelet kills the running kubelet and returns a func that will restart the process.
func stopKubelet() func() { … }

func kubeletHealthCheck(url string) bool { … }

func toCgroupFsName(cgroupName cm.CgroupName) string { … }

// reduceAllocatableMemoryUsageIfCgroupv1 uses memory.force_empty (https://lwn.net/Articles/432224/)
// to make the kernel reclaim memory in the allocatable cgroup.
// The time to reduce pressure may be unbounded, but it usually finishes within a second.
// memory.force_empty is not supported in cgroupv2.
func reduceAllocatableMemoryUsageIfCgroupv1() { … }

// Equivalent of featuregatetesting.SetFeatureGateDuringTest
// which can't be used here because we're not in a Testing context.
// This must be in a non-"_test" file to pass
// make verify WHAT=test-featuregates
func withFeatureGate(feature featuregate.Feature, desired bool) func() { … }
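// Illustrative sketch: withFeatureGate returns a restore func, so the idiomatic call
// site pairs it with defer. The specific feature gate named below is an assumption
// chosen for illustration only.
//
//	restoreFeature := withFeatureGate(features.GracefulNodeShutdown, true)
//	defer restoreFeature()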
// waitForAllContainerRemoval waits until all the containers on a given pod are really gone.
// This is needed by the e2e tests which involve exclusive resource allocation (cpu, topology manager; podresources; etc.).
// In these cases, we need to make sure the tests clean up after themselves so that each test runs in
// a pristine environment. The only way known so far to do that is to introduce this wait.
// Worth noting, however, that this makes the test runtime much bigger.
func waitForAllContainerRemoval(ctx context.Context, podName, podNS string) { … }

func getPidsForProcess(name, pidFile string) ([]int, error) { … }

func getPidFromPidFile(pidFile string) (int, error) { … }

// WaitForPodInitContainerRestartCount waits for the given Pod init container
// to achieve at least a given restartCount.
// TODO: eventually look at moving to test/e2e/framework/pod
func WaitForPodInitContainerRestartCount(ctx context.Context, c clientset.Interface, namespace, podName string, initContainerIndex int, desiredRestartCount int32, timeout time.Duration) error { … }

// WaitForPodContainerRestartCount waits for the given Pod container to achieve at least a given restartCount.
// TODO: eventually look at moving to test/e2e/framework/pod
func WaitForPodContainerRestartCount(ctx context.Context, c clientset.Interface, namespace, podName string, containerIndex int, desiredRestartCount int32, timeout time.Duration) error { … }

// WaitForPodInitContainerToFail waits for the given Pod init container to fail with the given reason, specifically due to
// invalid container configuration. In this case, the container will remain in a waiting state with a specific
// reason set, which should match the given reason.
// TODO: eventually look at moving to test/e2e/framework/pod
func WaitForPodInitContainerToFail(ctx context.Context, c clientset.Interface, namespace, podName string, containerIndex int, reason string, timeout time.Duration) error { … }

func nodeNameOrIP() string { … }
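// Illustrative sketch: driving one of the exported waiters from a test. The pod name,
// container index, restart count, and timeout are assumptions chosen for illustration.
//
//	err := WaitForPodContainerRestartCount(ctx, f.ClientSet, f.Namespace.Name, "restart-test-pod", 0, 2, 2*time.Minute)
//	framework.ExpectNoError(err, "container did not reach the desired restart count")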