const nominalConcurrencyLimitMetricsName …
const requestExecutionSecondsSumName …
const requestExecutionSecondsCountName …
const priorityLevelSeatUtilSumName …
const priorityLevelSeatUtilCountName …
const fakeworkDuration …
const testWarmUpTime …
const testTime …

type SumAndCount …
type plMetrics …
type metricSnapshot …
type clientLatencyMeasurement …

func (clm *clientLatencyMeasurement) reset() { … }
func (clm *clientLatencyMeasurement) update(duration float64) { … }
func (clm *clientLatencyMeasurement) getStats() clientLatencyStats { … }

type clientLatencyStats …
type plMetricAvg …

func intervalMetricAvg(snapshot0, snapshot1 metricSnapshot, plLabel string) plMetricAvg { … }

type noxuDelayingAuthorizer …

func (d *noxuDelayingAuthorizer) Authorize(ctx context.Context, a authorizer.Attributes) (authorizer.Decision, string, error) { … }

// TestConcurrencyIsolation tests the concurrency isolation between priority levels.
// The test defines two priority levels for this purpose, and corresponding flow schemas.
// To one priority level, this test sends many more concurrent requests than the configuration
// allows to execute at once, while sending fewer than allowed to the other priority level.
// The primary check is that the low flow gets all the seats it wants, but is modulated by
// recognizing that there are uncontrolled overheads in the system.
//
// This test differs from TestPriorityLevelIsolation since TestPriorityLevelIsolation checks throughput instead
// of concurrency. In order to mitigate the effects of system noise, a delaying authorizer is used to artificially
// increase request execution time to make the system noise relatively insignificant.
// Secondarily, this test also checks the observed seat utilizations against values derived from expecting that
// the throughput observed by the client equals the execution throughput observed by the server.
func TestConcurrencyIsolation(t *testing.T) { … }

func getRequestMetricsSnapshot(c clientset.Interface) (metricSnapshot, error) { … }
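
// Illustrative sketch, not the implementation elided above: one plausible way to turn two
// cumulative sum/count snapshots of a Prometheus histogram-style metric into an average over
// the interval between them, which is the kind of computation intervalMetricAvg performs per
// priority level. The names sketchSumAndCount and sketchIntervalAvg are hypothetical and
// local to this sketch.
type sketchSumAndCount struct {
	Sum   float64 // cumulative sum of observations
	Count int     // cumulative number of observations
}

// sketchIntervalAvg returns the mean of the observations recorded between snapshot s0 and the
// later snapshot s1: the difference of sums divided by the difference of counts. It returns 0
// when no new observations were recorded in the interval.
func sketchIntervalAvg(s0, s1 sketchSumAndCount) float64 {
	deltaCount := s1.Count - s0.Count
	if deltaCount <= 0 {
		return 0
	}
	return (s1.Sum - s0.Sum) / float64(deltaCount)
}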
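
// Illustrative sketch of the secondary seat-utilization cross-check described in the comment on
// TestConcurrencyIsolation, assuming each request occupies exactly one seat. If the throughput
// observed by the client equals the execution throughput observed by the server, then by Little's
// law (client requests per second) * (mean execution seconds per request) is the mean number of
// requests executing concurrently; dividing by the priority level's nominal concurrency limit
// gives the expected mean seat utilization. The function name and parameters are hypothetical,
// introduced only for this sketch.
func sketchExpectedSeatUtil(clientRequestsPerSecond, meanExecutionSeconds float64, nominalConcurrencyLimit int) float64 {
	meanOccupiedSeats := clientRequestsPerSecond * meanExecutionSeconds
	return meanOccupiedSeats / float64(nominalConcurrencyLimit)
}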