Performance Testing
This section contains documentation and guidelines for performance testing the Xping SDK.
Looking for practical performance guidance? See the Performance Overview for user-facing documentation on SDK overhead, configuration, and optimization tips.
Overview
Performance testing ensures the SDK meets its performance targets and maintains minimal overhead on test execution. We use BenchmarkDotNet for precise, reliable performance measurements.
Performance Targets
The Xping SDK is designed to have minimal impact on test execution. Our performance targets are:
| Metric | Target | Rationale |
|---|---|---|
| Test Tracking Overhead | <5ms per test | Negligible impact on test suite execution time |
| Memory per Test Execution | <1KB per test | Efficient memory usage for rich metadata capture |
| Collection Throughput | >10,000 tests/sec | Handle high-volume test execution |
| Batch Upload (100 tests) | <500ms | Efficient network utilization |
| Memory Footprint (10k tests) | <50MB | Reasonable memory consumption |
| CPU Overhead | <5% | Minimal CPU usage during test execution |
Running Benchmarks
See the benchmark project README (tests/Xping.Sdk.Benchmarks/README.md) for detailed instructions.
Quick start:
```bash
cd tests/Xping.Sdk.Benchmarks
dotnet run -c Release
```
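A minimal BenchmarkDotNet benchmark class looks like the following sketch. The class name `TrackingOverheadBenchmarks` and the measured workload are illustrative stand-ins, not actual SDK types; replace the benchmark body with the SDK call you want to measure:

```csharp
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Hypothetical example class; the workload mimics building the metadata
// payload a test tracker might capture per execution.
[MemoryDiagnoser] // adds Gen0/Gen1/Gen2 and Allocated columns to the report
public class TrackingOverheadBenchmarks
{
    private readonly Dictionary<string, string> _metadata = new()
    {
        ["framework"] = "NUnit",
        ["os"] = "linux",
    };

    [Benchmark]
    public int RecordTestExecution()
    {
        var sb = new System.Text.StringBuilder();
        foreach (var (key, value) in _metadata)
            sb.Append(key).Append('=').Append(value).Append(';');
        return sb.Length;
    }
}

public static class Program
{
    public static void Main() =>
        BenchmarkRunner.Run<TrackingOverheadBenchmarks>();
}
```

Running the project in Release mode executes all `[Benchmark]` methods and writes reports to `BenchmarkDotNet.Artifacts/results/`.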
Benchmark Categories
1. Core Component Benchmarks
Measure performance of core SDK components:
- TestExecutionCollector - Recording and buffering overhead
- XpingApiClient - Upload and network performance
- Configuration - Configuration loading and validation
- Environment Detection - Platform and CI detection
2. Integration Benchmarks
End-to-end performance measurements:
- Complete test lifecycle (record → buffer → batch → upload)
- Sampling performance at various rates
3. Adapter Benchmarks
Framework-specific overhead:
- NUnit adapter (`[XpingTrack]` attribute processing)
- xUnit adapter (test framework integration)
- MSTest adapter (base class overhead)
4. Stress & Load Tests
High-volume scenarios:
- 10,000+ test executions
- Concurrent recording from multiple threads
- Sustained load over time
- Memory leak detection
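A stress scenario like "concurrent recording from multiple threads" can be sketched with a queue-backed buffer standing in for the collector (the record tuple and counts here are illustrative, not SDK types):

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading.Tasks;

// Illustrative stress loop: several threads record into a shared queue,
// mimicking concurrent test executions reporting to a collector.
public static class ConcurrentRecordingStress
{
    public static void Main()
    {
        var buffer = new ConcurrentQueue<(string TestName, TimeSpan Duration)>();
        const int threads = 8;
        const int recordsPerThread = 10_000;

        var sw = Stopwatch.StartNew();
        Parallel.For(0, threads, t =>
        {
            for (int i = 0; i < recordsPerThread; i++)
                buffer.Enqueue(($"test-{t}-{i}", TimeSpan.FromMilliseconds(1)));
        });
        sw.Stop();

        double throughput = buffer.Count / sw.Elapsed.TotalSeconds;
        Console.WriteLine($"Recorded {buffer.Count} entries at {throughput:N0} records/sec");
    }
}
```

Comparing the measured throughput against the >10,000 tests/sec target catches contention regressions in the buffering path.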
Performance Analysis
Key Metrics
Execution Time:
- Mean - Average execution time
- Median - Middle value (50th percentile)
- StdDev - Standard deviation (consistency indicator)
- P95/P99 - 95th/99th percentile (worst-case scenarios)
Memory:
- Gen0/Gen1/Gen2 - Garbage collection frequency
- Allocated - Total memory allocated
- Peak Working Set - Maximum memory usage
Throughput:
- Operations/sec - Throughput under load
- Scalability - Performance under concurrent load
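The execution-time statistics above can be computed directly from raw timing samples; a self-contained sketch using a nearest-rank percentile:

```csharp
using System;
using System.Linq;

public static class TimingStats
{
    // Nearest-rank percentile on an already-sorted sample.
    public static double Percentile(double[] sorted, double p)
    {
        int rank = (int)Math.Ceiling(p / 100.0 * sorted.Length);
        return sorted[Math.Max(rank - 1, 0)];
    }

    public static void Main()
    {
        double[] samplesMs = { 3.1, 2.9, 3.0, 3.2, 2.8, 3.0, 4.9, 3.1 };
        double[] sorted = samplesMs.OrderBy(x => x).ToArray();

        double mean = samplesMs.Average();
        double median = Percentile(sorted, 50);
        double stdDev = Math.Sqrt(samplesMs.Sum(x => (x - mean) * (x - mean)) / samplesMs.Length);

        Console.WriteLine($"Mean: {mean:F2} ms, Median: {median:F2} ms, " +
                          $"StdDev: {stdDev:F2} ms, P95: {Percentile(sorted, 95):F2} ms");
    }
}
```

Note how the single 4.9 ms outlier pulls the mean and P95 up while the median stays near 3.0 ms, which is why the percentile columns matter for worst-case analysis.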
Interpreting Results
✅ Good Performance Indicators:
- Mean time within target
- Low standard deviation (<10% of mean)
- Minimal GC pressure (low Gen2 collections)
- Linear scalability with load
⚠️ Warning Signs:
- High standard deviation (>25% of mean)
- Frequent Gen2 collections
- Memory growth over time
- Performance degradation under load
❌ Performance Issues:
- Mean time exceeds target by >20%
- Memory leaks (continuous growth)
- Lock contention (poor concurrency)
- High CPU usage (>10%)
Baseline Results
Baseline performance results are stored in tests/Xping.Sdk.Benchmarks/results/baseline/.
To establish a new baseline:
```bash
cd tests/Xping.Sdk.Benchmarks
dotnet run -c Release
cp BenchmarkDotNet.Artifacts/results/* results/baseline/
```
Performance Regression Detection
CI/CD Integration
Benchmarks run automatically on PRs labeled with performance. Results are compared against the baseline, and comments are posted if regressions are detected.
Regression Thresholds:
- ⚠️ Warning: >10% slower than baseline
- ❌ Failure: >20% slower than baseline
- 💥 Critical: >50% slower than baseline
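The threshold policy above can be expressed as a small classifier; the function name is illustrative, and the cutoffs mirror the 10%/20%/50% table:

```csharp
using System;

public static class RegressionCheck
{
    // Classify a benchmark result against its baseline mean.
    public static string Classify(double baselineMs, double currentMs)
    {
        double slowdown = (currentMs - baselineMs) / baselineMs;
        return slowdown switch
        {
            > 0.50 => "critical",
            > 0.20 => "failure",
            > 0.10 => "warning",
            _      => "ok",
        };
    }

    public static void Main()
    {
        Console.WriteLine(Classify(4.0, 4.2)); // 5% slower  -> ok
        Console.WriteLine(Classify(4.0, 4.6)); // 15% slower -> warning
        Console.WriteLine(Classify(4.0, 6.4)); // 60% slower -> critical
    }
}
```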
Manual Comparison
Compare current results with baseline:
```bash
# Run benchmarks
dotnet run -c Release

# Compare with baseline (requires BenchmarkDotNet.Tool)
dotnet benchmark compare baseline current --threshold 10%
```
Optimization Guidelines
Code-Level Optimizations
Avoid Allocations in Hot Paths
- Use object pooling for frequently allocated objects
- Reuse buffers and collections
- Use `stackalloc` for small arrays
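A minimal sketch of the allocation-avoidance tips, combining `stackalloc` with buffer reuse via `ArrayPool<T>` (the buffer sizes are arbitrary examples):

```csharp
using System;
using System.Buffers;

public static class AllocationTips
{
    public static void Main()
    {
        // stackalloc: small, short-lived buffer on the stack - no GC pressure.
        Span<char> small = stackalloc char[16];
        "test-42".AsSpan().CopyTo(small);

        // ArrayPool: rent and return larger buffers instead of allocating per call.
        byte[] rented = ArrayPool<byte>.Shared.Rent(4096);
        try
        {
            // ... fill and use the buffer ...
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(rented);
        }

        Console.WriteLine($"stack buffer: {small.Length} chars, pooled buffer: {rented.Length}+ bytes");
    }
}
```

Note that `Rent` may return a buffer larger than requested, so code should track the used length rather than rely on `Length`.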
Minimize Lock Contention
- Use lock-free data structures where possible
- Reduce lock scope
- Consider reader-writer locks for read-heavy scenarios
Optimize String Operations
- Use `StringBuilder` for concatenation
- Cache computed strings
- Use `Span<char>` for string manipulation
Efficient Collections
- Use `ConcurrentQueue` for thread-safe queues
- Pre-size collections when size is known
- Use `ValueTask` for potentially synchronous operations
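The string and collection tips can be combined in one short sketch (the payload and sizes here are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Text;

public static class CollectionTips
{
    public static void Main()
    {
        // StringBuilder with a capacity hint avoids repeated reallocation.
        var sb = new StringBuilder(capacity: 64);
        sb.Append("suite=").Append("smoke").Append(";count=").Append(100);

        // Span<char> slices without allocating intermediate substrings.
        ReadOnlySpan<char> suite = sb.ToString().AsSpan(6, 5); // "smoke"

        // Pre-size lists when the count is known; use ConcurrentQueue
        // for thread-safe producer/consumer handoff.
        var results = new List<int>(capacity: 100);
        var queue = new ConcurrentQueue<string>();
        queue.Enqueue(sb.ToString());

        Console.WriteLine($"{suite.ToString()} queued={queue.Count} capacity={results.Capacity}");
    }
}
```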
Profiling Tools
Recommended Tools:
- BenchmarkDotNet - Microbenchmarking and memory profiling
- dotnet-trace - Performance trace collection
- dotnet-counters - Real-time performance counters
- PerfView (Windows) - Advanced performance analysis
- Instruments (macOS) - Time profiler and memory allocations
Example Usage:
```bash
# Collect performance trace
dotnet-trace collect --process-id <pid> --providers Microsoft-Windows-DotNETRuntime

# Monitor performance counters
dotnet-counters monitor --process-id <pid>
```
Troubleshooting Performance Issues
High Memory Usage
- Check for memory leaks with memory profiler
- Review buffer sizes and limits
- Verify proper disposal of resources
- Monitor GC behavior
Slow Test Execution
- Profile to identify bottlenecks
- Check for synchronous I/O in async code
- Review lock contention
- Verify configuration settings
Poor Throughput
- Check batch sizes
- Review concurrent collection limits
- Profile async/await patterns
- Monitor CPU and I/O utilization
Best Practices
- Always run benchmarks in Release mode - Debug builds have significant overhead
- Close background applications - Minimize interference from other processes
- Run multiple iterations - Single runs can be misleading
- Use representative data - Benchmark with realistic test scenarios
- Monitor trends over time - Track performance across releases
- Profile before optimizing - Don't guess where the bottleneck is
- Validate optimizations - Measure impact of changes
References
- BenchmarkDotNet Best Practices
- .NET Performance Tips
- Async Programming Best Practices
- Memory Management in .NET
Contributing
When making performance-sensitive changes:
- Run benchmarks before and after changes
- Document performance impact in PR description
- Include benchmark results in PR comments
- Update baselines if improvements are significant
- Add new benchmarks for new features
For questions or issues related to performance testing, please open an issue with the performance label.