Every configuration validated under real conditions before deployment.
Methodology published. Raw data available. Independently verifiable.
Most firewall vendors publish theoretical specifications: marketing numbers that look good on paper but don't reflect real-world performance with actual security features enabled.
We took a different approach. Before any SecureNet configuration ships to a customer, it runs through our Security Performance Lab. Real protocol traffic. Full security stack enabled. Metrics collected independently from the system being tested.
The problem with "trust our dashboard": When you measure performance using the same system you're testing, you're only seeing what that system wants to show you. SPL metrics come from the FreeBSD kernel and are cross-validated between client and server. No marketing filters.
The SPL exists because "trust us" isn't good enough. Every claim we make about SecureNet performance can be verified independently using our published methodology and raw data.
The Security Performance Lab is a dedicated 4-Vault test environment designed for repeatable, realistic performance validation.
All Vaults are Protectli hardware running identical firmware for consistency.
We don't use synthetic benchmarks like iperf. SPL generates real protocol traffic that mirrors actual home network usage:
| Mode | Purpose | Characteristics |
|---|---|---|
| Deterministic | Precise measurement | Controlled, repeatable, <1% variance between runs |
| Dynamic | Real-world simulation | Varying patterns, multiple concurrent flows, realistic usage |
Every test follows the same documented procedure. This ensures results are comparable across different hardware, configurations, and time periods.
| Aspect | Method |
|---|---|
| Data Source | FreeBSD kernel (not OPNsense GUI) |
| Validation | Client-side and server-side cross-validated |
| Format | CSV with JSON metadata |
| Data Points | Thousands of timestamped entries per test |
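To show how such results can be consumed, the snippet below parses a hypothetical per-run CSV with JSON metadata and cross-validates the client- and server-side means. The column names, run ID, and 1% agreement threshold are assumptions for illustration, not the published schema:

```python
import csv, io, json

# Hypothetical run metadata (JSON) accompanying the CSV samples.
metadata = json.loads('{"run_id": "spl-001", "mode": "deterministic"}')

# A few illustrative timestamped samples; real runs contain thousands.
samples = """timestamp,client_mbps,server_mbps
0.0,939.8,940.1
0.5,940.2,939.9
1.0,941.0,940.7
"""

rows = list(csv.DictReader(io.StringIO(samples)))
client = sum(float(r["client_mbps"]) for r in rows) / len(rows)
server = sum(float(r["server_mbps"]) for r in rows) / len(rows)

# Cross-validation: client- and server-side means should agree closely.
delta_pct = 100 * abs(client - server) / server
print(metadata["run_id"], f"client/server delta: {delta_pct:.2f}%")
```

Because both endpoints measure independently, a large delta flags a measurement problem rather than being silently averaged away.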
Full security stack enabled during all tests: Suricata IDS/IPS with 200,000+ signatures, Unbound DNS filtering with 1+ million blocked domains, and FQ-CoDel traffic shaping. We don't test with features disabled to inflate numbers.
These are validated throughput numbers with the complete SecureNet security stack running. Not theoretical maximums. Not marketing figures. Real measurements from real hardware.
| Hardware | Security Stack | Best For |
|---|---|---|
| Intel N5105, 8GB RAM, 4x i226 NICs | Full security stack enabled | Gigabit internet with comprehensive security |
| Intel N150, 16GB RAM, 4x i226 NICs | Full security stack enabled | Gigabit with headroom, or multi-gigabit ready |
What do these numbers mean for actual home usage? Here's how typical activities compare to SecureNet capacity:
| Activity | Bandwidth Required |
|---|---|
| 4K Streaming | 25 Mbps per stream |
| HD Video Call | 3-5 Mbps per participant |
| Online Gaming | 5-10 Mbps |
| Web Browsing | 2-10 Mbps burst |
Typical peak household usage runs 150-200 Mbps (for example, four 4K streams, two video calls, gaming, and general browsing all at once). SecureNet provides 3-5x headroom beyond that peak.
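The headroom arithmetic can be checked in a few lines using the per-activity figures from the table above. The household mix and the ~940 Mbps throughput figure are illustrative assumptions, not published SecureNet measurements:

```python
# Worst-case per-activity bandwidth, taken from the table above (Mbps).
ACTIVITY_MBPS = {
    "4k_stream": 25,      # per stream
    "hd_video_call": 5,   # per participant, upper bound of 3-5 Mbps
    "gaming": 10,         # upper bound of 5-10 Mbps
    "web_browsing": 10,   # burst, upper bound of 2-10 Mbps
}

def peak_demand(mix: dict[str, int]) -> int:
    """Sum worst-case concurrent bandwidth for a household activity mix."""
    return sum(ACTIVITY_MBPS[name] * count for name, count in mix.items())

# Illustrative busy evening: 4 streams, 2 calls, 1 gamer, 2 browsers.
household = {"4k_stream": 4, "hd_video_call": 2, "gaming": 1, "web_browsing": 2}
demand = peak_demand(household)   # 4*25 + 2*5 + 10 + 2*10 = 140 Mbps
headroom = 940 / demand           # assumed ~940 Mbps measured throughput
print(f"peak demand: {demand} Mbps, headroom: {headroom:.1f}x")
```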
The SPL isn't just about validating our own work. It's about giving you the tools to validate it yourself.
Results should be the same no matter who runs the test. Our methodology is designed for reproducibility:
| Element | Status |
|---|---|
| Testing methodology | Fully documented |
| Test scripts | Available for review |
| Network topology | Diagrammed |
| Configuration files | Documented |
| Results variance | <1% between identical runs |
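Run-to-run variance of this kind is typically reported as a coefficient of variation. A minimal sketch, using made-up throughput samples rather than published results (the real runs live in the raw CSVs):

```python
import statistics

def run_variance_pct(throughputs: list[float]) -> float:
    """Coefficient of variation (%) across repeated identical test runs."""
    mean = statistics.mean(throughputs)
    return 100 * statistics.stdev(throughputs) / mean

# Illustrative results from five identical runs (Mbps).
runs = [941.2, 939.8, 940.5, 941.0, 940.1]
cv = run_variance_pct(runs)
print(f"run-to-run variance: {cv:.2f}%")
```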
Open invitation: Security researchers and competitors are welcome to validate our results. If you find discrepancies, we want to know.
The SPL isn't a one-time validation. It's an ongoing quality control process that ensures every configuration meets our standards.
| Checkpoint | Verification |
|---|---|
| New configuration | SPL validation required before shipping |
| Firmware update | Performance regression testing |
| New ruleset | Impact measurement on throughput |
| Plugin evaluation | Performance impact assessment |
No configuration ships without SPL validation. If a change degrades performance beyond acceptable thresholds, it doesn't go out the door.
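A regression gate of this kind can be expressed in a few lines. The 5% threshold and the function name here are illustrative assumptions, not the SPL's actual acceptance criteria:

```python
# Hypothetical acceptance threshold: block changes that cost more than 5%.
REGRESSION_THRESHOLD_PCT = 5.0

def passes_gate(baseline_mbps: float, candidate_mbps: float) -> bool:
    """Block a change if throughput drops more than the threshold."""
    drop_pct = 100 * (baseline_mbps - candidate_mbps) / baseline_mbps
    return drop_pct <= REGRESSION_THRESHOLD_PCT

assert passes_gate(940.0, 930.0)       # ~1.1% drop: ships
assert not passes_gate(940.0, 850.0)   # ~9.6% drop: blocked
```

The same check applies at every checkpoint in the table: new configurations, firmware updates, rulesets, and plugins are all compared against the last validated baseline.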
Everything you need to understand, verify, or reproduce our testing is available publicly.
- Complete documentation of testing procedures, traffic generation, and metric collection. (View on Forgejo →)
- CSV files with thousands of timestamped data points from actual test runs. (View on Forgejo →)
- Diagrams showing lab configuration, cable routing, and VLAN assignments. (View on Forgejo →)
- 50-page technical document covering SPL methodology and complete SecureNet architecture. (View Markdown File →)

Download the AI Whitepaper and ask any AI assistant to explain our methodology. Review the raw data on Forgejo. Hold us accountable.