Performance Comparison
VectorWave achieves 21M+ samples/sec with < 100ns P50 latency (JMH-verified). IronWave delivers < 100ns per-tick latency for HFT. Both in nanosecond territory, both memory-safe.
Latency Benchmarks (Tick-by-Tick)
| Implementation | Latency | Primary Risk | Target User |
|---|---|---|---|
| IronWave (Rust) • MorphIQ Labs | 66.3ns | None (Memory Safe) | HFT / Inst. Execution |
| VectorWave (Java 25) • MorphIQ Labs • JMH-verified | 83ns (P50) | None (GC-Safe) | AI / Data Engineering |
| Optimized C++ • In-House / Custom | ~1-5 μs | Segfaults / Memory Leaks | HFT Prop Desks |
| Legacy Libraries • IMSL / NAG / JNI | ~50-500 μs | JNI Overhead / Bloat | Bank Risk Desks |
| Open Source (Java/Python) • JWave / PyWavelets | > 1000 μs | GC Pauses / Slow | Research / Academia |
* VectorWave benchmarks performed with JMH 1.37 on GraalVM JDK 25. IronWave benchmarks on dedicated Linux hardware.
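For readers who want to see how a per-tick number like this is measured, here is a minimal Criterion sketch. It assumes an illustrative `TickFilter` stand-in rather than IronWave's actual API, and the filter taps are approximate values included purely for demonstration.

```rust
use std::hint::black_box;

use criterion::{criterion_group, criterion_main, Criterion};

/// Illustrative db4-style low-pass taps (approximate values, sketch only).
const H: [f64; 8] = [
    -0.0105974018, 0.0328830117, 0.0308413818, -0.1870348117,
    -0.0279837694, 0.6308807679, 0.7148465706, 0.2303778133,
];

/// Minimal streaming state: a fixed ring buffer holding the last 8 samples.
struct TickFilter {
    buf: [f64; 8],
    pos: usize,
}

impl TickFilter {
    fn new() -> Self {
        Self { buf: [0.0; 8], pos: 0 }
    }

    /// O(L) per-tick update: store the new sample, emit one filtered output.
    #[inline]
    fn push(&mut self, x: f64) -> f64 {
        self.buf[self.pos] = x;
        self.pos = (self.pos + 1) % 8;
        let mut acc = 0.0;
        for (i, h) in H.iter().enumerate() {
            acc += h * self.buf[(self.pos + i) % 8];
        }
        acc
    }
}

fn bench_tick(c: &mut Criterion) {
    let mut filter = TickFilter::new();
    let mut t = 0.0_f64;
    c.bench_function("single_tick_update", |b| {
        b.iter(|| {
            t += 1.0;
            black_box(filter.push(black_box(t.sin())))
        })
    });
}

criterion_group!(benches, bench_tick);
criterion_main!(benches);
```

Placed under `benches/` with `harness = false` in `Cargo.toml`, `cargo bench` runs this and reports per-iteration mean and median timings.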
Throughput Benchmarks (AI Data Prep)
For high-volume data cleaning and AI feature engineering, throughput (samples per second) matters more than individual tick latency. In this mode, both IronWave and VectorWave process data in blocks (e.g., 1024 samples), amortizing the per-transform overhead; a simplified block-processing sketch follows the table below.
| Implementation | Throughput | Time per Tick | Use Case |
|---|---|---|---|
| IronWave (Rust) • DB4, 1024 samples • Criterion-verified | ~275 Million / sec | 3.65ns | AI Feature Engineering / ETL |
| VectorWave (Java 25) • DB4, 1024 samples • JMH-verified | ~185 Million / sec | 5.4ns (P50) | Spark / Flink Pipelines |
| Standard Java (Estimated) • Non-Vectorized | ~0.5 Million / sec | ~2.0 μs | Standard Data Processing |
* IronWave: Criterion benchmarks of a streaming MODWT (DB4, 4 levels) with an O(L) incremental algorithm + AVX2 SIMD. VectorWave: JMH 1.37.
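To make the amortization argument concrete, the following self-contained sketch (not IronWave or VectorWave code; the `Filter` type and its taps are stand-ins) pushes 1024-sample blocks through a reused filter and derives a samples-per-second figure in the same units the table uses.

```rust
use std::time::Instant;

const BLOCK: usize = 1024;     // block size used in the table above
const BLOCKS: usize = 100_000; // ~102M samples of total work

/// Stand-in per-sample cost: an 8-tap dot product over a small ring buffer.
struct Filter {
    buf: [f64; 8],
    pos: usize,
}

impl Filter {
    fn new() -> Self {
        Self { buf: [0.0; 8], pos: 0 }
    }

    #[inline]
    fn push(&mut self, x: f64) -> f64 {
        // Rough db4-like taps; illustrative only.
        const H: [f64; 8] = [-0.011, 0.033, 0.031, -0.187, -0.028, 0.631, 0.715, 0.230];
        self.buf[self.pos] = x;
        self.pos = (self.pos + 1) % 8;
        H.iter()
            .enumerate()
            .map(|(i, h)| h * self.buf[(self.pos + i) % 8])
            .sum()
    }
}

fn main() {
    let mut filter = Filter::new();
    let input: Vec<f64> = (0..BLOCK).map(|i| (i as f64).sin()).collect();
    let mut out = vec![0.0; BLOCK]; // preallocated once, reused for every block
    let mut checksum = 0.0;

    let start = Instant::now();
    for _ in 0..BLOCKS {
        for (o, &x) in out.iter_mut().zip(&input) {
            *o = filter.push(x); // fixed O(L) work per sample
        }
        checksum += out[BLOCK - 1]; // keep the result observable
    }
    let secs = start.elapsed().as_secs_f64();

    let samples = (BLOCK * BLOCKS) as f64;
    println!(
        "{:.1} M samples/sec ({:.2} ns/sample, checksum {:.3})",
        samples / secs / 1e6,
        secs / samples * 1e9,
        checksum
    );
}
```

Build with `--release`; a debug build will understate throughput dramatically.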
vs. Legacy Commercial
Legacy libraries are often 40-year-old Fortran code wrapped in JNI. This "boundary crossing" kills performance.
IronWave runs natively at 66.3ns, eliminating the JNI tax entirely for pure, hardware-accelerated execution.
vs. In-House C++
Typical C++ wavelet implementations run at 1-5μs. IronWave achieves 66.3ns—without the risk of Segmentation Faults.
IronWave provides "Crash Insurance." Faster than C++ with 100% memory safety.
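As a brief illustration of what memory safety means in practice (generic Rust, not IronWave code): an out-of-range read is refused at the API boundary instead of silently reading past the buffer, which is exactly the class of bug behind C++ segfaults and corrupted results.

```rust
/// A windowed read that cannot run past the end of the buffer: `get` returns
/// `None` for an out-of-range request instead of reading invalid memory.
fn window(samples: &[f64], start: usize, len: usize) -> Option<&[f64]> {
    samples.get(start..start.checked_add(len)?)
}

fn main() {
    let ticks = vec![1.0, 2.0, 3.0, 4.0];
    assert_eq!(window(&ticks, 1, 2), Some(&ticks[1..3]));
    assert_eq!(window(&ticks, 3, 5), None); // rejected at runtime, not a segfault
}
```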
vs. Open Source
Standard libraries (JWave, PyWavelets) prioritize ease-of-use, not speed. They allocate memory constantly, causing GC pauses.
IronWave uses a Zero-Allocation architecture, guaranteeing consistent latency with no GC spikes.
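A minimal sketch of the zero-allocation idea, assuming a hypothetical `StreamingCascade` type rather than IronWave's real API or its MODWT algorithm: every buffer is sized at construction, so the per-tick path never touches the heap.

```rust
/// Hypothetical 4-level streaming cascade, sketching the zero-allocation layout.
/// All state is sized in `new`; `push` touches only preallocated memory.
struct StreamingCascade {
    rings: Vec<[f64; 8]>, // one 8-sample ring buffer per level
    pos: Vec<usize>,
    out: Vec<f64>,        // reused per-tick output, one coefficient per level
}

impl StreamingCascade {
    fn new(levels: usize) -> Self {
        Self {
            rings: vec![[0.0; 8]; levels],
            pos: vec![0; levels],
            out: vec![0.0; levels],
        }
    }

    /// Per-tick update: fixed work per level, no heap allocation, no locks.
    fn push(&mut self, x: f64) -> &[f64] {
        // Rough db4-like taps; illustrative only.
        const H: [f64; 8] = [-0.011, 0.033, 0.031, -0.187, -0.028, 0.631, 0.715, 0.230];
        let mut input = x;
        for lvl in 0..self.rings.len() {
            let ring = &mut self.rings[lvl];
            ring[self.pos[lvl]] = input;
            self.pos[lvl] = (self.pos[lvl] + 1) % 8;
            let mut acc = 0.0;
            for (i, h) in H.iter().enumerate() {
                acc += h * ring[(self.pos[lvl] + i) % 8];
            }
            self.out[lvl] = acc;
            input = acc; // feed the smoothed output into the next level
        }
        &self.out
    }
}

fn main() {
    let mut cascade = StreamingCascade::new(4); // 4 levels, as in the footnote above
    for t in 0..8 {
        let coeffs = cascade.push((t as f64).sin());
        println!("tick {t}: {coeffs:?}");
    }
}
```

Because nothing is allocated after construction, there is no allocator or GC activity left to cause latency spikes in the hot loop.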