# Piker Profiling Subsystem

Skill for using `piker.toolz.profile.Profiler` to measure performance across distributed actor systems.
## Core Profiler API

### Basic Usage
```python
from piker.toolz.profile import (
    Profiler,
    pg_profile_enabled,
    ms_slower_then,
)

profiler = Profiler(
    msg='<description of profiled section>',
    disabled=False,    # IMPORTANT: enable explicitly!
    ms_threshold=0.0,  # show all timings
)

# do work
some_operation()
profiler('step 1 complete')

# more work
another_operation()
profiler('step 2 complete')

# prints on exit:
# > Entering <description of profiled section>
#   step 1 complete: 12.34, tot:12.34
#   step 2 complete: 56.78, tot:69.12
# < Exiting <description>, total: 69.12 ms
```

### Default Behavior Gotcha
**CRITICAL**: `Profiler` is disabled by default in many contexts!

```python
# BAD: might not print anything!
profiler = Profiler(msg='my operation')

# GOOD: explicit enable
profiler = Profiler(
    msg='my operation',
    disabled=False,    # force enable!
    ms_threshold=0.0,  # show all steps
)
```

### Profiler Output Format
```
> Entering <msg>
  <label 1>: <delta_ms>, tot:<cumulative_ms>
  <label 2>: <delta_ms>, tot:<cumulative_ms>
  ...
< Exiting <msg>, total time: <total_ms> ms
```
Reading the output:

- `delta_ms` = time since the previous checkpoint
- `cumulative_ms` = time since profiler creation
- final total = end-to-end time
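The checkpoint arithmetic above can be mimicked in a few lines. This is a toy stand-in for illustration only, not piker's actual `Profiler` implementation:

```python
import time

class MiniProfiler:
    '''Toy sketch of checkpoint timing: each call records the delta
    since the previous checkpoint and the cumulative total.'''

    def __init__(self, msg: str):
        self.msg = msg
        self.start = self.last = time.perf_counter()
        self.lines: list[str] = []

    def __call__(self, label: str) -> tuple[float, float]:
        now = time.perf_counter()
        delta_ms = (now - self.last) * 1000   # since previous checkpoint
        tot_ms = (now - self.start) * 1000    # since profiler creation
        self.last = now
        self.lines.append(f'{label}: {delta_ms:.2f}, tot:{tot_ms:.2f}')
        return delta_ms, tot_ms

prof = MiniProfiler('demo')
time.sleep(0.01)
d1, t1 = prof('step 1')
time.sleep(0.02)
d2, t2 = prof('step 2')

# the cumulative total is always the sum of the deltas so far
assert abs(t2 - (d1 + d2)) < 1e-6
```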
## Profiling Distributed Systems
Piker runs across multiple processes (actors). Each actor has its own log output.
### Common piker actors

- `pikerd` - main daemon process
- `brokerd` - broker connection actor
- `chart` - UI/graphics actor
- Client scripts - analysis/annotation clients
### Cross-Actor Profiling Strategy

- Add a `Profiler` on both client and server
- Correlate timestamps from each actor's output
- Calculate IPC overhead = total - (client + server processing)
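The correlation step can be scripted by scraping each actor's console output. A minimal sketch; the regex is an assumption based on the exit-line formats shown in this doc:

```python
import re

def parse_total_ms(console: str) -> float:
    # Pull the final total out of a Profiler exit line, e.g.
    #   "< Exiting markup_gaps(), total: 376.04ms"
    # (pattern assumes the output formats shown in this doc)
    m = re.search(r'total(?:\s+time)?:\s*([\d.]+)\s*ms', console)
    if m is None:
        raise ValueError('no Profiler exit line found')
    return float(m.group(1))

client_total = parse_total_ms('< Exiting markup_gaps(), total: 376.04ms')
server_total = parse_total_ms('< Exiting Batch annotate, total: 104.15ms')

# everything the client waited on that was NOT server-side compute:
# IPC transport, serialization, and client-side work
print(f'non-server time: {client_total - server_total:.2f}ms')
```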
Example correlation:
**Client console:**

```
> Entering markup_gaps() for 1285 gaps
  initial redraw: 0.20ms, tot:0.20
  built annotation specs: 256.48ms, tot:256.68
  batch IPC call complete: 119.26ms, tot:375.94
  final redraw: 0.07ms, tot:376.02
< Exiting markup_gaps(), total: 376.04ms
```
**Server console (chart actor):**

```
> Entering Batch annotate 1285 gaps
  `np.searchsorted()` complete!: 0.81ms, tot:0.81
  `time_to_row` creation: 98.45ms, tot:99.28
  created GapAnnotations item: 2.98ms, tot:102.26
< Exiting Batch annotate, total: 104.15ms
```
Analysis:

- Total client time: 376ms
- Server processing: 104ms
- IPC overhead + client spec building: 272ms
- Bottleneck: client-side spec building (256ms)
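Re-deriving that breakdown from the two traces (all numbers copied from the example consoles above):

```python
client_total = 376.04  # client end-to-end ('markup_gaps()' exit line)
server_total = 104.15  # chart-actor processing ('Batch annotate' exit line)
spec_build   = 256.48  # 'built annotation specs' delta (client)
ipc_step     = 119.26  # 'batch IPC call complete' delta (client)

# everything the client spent outside server-side compute:
overhead = client_total - server_total   # IPC + client-side work

# the IPC checkpoint minus server compute isolates pure transport:
transport = ipc_step - server_total

print(f'overhead: {overhead:.2f}ms, transport: {transport:.2f}ms')
```

Transport itself is only ~15ms here, which is why the spec-building step (256ms) is flagged as the bottleneck.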
## Integration with PyQtGraph
Some piker modules integrate with pyqtgraph’s profiling:
```python
from piker.toolz.profile import (
    Profiler,
    pg_profile_enabled,
    ms_slower_then,
)

profiler = Profiler(
    msg='Curve.paint()',
    disabled=not pg_profile_enabled(),
    ms_threshold=ms_slower_then,
)
```

## Performance Expectations
Typical timings:

- IPC round-trip (local actors): 1-10ms
- NumPy binary search (10k array): <1ms
- Dict building (1k items, simple): 1-5ms
- Qt redraw trigger: 0.1-1ms
- Scene item removal (100s of items): 10-50ms
Red flags:

- Linear array scan per item: 50-100ms+ for 1k items
- Dict comprehension over a struct array: 50-100ms
- Individual Qt item creation: ~5ms per item
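The first red flag versus its fix, as a self-contained NumPy illustration; the sorted-time-column scenario mirrors the `time_to_row` step from the server trace above, but the data here is synthetic:

```python
import numpy as np

# synthetic sorted time column and a few timestamps to map to row indices
times = np.arange(0, 10_000, dtype=np.int64)
lookups = np.array([5, 250, 9_999])

# GOOD: one vectorized binary search for all lookups, O(m log n)
rows = np.searchsorted(times, lookups)

# BAD (red flag): a Python-level scan per item, O(m * n)
slow_rows = [int(np.nonzero(times == t)[0][0]) for t in lookups]

assert rows.tolist() == slow_rows  # same answer, vastly different cost
```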
## References

- `piker/toolz/profile.py` - `Profiler` implementation
- `piker/ui/_curve.py` - FlowGraphic paint profiling
- `piker/ui/_remote_ctl.py` - IPC handler profiling
- `piker/tsp/_annotate.py` - client-side profiling
See patterns.md for detailed profiling patterns and debugging techniques.
Last updated: 2026-01-31 (session: batch gap annotation optimization)