As someone who spends a lot of time looking at timestamped log lines to debug Pipecat pipelines, I'm a big fan of this work from Aleix.
In general, I have three pain points when debugging realtime, multi-model, multi-modal AI stuff: 1. Where's the latency creeping in? 2. What context actually got passed to the models? 3. Did the model/processor get data in the format it expected?
For 1 and 3, Whisker is a big step forward. For 2, something like Langfuse (OpenTelemetry) is very helpful.
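To make pain point 1 concrete: the usual low-tech approach is to wrap each processor so every frame that passes through gets timed, then look for the stage eating the budget. This is just an illustrative sketch with toy stand-in stages, not Pipecat's or Whisker's actual API:

```python
import time

def trace(name, fn, timings):
    """Wrap a pipeline stage so each call records its elapsed time."""
    def wrapped(frame):
        start = time.perf_counter()
        result = fn(frame)
        timings.setdefault(name, []).append(time.perf_counter() - start)
        return result
    return wrapped

# Toy stand-ins for STT -> LLM -> TTS processors (not real Pipecat processors).
def stt(frame):
    return frame + ["text"]

def llm(frame):
    time.sleep(0.05)  # deliberately slow, to simulate model latency
    return frame + ["reply"]

def tts(frame):
    return frame + ["audio"]

timings = {}
pipeline = [trace(n, f, timings) for n, f in [("stt", stt), ("llm", llm), ("tts", tts)]]

frame = []
for stage in pipeline:
    frame = stage(frame)

# The stage with the largest total elapsed time is where latency is creeping in.
slowest = max(timings, key=lambda n: sum(timings[n]))
print(slowest)
```

Whisker's live pipeline graph and frame tracing give you this same visibility without hand-rolling the instrumentation.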
A debugger for Pipecat: https://github.com/pipecat-ai/pipecat
With Whisker you can:
- View a live graph of your pipeline
- Watch frame processors flash in real time
- Select a processor to inspect its frames
- Filter frames by name
- Select a frame to trace its full path
I had been thinking of working on something like this recently as a way to debug Pipecat pipelines. But the work Aleix has done goes far beyond what I was thinking!