Monday, April 13, 2026

Automating troubleshooting in Kognitos

 


What if a support investigation could start with a vague prompt and end with a deterministic, repeatable workflow?

That’s what I tested in Kognitos.

I started with a simple request:

Look at Sentry logs and replay for jerome@kognitos.com and tell me what this user was trying to do. Add the summary activity, error, and next steps.

From that one high-level instruction, Kognitos created an SOP that could investigate the issue step by step. Behind the scenes, it also generated SPY code so the workflow could move from an exploratory AI-driven draft into deterministic automation.

That shift is the interesting part. This was not just “AI gives me an answer.” It was “AI builds a troubleshooting workflow I can test, refine, and operationalize.”


Here's to drinking our own 🍷

What the SOP does

The workflow takes a user email, then:

  • Pulls Sentry session replays from the last 24 hours
  • Summarizes session behavior, pages visited, and frontend errors
  • Identifies top transactions to understand what the user spent time doing
  • Extracts the workspace ID from replay URLs
  • Uses that workspace ID to query SigNoz for backend warnings and errors
  • Produces a human-readable activity summary, error analysis, and prioritized next steps

In other words, it connects frontend behavior with backend signals and turns scattered telemetry into a support-ready narrative.
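One deterministic piece of the pipeline above is pulling the workspace ID out of the replay URLs. Here is a minimal Python sketch of that step, assuming a hypothetical `/workspace/<id>/` URL segment; the real Kognitos URL scheme and the generated SPY code may differ:

```python
import re

# Assumed replay-URL shape: .../workspace/<hex-ish id>/...
# This pattern is illustrative, not the actual Kognitos URL scheme.
WORKSPACE_RE = re.compile(r"/workspace/(?P<ws>[0-9a-f-]+)/", re.IGNORECASE)

def extract_workspace_id(replay_url: str):
    """Pull the workspace ID out of a session-replay URL, or None if absent."""
    m = WORKSPACE_RE.search(replay_url)
    return m.group("ws") if m else None

def first_workspace_id(replay_urls):
    """Return the first workspace ID found across a batch of replay URLs."""
    for url in replay_urls:
        ws = extract_workspace_id(url)
        if ws:
            return ws
    return None
```

Once the workspace ID is isolated like this, it becomes a stable parameter that the rest of the SOP can feed into the SigNoz query.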

Actual SOP in Kognitos:
Why this matters

Support and engineering teams often spend too much time doing mechanical investigation work:

  • Watch a replay
  • Check logs
  • Infer intent
  • Correlate timestamps
  • Guess which backend errors matter
  • Write up next steps

This SOP compresses that process into something far more systematic.

Instead of manually stitching together Sentry, SigNoz, and product context, Kognitos assembled a workflow that did the correlation automatically and produced a usable report.
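At its core, that correlation step is a timestamp-window join: pair each frontend error with the backend errors that occurred around the same moment. A hedged sketch, not the actual Kognitos or SigNoz implementation; the event shapes and the 30-second window are assumptions:

```python
from datetime import datetime, timedelta

# Pair each frontend (Sentry) error with backend (SigNoz) errors that
# occurred within +/- `window` of it. Event dicts here are illustrative.
def correlate(frontend_errors, backend_errors, window=timedelta(seconds=30)):
    """Return (frontend_error, [matching backend errors]) pairs."""
    pairs = []
    for fe in frontend_errors:
        matches = [
            be for be in backend_errors
            if abs(be["ts"] - fe["ts"]) <= window
        ]
        pairs.append((fe, matches))
    return pairs
```

The window size is the main tuning knob: too narrow and real causes fall outside it, too wide and unrelated backend noise gets pulled in.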

What the system found

Actual output, recorded for future audits:
Final Thoughts: From AI chat to reliable automation

What I like most about this flow is that it starts creatively and ends deterministically.

I can begin with an open-ended prompt, let the system assemble the investigation logic, test it in draft mode, and then promote it into a repeatable automation with observable runs.

That is the bridge between generative AI and operational reliability.

Once the defects were understood, the runtime behavior itself worked. The main blockers were not the automation concept but the platform and codegen issues uncovered during execution.

And that is exactly why this kind of workflow is useful: it doesn’t just solve support problems, it helps expose product and platform gaps that teams can actually fix.
