Tom Johnson
@tomjohnson3.bsky.social

CTO at Multiplayer.app: full stack session recordings to seamlessly capture and resolve issues or develop new features.
Also: 🤖 robot builder 🏃‍♂️ runner 🎸 guitar player


I'm doing short 15–20 min feedback chats and offering a $50 gift card as a thank-you.

You can schedule a call with me here: cal.com/multiplayer/...
30 Min Meeting | Multiplayer | Cal.com
cal.com

If you tried Multiplayer but didn’t finish setup, I’d love to learn why.

I’m building the only tool that gives teams full-stack session replays out of the box. All correlated, all AI-ready.

But if getting there feels harder than it should, that’s on me.

[5/5] If you’re curious how full stack session context and AI intersect in real developer workflows, our 2025 year-in-review dives into it here 👇

www.multiplayer.app/blog/multipl...
Multiplayer 2025: year in review
In 2025 we focused on a simple but ambitious goal: making debugging faster, less fragmented and less manual. Check out all our releases to make that possible.
www.multiplayer.app

[4/5]
No more alerts without answers.
No more “let me feed this to AI and see what happens.”
Just actionable suggestions that know your system state.

[3/5] We asked:
What if your system could automatically turn errors into PRs?

So for 2026 we’re building the Multiplayer AI agent:
Instead of manually exporting session data into separate AI tools,
you’ll receive pull requests with suggested fixes automatically, grounded in real production context.

[2/5] ✔️ Correlating frontend session replays with backend traces and logs

✔️ Including full request/response content and headers from internal services and external dependencies

✔️ Making that context available to AI workflows via the MCP server

But we didn’t stop at making data accessible.
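To make the first bullet above concrete, here’s a minimal sketch of what that correlation can look like (an illustration only, not Multiplayer’s actual SDK: the x-session-id header name and the Express/OpenTelemetry wiring are assumptions):

```typescript
// Sketch: tag the active backend span and each structured log line with the
// session id the frontend sends, so replays, traces, and logs share one join key.
import { trace } from '@opentelemetry/api';
import type { Request, Response, NextFunction } from 'express';

const SESSION_HEADER = 'x-session-id'; // hypothetical header name

export function sessionCorrelation(req: Request, _res: Response, next: NextFunction) {
  const sessionId = req.header(SESSION_HEADER);
  if (sessionId) {
    // Backend traces carry the session id as a span attribute...
    trace.getActiveSpan()?.setAttribute('session.id', sessionId);
    // ...and logs emit it as a structured field, so both can be grouped per replay.
    console.log(JSON.stringify({ level: 'info', msg: 'request received', sessionId, path: req.path }));
  }
  next();
}
```

Once every trace and log line carries the same session id, “show me everything for this user session” becomes a single query instead of an archaeology exercise.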

[1/5] This year our engineering team leaned into one conviction: AI will only be as good as the context it’s given.

If my background in neural nets has taught me anything, it’s that data is everything. That’s why our 2025 focus was about capturing the right data in the right structure:

Where are AI tools ✦augmenting✦ our work, and where are they ✦undermining✦ the craft that makes engineering effective?

As CTO of a startup building with AI, I keep coming back to this question. I shared my thoughts in this LeadDev article 👇

leaddev.com/ai/are-smart...
Are smart machines making us dumber?
Without a well-thought-out AI adoption strategy, you could be leading your team into an automation paradox trap.
leaddev.com

👇 These are some best practices to make APIs testable.

You can also create free notebooks on Multiplayer to test your real-world API integrations, chaining your APIs and code snippets for realistic workflows.

beyondruntime.substack.com/p/the-hidden...
Tools for API testing
Key features to look for in an API testing tool to ensure successful testing
beyondruntime.substack.com

Most teams already test their code, UI, and infra. But ask when they last tested their APIs (thoroughly) and you’ll get a hesitant pause.

Bad news: that pause is expensive.

Good news: this is preventable.

Better news: what makes your APIs testable also makes your systems more observable.

👇 Check my article in the comments on best practices for structuring logs (or you could always use @multiplayer.app’s full stack session recordings, which automatically correlate logs by session 😅).

beyondruntime.substack.com/p/logs-that-...
Logs that talk back
How to debug distributed systems without losing your mind
beyondruntime.substack.com
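To give a flavor of what “logs that talk back” can look like in code, here’s a minimal sketch of structured, session-correlated logging (the logLine helper and field names are hypothetical, not taken from the article):

```typescript
// Sketch: one JSON object per log line, always carrying the same correlation keys,
// so logs can be filtered and grouped by sessionId or traceId instead of grepped by prose.
type LogFields = { sessionId?: string; traceId?: string; [key: string]: unknown };

function logLine(level: 'info' | 'warn' | 'error', msg: string, fields: LogFields = {}): void {
  console.log(JSON.stringify({ ts: new Date().toISOString(), level, msg, ...fields }));
}

// Usage: a failure log that a human, a query, or an AI tool can all pivot on.
logLine('error', 'payment declined', {
  sessionId: 'sess_123', // hypothetical ids for illustration
  traceId: 'trace_456',
  orderId: 'order_789',
  reason: 'card_expired',
});
```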

When you’re looking for the bug, every log file you grep through gets you closer to the root cause.

The trick is making those logs useful, faster.

Jules Verne might as well have been talking about debugging when he said: “Science, my lad, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.”

If you’re thinking about:

• how to let tools like ChatGPT, Claude, or Gemini interact with your apps safely
• how to turn small AI experiments into prod features
• how to keep costs predictable
• and how to put the right guardrails in place

…then you’ll get a lot out of this panel.

I’m joining a LeadDev panel tomorrow on a topic many teams are struggling with right now: how to bring AI into your systems without creating chaos, security holes, or unexpected costs.

That means automatically correlated insights from frontend to backend: being able to understand your system end to end, from the user action to the specific trace, log, and request/response.

(That’s what we do at Multiplayer with full stack session recordings 😉)

beyondruntime.substack.com/p/from-red-a...
From Red Alerts to Root Causes
How observability builds on monitoring to keep complex systems reliable.
beyondruntime.substack.com
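For the frontend half of that end-to-end view, the idea fits in a few lines (again an illustration under assumptions: the hypothetical x-session-id header matches the backend middleware sketched earlier, and is not Multiplayer’s actual wire format):

```typescript
// Sketch: attach one session id to every request a user action triggers,
// so backend traces, logs, and request/response pairs can be tied back to that action.
const sessionId = crypto.randomUUID(); // one id per browser session (illustrative scheme)

async function sessionFetch(input: RequestInfo | URL, init: RequestInit = {}): Promise<Response> {
  const headers = new Headers(init.headers);
  headers.set('x-session-id', sessionId); // hypothetical header, read by the backend middleware
  return fetch(input, { ...init, headers });
}

// Usage inside a user-action handler:
// await sessionFetch('/api/checkout', { method: 'POST', body: JSON.stringify(cart) });
```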

Monitoring will always have its place.

But modern distributed systems require more: you need immediate, surgical and complete visibility *across your stack* to fully understand system behavior.

I recently reviewed what went wrong in an incident with one of our users’ internal tools, and the lesson was clear: you have to shorten the path from “something broke” to “we know why.”

And it starts with high-quality issue reporting powered by full-stack session recordings.

A tiny bug. A big bank. Hours lost.

Internal tools don’t get the same love as customer-facing products, but the pain of debugging them is just as real.

Let me know what other best practices you’d add to this list!

👇 Check the full article: beyondruntime.substack.com/p/apis-dont-...
APIs don’t test themselves
Why automation is the only way to keep pace with modern distributed systems.
beyondruntime.substack.com

• Test authentication and authorization just as rigorously as the “happy path”

• Keep your test data clean, parameterized, and repeatable

• Continuously monitor the health of your test suite, track flaky tests, and evolve coverage as the system changes

• Start early and target the APIs that matter most

• Write isolated tests that validate one behavior at a time, with strong assertions (see the sketch after this list)

• Use mocks, stubs, or service virtualization for dependencies
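As a minimal sketch of those last two points (a hypothetical getUser client, Node’s built-in test runner, and a stubbed fetch; any framework works the same way):

```typescript
// api-client.test.ts: one behavior per test, strong assertions, no real network calls.
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Hypothetical client under test.
async function getUser(id: string, fetchImpl: typeof fetch): Promise<{ id: string; name: string }> {
  const res = await fetchImpl(`https://api.example.com/users/${id}`);
  if (!res.ok) throw new Error(`Unexpected status ${res.status}`);
  return (await res.json()) as { id: string; name: string };
}

test('returns the parsed user for a 200 response', async () => {
  const fakeFetch = (async () =>
    new Response(JSON.stringify({ id: '42', name: 'Ada' }), { status: 200 })) as typeof fetch;
  assert.deepEqual(await getUser('42', fakeFetch), { id: '42', name: 'Ada' });
});

test('rejects on a 401 (auth tested as rigorously as the happy path)', async () => {
  const fakeFetch = (async () => new Response('unauthorized', { status: 401 })) as typeof fetch;
  await assert.rejects(getUser('42', fakeFetch), /Unexpected status 401/);
});
```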

APIs are the front doors to our systems, and users expect them to just work. Automated API testing is how we guarantee that at scale.

Here are a few proven best practices: 🧵

👆 He captures perfectly why business-critical issues slip through when context is missing and you have to waste hours piecing together all the information.

(and how Multiplayer full stack session recordings are built to solve that 😊)

“Ambiguous screenshots”. It says it all, doesn’t it?

@farisaziz12.bsky.social is describing the nightmare of debugging vague support tickets, with blurry photos, no reproduction steps, and endless back and forth.

Does debugging support tickets look like this for you?
If yes, share the latest rabbit hole you fell into.

‣ Shape your data early.
‣ Prioritize security.
‣ Be deliberate with receivers.
‣ Export with efficiency.
‣ Monitor the Collector itself.

The lesson I keep coming back to is simple: an observability framework is only as strong as its Collector configuration.