Michael 👨‍💻🥑
@michaelsolati.com
Developer Relations @livekit.io
December 3, 2025 at 6:01 PM
If you enjoyed this thread:

Follow me Michael 👨‍💻🥑 for more on AI Engineering & Web Development!

Read the complete build log (and get the code): https://dev.to/michaelsolati/i-built-an-ai-powered-ttrpg-adventure-generator-because-generic-hallucinations-are-boring-362m
December 3, 2025 at 6:01 PM
⚡ TL;DR:

Generic AI prompts = Generic results.

Use a "Research-then-Generate" workflow.

Enforce JSON schemas for structured output.

Stream agent "thoughts" via SSE to improve UX.

Visualize citations to ground the content.
December 3, 2025 at 6:01 PM
The coolest part? Citation Mapping.

Because we track the research steps, we can link the output back to the source.

I used D3.js to visualize the "web of inspiration." You can see exactly which folklore blog post inspired your villain.
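A rough sketch of the data-shaping step: D3's force layout wants a `{ nodes, links }` object, so the tracked research steps get flattened into that shape. The field names here (`url`, `inspired`) are hypothetical stand-ins, not Adventure Weaver's actual data model.

```javascript
// Sketch: turn tracked research steps into the { nodes, links } shape
// that a D3 force layout expects. Field names (url, inspired) are
// illustrative -- adapt to however your agent records citations.
function buildCitationGraph(steps) {
  const nodes = [];
  const links = [];
  const seen = new Set();

  const addNode = (id, type) => {
    if (!seen.has(id)) {
      seen.add(id);
      nodes.push({ id, type });
    }
  };

  for (const step of steps) {
    addNode(step.url, 'source'); // e.g. a folklore blog post
    for (const item of step.inspired) {
      addNode(item, 'output'); // e.g. "Villain: The Bog Witch"
      links.push({ source: step.url, target: item });
    }
  }
  return { nodes, links };
}

const graph = buildCitationGraph([
  { url: 'https://example.com/folklore', inspired: ['Villain: The Bog Witch'] },
]);
```

From there, `graph` drops straight into `d3.forceSimulation(graph.nodes).force('link', d3.forceLink(graph.links))`.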
December 3, 2025 at 6:01 PM
UX Tip: Kill the loading spinner. 💀

Research takes time. Waiting sux.

I used Server-Sent Events (SSE) to stream the agent's actions.

The user sees: "Scanning wiki..." -> "Reading blog..." -> "Generating villain..."

It makes the wait feel like part of the experience.
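The SSE wire format itself is tiny. A minimal sketch (the event name and step labels are illustrative, not the app's real API):

```javascript
// Each SSE frame is "event: <name>\ndata: <payload>\n\n".
function sseFrame(event, data) {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}

// Server side (Node http / Express-style response), roughly:
function streamProgress(res, steps) {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  for (const step of steps) {
    res.write(sseFrame('progress', { message: step }));
  }
  res.end();
}
```

On the client, an `EventSource` listener for `progress` updates the status line as each frame arrives.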
December 3, 2025 at 6:01 PM
To stop the AI from generating a "wall of text," I enforce a strict JSON schema.

This acts as a contract. The AI must return structured objects for NPCs, Locations, and Plot Hooks.

No more rambling. Just usable data.
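What that contract can look like, sketched as a JSON Schema plus a cheap guard. The property names (`npcs`, `locations`, `plotHooks`) are this example's own, not necessarily the app's real schema; in production you'd validate with a real library like Ajv.

```javascript
// A minimal "contract": the shape the model must return.
const adventureSchema = {
  type: 'object',
  required: ['npcs', 'locations', 'plotHooks'],
  properties: {
    npcs: { type: 'array', items: { type: 'object', required: ['name', 'motivation'] } },
    locations: { type: 'array', items: { type: 'object', required: ['name', 'description'] } },
    plotHooks: { type: 'array', items: { type: 'string' } },
  },
};

// Cheap guard: reject any response missing a required top-level array,
// so a "wall of text" never reaches the UI.
function checkContract(response, schema) {
  if (typeof response !== 'object' || response === null) return false;
  return schema.required.every((key) => Array.isArray(response[key]));
}
```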
December 3, 2025 at 6:01 PM
The "Secret Sauce" is Exa (Neural Search).

Google searches for keywords. Exa searches for concepts.

If I want "realistic dragon biology," Exa skips the movie reviews and finds niche biology forums.

We create a research task and return a taskId instantly.
December 3, 2025 at 6:01 PM
The Problem: Standard LLMs have no context for your world. They guess based on probability.

The Solution: Don't ask it to write immediately. Ask it to Research.

My app, Adventure Weaver, dispatches an agent to crawl wikis, blogs, and forums for "vibes" before writing a single word of plot.
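The two-phase workflow in miniature. `searchWeb` and `generate` here are hypothetical stand-ins for the search API and LLM call, not the app's actual functions:

```javascript
// Research-then-Generate: gather context first, write second.
async function researchThenGenerate(theme, { searchWeb, generate }) {
  // Phase 1: gather "vibes" -- wikis, blogs, forums on the theme.
  const sources = await searchWeb(`${theme} folklore worldbuilding`);
  const context = sources.map((s) => `- ${s.title}: ${s.snippet}`).join('\n');

  // Phase 2: only NOW ask for plot, grounded in what was found.
  return generate(
    `Using ONLY these sources, write an adventure about ${theme}:\n${context}`
  );
}
```

The key design choice: the writer never sees the raw query, only the research digest, so it can't fall back on generic training-data tropes.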
December 3, 2025 at 6:01 PM
Stop guessing the output of async code.

If you found this breakdown helpful:

1. Follow me Michael 👨‍💻🥑 for more deep dives into JS internals.
2. Check out the full visual guide here: https://dev.to/michaelsolati/visualizing-the-event-loop-a-guide-to-microtasks-macros-and-timers-2l22
November 26, 2025 at 5:01 PM
⚡ TL;DR Cheat Sheet

* Synchronous: Runs first.
* Microtasks (Promises): Run immediately after the stack clears; the queue is drained exhaustively (new microtasks included).
* Rendering: Happens after Microtasks, before Macrotasks.
* Macrotasks (Timers): Run only when everything else is quiet.
November 26, 2025 at 5:01 PM
Here is the execution flow:

1. Sync Code: 'Start' and 'End' run immediately on the Call Stack.
2. Stack Empties: The Event Loop wakes up.
3. Microtask Checkpoint: "Any VIPs?" Yes, the Promise. Run it NOW.
4. Macrotask: "Okay, now we can do the Timer."
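The snippet being traced, reconstructed here as the minimal classic version (with an `order` array added so the result is checkable):

```javascript
const order = [];
const log = (msg) => { order.push(msg); console.log(msg); };

log('Start');

setTimeout(() => log('Timeout'), 0);          // Macrotask queue

Promise.resolve().then(() => log('Promise')); // Microtask queue

log('End');

// Once the timer fires: order is ['Start', 'End', 'Promise', 'Timeout']
```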
November 26, 2025 at 5:01 PM
🧠 The Mental Model: The VIP Lane

JS has a single thread, but many queues.

1. Macrotask Queue: `setTimeout`, `setInterval`. (General)
2. Microtask Queue: `Promise.then`, `MutationObserver`. (VIP)

The Event Loop checks the VIP lane immediately after the current code finishes.
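A quick demo of just how aggressive the VIP lane is: even a microtask queued *while* the lane is being drained still beats a timer that was scheduled first.

```javascript
const seen = [];

setTimeout(() => seen.push('macrotask'), 0); // General queue, scheduled FIRST

queueMicrotask(() => {
  seen.push('micro 1');
  // Queued while already inside the VIP lane -- still beats the timer.
  queueMicrotask(() => seen.push('micro 2'));
});

// Once the timer fires: seen is ['micro 1', 'micro 2', 'macrotask']
```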
November 26, 2025 at 5:01 PM
✅ The Solution

The actual output is:
1. Start
2. End
3. Promise
4. Timeout

Wait, why does the Promise beat the Timeout, even though the Timeout was declared first with 0 delay?

It comes down to Microtasks vs. Macrotasks.
November 26, 2025 at 5:01 PM
🛑 The Trap

Intuition says: "Code runs top-to-bottom. The timeout is 0ms, so it's instant. The Promise is async too. Maybe they race?"

If you guessed:
Start → End → Timeout → Promise ❌

You're wrong!
November 26, 2025 at 5:01 PM
tl;dr: The 2025 Reality

* Enterprise: LeetCode is alive (Anti-Cheat Mode).
* Startups: Vibe Coding is here (Speed Mode).
* The Risk: "Bring Your Own AI" creates economic inequality.
* The Fix: Be bilingual. Audit AI output with strong fundamentals.
November 20, 2025 at 5:01 PM
A "Pay-to-Win" Barrier 💸

Startups expect you to interview with your own tools. Can you afford the $200 Claude Code tier? If the weaker free-tier model hallucinates and you miss it, did you fail? We are asking candidates to pay for the privilege of getting hired.
November 20, 2025 at 5:01 PM
Startup Speedruns 🚀

Startups are the opposite. They hand you the keys to Copilot and say, "Go."

The constraint isn't memory; it's Speed. They don't want a coder; they want an "AI Editor." But this speed comes with a hidden price tag...
November 20, 2025 at 5:01 PM
Enterprise Paranoia 🏢

Big Tech is terrified of "AI Impostors." 81% of interviewers suspect cheating.

Their solution? "Proof of Work." They know AI can solve it. They want you to have the raw cognitive bandwidth to solve Invert Binary Tree w/out a robot whispering in your ear.
November 20, 2025 at 5:01 PM
Follow me for more honest takes on the engineering industry.

And read my full breakdown here: https://dev.to/michaelsolati/im-getting-serious-deja-vu-but-this-time-its-different-17f4
November 18, 2025 at 5:01 PM
TL;DR

- The market isn't just saturated; it's compressed.
- Layoffs are funding GPU purchases ($170B+ shift).
- The "How" (coding) is commoditized.
- The "Why" (Engineering & Verification) is the gold standard.
November 18, 2025 at 5:01 PM
The "Prompt-Jockey" is a myth. The real winner is the AI-Assisted Engineer.

The new skill isn't writing code. It's: ✅ Debugging AI hallucinations. ✅ Architecting prompt systems. ✅ Owning the outcome when the black box fails.
November 18, 2025 at 5:01 PM