Matteo Collina
@nodeland.dev
Platformatic.dev Co-Founder & CTO, Node.js TSC member, Lead maintainer Fastify, Board OpenJS, Conference Speaker, Ph.D. Views are my own.
If you've ever asked yourself "Why is this microservice talking to that one?" - this episode will help you see how the next generation of workflow tools is making distributed logic finally... durable.

🎧 Register now: streamyard.com/watch/uzecee...
📅 Nov 5th
October 31, 2025 at 4:59 PM
The biggest revelation? You might not need to choose just one.

We explore how teams are mixing these approaches - using Kafka for event streams, Temporal for complex orchestration, and "use workflow" for simpler stateful functions.

It's not either/or anymore.
October 31, 2025 at 4:59 PM
Real talk: What actually matters when choosing between these?

🔧 Temporal: When you need battle-tested durability
⚡ Vercel's "use workflow": When you want language-level simplicity
🎯 Kafka: When event streaming is your actual use case (not just coordination)
October 31, 2025 at 4:59 PM
In our latest episode with @lucamaraschi and @matteocollina, we break down:

✅ Why every complex Node.js system eventually reimplements a workflow engine
✅ The hidden costs of Kafka-style event choreography
✅ How durable execution changes everything
October 31, 2025 at 4:59 PM
The workflow orchestration landscape is evolving FAST:

📍 Traditional: Kafka-style event choreography
📍 Modern: Temporal's "workflow as code"
📍 Cutting edge: Vercel's "use workflow" directive

Each solves the same problem: How do we make distributed logic maintainable?
October 31, 2025 at 4:59 PM
Here's the pattern we all know too well:

Day 1: "We'll just use Kafka for events"
Day 30: Custom retry logic everywhere
Day 90: Building a state machine library
Day 365: You've accidentally built a worse version of a workflow engine

Sound familiar?
October 31, 2025 at 4:59 PM
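That Day-365 engine has one core trick at its heart: journal each step's result so a crashed run can be replayed without redoing side effects. A toy sketch of the idea (all names invented; real engines like Temporal add timers, queues, and versioning on top):

```javascript
// Toy durable-execution core: completed steps are journaled, so a
// rerun after a crash replays their results instead of re-executing
// side effects (e.g. charging a card twice).
function createStepRunner(journal = new Map()) {
  return async function step(id, fn) {
    if (journal.has(id)) return journal.get(id); // replay from the journal
    const result = await fn();                   // first run: do the work
    journal.set(id, result);                     // persist before continuing
    return result;
  };
}

async function orderWorkflow(step) {
  const payment = await step('charge', async () => ({ charged: 100 }));
  const label = await step('ship', async () => ({ tracking: 'abc123' }));
  return { payment, label };
}

async function main() {
  const journal = new Map(); // a real engine persists this durably
  await orderWorkflow(createStepRunner(journal));
  // Simulated restart: same journal, so no step executes twice.
  const rerun = await orderWorkflow(createStepRunner(journal));
  console.log(rerun.payment.charged); // prints 100
}
main();
```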
This is available NOW in Platformatic Watt.

It's the principle of least privilege, but actually simple to implement. Just declare what each service needs.

Full blog post: blog.platformatic.dev/introducing-...
October 28, 2025 at 3:36 PM
My favorite use case: Read-only analytics services.

Just don't give them write permissions at all. They literally cannot modify files, even if someone pushes bad code.

Perfect for services that should only consume data, never produce it.
October 28, 2025 at 3:36 PM
The beauty? Your dependencies still work!

We automatically grant read access to all your dependencies. You focus on your app permissions, we handle the boring stuff.

Environment variables work too!
October 28, 2025 at 3:36 PM
This isn't about stopping malicious actors (that's a different problem).

It's a "seat belt" approach - preventing your trusted code from doing dumb things. Like when you typo a path and suddenly you're reading a sensitive file.

Catches bugs before they become incidents.
October 28, 2025 at 3:36 PM

How it works is dead simple. Just add a permissions block to your Watt config. Your service can now ONLY access those paths. Try to read elsewhere? ERR_ACCESS_DENIED.
October 28, 2025 at 3:36 PM
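The linked blog post has the documented schema; as a rough sketch only, such a permissions block might look something like this (the key names here are illustrative guesses, not the real Watt config API):

```jsonc
{
  "services": [
    {
      "id": "reporting",
      // Hypothetical shape -- check the blog post for the actual
      // schema before copying any of these keys.
      "permissions": {
        "fs": {
          "read": ["./reports", "./templates"],
          "write": ["./reports/output"]
        }
      }
    }
  ]
}
```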
Ever had that moment when your reporting service accidentally writes to production configs? Or when a public API reads files it shouldn't?

Yeah, we've all been there. That's why we built this.

Each service now runs in its own permission sandbox. No accidents.
October 28, 2025 at 3:36 PM
Thanks to @simonesanfradev for creating the comprehensive benchmarks that confirmed these performance characteristics!

Full article with all the details, code examples, and TypeScript tips:

adventures.nodeland.dev/archive/noop...
October 22, 2025 at 3:59 PM
My recommendation:

Don't prematurely optimize. Write code with optional chaining where it makes sense for safety and readability.

But when profiling shows a bottleneck? Now you know the cost and the solution.

Readable code > micro-optimizations. Until it matters.
October 22, 2025 at 3:59 PM
When noops matter:

• Performance-critical hot paths
• High-frequency operations
• Tight loops
• Code running thousands of times per request

Even at a few thousand calls/request, that 6-9x difference adds up fast.

Profile first, optimize second.
October 22, 2025 at 3:59 PM
But should you care?

Context matters: even "slow" optional chaining runs at 106M+ ops/sec.

For most apps, this is negligible. Use optional chaining for external data, APIs, and normal business logic. Safety first.
October 22, 2025 at 3:59 PM
Example TypeScript fix:
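(The original post attached the code as an image; here is a minimal reconstruction of the kind of fix meant, with illustrative names:)

```typescript
// Before: the optional property forces `?.` at every call site,
// even though the runtime always provides a logger.
interface LooseOptions {
  logger?: { info(msg: string): void };
}

// After: normalize once at the boundary so the type matches reality.
interface StrictOptions {
  logger: { info(msg: string): void };
}

const noopLogger = { info(_msg: string): void {} };

function normalize(opts: LooseOptions): StrictOptions {
  return { logger: opts.logger ?? noopLogger };
}

const opts = normalize({});
opts.logger.info('hello'); // plain call, no `?.` needed
```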
October 22, 2025 at 3:59 PM
The TypeScript trap:

TypeScript's type system encourages defensive coding. You mark properties as optional (`prop?:`) even when the runtime guarantees they exist.

This leads to unnecessary `?.` everywhere "just to be safe" and satisfy the type checker.

Fix your types to match reality!
October 22, 2025 at 3:59 PM
Real-world pattern: Fastify's logger

Instead of checking `logger?.info?.()` everywhere, Fastify uses abstract-logging to provide noop functions upfront.

The key technique: **provide noops upfront rather than check for existence later**.

V8 inlines = zero cost. 🎯
October 22, 2025 at 3:59 PM
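A stripped-down sketch of that pattern (abstract-logging itself covers more log levels and edge cases):

```javascript
// Provide noops upfront: the hot path calls log.info unconditionally,
// and V8 inlines the trivial noop away when no real logger is supplied.
function noop() {}
const noopLogger = { info: noop, warn: noop, error: noop, debug: noop };

function createClient({ logger } = {}) {
  const log = logger || noopLogger; // decide once, at construction
  return {
    fetchData() {
      log.info('fetching data'); // no `logger?.info?.()` on the hot path
      return 42;
    },
  };
}

const silent = createClient();   // no logger supplied: noops, effectively free
console.log(silent.fetchData()); // prints 42
```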
Why does this happen?

Noop: V8 inlines trivial functions. The function call *completely disappears* in optimized code. Zero overhead.

Optional chaining: Property lookup + null/undefined check at runtime. V8 can't optimize this away because the checks must happen.
October 22, 2025 at 3:59 PM
The numbers (5M iterations):

• Noop: 939M ops/sec
• Optional chaining (empty): 134M ops/sec (7x slower)
• Optional chaining (with method): 149M ops/sec (6.3x slower)
• Deep optional chaining: 106M ops/sec (8.8x slower)

Yes, you read that right. 6.3x to 8.8x slower.
October 22, 2025 at 3:59 PM
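A minimal sketch of this kind of micro-benchmark (methodology assumed from the post; absolute numbers vary by machine and V8 version, so treat the ratio, not the figures, as the signal):

```javascript
// Compare an unconditional noop call against optional chaining on an
// object that has no such method (the null/undefined check runs every
// iteration and cannot be optimized away).
function noop() {}
const hasNoop = { log: noop };
const empty = {};

const N = 5_000_000;

function bench(label, fn) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < N; i++) fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(1)} ms for ${N} calls`);
  return ms;
}

bench('noop call        ', () => { hasNoop.log(); });
bench('optional chaining', () => { empty.log?.(); });
```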
The setup is simple. The performance difference? Massive.
October 22, 2025 at 3:59 PM