🎧 Register now: streamyard.com/watch/uzecee...
📅 Nov 5th
We explore how teams are mixing these approaches - using Kafka for event streams,
Temporal for complex orchestration, and "use workflow" for simpler stateful functions.
It's not either/or anymore.
🔧 Temporal: When you need battle-tested durability
⚡ Vercel's "use workflow": When you want language-level simplicity
🎯 Kafka: When event streaming is your actual use case (not just coordination)
✅ Why every complex Node.js system eventually reimplements a workflow engine
✅ The hidden costs of Kafka-style event choreography
✅ How durable execution changes everything
📍 Traditional: Kafka-style event choreography
📍 Modern: Temporal's "workflow as code"
📍 Cutting edge: Vercel's "use workflow" directive
Each solves the same problem: How do we make distributed logic maintainable?
Day 1: "We'll just use Kafka for events"
Day 30: Custom retry logic everywhere
Day 90: Building a state machine library
Day 365: You've accidentally built a worse version of a workflow engine
Sound familiar?
It's the principle of least privilege, but actually simple to implement. Just declare what each service needs.
Full blog post: blog.platformatic.dev/introducing-...
Just don't give them write permissions at all. They literally cannot modify files, even if someone pushes bad code.
Perfect for services that should only consume data, never produce it.
We automatically grant read access to all your dependencies. You focus on your app permissions, we handle the boring stuff.
Environment variables work too!
It's a "seat belt" approach - preventing your trusted code from doing dumb things. Like when you typo a path and suddenly you're reading a sensitive file.
Catches bugs before they become incidents.
How it works is dead simple. Just add a permissions block to your Watt config. Your service can now ONLY access those paths. Try to read elsewhere? ERR_ACCESS_DENIED.
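A sketch of what that might look like (field names here are illustrative, not the documented Watt schema — see the blog post for the exact shape):

```json
{
  "permissions": {
    "fs": {
      "read": ["./data", "./config"],
      "write": ["./tmp"]
    }
  }
}
```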
Yeah, we've all been there. That's why we built this.
Each service now runs in its own permission sandbox. No accidents.
Full article with all the details, code examples, and TypeScript tips:
adventures.nodeland.dev/archive/noop...
Don't prematurely optimize. Write code with optional chaining where it makes sense for safety and readability.
But when profiling shows a bottleneck? Now you know the cost and the solution.
Readable code > micro-optimizations. Until it matters.
• Performance-critical hot paths
• High-frequency operations
• Tight loops
• Code running thousands of times per request
Even at a few thousand calls/request, that 6-9x difference adds up fast.
Profile first, optimize second.
Context matters: even "slow" optional chaining runs at 106M+ ops/sec.
For most apps, this is negligible. Use optional chaining for external data, APIs, and normal business logic. Safety first.
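A typical "safety first" case — external data whose fields genuinely may be missing (a minimal TypeScript sketch; the type and function names are illustrative):

```typescript
// External/API data: every level may legitimately be absent,
// so optional chaining is exactly the right tool here.
interface ApiResponse {
  user?: { profile?: { displayName?: string } };
}

// One `?.` chain plus a fallback beats three nested if-checks,
// and the per-call cost is negligible at normal request rates.
function displayName(res: ApiResponse): string {
  return res.user?.profile?.displayName ?? "anonymous";
}
```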
TypeScript's type system encourages defensive coding. You mark properties as optional (`?`) even when they're guaranteed to exist at runtime.
This leads to unnecessary `?.` everywhere "just to be safe" and satisfy the type checker.
Fix your types to match reality!
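A minimal sketch of that fix — validate once at the boundary, then use a strict type downstream (`parseUser` is a hypothetical helper, not from any library):

```typescript
// Before: an overly defensive type that forces `?.` everywhere.
interface LooseUser {
  name?: string; // marked optional "just to be safe"
}

// After: a strict type that matches runtime reality.
interface User {
  name: string; // guaranteed by the boundary check below
}

// Validate once at the edge; throw if the guarantee doesn't hold.
function parseUser(raw: LooseUser): User {
  if (raw.name === undefined) throw new Error("user.name is required");
  return { name: raw.name };
}

// Downstream code needs no `?.` — the type checker is satisfied
// because the type now tells the truth.
function greet(user: User): string {
  return `Hello, ${user.name.toUpperCase()}!`;
}
```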
Instead of checking `logger?.info?.()` everywhere, Fastify uses abstract-logging to provide noop functions upfront.
The key technique: **provide noops upfront rather than check for existence later**.
V8 inlines = zero cost. 🎯
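The pattern boils down to something like this (a minimal hand-rolled sketch of the idea, not the actual abstract-logging package):

```typescript
// Hand every consumer a logger whose methods always exist.
interface Logger {
  info: (msg: string) => void;
  error: (msg: string) => void;
}

const noop = (): void => {};

// Fill any missing methods with noops *once*, at setup time.
function withLogger(logger?: Partial<Logger>): Logger {
  return {
    info: logger?.info ?? noop,
    error: logger?.error ?? noop,
  };
}

// Hot path: plain calls, no `logger?.info?.()` checks. With logging
// disabled, V8 can inline the noop and the call costs nothing.
function handleRequest(log: Logger): string {
  log.info("request received");
  return "ok";
}
```

The one-time `?.` in `withLogger` runs at setup; the per-request path stays check-free.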
Noop: V8 inlines trivial functions. The function call *completely disappears* in optimized code. Zero overhead.
Optional chaining: Property lookup + null/undefined check at runtime. V8 can't optimize this away because the checks must happen.
• Noop: 939M ops/sec
• Optional chaining (empty): 134M ops/sec (7x slower)
• Optional chaining (with method): 149M ops/sec (6.3x slower)
• Deep optional chaining: 106M ops/sec (8.8x slower)
Yes, you read that right. 6.3x to 8.8x slower.
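You can reproduce the shape of this comparison with a rough micro-benchmark (an illustrative sketch, not the original benchmark; absolute numbers depend on your machine and Node version):

```typescript
const noop = (): void => {};
const maybe: { fn?: () => void } = {}; // property absent on purpose

// Time `f` over many iterations and report ops/sec.
function bench(label: string, f: () => void, iterations = 1e7): number {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) f();
  const ms = Math.max(1, Date.now() - start); // avoid divide-by-zero
  const opsPerSec = iterations / (ms / 1000);
  console.log(`${label}: ~${Math.round(opsPerSec / 1e6)}M ops/sec`);
  return opsPerSec;
}

bench("noop call", () => { noop(); });
bench("optional chaining (empty)", () => { maybe.fn?.(); });
```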