1. Follow me @itsalexzajac for more content on Software and AI
2. Check out Hungry Minds: newsletter.hungryminds.dev
3. RT the tweet below to share this thread with your audience
But for technical demos, sprint reviews, and architecture docs?
→ It's faster
→ More maintainable
→ Better than what most of us were making anyway
After:
→ Write specs in Notion
→ Hit the Gamma API endpoint
→ Get the presentation in 3 minutes
→ Share a live link that updates with the source docs
Time on slides: 15 minutes, not 7 hours.
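For the curious, the API step looks roughly like this. A minimal sketch, where the endpoint, payload fields, and auth header are assumptions, not Gamma's documented contract — check their API docs for the real thing:

```python
# Hypothetical sketch: push Notion-exported Markdown to Gamma's
# generation API. URL, payload shape, and auth are assumptions.
import requests

GAMMA_API_URL = "https://api.gamma.app/v1/generations"  # hypothetical
API_KEY = "your-gamma-api-key"

def generate_deck(markdown_spec: str) -> str:
    """Send the spec text, get back a shareable presentation URL."""
    resp = requests.post(
        GAMMA_API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"inputText": markdown_spec, "format": "presentation"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["url"]  # the live link that tracks the source doc
```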
Before:
→ Write specs in Notion
→ Spend 6+ hours recreating in PowerPoint
→ Update slides manually when specs change
The work itself: 3 days.
Time to make the slides?
7 hours.
But we're still manually dragging boxes in slides like it's 2005.
↳ Network overhead: Retries during server load spikes add extra traffic
↳ AST parsing edge cases: Language-specific syntax quirks
↳ Embedding inversion risks: Theoretical code leaks from vectors (mitigated by short TTLs)
↳ Bandwidth savings: Sync only delta changes (Git-like)
↳ Cache optimization: Hash-indexed embeddings enable instant re-indexing
↳ Data integrity: Tamper-proof codebase fingerprints
↳ Query vector DB (Turbopuffer) for relevant chunks
↳ Inject top matches into LLM context
↳ Generate answers using GPT-4 + codebase context
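Here's a minimal sketch of that retrieve-then-generate loop. `search_vector_db` is a hypothetical stand-in for the Turbopuffer query; the model names come from the thread:

```python
# Retrieve-then-generate sketch. `search_vector_db` is a placeholder
# for the Turbopuffer lookup; its signature is an assumption.
from openai import OpenAI

client = OpenAI()

def search_vector_db(query_vector: list[float], top_k: int = 8) -> list[str]:
    """Stand-in for the vector-DB query; returns the top code chunks."""
    raise NotImplementedError  # wire up your Turbopuffer client here

def answer(question: str) -> str:
    # 1. Embed the question with the same model used for the code chunks
    q_vec = client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding

    # 2. Pull the most similar chunks out of the vector DB
    chunks = search_vector_db(q_vec)

    # 3. Inject the matches into the LLM context and generate an answer
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using this code:\n\n" + "\n---\n".join(chunks)},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```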
↳ Uses OpenAI’s text-embedding-3-small or custom code-specific models
↳ Obfuscates file paths with client-side encryption (e.g., src/utils.py → a1b2/c3d4/e5f6)
↳ No raw code is stored; embeddings are purged after each request
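The exact obfuscation scheme isn't public, but a keyed hash per path segment gets you the same effect. A sketch, where the key handling, segment splitting, and digest length are all my assumptions:

```python
# Illustrative only: obfuscate paths client-side with a keyed hash.
# The key never leaves the machine; equal paths still map to equal
# opaque names, so the server can group chunks without seeing them.
import hashlib
import hmac
import re

SECRET_KEY = b"client-side-secret"  # assumed local-only key

def obfuscate_path(path: str) -> str:
    """src/utils.py -> opaque segments like a1b2/c3d4/e5f6."""
    parts = re.split(r"[/.]", path)  # assumed segmentation
    return "/".join(
        hmac.new(SECRET_KEY, part.encode(), hashlib.sha256).hexdigest()[:4]
        for part in parts
    )

print(obfuscate_path("src/utils.py"))  # e.g. 'a1b2/c3d4/e5f6'-style output
```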
↳ Local hashing: Compute SHA-256 hashes for all code chunks
↳ Tree sync: Compare root hash with server to identify changed files
↳ Incremental uploads: Only modified chunks get re-embedded
Result: 90% fewer uploads vs. full re-indexing.
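A sketch of the delta-detection step, assuming client and server each hold a {chunk_id: hash} map. The real system walks a Merkle tree instead of diffing flat maps — see the Merkle sketch further down:

```python
# Only chunks whose hash differs from the server's copy get uploaded
# and re-embedded. Chunk ids and the flat-map shape are assumptions.
import hashlib

def hash_chunk(code: str) -> str:
    return hashlib.sha256(code.encode()).hexdigest()

def chunks_to_upload(local: dict[str, str], server: dict[str, str]) -> list[str]:
    """Return ids of chunks that are new or modified locally."""
    return [cid for cid, digest in local.items() if server.get(cid) != digest]

local = {"utils.py:0": hash_chunk("def f(): ..."), "app.py:0": hash_chunk("x = 1")}
server = {"utils.py:0": hash_chunk("def f(): ..."), "app.py:0": "stale-hash"}
print(chunks_to_upload(local, server))  # ['app.py:0'] -> only this re-embeds
```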
↳ Code splitting: Use tree-sitter to parse code into logical AST blocks (functions, classes)
↳ Token limits: Merge sibling AST nodes without exceeding model token capacity
↳ Semantic boundaries: Avoid mid-function splits for better embeddings
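Cursor does this with tree-sitter across many languages. For a self-contained sketch, here's the same idea using Python's stdlib `ast`, with tokens approximated as whitespace-separated words — both simplifications are mine:

```python
# AST-based chunking sketch: split a module into top-level nodes, then
# merge sibling nodes until an assumed per-chunk token budget is hit.
import ast

MAX_TOKENS = 256  # assumed budget per chunk

def chunk_module(source: str) -> list[str]:
    tree = ast.parse(source)
    chunks, current, current_tokens = [], [], 0
    for node in tree.body:  # top-level functions, classes, statements
        segment = ast.get_source_segment(source, node) or ""
        tokens = len(segment.split())  # crude token estimate
        # Flush before overflowing; never split inside a function or
        # class, so chunks keep their semantic boundaries.
        if current and current_tokens + tokens > MAX_TOKENS:
            chunks.append("\n\n".join(current))
            current, current_tokens = [], 0
        current.append(segment)
        current_tokens += tokens
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```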
↳ Hierarchical hash chains that fingerprint data blocks
↳ Leaf nodes = hash of code chunks
↳ Parent nodes = hash of child hashes
↳ Root hash = single fingerprint for the entire codebase
Key benefit: Detect changes instantly by comparing root hashes.
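Here's a minimal Merkle-root construction over chunk hashes, stdlib only. Duplicating the last node on odd levels is one common convention, assumed here:

```python
# Build a single root fingerprint from per-chunk hashes.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes: list[bytes]) -> bytes:
    assert leaf_hashes, "need at least one leaf"
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:            # odd count: duplicate the last node
            level.append(level[-1])
        # Parent node = hash of its two children's hashes, concatenated
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

leaves = [h(chunk.encode()) for chunk in ["def f(): ...", "class A: ..."]]
print(merkle_root(leaves).hex())  # one fingerprint for the whole codebase
```

If any chunk changes, its leaf hash changes, and that change propagates up to the root, so comparing two root hashes tells you instantly whether anything differs.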
↳ Use gotos only when absolutely necessary.
✅ Avoiding them keeps your code easier to follow and debug.
↳ Keep your functions small and focused on a single task.
✅ Smaller functions are easier to read, test, and reuse.
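A contrived Python example of the split (all names are mine):

```python
# One do-everything function becomes three focused ones, each easy to
# read, test, and reuse on its own.
def parse_order(raw: str) -> dict:
    sku, qty = raw.split(",")
    return {"sku": sku, "qty": int(qty)}

def validate_order(order: dict) -> None:
    if order["qty"] <= 0:
        raise ValueError("quantity must be positive")

def handle_order(raw: str) -> dict:
    order = parse_order(raw)
    validate_order(order)
    return order
```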
↳ Declare variables as close to their usage as possible.
✅ This reduces the mental effort needed to track their values.
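A contrived example of the difference (names are mine):

```python
# `discount` is introduced right where it's used, not at the top of
# the function, so there's less state to track while reading.
def checkout(prices: list[float], is_member: bool) -> float:
    subtotal = sum(prices)
    discount = 0.1 if is_member else 0.0  # declared at point of use
    return subtotal * (1 - discount)
```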
↳ Choose descriptive and unique names for your variables.
✅ Avoid names that look similar, like `i` and `j`, to prevent confusion.
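The same loop with clearer names (a contrived sketch):

```python
# `row` and `col` can't be mixed up the way `i` and `j` can.
grid = [[1, 2], [3, 4]]
total = 0
for row in range(len(grid)):
    for col in range(len(grid[row])):
        total += grid[row][col]
print(total)  # 10
```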