Mathis obadia
mathisob.bsky.social
Working at http://www.pubgen.ai for local newsrooms. Building http://askair.ai in public
Yes, Cognito was the issue for me also; the limit is on the Lambda payload size for the full headers. You could try removing the parts of the cookies you don't need in the CloudFront function, since it gets called before the Lambda.
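A minimal sketch of the idea: a CloudFront Function attached as a viewer-request handler that drops cookies the origin Lambda does not need, so the forwarded headers stay small. The allow-list name below is a placeholder, not from the original thread.

```javascript
// Hypothetical allow-list: keep only the cookies your backend actually reads.
var KEEP = { 'session-id': true };

// CloudFront Function (viewer-request): runs before the request reaches
// the Lambda, so oversized Cognito/analytics cookies never get forwarded.
function handler(event) {
  var request = event.request;
  var cookies = request.cookies || {};
  for (var name in cookies) {
    // Delete anything not on the allow-list from the forwarded request.
    if (!KEEP[name]) {
      delete cookies[name];
    }
  }
  return request;
}
```

This is a sketch under the assumption that the large cookies are only needed client-side, as described above; the browser keeps its cookies either way, only the copy forwarded to the origin is trimmed.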
April 14, 2025 at 11:07 AM
If I remember correctly, I changed the distribution's behaviour to stop forwarding cookies to the Lambda. The issue was that the cookie string was huge but only needed on the frontend, so removing it solved it. Let me know if that makes sense for you!
April 14, 2025 at 9:47 AM
You already have a few translations but here is my take

Découvrez comment créer des interactions charmantes et des touches pleines de magie grâce à CSS, JavaScript, SVG et Canvas.

Je vous dévoile toutes mes astuces ici !

I think this sounds more idiomatic
February 4, 2025 at 10:27 PM
It’s not really a refutation of your point, but I think LLMs are quite good at turning a small amount of text (bullet points and short, poorly written sentences) into a more structured, complete text. Agreed that you can’t use that directly without human intervention (proofreading at the very least).
January 20, 2025 at 10:40 PM
I want Devin to open a PR any time an issue is detected in the SST console; with access to the codebase and the stack trace, it should have everything it needs. Do you plan on integrating something like that?
December 20, 2024 at 6:42 PM
lol I heard Anthropic specifically had to instruct Claude to avoid vim in their SWE-bench agent, on the great @latentspacepod.bsky.social
December 11, 2024 at 10:40 PM
If you ever see {"Message":"Request must be smaller than 6291456 bytes for the InvokeFunction operation"}
It might mean that your headers are larger than 10 KB! But you would never guess that by looking at the error.
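A quick way to check whether oversized headers could be the culprit is to measure their total byte size locally. This is an illustrative sketch, not part of the original post; the header values are placeholders.

```javascript
// Rough byte size of a header map, counting each entry as "Name: value\r\n".
function headersByteSize(headers) {
  var total = 0;
  for (var name in headers) {
    total += Buffer.byteLength(name + ': ' + headers[name] + '\r\n');
  }
  return total;
}

// Illustration: an 11 KB cookie string alone already exceeds 10 KB.
var size = headersByteSize({
  cookie: 'a'.repeat(11000),
  'user-agent': 'Mozilla/5.0',
});
console.log(size, size > 10 * 1024 ? 'over 10 KB' : 'ok');
```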
December 10, 2024 at 10:17 PM
Not sure if you want to add it to the scope but auth for the published blogs was very tricky to implement.
December 5, 2024 at 2:28 PM
Can you add multiplayer on top of the autocomplete everywhere to the wish list?
December 4, 2024 at 8:53 AM
Thank you! Exactly what I needed. Glad to see I’m personally responsible for 4 App Router websites in the top 1M (with passing CWV)
December 2, 2024 at 9:41 PM
I found a website on Similarweb with a 1.1M rank and around 40k page views per month, so I think it should be in that ballpark if we can trust their ranking
December 2, 2024 at 9:07 PM
That’s cool! Any idea how much traffic a website needs in order to be in the top 1M? I’m wondering if some of our sites show up in those stats.
December 2, 2024 at 8:16 PM
I don’t know if you count it in the category of LLM-powered UI generation, but I let Cursor design my frontend and it’s mostly better than what I would have come up with by myself
November 30, 2024 at 6:27 PM
The switching to Chinese in the reasoning step is interesting. I wonder if training models to have a reasoning step in different languages changes performance, as some languages use fewer tokens to express the same idea.
November 28, 2024 at 9:18 AM
That’s so cool! If you choose to moderate your thread and hide a reply, will this also hide it from your blog? Also, the really cool but more complicated feature would be to allow users to post replies directly from the blog.
November 25, 2024 at 10:28 PM
Really excited about this. Ideally there would be something as easy to integrate and use as Facebook’s Comments social plugin: developers.facebook.com/docs/plugins... I would be interested in helping build that if anyone wants to join.
November 25, 2024 at 7:33 AM
Yes, it seems like that’s the way most people are doing it. I need to look more into it.
November 21, 2024 at 12:46 PM
Getting automatic, up-to-date evals out of your production logs is a challenge! I would be interested in learning how people have achieved that.
November 21, 2024 at 12:02 PM
Oh right, so basically you’re saying you’re not replacing all control flow with LLMs, but something that would have been a very complex function (a tangle of control-flow logic) is now an LLM call?
November 20, 2024 at 4:22 PM
Did you listen to the @latentspacepod.bsky.social podcast with the lindy.ai creator? He talks about this and how having the control flow not be an LLM improves accuracy and makes agents usable for a lot of tasks that would otherwise be very complicated to describe solely with prompts.
November 20, 2024 at 2:21 PM
sst.dev I would rather use the same thing whether it’s “just a simple app” or “turned out to be not so simple”
November 19, 2024 at 7:50 PM