It looks nice and lots of designers will be using it.
Add effect > glass > play with params 🫰
- Built with Craft
- One-click insert
- Auto-adapts to your design system on insert
- Quality > quantity
- Accessible and SEO’d
Talk soon 😉
The title experiment was a success.
“Custom frontend without custom code” converts at 7.45% 🤯
Ideating these has been extremely challenging, specifically balancing clarity, SEO, punchiness, target audience, and a unique value proposition.
- `pointer-events: none` so hovering over the globe registers as hovering over the stats
- `calc()` for a consistent offset when moving things around
- Using `--ease-squish-3` from Open Props
- The only custom CSS was keyframes for the live indicator
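A minimal sketch of those details. The class names and `--offset` value here are hypothetical; only `--ease-squish-3` is a real Open Props variable:

```css
/* Let pointer events pass through the globe so hovering it
   registers as hovering the stats underneath. */
.globe {
  pointer-events: none;
}

/* calc() keeps a consistent offset when nudging things around;
   --offset is a made-up custom property for this sketch. */
.stat {
  --offset: 12px;
  translate: 0 calc(var(--offset) * -1);
  transition: translate 0.3s var(--ease-squish-3); /* Open Props easing */
}

/* The only custom CSS: keyframes for the pulsing live indicator. */
@keyframes pulse {
  50% { opacity: 0.4; }
}
.live-dot {
  animation: pulse 1.5s ease-in-out infinite;
}
```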
To achieve extremely high perf, I made a cron job that fetches data from 4 different sources (Cloudflare, Discord, GitHub, and Supabase).
Most sources require multiple fetches to get the required data.
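The shape of that cron job can be sketched like this. The source names match the post, but the fetchers and merge step are stand-ins, not the actual implementation:

```javascript
// Each source may need several fetches; model a source as an async
// function that returns a partial stats object.
async function collectStats(sources) {
  // Hit every source in parallel and merge the partial results.
  const parts = await Promise.all(
    Object.values(sources).map((fetcher) => fetcher())
  );
  return Object.assign({}, ...parts);
}

// Stub fetchers stand in for Cloudflare, Discord, GitHub, and Supabase;
// the real job would call fetch() one or more times per source.
collectStats({
  cloudflare: async () => ({ requests: 1200 }),
  discord: async () => ({ members: 340 }),
  github: async () => ({ stars: 980 }),
  supabase: async () => ({ signups: 57 }),
}).then((stats) => console.log(stats));
```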
I paste in the API URL, it fetches the data, and then I connect or bind that data to my text blocks.
I personally export the non-clipped image and, on larger screens, show the full image outside the container.
1 Feed it CSS from Figma
2 AI suggests the Open Props color groups
3 AI maps colors to palette variables (`--gray-12`)
4 Human intervention lets you customize the mappings
5 AI maps the color vars to semantic vars by interpreting their usage from the CSS
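Steps 3 and 4 could look roughly like this. The palette subset and override shape are assumptions for illustration, not the actual implementation (the hex values follow the Open Color scale that Open Props uses):

```javascript
// Step 3: map raw CSS colors to palette variables by lookup against
// a tiny, hypothetical subset of the Open Props gray scale.
const palette = {
  "#f8f9fa": "--gray-0",
  "#868e96": "--gray-6",
  "#212529": "--gray-9",
};

function mapColorsToPalette(colors, overrides = {}) {
  // Step 4: human overrides win over the automatic mapping;
  // anything unknown falls through unchanged.
  return colors.map(
    (c) => overrides[c] ?? palette[c.toLowerCase()] ?? c
  );
}
```

A later pass (step 5) would then rename the palette vars to semantic ones based on how the CSS actually uses them.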
⌘ + Shift + D
All with the left hand.
This shortcut is for toggling to Design mode, but if you’re already in it, it’ll take you to Preview.
Example: Display "1 template" or "6 templates" dynamically with this expression:
`${total} template${+total == 1 ? '' : 's'}`
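The unary `+` is what makes the expression work whether `total` arrives as a number or a string:

```javascript
// + coerces a string count to a number before the comparison,
// so "1" and 1 both take the singular branch.
const label = (total) => `${total} template${+total == 1 ? "" : "s"}`;

console.log(label("1")); // 1 template
console.log(label(6)); // 6 templates
```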
This makes it a lot easier to take a screenshot with the desired focal point in the middle.
Includes 7 pages
Added support for Local Styles! Including:
- Grid template columns based on what it sees
- Flex with direction and Craft gap vars
- Background colors (on `section`)
Also added a CLI and some other helpful tasks 😉
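Roughly what the grid/flex detection could emit. The function, the layout shape, and `--gap-m` are hypothetical here; only the idea (grid columns from what it sees, flex with direction and a Craft gap var) comes from the post:

```javascript
// Turn a detected layout description into a style object.
function layoutToStyles(layout) {
  if (layout.type === "grid") {
    // Grid template columns based on what the converter sees.
    return {
      display: "grid",
      gridTemplateColumns: `repeat(${layout.columns}, 1fr)`,
    };
  }
  // Flex with direction and a Craft gap variable (name assumed).
  return {
    display: "flex",
    flexDirection: layout.direction ?? "row",
    gap: "var(--gap-m)",
  };
}
```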
Now using the OpenAI API in the local script to convert the image to HTML.
Then the usual code converts the HTML to Webstudio AST and copies it to the clipboard.
Note: I trimmed the video, the API takes longer than that :)
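A sketch of the request such a script might build. The model choice and prompt are assumptions, not the actual script; the payload follows the Chat Completions image-input format:

```javascript
// Build a Chat Completions request that asks the model for HTML back.
function buildVisionRequest(imageDataUrl) {
  return {
    model: "gpt-4o", // assumed; any vision-capable model works
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Convert this design screenshot to semantic HTML." },
          { type: "image_url", image_url: { url: imageDataUrl } },
        ],
      },
    ],
  };
}

// The local script would then POST it, e.g.:
// fetch("https://api.openai.com/v1/chat/completions", {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify(buildVisionRequest(dataUrl)),
// });
```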
- ChatGPT for screenshot to HTML with attributes.
- Custom script for HTML to Webstudio AST.
ChatGPT is very reliable for the HTML but too inconsistent with the AST.
Also added support for:
- Alt text
- Grid columns
- Cards
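The custom HTML-to-AST step boils down to walking the tree and emitting instance nodes. A toy version, where the node shape and component map are illustrative rather than the real Webstudio AST:

```javascript
// Map HTML tags to (hypothetical) component names; real Webstudio differs.
const TAG_TO_COMPONENT = {
  img: "Image",
  a: "Link",
  h1: "Heading",
  div: "Box",
};

// Convert one parsed element (tag, attrs, children) into an instance
// node, carrying attributes like alt text through as props.
function toInstance(el) {
  return {
    type: "instance",
    component: TAG_TO_COMPONENT[el.tag] ?? "Box",
    props: el.attrs ?? {},
    children: (el.children ?? []).map(toInstance),
  };
}
```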
It outputs Craft-compliant* Webstudio AST (the format used for pasting).
It:
- Adds components
- Applies existing tokens (section, container, grid, headings, buttons)
- Renames instance labels
- Sets the box tag, like `h2` or `section`
*Most of the time 😉
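Conceptually, the Craft pass is a second transform over the AST. A hedged sketch, where the token names come from the list above but the instance shape is made up:

```javascript
// Tokens to attach per tag; names taken from the post, mapping assumed.
const TOKENS = { section: "section", h2: "headings", button: "buttons" };

// Rename the label, attach existing tokens, and set the box tag,
// recursing through children. Not the real Webstudio format.
function applyCraft(instance) {
  return {
    ...instance,
    label: instance.label ?? instance.tag, // renamed instance label
    tokens: TOKENS[instance.tag] ? [TOKENS[instance.tag]] : [],
    children: (instance.children ?? []).map(applyCraft),
  };
}
```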