Piston Developers 🍿🥤🐸
@pistondeveloper.bsky.social
A modular game engine written in Rust https://piston.rs
Research branch: https://advancedresearch.github.io

Discord (Piston): https://discord.gg/TkDnS9x
Discord (AdvancedResearch): https://discord.gg/JkrhJJRBR2
Now, this is a hard problem to solve. I don't know yet whether I can solve it. I can try.

I like working with animation, so this is a nice test case for experimenting with new production pipelines. There are fewer requirements in animation than in games.
November 25, 2025 at 2:11 PM
The iteration cycle on design slows down because the pipeline that feeds the game engine has to do more work to adapt to the engine's architecture.

That's why it's a good idea to decouple content production from the game engine as much as possible, to avoid making these decisions upfront.
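A minimal Rust sketch of that decoupling (all names here are hypothetical): the pipeline only ever produces an engine-agnostic type, and the engine consumes it through one thin adapter.

```rust
/// Hypothetical engine-agnostic output of the content pipeline.
/// The pipeline only produces this type; it knows nothing about the engine.
pub struct SceneData {
    pub voxels: Vec<[u32; 3]>, // sparse voxel coordinates
}

/// The single coupling point: a sink the engine implements.
/// Swapping engines means rewriting this impl, not the pipeline.
pub trait SceneSink {
    fn upload(&mut self, scene: &SceneData);
}

struct LogSink; // stand-in engine adapter

impl SceneSink for LogSink {
    fn upload(&mut self, scene: &SceneData) {
        // Convert to the engine's preferred layout as late as possible.
        println!("uploading {} voxels", scene.voxels.len());
    }
}

fn main() {
    let scene = SceneData { voxels: vec![[0, 0, 0], [1, 0, 0]] };
    let mut sink = LogSink;
    sink.upload(&scene);
}
```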
November 25, 2025 at 2:09 PM
This means that instead of spending more time on content production, developers fight against their game engine to stay productive.

It's not because of a lack of performance in the game engine. On the contrary, the game engine is usually NOT the problem.

The problem is that you need a pipeline.
November 25, 2025 at 2:08 PM
The biggest problem in content production is that the choice of architecture usually locks you in before you've learned much about the data you're processing.

Commercial game engines today use techniques with good average performance that work in most cases. Still, they can be difficult to work with.
November 25, 2025 at 2:04 PM
There isn't a one-size-fits-all solution for content production. Sometimes, it's more efficient to use triangles. Other times, you might want to use voxels.

The optimal type of data primitive depends on what tradeoffs you want to make between memory, performance, and quality.
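To make the memory side of that tradeoff concrete, a small Rust sketch (these layouts are illustrative assumptions, not measurements):

```rust
use std::mem::size_of;

// Illustrative primitive layouts; real engines vary.
struct Triangle { vertices: [[f32; 3]; 3] }            // cheap for large flat surfaces
struct Voxel { x: u16, y: u16, z: u16, material: u16 } // small, but you need many more

fn main() {
    println!("triangle: {} bytes", size_of::<Triangle>()); // 36
    println!("voxel:    {} bytes", size_of::<Voxel>());    // 8
    // A flat wall might be 2 triangles (72 bytes) or thousands of voxels;
    // the best primitive depends on the data, not just the per-item size.
}
```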
November 25, 2025 at 2:01 PM
Software that you use in content production needs to be designed to be modified. The amount of time that developers spend tweaking performance in the content production pipeline is much larger than the time they spend on the game engine itself.
November 25, 2025 at 2:00 PM
What's fast in a game engine depends on both the data you're processing and the algorithms you use.

If you can't change the algorithms, then you can't optimize. If you can't reason about the data, then you don't know how to change the algorithms.

This is why 3D processing is a hard problem.
November 25, 2025 at 1:56 PM
When you change camera perspective, a scene that runs optimally in the pipeline can suddenly become slow.

This means, to optimize a game engine properly, you need to bench example scenes of what the player sees and does.

There is no such thing as "this is going to be optimal in every case".
November 25, 2025 at 1:54 PM
When you work on a content production pipeline, you optimize the parameters for the specific data that you're processing.

It matters, e.g., whether a scene has a lot of empty tiles and how many triangles there are in the tile with the most triangles.

This way, you can save tens of seconds here and there.
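A sketch of the kind of scene statistics that guide this tuning (the tile representation is a hypothetical stand-in):

```rust
/// Hypothetical tile: the indices of the triangles that landed in it.
type Tile = Vec<usize>;

/// Two numbers that guide parameter tuning: how sparse the scene is,
/// and how heavy the worst tile is.
fn scene_stats(tiles: &[Tile]) -> (f64, usize) {
    let empty = tiles.iter().filter(|t| t.is_empty()).count();
    let max_tris = tiles.iter().map(|t| t.len()).max().unwrap_or(0);
    (empty as f64 / tiles.len().max(1) as f64, max_tris)
}

fn main() {
    let tiles: Vec<Tile> = vec![vec![], vec![0, 1, 2], vec![], vec![3]];
    let (empty_ratio, max_tris) = scene_stats(&tiles);
    println!("empty tiles: {:.0}%, max triangles per tile: {}",
        empty_ratio * 100.0, max_tris);
}
```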
November 25, 2025 at 1:51 PM
However, for benchmark mode to be a reliable indicator of average performance per frame, the game engine needs to run predictably. If you e.g. use a scripting language with garbage collection, then this can be difficult to reason about.

I use Dyon because it's predictable.
November 25, 2025 at 1:47 PM
The complex performance dependencies between the various stages of a game engine mean that the only reliable benchmark, at the end of the day, is running the event loop in benchmark mode.

That way, you bench your whole game engine.
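In Piston that looks roughly like the sketch below, assuming the piston_window crate; bench_mode runs the loop back-to-back with fixed timing, so the wall time over N frames benches the whole engine:

```rust
use piston_window::*;
use std::time::Instant;

fn main() {
    let mut window: PistonWindow = WindowSettings::new("bench", [640, 480])
        .build()
        .unwrap();

    // bench_mode ignores real time: no vsync wait, no sleeping,
    // fixed updates per frame, so runs are reproducible.
    let mut settings = EventSettings::new();
    settings.bench_mode = true;
    window.set_event_settings(settings);

    let start = Instant::now();
    let mut frames = 0u32;
    while let Some(e) = window.next() {
        if e.render_args().is_some() {
            // ... draw the example scene here ...
            frames += 1;
            if frames == 1000 {
                println!("1000 frames in {:?}", start.elapsed());
                break;
            }
        }
    }
}
```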
November 25, 2025 at 1:45 PM
When you bench optimal tile sizes with and without the extra stage, the total speedup is over 2x.

However, when you view compression in isolation, the 10x improvement is objectively true. You get 10x faster, probably on average too (if it's optimized), but how this impacts rendering varies.
November 25, 2025 at 1:41 PM
The 10x improvement in compression performance compares the fastest total with the pre-pre-processing stage enabled against simply turning that stage off.

So, it's not a fair comparison. The optimal tile size without this extra stage on this scene is 25.

Total: 16.975974082946777 => 6.905922889709473 (roughly a 2.46x total speedup)
November 25, 2025 at 1:39 PM
On this scene, I found that a tile-size of 10 and a pre-tile-size of 70 was optimal.

I haven't tested with a pre-pre-pre-processing stage yet. That will take longer to bench, since there are 3 parameters.
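The two-parameter sweep itself is just a grid; a rough sketch (bench_scene and the ranges are stand-ins):

```rust
use std::time::Instant;

// Stand-in for a real run of the pipeline on the scene; returns seconds.
fn bench_scene(tile_size: u32, pre_tile_size: u32) -> f64 {
    let start = Instant::now();
    // ... run pre-pre-processing, pre-processing and rendering here ...
    let _ = (tile_size, pre_tile_size);
    start.elapsed().as_secs_f64()
}

fn main() {
    let mut best = (0, 0, f64::INFINITY);
    for tile_size in (5..=50).step_by(5) {
        for pre_tile_size in (10..=100).step_by(10) {
            let t = bench_scene(tile_size, pre_tile_size);
            if t < best.2 {
                best = (tile_size, pre_tile_size, t);
            }
        }
    }
    println!("best: tile-size {}, pre-tile-size {} ({:.3}s)", best.0, best.1, best.2);
    // A third stage adds a third nested loop, multiplying the grid,
    // which is why that sweep takes much longer to bench.
}
```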
November 25, 2025 at 1:33 PM
There are 45727 voxels in this scene.

The technique is to run a pre-pre-processing stage that applies run-length compression to masks at a higher render tile size.

This improves the next pre-processing stage, such that the total is over 10x faster.
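A minimal sketch of run-length compression on a mask (the flat boolean layout is my simplification):

```rust
/// Run-length encode a boolean occupancy mask as (value, run length) pairs.
/// Long empty runs collapse to a single entry, which is what makes the
/// next pre-processing stage cheaper on sparse scenes.
fn rle_encode(mask: &[bool]) -> Vec<(bool, u32)> {
    let mut runs: Vec<(bool, u32)> = Vec::new();
    for &bit in mask {
        if let Some(last) = runs.last_mut() {
            if last.0 == bit {
                last.1 += 1;
                continue;
            }
        }
        runs.push((bit, 1));
    }
    runs
}

fn main() {
    let mask = [false, false, false, true, true, false];
    assert_eq!(rle_encode(&mask), vec![(false, 3), (true, 2), (false, 1)]);
}
```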
November 25, 2025 at 1:29 PM
For example, how to install Ubuntu with a specific configuration, or a script that installs Rust and Turbine (whatever that means in the future) automatically.

I'm going to design everything with the intention of making my dream of Additive computing become a reality.
November 24, 2025 at 5:05 PM
Once we have standard formats, we can develop dedicated server software that assists with on-demand compute. Maybe we can use GPUs or other kinds of optimizations.

A few of these servers will boost productivity enormously.

We could develop instructions for setting up such servers.
November 24, 2025 at 4:58 PM
Since I'm optimizing for CPU rendering right now, this content production pipeline should work on all devices. If you can compile Rust on it, then there is a way.

This is going to be fun.
November 24, 2025 at 4:52 PM
I haven't started thinking about animation systems yet. For now, it's about optimizing the cycle:

1. Dyon scripting
2. Reloading data
3. Mask compression
4. Rendering

I think that's a good base utility that we can build on later.
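A rough sketch of that cycle as a loop (the file watching and stage functions are stand-ins; real code would hook into Dyon and the renderer):

```rust
use std::path::Path;
use std::time::{Duration, SystemTime};

// Hypothetical stand-ins for the four stages.
fn run_dyon_script(path: &Path) { let _ = path; } // 1. Dyon scripting
fn reload_data(path: &Path) { let _ = path; }     // 2. Reloading data
fn compress_masks() {}                            // 3. Mask compression
fn render() {}                                    // 4. Rendering

fn main() -> std::io::Result<()> {
    let script = Path::new("scene.dyon");
    let mut last_change = SystemTime::UNIX_EPOCH;
    loop {
        // Re-run the whole cycle whenever the script changes on disk.
        let modified = std::fs::metadata(script)?.modified()?;
        if modified > last_change {
            last_change = modified;
            run_dyon_script(script);
            reload_data(script);
            compress_masks();
            render();
        }
        std::thread::sleep(Duration::from_millis(100));
    }
}
```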
November 24, 2025 at 4:48 PM
In the long term, I plan to develop a system for sharing workloads across multiple devices. So, you can scale the scenes you can handle using the available compute in your home.

If we can reduce iteration cycles from tens of seconds to sub-seconds, then it will help a lot.
November 24, 2025 at 4:46 PM
We could develop similar human readable formats for AABBs, cubes and triangles. So, you can script at the level of abstraction that best fits your needs.

We could develop a layer system supporting multiple resolutions. One idea is to render the visible layers first when updating and idle the rest.
November 24, 2025 at 4:42 PM
There are some improvements I want to make in Dyon, such as returning error codes. This will make workflows with Bash loops easier.

I am thinking about a human readable format for the first-person camera too. So, you can easily save/load camera positions.
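For the camera, even a single whitespace-separated line would do; a sketch (the exact fields and format are my assumption, not a decided spec):

```rust
/// Hypothetical one-line format: "pos <x> <y> <z> yaw <yaw> pitch <pitch>".
struct Camera { pos: [f64; 3], yaw: f64, pitch: f64 }

fn save(cam: &Camera) -> String {
    format!("pos {} {} {} yaw {} pitch {}",
        cam.pos[0], cam.pos[1], cam.pos[2], cam.yaw, cam.pitch)
}

fn load(line: &str) -> Option<Camera> {
    let w: Vec<&str> = line.split_whitespace().collect();
    match w.as_slice() {
        ["pos", x, y, z, "yaw", yaw, "pitch", pitch] => Some(Camera {
            pos: [x.parse().ok()?, y.parse().ok()?, z.parse().ok()?],
            yaw: yaw.parse().ok()?,
            pitch: pitch.parse().ok()?,
        }),
        _ => None,
    }
}

fn main() {
    let cam = Camera { pos: [1.0, 2.0, 3.0], yaw: 0.5, pitch: -0.1 };
    let line = save(&cam);
    assert!(load(&line).is_some());
    println!("{}", line);
}
```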
November 24, 2025 at 4:37 PM
I'm considering adding the human readable voxel format to Turbine-Process3D with a generic render pipeline for Piston, so it's easier for people to get up and running.

Also, we could create a repository for sharing Dyon snippets in this format. So, people don't have to reinvent the wheel.
November 24, 2025 at 4:32 PM