Research branch: https://advancedresearch.github.io
Discord (Piston): https://discord.gg/TkDnS9x
Discord (AdvancedResearch): https://discord.gg/JkrhJJRBR2
I like working with animation, so this is a nice test case for experimenting with new production pipelines. There are fewer requirements in animation than in games.
That's why it's a good idea to decouple content production from the game engine as much as possible, to avoid making these decisions upfront.
It's not because of a lack of performance in the game engine. On the contrary, the game engine is usually NOT the problem.
The problem is that you need a pipeline.
Commercial game engines today use techniques with good average performance that work in most cases. Still, they can be difficult to work with.
The optimal type of data primitives depends on what tradeoffs you want to make: memory, performance, and quality.
If you can't change the algorithms, then you can't optimize. If you can't reason about the data, then you don't know how to change the algorithms.
This is why 3D processing is a hard problem.
This means that to optimize a game engine properly, you need to bench example scenes of what the player sees and does.
There is no such thing as "this is going to be optimal in every case".
It matters e.g. whether a scene has a lot of empty tiles and how many triangles there are in the tile with the most triangles. This way, you can save tens of seconds here and there.
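As a concrete illustration of the kind of scene statistics worth benching, here is a minimal Rust sketch. The `Tile` struct and its `triangle_count` field are hypothetical, not from the actual renderer:

```rust
/// Hypothetical per-tile data: the number of triangles overlapping the tile.
struct Tile {
    triangle_count: usize,
}

/// Scene statistics that affect which tile size and algorithm win the bench:
/// how many tiles are empty, and the triangle count of the busiest tile.
fn tile_stats(tiles: &[Tile]) -> (usize, usize) {
    let empty = tiles.iter().filter(|t| t.triangle_count == 0).count();
    let max = tiles.iter().map(|t| t.triangle_count).max().unwrap_or(0);
    (empty, max)
}

fn main() {
    let tiles = vec![
        Tile { triangle_count: 0 },
        Tile { triangle_count: 12 },
        Tile { triangle_count: 300 },
    ];
    let (empty, max) = tile_stats(&tiles);
    println!("empty tiles: {}, max triangles in a tile: {}", empty, max);
}
```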
I use Dyon because it's predictable.
That way, you bench your whole game engine.
However, when you view the compression in isolation, the 10x improvement is objectively true. It is 10x faster, probably on average too (if it's optimized), but how this impacts overall rendering time varies.
So, it's not a fair comparison: the optimal tile size on this scene without this extra stage is 25.
Total: 16.975974082946777 => 6.905922889709473
I haven't tested with a pre-pre-pre-processing stage yet. That will take longer to bench, since there are 3 parameters.
The technique is to run a pre-pre-processing stage with run-length compression on masks with a higher render tile size. This improves the next pre-processing stage, such that the total is over 10x faster.
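As a rough illustration of what run-length compressing a binary coverage mask could look like (a generic sketch, not the renderer's actual encoding), assuming the mask is a flat array of booleans:

```rust
/// Run-length encode a binary mask as (value, run_length) pairs.
/// A generic sketch; the real renderer's encoding may differ.
fn rle_encode(mask: &[bool]) -> Vec<(bool, usize)> {
    let mut runs = Vec::new();
    for &bit in mask {
        match runs.last_mut() {
            // Same value as the current run: extend it.
            Some((value, len)) if *value == bit => *len += 1,
            // Value changed (or first bit): start a new run.
            _ => runs.push((bit, 1)),
        }
    }
    runs
}

fn main() {
    // Mostly-empty masks collapse to a few runs, which is why a
    // pre-pre-processing stage can skip over empty regions so quickly.
    let mask = [false, false, false, true, true, false];
    assert_eq!(rle_encode(&mask), vec![(false, 3), (true, 2), (false, 1)]);
}
```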
I'm going to design everything with the intention of making my dream of Additive computing a reality.
A few of these servers will boost productivity enormously.
We could develop instructions for setting up such servers.
This is going to be fun.
1. Dyon scripting
2. Reloading data
3. Mask compression
4. Rendering
I think that's a good base utility to start with, which we can build on later; a minimal sketch of the Dyon scripting part follows below.
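For the Dyon scripting part, embedding the interpreter from Rust can be as small as the minimal example from the dyon crate (the script path `src/main.dyon` here is a hypothetical placeholder):

```rust
extern crate dyon;

use dyon::{error, run};

fn main() {
    // `run` loads and executes the Dyon script at the given path;
    // `error` prints any error message it returned.
    error(run("src/main.dyon"));
}
```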
If we can reduce iteration cycles from tens of seconds to sub-seconds, then it will help a lot.
We could develop a layer system supporting multiple resolutions. One idea is to render the visible layers first when updating, and idle the rest.
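A minimal sketch of how such a layer system might be organized (the struct and field names are mine, not an existing API): only layers that are both visible and dirty get rendered, the rest stay idle.

```rust
/// Hypothetical layer: its own resolution, visibility, and a dirty flag.
struct Layer {
    width: u32,
    height: u32,
    visible: bool,
    dirty: bool,
}

impl Layer {
    fn render(&mut self) {
        // Placeholder for the actual per-layer rendering.
        println!("rendering {}x{} layer", self.width, self.height);
        self.dirty = false;
    }
}

/// Render only layers that are both visible and dirty; idle the rest.
fn update(layers: &mut [Layer]) {
    for layer in layers.iter_mut().filter(|l| l.visible && l.dirty) {
        layer.render();
    }
}

fn main() {
    let mut layers = vec![
        Layer { width: 1920, height: 1080, visible: true, dirty: true },
        Layer { width: 480, height: 270, visible: false, dirty: true },
    ];
    update(&mut layers); // Only the visible layer is rendered.
}
```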
I am thinking about a human-readable format for the first-person camera too, so you can easily save/load camera positions.
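One possible human-readable camera format (my own sketch, not a settled design) is a few labeled numbers per line, which is trivial to save and load:

```rust
use std::fs;
use std::io;

/// Hypothetical first-person camera state.
struct Camera {
    pos: [f64; 3],
    yaw: f64,
    pitch: f64,
}

/// Save in a simple human-readable format, e.g.:
///   pos 1.5 0.0 -3.2
///   yaw 90.0
///   pitch -10.0
fn save_camera(path: &str, cam: &Camera) -> io::Result<()> {
    let text = format!(
        "pos {} {} {}\nyaw {}\npitch {}\n",
        cam.pos[0], cam.pos[1], cam.pos[2], cam.yaw, cam.pitch
    );
    fs::write(path, text)
}

fn main() -> io::Result<()> {
    let cam = Camera { pos: [1.5, 0.0, -3.2], yaw: 90.0, pitch: -10.0 };
    save_camera("camera.txt", &cam)
}
```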
Also, we could create a repository for sharing Dyon snippets in this format, so people don't have to reinvent the wheel.