Pekka Väänänen
@pekkavaa.bsky.social
1.1K followers 350 following 210 posts
Avid reader, computer graphics fan and atmospheric jungle beats enjoyer. Demoscene: cce/Peisik. Blog at https://30fps.net/
Pinned
pekkavaa.bsky.social
Hello I'm Pekka and I do experiments in computer graphics and machine learning. I mostly post about my coding projects, my blog or books I'm reading. I read a lot.

For the last couple of years I've gotten into #N64 coding. It's great for low-level graphics! I also keep an eye on #demoscene stuff.
A diagram of my fractal landscape renderer's inner workings: trees are spawned on a staggered grid and rendered as anti-aliased points based on impostor depth maps.
My Nintendo 64 occlusion culling experiment, running in the ares emulator, so it's an optimistic 60 fps because the emulator has infinite fillrate. Still worth it on hardware, though, at maybe 25 fps :)
A diagram of an iterative neural network that colors and stylizes G-buffer inputs. Based on Neural Cellular Automata (NCA) by Alexander Mordvintsev.
Left: original monochrome Mona Lisa bitmap. Right: least-squares-optimized tessellated version with linear interpolation.
pekkavaa.bsky.social
Perhaps in that case it's also possible to fit better weights for the filter kernel? At least if you focus on a selected corpus of images.
pekkavaa.bsky.social
New article on my site: Better sRGB to greyscale conversion

The commonly used greyscale formula is slightly off when computed in gamma space but can it be fixed?

📜 30fps.net/pages/better...
A four-pane comparison of different formulas for greyscale conversion.
Top row: Linear sRGB (the correct way to do it), Gamma sRGB (the somewhat wrong "luma" formula).
Bottom row: CIELAB L* (a high quality option), and the new alternative gamma-space version.

Careful inspection shows the "Gamma sRGB" case is slightly dimmer in places than the others.
A quote from “Principles of Digital Image Processing” by Wilhelm Burger and Mark J. Burge, showing weights that are a better fit for gamma-space greyscale conversion.
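The difference between the two routes can be sketched directly. A minimal comparison of the standard formulas, not the corrected weights from the article (Rec. 709 coefficients for the linear path, BT.601 luma weights for the gamma-space shortcut):

```python
import numpy as np

def srgb_to_linear(c):
    # Inverse sRGB transfer function, c in [0, 1].
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    # Forward sRGB transfer function.
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

def grey_linear(rgb):
    # The correct route: linearize, take the Rec. 709 luminance, re-encode.
    y = srgb_to_linear(rgb) @ np.array([0.2126, 0.7152, 0.0722])
    return linear_to_srgb(y)

def grey_gamma(rgb):
    # The common shortcut: apply luma weights to gamma-encoded values directly.
    return rgb @ np.array([0.299, 0.587, 0.114])
```

The two agree on neutral greys but drift apart on saturated colors, which is the dimming visible in the comparison image.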
pekkavaa.bsky.social
After these improvements the median split position works surprisingly well. The only thing I've consistently seen improve the result (MSE-wise) is Celebi's 2012 "VCL" method, which splits at the mean but then runs k-means with two clusters to fine-tune the split plane in 3D.
pekkavaa.bsky.social
Sure, I've tried a lot of things. These help: give more weight to green (RGB weights like (1, 1.2, 0.8) work OK), split the cluster with the largest variance (sum of squared errors), split along the axis with the most marginal variance, run 1-10 k-means iterations at the end, and do the pixel mapping in Oklab space (maybe).
pekkavaa.bsky.social
My new visualization of a color quantization algorithm's execution, still unfinished. Attached is the result for a simple 12-color median cut. Each box contains a subset of the image's colors, with the X-axis as a chosen sort axis; Y is PCA, used only for plotting. See how e.g. box 3 has unwanted greens in it.
A top-down hierarchical color quantization routine shown as a binary tree. The algorithm starts from the top box 1 that contains all image colors and continues splitting the box with the greatest error (in this case just max volume) into color subsets. Each box is split at the median, shown in red. Splitting stops when the requested number of colors has been reached.

The mean colors of the leaf nodes at the bottom define the final 12-color palette. The final image and its palette (top right). I chose a purposefully primitive algorithm as a demonstration and the colors are a bit greyish and dim because of it.
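The routine in the diagram boils down to a few lines. A toy sketch of my own (splitting the largest-volume box at the median of its widest channel, like the purposefully primitive demo above):

```python
import numpy as np

def median_cut(colors, n_palette):
    """colors: (N, 3) float array; returns an (n_palette, 3) palette."""
    boxes = [colors]
    while len(boxes) < n_palette:
        # Split the box with the greatest error, here just the max volume.
        i = max(range(len(boxes)), key=lambda k: np.prod(np.ptp(boxes[k], axis=0)))
        box = boxes.pop(i)
        axis = int(np.argmax(np.ptp(box, axis=0)))  # widest channel
        order = np.argsort(box[:, axis])
        mid = len(order) // 2                       # the median split, shown in red
        boxes += [box[order[:mid]], box[order[mid:]]]
    # The mean colors of the leaf boxes define the final palette.
    return np.array([b.mean(axis=0) for b in boxes])
```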
pekkavaa.bsky.social
Uneven or messy line art is another such detail, in illustrations I mean. When drawing with the target resolution in mind it's possible to adapt the style to the constraints. That's hard to do when converting.
Reposted by Pekka Väänänen
zeux.io
New blog post! In "Billions of triangles in minutes" we'll walk through hierarchical cluster level of detail generation of, well, billions of triangles in minutes. Reposts welcome!

zeux.io/2025/09/30/b...
pekkavaa.bsky.social
OK I see, perhaps "sensitivity" or "density" would fit. Sometimes it's surprisingly difficult to find names for simple things, for example the "black&white" intensity of a color we can call "luminosity" or "lightness" but what is its opponent? Colorfulness? Chromaticity?
pekkavaa.bsky.social
Works super well! And doesn't sound like it's hard to implement after the ramps have been detected.
pekkavaa.bsky.social
I wrote some simple box collision resolution code today and made the classic mistake of not checking box_a != box_b before testing for overlap. This was the bug that made me post my first programming question online 20 years ago or so! I was unaware of the <> Basic "not equals" operator at the time.
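In Python terms the mistake looks like this (a hypothetical sketch, not the original code):

```python
def overlaps(a, b):
    """Axis-aligned box overlap test; each box is an (x, y, w, h) tuple."""
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def find_collisions(boxes):
    hits = []
    for i, box_a in enumerate(boxes):
        for j, box_b in enumerate(boxes):
            if i == j:
                continue  # without this guard, every box collides with itself
            if overlaps(box_a, box_b):
                hits.append((i, j))
    return hits
```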
pekkavaa.bsky.social
I think here I had a 2D array, so x[i] returned a *view* of a row. And however the a, b = b, a syntactic sugar is implemented, it fails to make a temporary copy (how could it know?) and writes the same value twice instead. Advanced indexing is handled internally by NumPy, and that could still work here.
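The failure is easy to reproduce (a minimal repro, not the original code):

```python
import numpy as np

x = np.arange(6).reshape(3, 2)
# x[0] and x[1] are views into the same buffer, so the tuple on the
# right-hand side holds no independent copies. x[0] is overwritten with
# row 1's data first, and x[1] is then assigned from the x[0] view,
# which by now also holds row 1's data: both rows end up identical.
x[0], x[1] = x[1], x[0]

y = np.arange(6).reshape(3, 2)
# Advanced indexing is resolved internally with temporaries,
# so this swap works as intended.
y[[0, 1]] = y[[1, 0]]
```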
pekkavaa.bsky.social
I remember hearing that in aviation +X forward is standard. My theory is that an airplane is usually drawn on paper flying from left to right and when they went to 3D, they kept the convention.
pekkavaa.bsky.social
Looks like it's working :) What do the "Control" sliders do?
pekkavaa.bsky.social
The papers in question:
Paul Debevec, "A Median Cut Algorithm for Light Probe Sampling": vgl.ict.usc.edu/Research/Med...
F. Banterle et al., "A framework for inverse tone mapping" www.researchgate.net/publication/...
pekkavaa.bsky.social
A neat little paper from 2006 "A Median Cut Algorithm for Light Probe Sampling" where an environment map is compressed to a set of point lights using the median cut algorithm. Seems like a precursor to another 2007 paper where others cast the task as density estimation.
Figure 1: The Grace Cathedral light probe subdivided into 64 regions of equal light energy using the median cut algorithm. The small circles are the 64 light sources chosen as the energy centroids of each region; the lights are all approximately equal in energy.

From Paul Debevec, "A Median Cut Algorithm for Light Probe Sampling".
Comparison of median cut results using an HDRI and an LDRI of the memorial probe: (a) the memorial LDRI; (b) median cut result, 1024 light sources generated starting from the LDRI; (c) median cut result, 1024 light sources generated starting from the HDRI.

From F. Banterle et al. "A framework for inverse tone mapping"
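Debevec's algorithm itself is compact: repeatedly split each region along its longer axis where the summed luminance halves, then place one light at each region's energy centroid. A simplified 2D sketch of my own (no solid-angle weighting, which the paper applies for latitude-longitude maps):

```python
import numpy as np

def median_cut_lights(lum, n_splits):
    """lum: (H, W) luminance image. Returns 2**n_splits (row, col, energy) lights."""
    regions = [(0, lum.shape[0], 0, lum.shape[1])]
    for _ in range(n_splits):
        new = []
        for (r0, r1, c0, c1) in regions:
            part = lum[r0:r1, c0:c1]
            split_rows = (r1 - r0) >= (c1 - c0)     # split the longer axis
            cum = np.cumsum(part.sum(axis=1 if split_rows else 0))
            # Index where the cumulative energy reaches half the total.
            k = min(int(np.searchsorted(cum, cum[-1] / 2)) + 1, len(cum) - 1)
            if split_rows:
                new += [(r0, r0 + k, c0, c1), (r0 + k, r1, c0, c1)]
            else:
                new += [(r0, r1, c0, c0 + k), (r0, r1, c0 + k, c1)]
        regions = new
    lights = []
    for (r0, r1, c0, c1) in regions:
        part = lum[r0:r1, c0:c1]
        e = part.sum()
        rows, cols = np.mgrid[r0:r1, c0:c1]
        # Energy centroid of the region becomes the light position.
        lights.append((float((rows * part).sum() / e),
                       float((cols * part).sum() / e),
                       float(e)))
    return lights
```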
Reposted by Pekka Väänänen
hiddenasbestos.bsky.social
Got my flight sim taking keyboard input and using that to drive the aerodynamic control surfaces. One step closer to take-off! #gamedev #flightsim #physics 🛩️
Reposted by Pekka Väänänen
n64brew.bsky.social
N64brew user @boxingbruin.bsky.social continues to showcase more work that they've done for their Pickle64 project, their action-RPG platformer hybrid with dark souls-like boss fights.
pekkavaa.bsky.social
Looks plausible and fits the visual style, really neat!
pekkavaa.bsky.social
There's a classic color transfer technique from 2001: convert source and target image to a decorrelated color space (Oklab works fine), make target's mean and stddev match the source, convert back to RGB. Using a palette image as a target is no problem :)

Paper: home.cis.rit.edu/~cnspci/refe...
A diagram showing three images: the Source, a crop from Big Buck Bunny; the target image, the PICO-8 palette; and the result image. The result image's color distribution has been moved to match the target.

Paper reference: "Color Transfer Between Images" Reinhard et al. 2001.
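Once both images are in a decorrelated space, the transfer step is just per-channel moment matching. A sketch that assumes the Oklab conversion has already happened (arrays are flat lists of pixels):

```python
import numpy as np

def transfer_stats(source, target):
    """Reinhard-style color transfer: move the source pixels' per-channel
    mean and stddev onto the target's. Both arrays are (N, 3), already in
    a decorrelated space such as Oklab."""
    s_mean, s_std = source.mean(axis=0), source.std(axis=0)
    t_mean, t_std = target.mean(axis=0), target.std(axis=0)
    return (source - s_mean) / s_std * t_std + t_mean
```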
pekkavaa.bsky.social
A short article on mapping pixel colors to the PICO-8 palette in the CAM16-UCS color space. Surprisingly, it didn't do much better than Oklab, which is derived from CAM16.

📜 30fps.net/pages/percep...
A four-pane comparison of pixel mapping results in CAM16-UCS, Oklab, and the CIELAB colorspace (HyAB distance function). Top right is the input image.
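The pixel mapping step is the same regardless of the space: nearest palette entry by Euclidean distance. A sketch assuming pixels and palette have already been converted to the perceptual space of choice:

```python
import numpy as np

def map_to_palette(pixels, palette):
    """pixels: (N, 3), palette: (K, 3). Returns a palette index per pixel."""
    # Squared distance from every pixel to every palette entry, then argmin.
    d2 = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```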
pekkavaa.bsky.social
Got familiar with NumPy's structured arrays today. It's basically an Array of Structs where the fields can, of course, be of different types. Could be useful with binary files, but I used it to organize code. Wrapping it in a "record array" also allows the nice `colors.r` access syntax.

numpy.org/doc/stable/u...
import numpy as np

# First deduplicate the colors
unique_colors, counts = deduplicate_colors(img.reshape(-1, 4))

# Create a structured array so that the float RGBA color, the per-color weight and
# the original RGBA8 color are all accessible and sortable with a single index.

ColorRGBA = np.dtype([
('r', 'f4'),
('g', 'f4'),
('b', 'f4'),
('a', 'f4'),
('weight', 'f4'),
('original', 'u4'),
])

colors = np.rec.array(np.zeros(unique_colors.shape[0], dtype=ColorRGBA))
colors.r = unique_colors[:, 0] / 255.0
colors.g = unique_colors[:, 1] / 255.0
colors.b = unique_colors[:, 2] / 255.0
colors.a = unique_colors[:, 3] / 255.0
colors.weight = counts / num_total
colors.original = unique_colors.view(np.uint32).reshape(-1)

# Scale the channels to weight luminance more when picking a split-plane axis.
# This is why it's convenient to store the original RGBA8 color: the final step
# that averages colors for the palette doesn't need to know about this transform.

colors.r *= 0.30
colors.g *= 0.59
colors.b *= 0.11
colors.a *= 0.50
Reposted by Pekka Väänänen
mjp123.bsky.social
Some fun with shader-based debug drawing: here I'm drawing an arrow for each path taken in the path tracer, starting with the pixel under the mouse cursor.