#encoders
It's a bit of both. Sometimes it's just a bug; for instance, I recently had an issue where the max-bitrate setting had no effect. Resolve's UI also obscures a lot of parameters.

Plus, those external encoders tend to benchmark best in quality per bitrate, but they're slower, so it really depends.
December 20, 2025 at 11:37 PM
I wish I could trust the encoders in Resolve more though. In the past few releases a lot of the quality / bitrate controls were broken for me. Lately I spit out IMF, ProRes or FFV1 and use external tools like x264, SVT-AV1, or Dolby DEE, which is tedious but more reliable.
December 20, 2025 at 11:27 PM
Tiny Encoders Improve NLP Pipeline Efficiency with Reduced Energy

Read more:
https://quantumzeitgeist.com/efficiency-nlp-tiny-encoders-pipelines-processing-data-reduced-energy/
Tiny Encoders Improve NLP Pipeline Efficiency With Reduced Energy
Researchers have created a new family of compact language models, called TiME, that achieve improved speed and energy efficiency for specific natural language processing tasks without sacrificing performance, demonstrating that effective knowledge transfer is possible even from larger, more complex systems.
quantumzeitgeist.com
December 19, 2025 at 9:58 PM
SEMI uses a central projector and dynamic LoRA adapters to efficiently translate embeddings from various domain-specific encoders into the LLM's embedding space. This could accelerate AI adoption in technical fields where large text-paired datasets are currently unavailable.
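A minimal sketch of that idea — a shared linear projector plus one low-rank (LoRA-style) adapter per domain. All names and sizes here are illustrative assumptions of mine, not taken from the SEMI paper:

```python
import random

random.seed(0)
D_ENC, D_LLM, RANK = 8, 12, 2   # illustrative sizes, not from the paper

def mat(rows, cols, scale=0.1):
    """Random matrix as a list of rows."""
    return [[random.gauss(0, scale) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    """Matrix-vector product."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Central projector shared by every domain-specific encoder.
W = mat(D_LLM, D_ENC)

# One low-rank (LoRA-style) adapter per domain: delta = B @ (A @ x).
adapters = {d: (mat(D_LLM, RANK), mat(RANK, D_ENC))
            for d in ("protein", "molecule")}

def project(x, domain):
    """Map a domain encoder's embedding into the LLM's embedding space."""
    B, A = adapters[domain]
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    return [b + d for b, d in zip(base, delta)]

x = [1.0] * D_ENC                   # embedding from some domain encoder
print(len(project(x, "protein")))   # 12
```

The appeal is that only the small per-domain adapters need training, while the central projector is reused across domains.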
OpenAI’s Answer to Gemini 3, Runway’s Interactive Worlds, Disney’s Alliance With OpenAI, and more...
The Batch AI News and Insights: As amazing as LLMs are, improving their knowledge today involves a more piecemeal process than is widely...
www.deeplearning.ai
December 19, 2025 at 8:20 PM
Ok. There's a nuance I ignored. They don't use generative LLMs; they use text encoders trained to align representations with image representations. Such models are indeed integrated, but the challenge is often generating and storing the embeddings at scale.
December 19, 2025 at 5:28 PM
TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation https://cstu.io/207063 #india #startup #photography
TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation
Transformers + U‑Net: A simpler way to read medical images Scans can be messy, and this...
cstu.io
December 19, 2025 at 4:16 PM
Encoder-decoder models are great for annotation tasks, where you want full information about an input as well as the ability to generate arbitrary structured output. Encoders (BERT) and Decoders (most everything else) only give you one of those.
Google released the open weights of T5Gemma 2, an encoder-decoder model

Built on top of Gemma 3, T5Gemma 2 is a multimodal, long-context, and heavily multilingual (140 languages) encoder-decoder model, where most models today are decoder-only.
December 19, 2025 at 3:57 PM
The Google weather forecast AI uses graph NNs, not Transformers, so in that sense it’s less related to LLMs than most currently deployed ML models are.

On the other hand (as the authors themselves note) GNNs and sparse transformer encoders are different ways of labeling the same operations.
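A toy illustration of that relabeling (my own example, not from the paper): one mean-aggregation message-passing step is identical to attention with uniform scores masked to the graph's adjacency.

```python
# Toy graph: 2-d node features and adjacency (node -> neighbours).
feats = {0: [1.0, 0.0], 1: [0.0, 2.0], 2: [3.0, 1.0]}
nbrs  = {0: [1, 2], 1: [0], 2: [0, 1]}

def gnn_mean_step(feats, nbrs):
    """Message passing: each node averages its neighbours' features."""
    return {v: [sum(feats[u][k] for u in nbrs[v]) / len(nbrs[v])
                for k in range(2)] for v in feats}

def masked_attention_step(feats, nbrs):
    """Sparse 'attention' with uniform scores: weight 1/deg(v) on
    neighbours, 0 everywhere else (the adjacency is the mask)."""
    out = {}
    for v in feats:
        w = 1.0 / len(nbrs[v])
        out[v] = [sum(w * feats[u][k] for u in nbrs[v]) for k in range(2)]
    return out

print(gnn_mean_step(feats, nbrs) == masked_attention_step(feats, nbrs))  # True
```

Learned (non-uniform) attention scores then correspond to GNN variants like GAT, so the two vocabularies really do describe the same family of operations.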
December 18, 2025 at 5:28 PM
Tamanna Hossain, Robert L. Logan IV, Ganesh Jagadeesan, Sameer Singh, Joel Tetreault, Alejandro Jaimes: Characterizing Mamba's Selective Memory using Auto-Encoders https://arxiv.org/abs/2512.15653 https://arxiv.org/pdf/2512.15653 https://arxiv.org/html/2512.15653
December 18, 2025 at 6:30 AM
Tamanna Hossain, Robert L. Logan IV, Ganesh Jagadeesan, Sameer Singh, Joel Tetreault, Alejandro Jaimes
Characterizing Mamba's Selective Memory using Auto-Encoders
https://arxiv.org/abs/2512.15653
December 18, 2025 at 5:17 AM
We’re going to end up with 8 contextual push buttons & 2 rotary encoders on the edges of the biggest touchscreen, and this will be a significant tactile improvement over the status quo
December 17, 2025 at 7:24 PM
"We can't afford the weight of position encoders" what if we wrote software that didn't require a $2000 Linux PC to run what is ostensibly an RC car with an arm attached
December 17, 2025 at 5:38 PM
VBR - Variable Bit-Rate. You basically tell it how many bits per second you want on average. It allows some frames to be bigger and then compensates with others.

Good encoders have a "look-ahead" feature that analyzes a bunch of upcoming frames to gauge their complexity and distributes the bits proportionally.
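A back-of-the-envelope sketch of that proportional allocation (my own toy example, not any real encoder's rate control, which also smooths and reconciles rounding against the budget):

```python
def allocate_bits(total_bits: int, complexities: list[float]) -> list[int]:
    """Look-ahead-style allocation: give each frame a share of the bit
    budget proportional to its measured complexity."""
    total_c = sum(complexities)
    return [round(total_bits * c / total_c) for c in complexities]

# 5 frames, fixed budget; the busy frame (complexity 8.0) gets the most.
print(allocate_bits(100_000, [1.0, 2.0, 8.0, 2.0, 1.0]))
# [7143, 14286, 57143, 14286, 7143]
```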
December 17, 2025 at 9:52 AM
David Schulmeister, Valentin Hartmann, Lars Klein, Robert West: TiME: Tiny Monolingual Encoders for Efficient NLP Pipelines https://arxiv.org/abs/2512.14645 https://arxiv.org/pdf/2512.14645 https://arxiv.org/html/2512.14645
December 17, 2025 at 6:30 AM
Our kitchen-aid convection toaster oven of Many Years Old had progressively flaky encoders for the last couple of years and the display partially failed in the last couple of months.
Some hand tools, a soldering iron, some Deoxit spray, about an hour of time, and a band-aid was what it took to […]
Original post on mastodon.social
mastodon.social
December 17, 2025 at 6:01 AM
David Schulmeister, Valentin Hartmann, Lars Klein, Robert West
TiME: Tiny Monolingual Encoders for Efficient NLP Pipelines
https://arxiv.org/abs/2512.14645
December 17, 2025 at 5:43 AM
[26/30] 145 Likes, 16 Comments, 4 Posts
2512.07829, cs.CV | cs.AI, 08 Dec 2025

🆕One Layer Is Enough: Adapting Pretrained Visual Encoders for Image Generation

Yuan Gao, Chen Chen, Tianrong Chen, Jiatao Gu
December 17, 2025 at 12:06 AM
Synth Bits! ESI NeON, one of the very first MIDI keyboards with built-in audio interface. Also, one of the very few devices with axial (!) endless rotary encoders, like mouse wheels. Sadly, it was too ahead of its time and didn't sell well. (not mine) #synthbits
December 16, 2025 at 9:28 PM
Shoutout to Tom Pettini for striking images with the new HCR™ Gold IF assay – overcoming same-species antibody limits to multiplex mouse primaries on cell and nuclear membranes, and showcasing the versatility of the HCR™ HiFi Encoders by working with high-quality primaries from CST and DSHB!
December 16, 2025 at 5:59 PM
Streaming has been blowing up since I got a new monitor. Also I'd love to know why the hardware encoders just don't show up in OBS sometimes... Anyway I guess streams are postponed until I can build a new PC.
December 15, 2025 at 6:10 PM
My new vehicle has HD Radio, which means I get to hear how misconfigured all my local stations’ encoders are. A few didn’t even sync the latency with FM.

I understand linear media is somewhat of a dying industry, but I do miss when the competition of look & sound was a big deal to station engineers.
December 15, 2025 at 3:22 PM
Biraj Dahal, Jiahui Cheng, Hao Liu, Rongjie Lai, Wenjing Liao: Data-Driven Model Reduction using WeldNet: Windowed Encoders for Learning Dynamics https://arxiv.org/abs/2512.11090 https://arxiv.org/pdf/2512.11090 https://arxiv.org/html/2512.11090
December 15, 2025 at 6:53 AM
Yuan Li: Capacity-Achieving Codes with Inverse-Ackermann-Depth Encoders https://arxiv.org/abs/2512.11443 https://arxiv.org/pdf/2512.11443 https://arxiv.org/html/2512.11443
December 15, 2025 at 6:32 AM
Image generation models want low-dimensional spaces for efficiency. Pre-trained visual encoders need high-dimensional features for understanding. This tension usually requires complex workarounds.
December 15, 2025 at 6:11 AM
Yuan Li
Capacity-Achieving Codes with Inverse-Ackermann-Depth Encoders
https://arxiv.org/abs/2512.11443
December 15, 2025 at 5:19 AM