The Rust Programming Language Forum
users.rust-lang.org.web.brid.gy
@users.rust-lang.org.web.brid.gy
General discussion of The Rust Programming Language

[bridged from https://users.rust-lang.org/ on the web: https://fed.brid.gy/web/users.rust-lang.org ]
`Pin<Rc<T>>` vs `Rc<Pin<T>>`
pufferfish101007:

> Context: I've got a data structure (call it `Thing`) which is always passed around in an `Rc`, and `Thing` currently has an `id` field, because two `Thing`s with the same data are not necessarily the same thing (and may become different in the future because of interior mutability). So my implementation of `PartialEq` for `Thing` compares their `id`s; `Hash` also just uses the id.
>
> Generating and storing ids seems unnecessary though, seeing as the `Rc` provides an id for us in the form of its backing pointer.

Going off the description and thinking ahead just a little bit, using an in-memory location for the `id` might not be the ideal way to go. For one: each and every `Thing` you create will only ever be able to reference _itself_, by its _own_ reference. Any caching approach you might come up with will have to be limited to the current allocations of `Thing`s on the heap. The moment your program stops running, all of the ids will need to be regenerated from scratch at the next launch.

Assuming you're fine with all of the above:

pufferfish101007:

> The actual question: If I want to guarantee that the pointer I obtain from an `Rc` will always point to the same data for the lifetime of that `Rc`, should I be using `Pin<Rc<Thing>>` or `Rc<Pin<Thing>>`?

Both will work: the `Rc<Pin<Box<...>>>` is a bit clearer (especially if you're new to `Pin`), while the `Pin<Rc<...>>` will most likely have less of an impact on your code later on, performance-wise. As @Cerber-Ursi has pointed out, a `Pin` itself doesn't "pin" or do anything to the data. It's just a [compile-time] "marker" that imposes additional constraints on the use of a given pointer.

> At its core, _pinning_ a value means _making the guarantee_ that the value's data will not be moved nor have its storage invalidated until it gets dropped.
(See `Pin::new_unchecked` in the docs for more details.)

pufferfish101007:

> Moreover, it seems to me that doing anything useful with a `Pin<Rc<T>>` is quite difficult because I don't think you can just access a `&Rc<T>` from a `Pin<Rc<T>>`, which you need for doing anything useful with an `Rc`.

`Rc<T>` is (for all intents and purposes here) just a pointer to a heap allocation of `T`, with counters for the number of "strong" and "weak" references to that allocation given away on each `clone()`. An `&Rc<T>` is a pointer to _that_ pointer. A `Pin<Rc<T>>` is an `Rc<T>` with some guarantees on top. Both auto-`Deref` into `&T` themselves. To `clone()` the `Pin<Rc<T>>` is to `clone()` the `Rc<T>` associated with it automatically. What other "useful" thing would you require "doing" there?

If you were thinking about getting your `id` from the `&Rc<T>` itself: that is most definitely one of the worst ways to go. Again: think of the `Rc<T>` as just another `&` reference. An `&Rc<T>` is a temporary, stack-based, ephemeral `&&T`. Whatever `Rc<T>`s you are going to be using are going to be placed on the stack, `*`-ed into `T`, and promptly discarded. Any `id` you get from them will be invalidated pretty much as soon as you declare it. If anything, it's the `&T` you want here.
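For the identity-comparison use case from the question, the advice above might be sketched like this. The `ThingRef` newtype and its `data` field are hypothetical stand-ins, not from the original code; the point is that equality and hashing go through the `Rc` allocation address rather than a stored `id`:

```rust
use std::hash::{Hash, Hasher};
use std::rc::Rc;

// Hypothetical payload; the real `Thing` has more fields.
struct Thing {
    data: String,
}

// Newtype wrapper so identity semantics attach to the handle,
// not to `Thing`'s contents.
#[derive(Clone)]
struct ThingRef(Rc<Thing>);

impl PartialEq for ThingRef {
    fn eq(&self, other: &Self) -> bool {
        // Compares allocation addresses, not data.
        Rc::ptr_eq(&self.0, &other.0)
    }
}
impl Eq for ThingRef {}

impl Hash for ThingRef {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // The backing pointer serves as the id.
        (Rc::as_ptr(&self.0) as usize).hash(state);
    }
}

fn main() {
    let a = ThingRef(Rc::new(Thing { data: "x".into() }));
    let b = a.clone();
    let c = ThingRef(Rc::new(Thing { data: "x".into() }));
    assert!(a == b); // same allocation
    assert!(a != c); // equal data, different allocation
}
```

Note this id is only meaningful while the allocation is alive, per the caching caveat above.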
users.rust-lang.org
June 12, 2025 at 4:00 PM
Why learning C before Rust is not necessary
StefanSalewski:

> That is really sad. It should take no more than 5 minutes. Both concepts are quite simple, and for using Rust you really need only a very basic understanding.

It may seem simple to us, but if you don't have any insight into how a CPU works and haven't already studied a language where you must manage memory a little (like C), perhaps it's not so simple without a good introductory book.

You must understand that there's a CPU, which executes its instructions from memory and has a limited number of registers; that one register points to the instruction to execute; that because of those limitations, it must use a stack to save the registers when it calls a subroutine and to store the local variables that don't fit in the available registers; and that data can also be stored on the heap. Depending on the type and the language, some data can or cannot be on the stack, or only partially, so sometimes it's only the pointer that's on the stack, etc.

There are great books that explain all of this, but it must take a little while to wade through all those notions, even just to understand what a stack is. On the plus side, it'll be useful for better understanding what's happening when a problem occurs, even in languages that don't expose everything.
users.rust-lang.org
June 12, 2025 at 2:00 PM
Why learning C before Rust is not necessary
Qoori:

> Just understanding heap and stack took more than a week to get it right in my brain.

That is really sad. It should take no more than 5 minutes. Both concepts are quite simple, and for using Rust you really need only a very basic understanding.

Unfortunately, some Rust books discuss the stack/heap stuff in much more detail than necessary, and some books make wrong statements, e.g. that integers or arrays are always stored on the stack. This is obviously wrong: integers or arrays can, for example, be the component type of a vector, and so live on the heap, as the vector's data buffer is always located on the heap. Such errors can arise easily in a discussion -- I hope I used some more care in my own book.

For understanding the stack, some books use the picture of a stack of plates in a diner or restaurant. That makes it quite obvious that one can only add or remove elements from the top. And the heap is just a large memory region, from which we can request storage blocks from the operating system and return them when they're no longer needed -- in C explicitly with malloc() and free(), in Rust a bit more hidden.

For owned strings and vectors, there is a combination of both: the vector is a struct with 3 fields -- capacity, length, and a pointer to the actual data on the heap.
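The point that integers live on the heap when they're a vector's elements can be demonstrated directly. This small sketch compares the address of the `Vec` struct itself (a local variable, on the stack here) with the address of its element buffer (a heap allocation):

```rust
fn main() {
    // The Vec struct (pointer, length, capacity) is a local variable,
    // but its element buffer is a separate heap allocation.
    let v: Vec<i32> = vec![10, 20, 30];

    // Address of the Vec struct vs. address of its first element --
    // two distinct locations in two different memory regions.
    let struct_addr = &v as *const Vec<i32> as usize;
    let buffer_addr = v.as_ptr() as usize;
    assert_ne!(struct_addr, buffer_addr);

    // The 3 fields Qoori describes, observable through the API.
    assert_eq!(v.len(), 3);
    assert!(v.capacity() >= 3);
}
```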
users.rust-lang.org
June 12, 2025 at 2:00 PM
Macro_rules! for generating const-generic parameters
You could start from here:

```rust
macro_rules! comb {
    ( $( $($a:literal,)* $b:ty $(,$c:ty)* );+ ) => {
        comb!( $( $($a,)* false $(,$c)* );+ ; $( $($a,)* true $(,$c)* );+ );
    };
    ( $( $($a:literal),+ );+ ) => {
        [$( Self::route_beam_impl::<$($a),+>(), )+]
    };
}

const FN: [RouteFn; 8] = comb!(bool, bool, bool);
```

The `bool` type can be replaced by any other type; it's just there to differentiate the literals `true`/`false` from what hasn't been replaced yet. It will expand to:

```rust
const FN: [RouteFn; 8] = [
    Self::route_beam_impl::<false, false, false>(),
    Self::route_beam_impl::<true, false, false>(),
    Self::route_beam_impl::<false, true, false>(),
    Self::route_beam_impl::<true, true, false>(),
    Self::route_beam_impl::<false, false, true>(),
    Self::route_beam_impl::<true, false, true>(),
    Self::route_beam_impl::<false, true, true>(),
    Self::route_beam_impl::<true, true, true>(),
];
```

You can change what it emits by changing the last pattern in the macro, but it must produce valid output in the context where it's used, which is why I had to use the turbofish syntax.

The macro works on semicolon-separated lists, each list being a comma-separated mix of `bool` and `true`/`false`. The first rule replaces the first remaining `bool` in each list with `false` and with `true`, doubling the number of lists; when only `true`/`false` remain, it falls through to the second rule, which creates the array. It's a little tricky, to be honest, and it won't make your code very easy to read.

PS: There's a limit to how deeply a macro can recurse, but the default limit is 128, so you should be safe.
users.rust-lang.org
June 12, 2025 at 2:00 PM
Can the compiler reorder code around my log with timestamps?
Hi, I have a long-ish function that sometimes takes much longer than it should to execute. I've built myself a small logging type (see below) so I can narrow down where the problem lies. Essentially, I can call `timed_log.log("now doing this and that");` throughout the function, and that logs the message to a cache along with the timestamp at which it was called; I can later print the log from that cache if the overall execution time was too long (thereby avoiding the unnecessary I/O load of saving the log for the case when performance is good).

However, I do fear that the log might not be accurate, as I'm not sure whether the compiler is allowed to rearrange code around the calls to the logger. I've found this thread on StackOverflow where someone has that exact problem when benchmarking their code, and there's no definitive answer 2 years later. I'd rather not have to dig into the assembly trying to understand what the compiler did, since this is a multi-crate scenario, and also I'm cross-compiling from amd64 to aarch64 and I'm unfamiliar with the tooling.

Is there some sort of marker I can insert that tells rustc not to reorder instructions across it? Or is my fear unfounded, because there's some reason rustc doesn't actually reorder operations across my `Instant::elapsed` call? (If this decreases performance a bit, I can live with that.) I have found `std::sync::atomic::fence()`, but I cannot tell from the docs whether this is what I'm looking for.
* * *

```rust
use std::{
    fmt::{Display, Write},
    time::{Duration, Instant},
};

pub struct TimedLog<T> {
    t_start: Instant,
    log_entries: Vec<(Duration, T)>,
}

impl<T> TimedLog<T> {
    pub fn new() -> Self {
        Self {
            t_start: Instant::now(),
            log_entries: vec![],
        }
    }

    pub fn log(&mut self, entry: impl Into<T>) {
        self.log_entries.push((self.t_start.elapsed(), entry.into()));
    }

    pub fn clear_entries(&mut self) {
        self.log_entries.clear();
    }

    pub fn reset(&mut self) {
        self.log_entries.clear();
        self.t_start = Instant::now();
    }
}

impl<T> TimedLog<T>
where
    T: Display,
{
    pub fn write<W>(&self, writer: &mut W)
    where
        W: Write,
    {
        let mut t_old = None;
        for (t, msg) in self.log_entries.iter() {
            let dt = match t_old {
                None => Duration::new(0, 0),
                Some(t_old) => *t - t_old,
            };
            // `writeln!` returns a `fmt::Result`; ignore write errors here.
            let _ = writeln!(
                writer,
                "t = {:11.6} ms | dt = {:11.6} ms | {}",
                t.as_secs_f32() * 1000.,
                dt.as_secs_f32() * 1000.,
                msg
            );
            t_old = Some(*t);
        }
    }
}
```
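One commonly suggested defensive pattern, under the assumption that constraining the compiler (not the CPU) is what matters here: `std::sync::atomic::compiler_fence` restricts how the compiler may reorder memory operations across a point, and `std::hint::black_box` keeps measured work from being optimized away. Whether either is strictly necessary around an opaque call like `Instant::now` is exactly the open question in the post, so treat this as a sketch rather than a definitive answer:

```rust
use std::sync::atomic::{compiler_fence, Ordering};
use std::time::Instant;

fn main() {
    let t0 = Instant::now();

    // Work whose duration we want to bound.
    let mut acc = 0u64;
    for i in 0..1_000u64 {
        acc = acc.wrapping_add(i);
    }
    // Keep the result observable so the loop isn't optimized away.
    let acc = std::hint::black_box(acc);

    // Emits no CPU instruction; only restricts compiler reordering of
    // memory accesses across this point. It does not order the hardware.
    compiler_fence(Ordering::SeqCst);

    let elapsed = t0.elapsed();
    println!("acc = {acc}, took {elapsed:?}");
}
```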
users.rust-lang.org
June 12, 2025 at 12:03 PM
`Pin<Rc<T>>` vs `Rc<Pin<T>>`
Context: I've got a data structure (call it `Thing`) which is always passed around in an `Rc`, and `Thing` currently has an `id` field, because two `Thing`s with the same data are not necessarily the same thing (and may become different in the future because of interior mutability). So my implementation of `PartialEq` for `Thing` compares their `id`s; `Hash` also just uses the id.

Generating and storing ids seems unnecessary though, seeing as the `Rc` provides an id for us in the form of its backing pointer. I was, however, a bit hesitant to use `Rc::as_ptr`/`ptr_eq` for these purposes, because I wasn't certain that the pointer would never change. Posts in "Does comment on Rc<T>.as_ptr() imply Rc<T> is always pinned?" suggest that it'd be totally fine so long as I don't use `make_mut`; I don't use `make_mut`, but I'd like to make certain that I don't shoot myself in the foot later on.

The actual question: If I want to guarantee that the pointer I obtain from an `Rc` will always point to the same data for the lifetime of that `Rc`, should I be using `Pin<Rc<Thing>>` or `Rc<Pin<Thing>>`? To me, the former implies that the address of the `Rc` won't change, which isn't helpful because I'll be cloning `Rc`s anyway, whilst the latter implies that the location of the `Thing`, which will be the backing pointer of the `Rc`, won't change.

Moreover, it seems to me that doing anything useful with a `Pin<Rc<T>>` is quite difficult, because I don't think you can just access a `&Rc<T>` from a `Pin<Rc<T>>`, which you need for doing anything useful with an `Rc`. However, I do suspect that I don't understand `Pin` very well, having only read its documentation for the first time today.

Thanks in advance for any clarity you can shed on the matter.
users.rust-lang.org
June 12, 2025 at 12:03 PM
Macro_rules! for generating const-generic parameters
Hello, I am currently working on a performance-critical part of a personal project (a long-range route plotter for Elite: Dangerous, using parallel beam-search across an implicit graph), and I want to do some compile-time branch elimination (at the cost of code size). The signature of my worker function looks like this:

```rust
pub(crate) fn route_beam_impl<
    const REFUEL: bool,
    const SHIP: bool,
    const LIMIT: bool,
>(&self, args: ArgTuple) -> _
```

and it contains a few `if`s that disable parts of the code based on the const-generic parameters.

My idea is to generate an array of function pointers for every combination of parameters using a macro, and then at runtime compute an index into the array and call the corresponding function (that call is outside the hot path, so the indirection doesn't matter), but I'm struggling with the implementation since I have almost no experience writing macros. My idea was to have a macro invocation like

```rust
const FN: [RouteFn; 8] = generate_static_dispatch_array!(Self::route_beam_impl; REFUEL, SHIP, LIMIT);
```

which would then generate something like

```rust
const FN: [RouteFn; 8] = [
    Self::route_beam_impl::<false, false, false>,
    Self::route_beam_impl::<false, false, true>,
    Self::route_beam_impl::<false, true, false>,
    ...
];
```

But I'm not sure how to actually implement my idea.

Best regards, Daniel
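For reference, here's what the dispatch table looks like written out by hand, with free functions and toy types standing in for the real `&self` method and `ArgTuple` from the post (all concrete names and the scoring logic here are illustrative). It also shows the runtime index computation that picks the right instantiation:

```rust
// Toy stand-ins for the real types in the post.
type ArgTuple = (u32, u32);
type RouteFn = fn(ArgTuple) -> u64;

fn route_beam_impl<const REFUEL: bool, const SHIP: bool, const LIMIT: bool>(
    args: ArgTuple,
) -> u64 {
    // Branches on const parameters are resolved at compile time, so
    // each instantiation contains only the code it needs.
    let mut score = (args.0 + args.1) as u64;
    if REFUEL { score += 1; }
    if SHIP { score += 2; }
    if LIMIT { score += 4; }
    score
}

// The dispatch table written out by hand (bit 0 = REFUEL,
// bit 1 = SHIP, bit 2 = LIMIT); a macro can generate this part.
const FN: [RouteFn; 8] = [
    route_beam_impl::<false, false, false>,
    route_beam_impl::<true, false, false>,
    route_beam_impl::<false, true, false>,
    route_beam_impl::<true, true, false>,
    route_beam_impl::<false, false, true>,
    route_beam_impl::<true, false, true>,
    route_beam_impl::<false, true, true>,
    route_beam_impl::<true, true, true>,
];

fn main() {
    // Runtime flags, packed into an index matching the array order.
    let (refuel, ship, limit) = (true, false, true);
    let idx = (refuel as usize) | ((ship as usize) << 1) | ((limit as usize) << 2);

    let result = FN[idx]((3, 4));
    assert_eq!(result, 12); // 3 + 4, plus REFUEL (+1) and LIMIT (+4)
}
```

The ordering convention (which flag maps to which bit) is arbitrary; it only has to match between the table and the index computation.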
users.rust-lang.org
June 12, 2025 at 12:05 PM