Fred Hebert
@ferd.ca
Staff SRE @ honeycomb.io, Tech Book Author, Resilience in Software Foundation board member, Erlang Ecosystem Foundation co-founder, Resilience Engineering fan. SRE-not-sorry.
blog: https://ferd.ca
notes: https://ferd.ca/notes/
Nothing prepared me for how many times I'd need to re-read my own texts with minor variations through all the drafting, editing, technical reviewing, layout, and then indexing. It's a lot!
Also how different the process could be across publishers within the same space.
November 7, 2025 at 8:50 PM
It can be framed positively in terms of “I won’t feel like I’m interrupting you to ask for assistance” or “I can now accomplish these tasks solo,” which is what I think was the original intent.
November 4, 2025 at 11:37 PM
well first of all, I like helping people and sharing with them.
regardless, there's a huge positive in that there's information more easily accessible for them and that I'd no longer be a bottleneck or point of failure.
and I get that's what they mean.
November 4, 2025 at 5:49 PM
Yeah. Part of what helped there was figuring out how/where to best do part of the piece elsewhere on the neck (where I’m not as quick for sight reading and won’t try as early), which regularized the finger patterns and made it easier to focus on one bit at a time.
November 3, 2025 at 3:54 AM
"none of these citations are in the bible"
October 29, 2025 at 7:10 PM
"none of these citations are in the bible"
It was arxiv.org/abs/2510.01395 — I don’t recall if they went deep on defining the construct compared to mostly looking at the impact of that level of support in specific life situations and how it impacted people’s attitude towards repair or not.
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolat...
arxiv.org
October 29, 2025 at 4:55 PM
And yeah, imo that's a fundamental issue with wanting automation that has enough autonomy to know when to break rules/expectations while also never deviating from them. We trust people more for that, but many organizational systems still ignore that reality and put people in double binds all the time.
October 29, 2025 at 4:50 PM
Yeah they had defined it as social sycophancy, in that it reaffirmed the user’s actions, perspectives and self-images, even if it disagreed with them (“no, you didn’t go too far when …”) and considered it as a ratio of when alternatives did not.
October 29, 2025 at 4:46 PM
The way I see it, both have a good potential to cause harm (though not exclusively so).
It was interesting to see the latest preprint on sycophancy use Reddit’s /r/AmITheAsshole top-voted responses as a way to gauge human social normative judgment too, sort of combining both.
October 29, 2025 at 4:36 PM
I use the hell out of proxies too, often in place of asking "is this good software in the grand scheme of things?" and forcing myself into philosophical dread (a good Monday morning activity), and I have no issue with using your own definitions too.
October 27, 2025 at 3:22 PM
These are things that reflect how we think we'll be able to position the software solution within the space and move it around, and they're far easier to contend with than "oh gosh how much is my module impacting the concept of justice today?!"
October 27, 2025 at 3:20 PM
I've read your definition and I get it. As practitioners we need & seek good proxies of quality.
Abstractions & interfaces are context-fit (what you expose/hide changes with use), but we like them—and properties like maintainability—as approximations of whether code's fit for use or adaptable.
October 27, 2025 at 3:20 PM
There are technical and internal properties that matter in judging software, but how much they matter in the overall system will depend on how connected and involved the software is with other parts of that system.
October 27, 2025 at 2:43 PM
This is vague, but it's how you harmonize concepts like purpose, intent, ability, criticality, maintainability, flexibility, profitability, sustainability, or whether it helps or harms people.
All of these are seen through interactions with software, not purely on its internal attributes.
October 27, 2025 at 2:41 PM
Software quality is often assessed from properties analyzed in isolation, but I like to see software as a living technical artifact, and its quality as an outcome.
It's a lagging indicator of how folks frame their position and ability to act in a broader system, of how they operationalize and materialize their insights.
October 27, 2025 at 2:41 PM
A competitor does something new and your formerly great app now sucks. How fast you adjust is partly a technical property, but also a function of your org, its practices and capabilities.
Your ability to keep software good relies on navigating its tradeoff space, its interactions, and inertia.
October 27, 2025 at 2:41 PM
It has many participants: writers, owners, users, operators, targets (acted on), processes, etc. The goodness is a snapshot of the fitness of the software and its activities to the landscape it interacts with.
It can be good for engineers, great for owners, okay for users, horrible for bystanders.
October 27, 2025 at 2:41 PM
Reposted by Fred Hebert
5. For example, research produced and funded by tech companies often either frames problems as user-driven, or explores solutions as the obligation of users (e.g. community notes). Seldom does it explore consequences of design, UX, or algorithmic implementation, let alone the business model.
October 24, 2025 at 1:01 AM
Of course there’s tons to learn, and the act of moving through failure relies on expanded understanding and new insights.
Always Surprised, Never Surprised About It.
October 22, 2025 at 2:35 AM
A thing that comes to mind is that exactly one month ago, I wrote a post on ongoing tradeoffs and incidents as landmarks (ferd.ca/ongoing-trad...)
I think at this point I’ve internalized that this is how we do things as an industry.
I don’t really expect change at a systemic level. This is normal.
Ongoing Tradeoffs, and Incidents as Landmarks
Think of incidents as landmarks when finding your way. The tradeoffs you make can inform the type of incidents you get, and they in turn let you evaluate how you balance priorities and goal conflicts.
ferd.ca
October 22, 2025 at 2:35 AM