Gabriel
@morecoffeeplz.bsky.social
1.1K followers 460 following 380 posts
AI research scientist. Former OpenAI, Apple infosec. “Professor” at Johns Hopkins SAIS Alperovitch Institute. Great deceiver of hike length and difficulty.
Reposted by Gabriel
2026 DBIR sneak peek:

“Water plays an increasingly significant role in [ransomware] attacks. In 2024, 100% of recorded ransomware events were attributed to threat actors that drink water”
P.8: This is the central claim.
I miss when the internet was fun.
Reposted by Gabriel
What if we did a single run and declared victory
Thanks! DM me if you are interested in the slides :)
Which is to say that as the context window fills up it just acts as a mirror for how the individual wants to be treated. Yikes.
The only people I know that refer to ChatGPT as “Chat” are those in romantic relationships with it.

nypost.com/2025/10/16/b...
Reposted by Gabriel
"I don't have anything to hide why should I care about privacy?"
The politician in South Carolina who has introduced a bill redefining contraception as abortion also wants people who share websites to be charged with aiding and abetting homicide.
Reposted by Gabriel
Normal person: I asked AI and it told me--

Every AI researcher:
Parrot Lying
Reposted by Gabriel
“What if you could fuck the singularity?” is the apotheosis of technofuturism (2025)
Reposted by Gabriel
BREAKING: Friday night massacre underway at CDC. Dozens of "disease detectives," high-level scientists, entire Washington staff and editors of the MMWR (Morbidity and Mortality Weekly Report) have all been RIFed and received the following notice:
Some research from my team!
🔎 Attackers are embedding LLMs directly into malware, creating samples that generate their malicious logic at runtime rather than shipping it embedded in the binary.

🔥New @sentinellabs.bsky.social research by @alex.leetnoob.com, @vkamluk.bsky.social, and Gabriel Bernadett-Shapiro at #LABScon 2025. 🔥 s1.ai/llm-mw
@sentinelone.com social team I am also on bluesky 😂
Reposted by Gabriel
Not the BPO report we need, but definitely the one we deserve.
We are releasing details on BRICKSTORM malware activity, a China-based threat hitting US tech to potentially target downstream customers and hunt for data on vulnerabilities in products. This actor is stealthy, and we've provided a tool to hunt for them. cloud.google.com/blog/topics/...
Another BRICKSTORM: Stealthy Backdoor Enabling Espionage into Tech and Legal Sectors | Google Cloud Blog
BRICKSTORM is a stealthy backdoor used by suspected China-nexus actors for long-term espionage.
cloud.google.com
3. What additional constraints do LLMs produce for adversaries? Hunting with the constraints of our adversaries was our initial premise. We've been doing it for years; LLMs simply present a new dimension for us to explore. If you'd like to work with us on this, please let us know!
Malware that can run simple instructions, identify the target device and important files, and provide summaries back to a C2 would eliminate or streamline a significant amount of adversary workload.
2. LLM-enabled malware is interesting and (we believe) important to study, but it is unclear exactly what the operational advances are. Assuming we get to the point of LLMs running natively on endpoints, malware that could hijack that process may be extremely useful.
Ok some questions that this research posed for us:

1. Hunting for prompts and API keys works, but it is a brittle detection. Eventually adversaries will move to proxy services that provide some level of obfuscation. What do we do then?
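The brittle detection described above can be sketched in a few lines. This is my own minimal illustration, not SentinelLABS' actual rules: the API-key prefix and the prompt-like phrases are assumptions about what current LLM-enabled samples embed, and adversaries can trivially rotate past them.

```python
import re

# Assumed heuristics: an OpenAI-style "sk-" key prefix and imperative,
# prompt-like English phrases. Both are illustrative and will age quickly.
API_KEY_RE = re.compile(rb"sk-[A-Za-z0-9_-]{20,}")
PROMPT_RE = re.compile(
    rb"(?i)(you are an? |respond only with |ignore previous |write a script that )[ -~]{10,200}"
)

def extract_strings(data: bytes, min_len: int = 8):
    """Yield printable-ASCII runs, like the classic `strings` utility."""
    for match in re.finditer(rb"[ -~]{%d,}" % min_len, data):
        yield match.group()

def scan_sample(data: bytes) -> dict:
    """Return hunting hits: embedded API keys and prompt-like strings."""
    hits = {"api_keys": [], "prompts": []}
    for s in extract_strings(data):
        if API_KEY_RE.search(s):
            hits["api_keys"].append(s.decode("ascii", "replace"))
        if PROMPT_RE.search(s):
            hits["prompts"].append(s.decode("ascii", "replace"))
    return hits
```

Once adversaries route traffic through proxy services, neither pattern survives, which is exactly the limitation the thread raises.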
If we want to understand LLM risks, we should align expectations with risks we can observe and measure, not hype.
Understanding how capable LLMs are with respect to hacking is important work, but setting that aside for the moment: in a year of analysis we did not observe the capabilities that labs are concerned with being deployed by malicious actors in the wild.
We noted that the capabilities we observed in LLM-enabled malware were operational; that is, they helped adversaries with specific tasks.

That aligns with current LLM capabilities in software development and how they’re deployed.
Traditionally, malware analysis starts at a disadvantage: you work backward from development assumptions.

With prompts, intent is immediately visible. No need to second-guess the adversary’s aim.