Joshua White
banner
jrw14.bsky.social
Joshua White
@jrw14.bsky.social
Books. Hiking. Camping. Kayaking. GenAI. Self-hosting. Gaming. Soccer. Outdoors. More books. Tinkering. Linux-ing (cachyOS). Learning.

Be good to each other.

OH -> NM -> OH -> FL -> OH -> NC
...I actually quit the quiz right after this, just to double check to see if there was a disclosure before taking it... there was not. You don't inject hidden instructions into pages and hope nobody notices.

You do better than that.
November 4, 2025 at 12:28 AM
Here's what bothers me most. I understand the problem they're trying to solve; cheating online is trivially easy now. But you don't teach a class about using "AI for Good" by using AI nefariously, especially without disclosing it...
November 4, 2025 at 12:28 AM
Turns out, yes.

This is what happens when you copy and paste the question and answers (I put it in my private LLM stack fronted by Open WebUI just to see how it worked).
November 4, 2025 at 12:28 AM
The story is different. I've already caught Coursera doing questionable things: my privacy-focused web browser (I ❤️ you, Brave) blocks over 2,000 trackers and ad services from their platform. So I was curious: would a company that aggressive with user data resort to something this ethically dubious?
November 4, 2025 at 12:28 AM
I get the first reaction: "You only found this because you were trying to cheat." Fair enough. But if I was cheating, why would I tell on myself?

Also please note, the question is answered and correct already ;)
November 4, 2025 at 12:28 AM
...hidden prompts designed to be picked up by AI systems like Perplexity Comet, or if you're still of the sort who thinks copying and pasting text directly from websites is a good idea. This is what is called indirect prompt injection. And it's been responsible for many, many bad things already.
November 4, 2025 at 12:28 AM