How do you maintain basic standards with colleagues who refuse to acknowledge the problem?
Unfortunately... that's a lot of work.
It's not even just about people blindly trusting what ChatGPT tells them. LLMs are poisoning the entire information ecosystem. You can't even necessarily trust that the citations in a published paper are real (or a search engine's descriptions of them).
holy shit, an accurate legal critique of LLMs. LLMs don't reason; they just stitch together plausible-looking sentences, indifferent to the content.
Y'all are supposed to take sociotechnical approaches to the study of technology. A throwaway AI-generated image does not help that credibility.
I think this explains the massive disconnect we see in how CEOs talk about AI versus everyone else. It also raises the question of how useful it truly is for frontline work.