The winter deadline for our peer-reviewed section is December 31st. We welcome submissions between 1,200 and 2,000 words that connect the environmental present to its past.
We're excited to read your work!
g-ehr.com/submit/
If you were creating a new U.S. women's history survey course, what books would be must-reads for lecture prep? They don't have to be new, and can include material from pre-colonial to the present. Give me a reading list! (I need to spend some PD funds)
Don’t forget Moby-Dick is a Christmas novel!!!!
Tony Judt was right.
Oxford’s word of the year is “rage bait” and Cambridge picked “parasocial.”
Together, they paint a picture of digital nihilism. https://buff.ly/53z5RCh
Join AAUP for a conversation with educators, educator unions, and the Collaborative Research Center for Resilience
zoom.us/webinar/regi...
It's not even just about people blindly trusting what ChatGPT tells them. LLMs are poisoning the entire information ecosystem. You can't even necessarily trust that the citations in a published paper are real (or a search engine's descriptions of them).
When I was searching in various places to confirm that those citations were fabricated, Google's AI overview just kept the con going.
mid-theory.com/2025/12/04/u...
✅ learn to read
I think this explains the massive disconnect we see in how CEOs talk about AI versus everyone else. It also raises the question of how useful it truly is for frontline work.
www.hollywoodreporter.com/tv/tv-news/t...