
Casey Fiesler

Casey Fiesler is an American associate professor at University of Colorado Boulder who studies technology policy, internet law and policy, …

H-index: 35
Computer science 40%
Communication & Media Studies 18%
cfiesler.bsky.social
Oh I don't think it really matters; it's just a question of whether they feel comfortable with the specific use case of the assignment.
cfiesler.bsky.social
Part of this assignment will be talking to/interviewing people who use AI.
cfiesler.bsky.social
oh that's a great point, I'll ponder what that would look like
cfiesler.bsky.social
I don't understand this question either. :) What does it mean to "write against"?
cfiesler.bsky.social
I'm afraid I don't understand this question... why would they need recorded data?
cfiesler.bsky.social
For the specific assignment here, I'm imagining a combination of talking to/interviewing people they know who use AI + exploring online content like discussions in subreddits.
cfiesler.bsky.social
That would be very interesting! I mean I guess it would make things easier for me.
cfiesler.bsky.social
I did waffle on this a bit in large part due to concern about workload of coming up with alternative assignments etc... but I think it's important that students who are critical of AI also have an opportunity to learn about it. And the class will be small this first time so may as well try it out!
cfiesler.bsky.social
I'm workshopping my AI & Society course for next semester, and in particular what I've been calling the "conscientious objector" path through the class (though I've given it another name). Thoughts?
Alternative Engagement Pathway: This course recognizes that learning about artificial intelligence involves both using the technology and thinking critically about when, why, and whether to use it. Students may hold different values, comfort levels, or ethical positions regarding direct engagement with AI tools. They will be encouraged to explore and articulate these positions, and out of respect for their ultimate decisions, this course offers an Alternative Engagement Pathway for assignments that involve using generative AI. These alternative assignments will allow students to meet the same learning outcomes through analytical, observational, or reflective work rather than direct tool use.

Generative AI Reflection Project
Throughout the semester, you will engage in a sustained exploration of how generative AI tools fit into your own life (as helpers, collaborators, or maybe frustrations). You’ll select one or more everyday or academic activities where you can experiment with AI (for example: task management, brainstorming, organizing, coding, studying, or artistic creation), and you will also compare AI use versus doing tasks on your own. Over the course of the semester, you’ll keep a reflective journal documenting your process, outcomes, and insights about what AI can and can’t do well. (Note: The activities you choose cannot be related to your other coursework unless generative AI use is explicitly allowed and articulated in a syllabus statement that can be verified.)

The goal is not necessarily to help you use AI “better” but to think critically and concretely about when AI actually adds value, when it falls short, and what ethical and personal considerations shape those boundaries for you.

The alternative engagement pathway for this project will have you take on the role of a critical observer, where you analyze AI outputs created by others. This path invites you to investigate how AI functions and impacts human work without personally integrating it into your own tasks.
cfiesler.bsky.social
Sometimes I imagine what it might be like to teach an entire course on technology and intellectual property instead of just a single class. I feel like the students get the most absolutely chaotic but hopefully interesting brain dump from me in that class though haha.
cfiesler.bsky.social
If you click through a subtle link on results you do get to a decent disclaimer/warning from Westlaw, but wouldn't it be a good idea to FORCE users to see this (and to click "I understand") before using the tool? And also ideally an educational explanation for how it works and why it can be wrong?
screenshot of "AI-Assisted Research" that includes a small link labeled "How the AI works" above the question box; the linked panel, "How AI-Assisted Research works," reads:
AI-Assisted Research uses large language models - a type of generative AI - and focuses the models on the language of cases, statutes, and other primary law to improve accuracy.

In addition, primary law is referenced in the responses with the actual language from the source, and links are included to read the full primary law documents. Even with these and other precautions, AI-Assisted Research can occasionally produce inaccuracies, so it should always be used as part of a research process in connection with additional research to fully understand the nuance of the issues and further improve accuracy.

The AI-generated summary of results above the list of primary law authority can be extraordinarily useful for getting an overview of the issues and pointers to primary authority, but it should never be used to advise a client, write a brief or motion for a court, or otherwise be relied on without doing further research.

Use it to accelerate thorough research. Don't use it as a replacement for thorough research.
cfiesler.bsky.social
But importantly, Westlaw makes no attempt to explain WHY it generates fictitious sources. There is zero attempt at education here, which I think is usually the case even when LLMs have disclaimers.

And again, I highly suspect that legal technology companies are downplaying limitations.
cfiesler.bsky.social
I just had a look at the Westlaw AI tool, blamed by one of the lawyers in the article (re: them not understanding at the time that it used AI and could generate fictitious sources). This is the disclaimer on the main page for the tool, but there's nothing included with generated results.
AI-Assisted Research uses generative AI and can occasionally produce inaccuracies, so it should always be used as part of a research process in connection with additional research where primary sources are checked to fully understand the nuance of the issues and further improve accuracy.
cfiesler.bsky.social
... I suspect even when not mentioned there is overtrust in AI here. As I mentioned in my original thread I think it's really important that lawyers are educated about the limitations of AI and sufficiently scared of hallucinations.

But this issue re: legal research AI tools is really concerning!
cfiesler.bsky.social
In a great piece of work, @404media.co (via @jasonkoebler.bsky.social) analyzed court records "where a lawyer offered a formal explanation or apology" for problematic AI use: www.404media.co/18-lawyers-c... There is more blame on overwork than lack of knowledge, though...
cfiesler.bsky.social
A lawyer in my social media comments is telling me that it's "cruel" to suggest that lawyers should be ethically accountable for mistakes introduced by AI because the weight of technology's flaws shouldn't be on burned out lawyers.

And like, all sympathy to junior associates, but also...
cfiesler.bsky.social
Not even so much about the change, but the *timing* of announcing was borderline cruel for:

(1) Students who assumed they were eligible and have already been preparing materials
(2) Students who assumed they could apply next year and have to scramble to apply now

www.science.org/content/arti...
‘Completely shattered.’ Changes to NSF’s graduate student fellowship spur outcry
The announcement comes months later than usual, leaving many would-be applicants stranded
cfiesler.bsky.social
An informal poll for fellow academics: What is an appropriate/typical range for numbers of papers to review on an annual basis? Either pure numbers or e.g. proportionate to the number of papers that you submit per year. I'm curious what folks' heuristics are for this.
cfiesler.bsky.social
I'm working on some new standup material, and I have this joke about how thanks to my sadistic constitutional law professor who used cold calling and the socratic method I know way too much about constitutional law, and now every day since January has been a bad day.
cfiesler.bsky.social
Oh yeah to be clear this wasn't about FERPA. The question was about "demanding" that their own child show them their grades even if they expressed that they didn't want to.
cfiesler.bsky.social
I made the mistake of commenting on a random video I saw where someone was asking for opinions.

Anyway my opinion is that if parents choose to help their child pay for college, they are still not *entitled* to information about their *adult* child's grades. Apparently this is an unpopular opinion.
cfiesler.bsky.social
floating a thought for feedback:
if the liar's dividend is the benefit bad actors can receive from a world in which there is so much doubt about what is real and what isn't,
I was thinking about "the librarian's dividend" re: the value of the people and institutions who help us evaluate information
cfiesler.bsky.social
I’m spending today at a big staff development event (few hundred people) for Arapahoe Libraries, focused on AI. I’m running sessions about ethics. Am really eager to get a sense of the vibe and what kinds of questions people have… (I also just love hanging out with librarians.)
cfiesler.bsky.social
At any given time I am usually reading one audiobook, one physical book, and one book on my kindle. This is a pretty good representation of the variety of my tastes at the moment. :)
Screenshot from The StoryGraph. Current Reads: The Thursday Murder Club, Katabasis, The One.
cfiesler.bsky.social
I suspect the poster is probably asking about some specific form of AI. (Though if not I guess I’m pretty fond of the predictive algorithm on my insulin pump that’s helping to keep me alive.)

Though regardless I think people would just give very different answers to this question.
cfiesler.bsky.social
Cold calling a student and then grilling them.
cfiesler.bsky.social
Ok I had literally not thought of writing off my standup classes on my taxes until this moment.
cfiesler.bsky.social
I had the random realization the other day that cold calling with the socratic method is basically the same as comedians doing crowd work.
