A/Prof Narrelle Morris
@narrellemorris.bsky.social
590 followers 210 following 350 posts
Curtin Law School. Legal history, statutory interpretation, research and writing. Japan. Permanently at risk of being squashed by books in my office.
Pinned
narrellemorris.bsky.social
My article “Current Approaches to the Use of Generative AI in Australian Courts and Tribunals: Should Australian Judges Have Guidelines Too?” is now out in the Journal of Judicial Administration!

Short answer: yes.

Slightly longer answer: nobody should be using it for legal research or writing.
narrellemorris.bsky.social
“Tech companies have pushed faulty Gen AI that cannot do what is claimed into every sector, boosted by media uncritically accepting hyped-up claims of productivity, but really it’s employers who are to blame for AI workslop because they aren’t investing enough in it”.

www.theguardian.com/business/202...
AI tools churn out ‘workslop’ for many US employees lowering trust | Gene Marks
Studies show widespread errors in AI-generated work, as employers fail to train staff properly
www.theguardian.com
Reposted by A/Prof Narrelle Morris
michae.lv
Does your university have a contract with Grammarly? Write to the decision-maker asking if they think the university should be paying for a tool that is fast integrating features that can only be used for academic misconduct and cognitive offloading and request they drop the contract.
jedbrown.org
It is not "attribution and sourcing" to generate post-hoc citations that have not been read and did not inform the student's writing. Those should be regarded as fraudulent: artifacts testifying to human actions and thought that did not occur.
www.theverge.com/news/760508/...
For help with attribution and sourcing, Grammarly is releasing a citation finder agent that automatically generates correctly formatted citations backing up claims in a piece of writing, and an expert review agent that provides personalized, topic-specific feedback. Screenshot from Grammarly's demo of inserting a post-hoc citation.
https://www.grammarly.com/ai-agents/citation-finder
narrellemorris.bsky.social
I think you mean too many old male dictators!
Reposted by A/Prof Narrelle Morris
petertarras.bsky.social
Recently gave a colleague feedback on a paper that was partly written with ChatGPT. What did I learn from this and how should we deal with cases like this? Here are some thoughts 🧵(1/x)
Reposted by A/Prof Narrelle Morris
barbarapocock.bsky.social
How we ‘fess up when we mess up shows who we are. Pay it all back, Deloitte. And apologise. Give yourself some procurement teeth, Labor: the capability to ban for poor behaviour, commensurate with the misdemeanour, up to 5 yrs.
Reposted by A/Prof Narrelle Morris
tedmccormick.bsky.social
A striking thing about articles I’ve read claiming to “study the effects” of generative AI on student writing skills and consumption of information is that (1) they nearly always find the effects are negative and (2) most “conclusions” are still written assuming that we must use AI, for some reason.
narrellemorris.bsky.social
Unless there’s a use for phrases like “I’m going to dance with the King at his chateau”.
narrellemorris.bsky.social
Hard pass on that one. Congrats on the new position, and I feel exactly the same re Canberra. Hope to see out my career there one day with indulgent access to the NLA, NAA and AWM.
narrellemorris.bsky.social
You’re kidding us, right? Reporting on AI and not aware UNTIL NOW of Gen AI “hallucinations”? Is this what happens when you only read press releases from the AI boosters and don’t stop to think about any of their claims?

www.smh.com.au/business/wor...
narrellemorris.bsky.social
It should come with a ban on govt consultancy or tendering, not just a refund.
maximumwelfare.bsky.social
#BREAKING 🚨 Deloitte to refund government, admits using AI in $440k report into mutual obligations issues.

Fake quotes from Federal Court case that ended Robodebt deleted from new report in Friday DEWR dump.

📰 AFR

✍️ @paulkarp.bsky.social

✍️ @edmundtadros.bsky.social

🗣️ @chrisrudge.bsky.social
HEADLINE: Deloitte to refund government, admits AI errors in $440k report

Deloitte Australia will issue a partial refund to the federal government after admitting that artificial intelligence had been used in the creation of a $440,000 report littered with errors including three nonexistent academic references and a made-up quote from a Federal Court judgement.

A new version of the report for the Department of Employment and Workplace Relations (DEWR) was quietly uploaded to the department’s website on Friday, ahead of a long weekend across much of Australia. It features more than a dozen deletions of nonexistent references and footnotes, a rewritten reference list, and corrections to multiple typographic errors.

(Photo of Deloitte Australia HQ. Caption: Deloitte Australia has made almost $25 million worth of deals with the Department of Employment and Workplace Relations since 2021. Photographer: Dion Georgopoulos)

The first version of the report, about the IT system used to automate penalties in the welfare system such as pauses on the dole, was published in July. Less than a month later, Deloitte was forced to investigate the report after University of Sydney academic Dr Christopher Rudge highlighted multiple errors in the document.

At the time, Rudge speculated that the errors may have been caused by what is known as “hallucinations” by generative AI. This is where the technology responds to user queries by inventing references and quotes. Deloitte declined to comment.

The incident is embarrassing for Deloitte as it earns a growing part of its $US70.5 billion ($107 billion) in annual global revenue by providing advice and training clients and executives about AI. The firm also boasts about its widespread use of the technology within its global operations, while emphasising the need to always have humans review any output of AI.

SUBHEADING: Deleted references, footnotes

The revised report has deleted a dozen references to two nonexistent reports by Professor Lisa Burton Crawford, a law professor at the University of Sydney, that were included in the first version. Two references to a nonexistent report by Professor Björn Regnell, of Lund University in Sweden, were also deleted in the new report.

Also deleted was a made-up reference to a court decision in a leading robo-debt case, Deanna Amato v Commonwealth.

The new report has also deleted a reference to “Justice Davis” (a misspelling of Justice Jennifer Davies) and the made-up quote from the nonexistent paragraphs 25 and 26 in the judgement: “The burden rests on the decision-maker to be satisfied on the evidence that the debt is owed. A person’s statutory entitlements cannot lawfully be reduced based on an assumption unsupported by evidence.”
narrellemorris.bsky.social
Anyone using Gen AI thinking it makes their legal research more efficient is actually cutting corners and, unless they independently verify every word of it, which is time-consuming (not an efficiency), they are putting their ability to keep practising law at risk. As many lawyers are finding out.
narrellemorris.bsky.social
Having a law degree is nice, I do too. I’ve taught legal research using tech for 15 years. As you noted, I actually do research to justify my conclusions. Feel free to read all the footnotes in my article. If you disagree, do research and publish it. Otherwise it’s just your personal anecdata.
narrellemorris.bsky.social
Unless you’re a lawyer or a legal academic, I don’t think you have the knowledge or experience to claim Gen AI makes legal research more efficient, especially given your claimed expertise around AI is to “productionise” it, whatever that means.
narrellemorris.bsky.social
which seems to include portraying anything but joyous acceptance of it as being a Luddite. Of course it’s here to stay. What’s important is understanding its limitations and the risks posed, not rejecting those.
narrellemorris.bsky.social
It’s not narrow minded to ask questions about the productivity and efficiency (and accuracy) claims by those selling Gen AI, particularly when they now have billions spent and little recourse to recoup those costs unless they dig deeper and harder,
narrellemorris.bsky.social
people not having sufficient understanding of how it works and blindly trusting its output, and ultimately not interrogating the claims about productivity and efficiency coming straight from AI bubble boosterism. While accepting the rampant economic, enviro, data, and human costs of it.
narrellemorris.bsky.social
Efficiency is a common claim but only works if you can trust the output, which you can’t. Even tools built on closed datasets, like those in Lexis and Westlaw, hallucinate.

Tech has a place in legal work, no question. Invaluable for high volume discovery. But the most dangerous thing about Gen AI is …
narrellemorris.bsky.social
The key point is that Gen AI is not effective for legal research, regardless of whether it is free ChatGPT or a very expensive professional legal database with it embedded. This will not change while its “intelligence” is based on the statistical probability of words, one after the other.
narrellemorris.bsky.social
Doesn’t use my new article below but an interesting read.

My question to those thinking Gen AI will replace research skills is: how do you think law students will understand the fundamentals of what legal research is, and requires, if it is reduced to simply prompting an AI?

www.abc.net.au/news/2025-10...
narrellemorris.bsky.social
It’s so helpful to gain insight into people, although when you’re following a judge around like this, one cannot help but be appalled at the risk it placed them at. It’s entirely inconceivable today that one would know home addresses and which hotel they were staying at on vacation, etc.
narrellemorris.bsky.social
As a statutory interpretation instructor, I’d really love to see that draft bill, explanatory memorandum et al. Talk about stepping outside your lane (in more ways than one).