Nearly Legal
@nearlylegal.co.uk
8.6K followers 1.2K following 4K posts
Solicitor. Done housing law since 2006. The Guardian says top 5 for squalor. Legal Aid Housing Lawyer of the Year 2018. Co-author of the Homes (Fitness for Human Habitation) Act 2018. ‘Not an academic authority’ - Judge Carr. https://nearlylegal.co.uk
nearlylegal.co.uk
Interesting. We will have to see how attempts are made to exploit this.
sharedownersnet.bsky.social
🆕 BREAKING NEWS

We won❗🎉

The Government has finally accepted Lord Young's amendment to the Renters' Rights Bill.

Shared owners who are accidental landlords will be exempt from the 12-month ban on reletting if their sale falls through.

#BuildingSafetyCrisis
nearlylegal.co.uk
I was only following orders...
nearlylegal.co.uk
But what I set out there was your original proposal.

How can you check the output if it is in a field you have no experience in?
nearlylegal.co.uk
‘Basically it is fine if asked to refer only to a very limited data set, so long as you check that every citation actually exists, check that each citation actually supports what it is said to support, and check whether any conclusions drawn are supported by the data’.

OK.
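
A minimal sketch, in Python, of the mechanical half of that checking regime. The Claim structure, source map, and helper name are all hypothetical stand-ins, and the third check is precisely the part that cannot be automated:

from dataclasses import dataclass

@dataclass
class Claim:
    assertion: str  # the conclusion the model drew
    citation: str   # the source identifier it cites
    quoted: str     # the passage it attributes to that source

def mechanical_checks(claims: list[Claim], known_sources: dict[str, str]) -> list[str]:
    """Checks 1 and 2 only: the citation exists in the limited data
    set, and the quoted passage appears verbatim in it. Check 3 (does
    the conclusion actually follow?) is left to a human reader, because
    it cannot be reduced to string matching."""
    problems = []
    for claim in claims:
        source_text = known_sources.get(claim.citation)
        if source_text is None:
            problems.append(f"citation does not exist: {claim.citation!r}")
        elif claim.quoted not in source_text:
            problems.append(f"quote not found in {claim.citation!r}: {claim.quoted!r}")
    return problems

An empty list from this only means the output survived the two automatable checks; every assertion still has to be read against the source by someone who can judge whether it is supported.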
nearlylegal.co.uk
We started with your assertion that you could get an LLM to respond accurately. Now you are asserting that it takes extensive testing to even establish what the (inevitable) risks are. Which was where I started.

bsky.app/profile/aaro...
aaronsterling.bsky.social
I've found it helpful to say "Answer in two parts. In the first part, provide only quotes and citations from XYZ source. In the second part, write conclusions based on the cited quotes in the first part."
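
A sketch of the structure Sterling describes, with the prompt text paraphrased from his post; any actual model call is deliberately left out, since the client would vary:

def two_part_prompt(question: str, source_name: str) -> str:
    """Builds the two-part instruction: quotes and citations first,
    then conclusions drawn only from the cited quotes."""
    return (
        f"{question}\n\n"
        "Answer in two parts. In the first part, provide only quotes "
        f"and citations from {source_name}. In the second part, write "
        "conclusions based on the cited quotes in the first part."
    )

The structure makes checking easier, not unnecessary: the first part can itself contain invented quotes, which is the point being pressed in the replies above.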
nearlylegal.co.uk
The whole article is about the extensive effort by humans needed to assess (and mitigate, not remove) the legal and reputational risk posed by LLM AI because it inevitably makes things up (amongst other failings). Thank you for making my point.
nearlylegal.co.uk
I think express wording would be required. An implied repeal is not attractive.
nearlylegal.co.uk
Is it a payment in relation to the termination of a tenancy, so falling under s1(6)(a)? I’m not at all sure - it may be in consequence of the termination of the tenancy, but not a necessary one.
nearlylegal.co.uk
Agreed not temporal per se, but connection requires more, I think. A contractual penalty for holding over after tenant NTQ in the tenancy agreement would certainly be caught. But a statutory penalty based precisely on there no longer being a tenancy seems less obviously caught.
nearlylegal.co.uk
I think Mr Sterling is an AI bot.
nearlylegal.co.uk
None of which links are about ‘deposing LLMs’. If you are not an AI bot, you are doing a very good impression of one.
nearlylegal.co.uk
Google on "Deposing LLMs to verify correctness"

Are you just making stuff up?
nearlylegal.co.uk
No, that is not what he was saying. You are too generous.
nearlylegal.co.uk
Oh mate, you really, really don't want to get into LLMs and legal proceedings. Let's just say careers have been ended, and more will be. By the way, did you use an AI for that response?
nearlylegal.co.uk
Particularly in the context of the unreliability of LLM AI. 'I found it to be accurate, so...' doesn't really work.
nearlylegal.co.uk
Of course everyone makes errors. The difference is that a human error is generally not something simply invented to plausibly respond to a prompt.
nearlylegal.co.uk
I don't care about your 'lived experience', this is a central element of what LLM AIs do. They invent plausible text based on their models. No amount of 'and make this accurate' will change that. You are, at best, constraining them to making stuff up that looks like your dataset and citations.
nearlylegal.co.uk
No, no no no. That won't work. There is no way to stop the plausibility engines from making stuff up.
nearlylegal.co.uk
I've read your other replies and none of that makes sense.