It started over coffee with a friend. We were talking about how artificial intelligence could take over huge chunks of the labour market: everything from menial tasks to skilled trades. Imagine a near future where a robot builds your extension, an AI mechanic services your car, and your smart fridge not only orders the milk but installs the door hinges too.
It sounded efficient, impressive…inevitable. As my friend continued, a thought popped into my head that I couldn’t shake: what happens when things go wrong?
And then it hit me.
What if, in this AI-powered future, the real selling point of a human tradesperson isn’t their craftsmanship, but the fact that they’re easier to sue?
If you’ve ever dealt with a cowboy builder, you’ll know the type. Half-finished jobs, suspiciously high “unexpected” costs, and a van that disappears from your driveway the second you ask for receipts.
It’s frustrating, infuriating even. However, in the end, you can take them to court. You have a name, an address, a contract. There’s a clear path to holding someone accountable.
Now picture the same situation in 20 years, except your builder is a humanoid robot supplied by a company on the other side of the world. The job goes wrong, the roof leaks, and the company blames a software update from yet another provider. Before you know it, you’re chasing accountability through a supply chain that stretches across three continents.
It turns out the “cowboy” you knew might have been easier to deal with than the AI who doesn’t even have a surname.
Here’s the real snag: the law hasn’t quite caught up.
In the UK, there’s no single AI law or regulator. Instead, oversight is scattered across different agencies, and liability is handled by dusting off old rules for product safety and negligence. The EU has taken a bolder step with its AI Act, but even that struggles when AI systems evolve after deployment or are built by one company, trained by another and operated by a third.
So, if your AI roofer drops a tile through your car windscreen, who’s actually responsible? The manufacturer? The software provider? The local contractor who hired the machine?
Right now, the answer is: it’s complicated. And “complicated” is expensive for consumers.
This is where I think humans might have an unexpected edge. I call it the ‘accountability dividend’.
In a future where AI handles more of our physical world, the fact that a human is legally straightforward could become their biggest selling point.
With a person, liability is clear: there’s the name, a legal identity, an address for the claim form. With AI, you might spend months, even years, working out which part of the global supply chain is actually to blame.
It’s not that human tradespeople are perfect. It’s that they’re reachable. And in a world full of self-learning, cross-border, cloud-connected machines, reachability might just be the thing that wins the job.
One way forward could be to borrow an idea from the construction industry: a legally responsible person for every job.
In construction, there’s always someone on site who carries the ultimate legal duty for safety and compliance. If something goes wrong, you know exactly where the buck stops.
Why not apply that to AI-powered services? Whether it’s a robot laying bricks or a self-driving van delivering your goods, there could be a named, accountable human (in your country) who takes legal responsibility.
It wouldn’t solve every problem, especially with AI systems built and operated across multiple borders. But it would give consumers a fighting chance when things go wrong, and it could help ensure AI adoption doesn’t leave accountability behind.
This all began as a casual conversation between good friends, but the more I’ve thought about it, the more it feels like a genuine legal challenge hiding in plain sight.
If AI really does take over the trades, the way we think about trust, risk and responsibility will have to change. Maybe the future of human labour isn’t just about skill or craftsmanship; it could actually be about how easy it is to take court action when the job goes wrong.
When our homes, cars and cities are built by machines, who or what will we hold accountable? And will we still be willing to pay a premium for a human face, a human name and the comfort of knowing exactly where to send the claim form?

Freddie Spindler is a recent Law graduate from The Open University. His areas of interest include commercial and constitutional law, alongside a growing focus on intellectual property and the legal challenges posed by artificial intelligence.