Why 'Ask GPT' Is a Tribunal-In-Waiting

By Paul Marsh

HR is always banging on about efficiency, freeing up time, adding strategic value and getting away from the grunt work, and a beautiful gift has been handed to the profession: the wonderful world of GPT and its cousins.

UK HR teams are increasingly using AI assistants such as ChatGPT, Copilot and Claude to save time, draft documents, summarise law and answer day-to-day employment law questions. Used properly, they can be very helpful. Used carelessly, they can create serious legal and commercial risk. These tools do not necessarily know the law. They predict words based on patterns in data. That means they can sound confident while being wrong, outdated or misleading.

In the UK, employment law is technical, constantly changing and heavily dependent on case law and context. An AI model may have been trained on material that is months or even years out of date. Even when the law itself has not changed, how it is applied often has. A tribunal decision can shift how a rule works in practice. An AI assistant will not reliably know which version of the law applies today.

A common example is unfair dismissal: an AI might correctly say that employees need two years' service to claim unfair dismissal. But it may fail to flag the many exceptions where no qualifying service is needed, such as whistleblowing, discrimination, health and safety, pregnancy or asserting statutory rights. An HR manager relying on that answer might think a dismissal is safe when in reality it carries potentially uncapped liability.

Another risk area is holiday pay. UK holiday pay law is heavily shaped by European case law and recent Supreme Court decisions. Whether overtime, commission and bonuses must be included depends on detailed rules and the type of worker involved. Many AI systems still give oversimplified or outdated answers such as “holiday pay is basic pay only”. That is wrong in many situations and could lead to underpayment claims going back years.

TUPE is another trap. AI tools often describe TUPE in high-level terms but miss key points. For example, they may fail to explain that dismissals connected to a transfer are automatically unfair unless there is a genuine economic, technical or organisational reason entailing changes in the workforce. They may also give incorrect guidance on when a service provision change applies, which is one of the most litigated parts of UK employment law.

Even something as everyday as redundancy can be misrepresented. GPT might say that the lowest performers should be selected. In reality, selection criteria must be fair, objective and non-discriminatory, and consultation obligations vary depending on numbers and circumstances, with collective consultation required where 20 or more redundancies are proposed at one establishment within 90 days. Following a simplistic AI answer could easily lead to unfair dismissal and discrimination claims.

Data protection is a further risk. HR teams deal with sensitive personal data including health, disciplinary and grievance information. Entering that into a public AI tool can breach the UK GDPR and the Data Protection Act 2018 if it is stored or used to train models outside the organisation's control. Many people do this without realising it.

There is also a risk of bias. AI models reflect the data they were trained on. If that data contains historic bias, for example around age, disability or gender, the output may unintentionally reinforce it. Using AI to screen CVs, write performance feedback or suggest disciplinary outcomes without human oversight can create discrimination risk.

None of this means HR should not use AI. It means it must be used as a starting point, not a final authority. AI is good at drafting, summarising and suggesting options. It is not good at legal judgment, risk weighting or applying law to specific facts.

So, what to do? Use AI to speed up first drafts, idea generation and general understanding. Then check everything against up-to-date legal sources, internal policies and, where necessary, qualified legal advice. Treat AI like a junior assistant who works fast but gets things wrong.