AI in the workplace: What to automate, what to keep human, and how to avoid mistakes
Generative AI promises to transform office work, but the gap between hype and reality remains wide
As tools like ChatGPT become ubiquitous, workers and managers face a stark choice: which tasks should be handed to algorithms, and which require the irreplaceable qualities of human judgement?
The answer matters. Get it wrong and organisations risk data breaches, discrimination claims, and the erosion of skills that took decades to build. Get it right and workers could be freed from drudgery to focus on creativity, strategy, and the human connections that no machine can replicate.
The automation dilemma
Walk into any British office today and you will find workers experimenting with AI, often without their employer's knowledge. They are using ChatGPT to draft emails, summarise reports, and generate ideas. Some are feeding it confidential client data or sensitive HR information, blissfully unaware they may be breaching data protection law.
Meanwhile, executives are under pressure to demonstrate they have an "AI strategy". The result is often hasty deployment of tools that automate the wrong things, or worse, automate bias and error at scale.
The fundamental question is not whether AI works—it demonstrably does for certain tasks—but whether we are deploying it wisely. And the evidence suggests we are not.
What AI does well (and what it does not)
Artificial intelligence excels at repetitive, pattern-based tasks where speed matters and errors are easily caught. Summarising lengthy documents, transcribing meetings, categorising emails, drafting routine correspondence—these are areas where AI delivers measurable gains without significant risk.
The technology is also remarkably good at generating plausible-sounding text, which is both its strength and its greatest danger. AI will confidently invent statistics, fabricate citations, and present fiction as fact. It has no understanding of truth, only patterns in the data it was trained on.
This makes AI a poor choice for anything requiring accuracy, accountability, or ethical judgement. High-stakes decisions—hiring, dismissals, medical diagnoses, legal advice—cannot be delegated to systems that lack understanding and cannot be held responsible when things go wrong.
Yet this is precisely what some organisations are attempting. Recruitment tools that screen CVs have been found to discriminate against women and ethnic minorities. Performance management systems that monitor workers' keystrokes and mouse movements are creating toxic cultures of surveillance. Customer service chatbots are infuriating people with scripted responses that ignore context and nuance.
The pattern is clear: AI works when it assists humans with well-defined, low-stakes tasks. It fails when it replaces human judgement in complex, high-stakes situations.
The human skills AI cannot replicate
There are tasks that should never be automated, not because the technology is incapable, but because the act of doing them is itself valuable.
Consider a manager delivering difficult feedback to an employee. An AI could generate the words, but it cannot read body language, adjust tone in response to emotion, or demonstrate the empathy that makes criticism constructive rather than destructive. The conversation is not simply about transmitting information; it is about maintaining a relationship and preserving dignity.
Or consider a solicitor advising a client facing financial ruin. The legal analysis might be straightforward, but the client needs reassurance, strategic thinking, and someone who will be accountable if the advice proves wrong. No algorithm can provide that.
These are not edge cases. Much of the most valuable work in any organisation involves navigating ambiguity, building trust, exercising ethical judgement, and taking responsibility. These are fundamentally human acts that lose their meaning when delegated to machines.
The risk is not that AI will become too intelligent, but that we will become too willing to outsource our thinking. Already, there are reports of workers losing the ability to write clearly because they rely on AI to do it for them. Junior solicitors who never draft their own contracts will struggle to become senior ones. Graduates who used AI to write their essays may find they cannot think critically under pressure.
The governance gap
The deeper problem is that AI is being deployed faster than organisations can govern it. Most British companies have no policy on AI use. Staff are left to make their own judgements about what is appropriate, often with little understanding of the risks.
The result is a patchwork of ad hoc experimentation. One team uses AI to generate marketing copy and publishes it without review. Another feeds customer data into a free chatbot, inadvertently sharing it with a US tech company. A third automates expense approvals and discovers too late that the system has been approving fraudulent claims.
What is needed is not a ban on AI, but thoughtful governance. Organisations must decide which tools are approved, what data can be shared with them, and where human oversight is mandatory. They must train staff to use AI effectively and to recognise its limitations. And they must be clear about accountability: when AI-assisted work goes wrong, a human must be responsible.
This is not happening at scale. The UK government has taken a light-touch approach to AI regulation, preferring guidance to legislation. The EU's AI Act, which comes into force next year, will impose stricter requirements, but only on "high-risk" systems. For most workplace AI, the rules remain vague.
A framework for decisions
So how should organisations decide what to automate? A simple framework helps.
Automate tasks that are repetitive, high-volume, and low-stakes. Summarising documents, scheduling meetings, categorising data: these are safe bets. The worst-case scenario is a minor error that is easily corrected.
Use AI to assist, not replace, in medium-stakes tasks. Research, drafting, and preliminary analysis can be accelerated by AI, but humans must review the output and take responsibility for it. Think of AI as a very fast, very confident junior colleague who needs supervision.
Keep humans in control of high-stakes decisions. Anything involving significant money, legal liability, personal data, or people's livelihoods should remain human-led. AI can provide information to support these decisions, but it cannot make them.
Never automate tasks that require empathy, ethics, or accountability. Redundancy consultations, crisis communications, strategic planning—these are not simply about producing an output, but about exercising judgement and taking responsibility. They are irreducibly human.
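To make the framework concrete, here is a minimal sketch of how an organisation might encode the four tiers in an internal task register. The task names, risk labels, and routing rules are illustrative assumptions for the purpose of the example, not part of any standard or product.

```python
from enum import Enum


class Risk(Enum):
    LOW = "low"                # repetitive, high-volume, easily corrected
    MEDIUM = "medium"          # AI may draft, but a human must review
    HIGH = "high"              # money, legal liability, personal data, livelihoods
    HUMAN_ONLY = "human_only"  # empathy, ethics, accountability

# Illustrative task register; a real one would be maintained by governance staff.
TASK_REGISTER = {
    "summarise_document": Risk.LOW,
    "draft_marketing_copy": Risk.MEDIUM,
    "screen_job_applicants": Risk.HIGH,
    "redundancy_consultation": Risk.HUMAN_ONLY,
}


def route_task(task: str) -> str:
    """Return the handling rule for a task under the four-tier framework."""
    # Unknown tasks default to the cautious tier rather than the permissive one.
    risk = TASK_REGISTER.get(task, Risk.HIGH)
    if risk is Risk.LOW:
        return "AI may complete this task; spot-check outputs."
    if risk is Risk.MEDIUM:
        return "AI may draft; a named human must review and sign off."
    if risk is Risk.HIGH:
        return "Human-led; AI may only supply supporting information."
    return "Human only; do not use AI for any part of this task."


if __name__ == "__main__":
    for task in TASK_REGISTER:
        print(f"{task}: {route_task(task)}")
```

The detail that matters is the default: anything not explicitly classified falls into the cautious tier, so new uses of AI have to be argued in rather than quietly adopted.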
The safe starting points
For organisations unsure where to begin, there are low-risk workflows that offer quick wins.
Use AI to transcribe meetings and extract action points, with a human reviewing before distribution. Deploy chatbots to answer routine internal queries, with clear escalation paths to humans for complex issues. Automate the generation of standard reports, with human sign-off before they are shared. Use AI to proofread documents, but not to write them from scratch.
These workflows share a common feature: AI does the tedious work, but humans retain control and accountability.
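As a sketch of what retaining control can mean in practice, consider the meeting-notes workflow described above: the AI produces a draft, but nothing is distributed until a named reviewer supplies the final wording. The function names and the summarise and send callables below are placeholders for whichever approved tools an organisation actually uses; this is an outline of the control point, not an implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class DraftSummary:
    meeting_id: str
    text: str
    approved_by: Optional[str] = None  # stays None until a named human signs off


def generate_draft(meeting_id: str, transcript: str,
                   summarise: Callable[[str], str]) -> DraftSummary:
    """The AI does the tedious work: turn a raw transcript into a draft summary.
    `summarise` stands in for whichever approved tool is in use."""
    return DraftSummary(meeting_id=meeting_id, text=summarise(transcript))


def review_and_distribute(draft: DraftSummary, reviewer: str, final_text: str,
                          send: Callable[[str], None]) -> None:
    """Nothing is distributed until a named reviewer supplies the final wording,
    so accountability rests with the reviewer rather than the tool."""
    draft.text = final_text
    draft.approved_by = reviewer
    send(draft.text)


if __name__ == "__main__":
    # Toy stand-ins for the AI tool and the distribution channel.
    draft = generate_draft("weekly-ops", "long transcript goes here",
                           summarise=lambda t: "Draft action points")
    review_and_distribute(draft, reviewer="j.smith",
                          final_text="Action points (reviewed)",
                          send=print)
```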
The high-risk traps
Equally, there are workflows that should not be automated without expert guidance and robust safeguards.
Do not use AI to screen job applicants unless you have tested it rigorously for bias and maintained human oversight. Do not automate performance evaluations; the damage to morale and the risk of unfair treatment claims are too great. Do not use AI to generate legal documents, financial advice, or regulatory submissions without qualified professionals reviewing every word.
And never, ever use AI to handle sensitive communications (grievances, complaints, crisis response) where tone, empathy, and context are everything. The reputational damage from a botched AI response can take years to repair.
The skills question
Perhaps the most profound question is what happens to workers' skills in an AI-augmented workplace. If junior staff never learn to write clearly because AI does it for them, how will they develop into senior leaders who can think and communicate effectively?
This is not a hypothetical concern. Law firms report that trainee solicitors who rely heavily on AI struggle with independent legal reasoning. Marketing agencies find that junior creatives who use AI for ideation produce derivative work. Accountancy firms worry that graduates who automate calculations never develop the numerical intuition that catches errors.
The solution is not to ban AI, but to be intentional about skill development. Rotate staff through tasks that require them to work without AI assistance. Maintain training in core competencies: writing, analysis, and critical thinking. Treat AI as a tool that enhances expertise, not a substitute for developing it.
The accountability imperative
Ultimately, the question of what to automate comes down to accountability. When work goes wrong, someone must be responsible. AI cannot be sued, cannot be disciplined, cannot learn from its mistakes in any meaningful sense.
This means that any task where accountability matters (and that is most tasks of consequence) must have a human in the loop. Not a human rubber-stamping AI decisions, but a human genuinely exercising judgement and taking responsibility.
The organisations that thrive in the age of AI will be those that use it to amplify human capabilities, not replace them. They will automate the tedious and repetitive, freeing workers to focus on the creative, strategic, and interpersonal work that machines cannot do.
They will also be the organisations that resist the siren call of total automation, recognising that some tasks are valuable precisely because humans do them. The future of work is not humans versus machines, but humans and machines working in partnership—with humans firmly in charge.