AI and Duty of Care: Redefining Corporate Responsibility in the Age of Intelligent Systems


Generative AI is influencing how companies protect their people – and the law is catching up.

When an algorithm screens job candidates, assigns shift patterns, or flags safety risks, who bears responsibility if something goes wrong? As artificial intelligence moves from support tool to decision-maker, organizations face a fundamental question: How do traditional duties of care apply when machines are making calls that affect employee safety, wellbeing, and livelihoods?

The answer matters more than ever. From boardrooms in London to manufacturing floors in Hyderabad, AI is becoming integral to workforce management, crisis response, and operational safety. But this power needs to be balanced with accountability. Organizations that fail to govern AI responsibly do not just risk regulatory penalties – they risk the trust of their people and the resilience of their operations.

Why AI Impacts Duty of Care

Duty of Care has long required employers to take reasonable steps to prevent foreseeable harm to employees. However, using AI to discharge those obligations introduces complexity and risks that traditional frameworks have not yet fully contemplated.

Unlike human managers, AI systems process vast datasets at speed, identifying patterns and making predictions that would be impossible manually. Yet generative AI still ultimately works on probability. An AI tool might analyze thousands of variables to predict workplace accidents, optimize evacuation routes during crises, or monitor employee stress indicators through productivity data. These capabilities can genuinely enhance safety and wellbeing – but they also create new risks.

The challenge and risks lie in AI's distinctive characteristics: opacity in decision-making, potential for embedded bias, capacity for rapid scaling of errors, a significant risk of hallucinations, and the difficulty of assigning accountability when things go wrong. When an AI system makes a flawed recommendation that affects worker safety, is the employer liable? The software vendor? The data scientist who trained the model?

Courts and regulators increasingly answer: the organization or employer deploying the system bears primary responsibility. The principle is clear – using AI to make decisions does not reduce your Duty of Care. It expands it.

The Regulatory Landscape of AI

Organizations operating across borders now face a patchwork of emerging AI regulations, with Europe leading the charge.

The EU AI Act: High-Risk Designation for HR and Safety

The European Union's Artificial Intelligence Act entered into force in August 2024, establishing the world's first comprehensive legal framework for AI.1 For organizations managing people and safety, the implications are immediate.

The Act classifies AI systems used in employment, worker management, and access to essential services as "high-risk."1 This includes tools for recruitment, performance evaluation, task allocation, and monitoring worker behavior or health. As of February 2025, certain practices – such as using AI to detect or predict emotions in the workplace – are prohibited outright, with fines for breaches reaching up to €35 million or 7% of global annual turnover.10
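
For a sense of scale, the Act applies whichever figure is higher: a hypothetical deployer with €1 billion in global annual turnover could face a maximum fine of €70 million (7% of turnover), while for a smaller firm with €100 million in turnover the €35 million figure is the ceiling.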

Starting August 2026, deployers of high-risk AI systems must meet stringent obligations: transparent disclosure to workers before implementation, meaningful human oversight with authority to intervene, continuous monitoring for discriminatory impacts, and detailed logging of AI decisions for at least six months.1 These requirements apply not only to EU-based organizations but also to any company whose AI outputs are used in the EU – creating extraterritorial reach that affects global operations.
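
To make the logging obligation concrete, the sketch below (in Python) shows one way a deployer might structure a retained record of an AI-assisted decision. It is illustrative only: the Act does not prescribe a schema, and the field names and retention constant here are assumptions based on the obligations described above.

  from dataclasses import dataclass
  from datetime import datetime, timedelta

  MIN_RETENTION = timedelta(days=183)  # "at least six months"; exact figure is an assumption

  @dataclass
  class AIDecisionLogEntry:
      system_id: str        # which high-risk AI system produced the output
      timestamp: datetime   # when the recommendation was generated
      subject_ref: str      # pseudonymous reference to the affected worker
      model_output: str     # the recommendation as produced by the system
      human_reviewer: str   # who exercised oversight over the output
      overridden: bool      # whether the reviewer changed the outcome
      rationale: str        # reviewer's reasoning, kept for later audit

      def retention_met(self, now: datetime) -> bool:
          """True once the minimum retention window has elapsed."""
          return now - self.timestamp >= MIN_RETENTION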

International Frameworks: Organization for Economic Co-operation and Development (OECD) and ISO Standards

Beyond binding regulation, international standards provide guidance for responsible AI governance.

The OECD AI Principles, updated in May 2024 and now endorsed by 47 jurisdictions, emphasize systematic risk management throughout the AI lifecycle.2 Principle 1.5 explicitly requires organizations to address risks related to "harmful bias, human rights including safety, security, and privacy, as well as labor and intellectual property rights." The updated principles underscore that governments should "promote the responsible use of AI at work, to enhance the safety of workers" and ensure benefits are "broadly and fairly shared."2

ISO 31000:2018, the international standard for risk management, provides a framework that many organizations are adapting for AI-specific risks.3 While not AI-specific, its principles – integrated, comprehensive, and dynamic risk management – align closely with the challenges of governing algorithmic systems. Organizations are increasingly using ISO 31000 as a foundation to identify, assess, and mitigate AI-related risks across operations.

ISO 45003:2021, the world's first standard for psychological health and safety at work, gains new relevance as AI influences job design, workload distribution, and employee monitoring.4 The standard's focus on psychosocial risks – including work-life balance, autonomy, and organizational culture – directly intersects with concerns about algorithmic management and AI-enabled surveillance.

Corporate Governance and Traditional Fiduciary Duties

In the United States, the Caremark standard – established in the Delaware Court of Chancery's 1996 decision In re Caremark International Inc. Derivative Litigation – imposes fiduciary duties on boards to implement and monitor internal compliance systems.5 This builds upon traditional board-level duties to exercise due and reasonable care, skill, and diligence found in India's Companies Act 2013 and the UK's Companies Act 2006. Boards must develop sufficient understanding of the capabilities and limitations of AI so that they can competently evaluate the risks and ensure proper governance.

Recent cases extend Caremark principles to AI governance. Boards must understand how AI systems operate, what data they use, and what risks they pose – particularly in high-stakes domains like workforce safety and health. The 2023 In re McDonald's Corporation Stockholder Derivative Litigation decision confirmed that not only directors but also corporate officers have oversight duties, requiring them to "make a good faith effort to put in place reasonable information systems" and avoid "consciously ignoring red flags."6

UK courts, meanwhile, are uncovering problems with lawyers' use of AI. In a recent English High Court judgment,7 a statement of grounds contained a number of citations that did not exist, and witness statements from a client and a lawyer relied on numerous authorities that did not exist. In both instances the lawyers were fortunate not to face contempt proceedings, but the message – perhaps reiterating Caremark – was clear: "To mitigate the risk of similar situations arising in the future, those with managerial responsibilities within law firms, chambers and in-house legal teams should ensure that they publish policies and guidance on the responsible use of AI…" India is facing similar challenges with AI in its legal sector, with several cases8 involving the citation of non-existent cases and legislation.

For organizations deploying AI, this creates clear accountability: senior leaders must actively oversee AI systems that could cause "corporate trauma" – exactly what happens when AI failures harm employees or operations. Failure to establish adequate AI oversight mechanisms could expose directors and officers to personal liability.

The Organizational Implications for Risk, HR, and Safety Teams

AI fundamentally redefines the approach to corporate Duty of Care for security, risk, HR, and safety practitioners.

Risk & Compliance: AI governance is likely to become mission critical. This involves assigning risk ownership, inventorying AI systems impacting workers, conducting pre-deployment impact assessments, and continuously monitoring for bias and errors. Documentation of oversight, risk assessments, and corrective actions is crucial to demonstrating reasonable care.

HR & People Risk: HR is central to AI challenges, as tools affect recruitment, performance, and scheduling. Organizations must build trust by ensuring transparency – explaining how AI influences decisions, what factors are considered, and how decisions can be contested. Human oversight must be meaningful, with overseers having the training, authority, and protection to override AI recommendations, not just "rubber stamp" them. On May 16, 2025, the US District Court for the Northern District of California certified the first nationwide collective action under the Age Discrimination in Employment Act in a case alleging that the Workday AI system disproportionately rejects candidates over the age of 40. The novel theory of liability is that the AI system acts not as a mere software tool but as an agent of each employer that deploys it.9 This highlights the need for both employers and their vendors to carefully evaluate the effects of the AI they deploy, as both can be liable if biased decision-making results from an automated process.
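
One commonly used screening heuristic for this kind of disparity – drawn from the US EEOC's "four-fifths" rule of thumb – is to compare selection rates between groups and flag ratios below 80% for closer review. The Python sketch below applies it to hypothetical hiring numbers; the figures are made up for illustration and the ratio is a screening signal, not a legal test.

  def selection_rate(selected: int, applicants: int) -> float:
      return selected / applicants

  # Hypothetical screening outcomes, split by the age groups at issue
  rate_over_40 = selection_rate(selected=30, applicants=200)    # 0.15
  rate_under_40 = selection_rate(selected=90, applicants=300)   # 0.30

  impact_ratio = rate_over_40 / rate_under_40                   # 0.50

  if impact_ratio < 0.8:  # four-fifths threshold
      print(f"Impact ratio {impact_ratio:.2f} - potential adverse impact, escalate for human review")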

Health, Safety, & Security: AI offers predictive safety benefits (e.g., accident risk identification) but creates new hazards, such as triggering false alarms, overlooking risks due to biased data, or causing psychological stress via surveillance. AI safety tools must be governed rigorously: regularly tested, monitored for adverse psychological impacts, used to complement human judgment, and include accessible escalation pathways for workers.

Practical Steps: Adapting Now for an Algorithmic Future

Organizations can proactively strengthen their AI-related Duty of Care by taking the steps below, without waiting for regulation to take full effect.

  • Conduct AI Risk Assessments: Inventory AI systems affecting workers (worker management, safety), assess potential harms (bias, privacy, stress, errors) along with their likelihood and severity, and prioritize the highest-risk systems (a minimal scoring sketch follows this list).
  • Establish Clear Accountability: Appoint specific individuals (at operational and board levels, e.g., fulfilling Caremark duties) with the authority and resources for AI oversight.
  • Implement Meaningful Human Oversight: Design human review that is practical at AI's operational speed, such as slowing processes, using AI to flag edge cases, or tiered review systems.
  • Build Transparency into Systems: Design AI to provide workers with clear explanations for decisions (e.g., shift assignments, safety alerts).
  • Train Across the Organization: Mandate AI literacy training for managers (limitations), workers (rights), and board members (governance principles), aligning with requirements like the EU AI Act.
  • Monitor Continuously and Adapt: Establish regular review cycles (at least quarterly for high-risk systems) to check performance, audit for bias/discrimination, and verify human oversight effectiveness, as systems naturally "drift."
  • Document Everything: Maintain comprehensive records of all AI governance activities including risk assessments, monitoring, incidents, and board discussions to demonstrate good-faith compliance.
  • Recognize Risks: Using AI will not, of itself, amount to a defense for leadership in the event of a negligence claim.
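
As referenced in the first step above, one simple way to operationalize the risk assessment is a likelihood-by-severity score over the AI inventory. The Python sketch below is a minimal illustration; the scales, system names, and scoring rule are assumptions, not a prescribed methodology.

  # Score each inventoried AI system by likelihood and severity of harm (1-5 each),
  # then review the highest-scoring systems first.
  ai_inventory = [
      ("CV-screening tool", 4, 5),
      ("shift-scheduling model", 3, 4),
      ("site-safety alert system", 2, 5),
  ]

  scored = sorted(
      ((name, likelihood * severity) for name, likelihood, severity in ai_inventory),
      key=lambda item: item[1],
      reverse=True,
  )

  for name, score in scored:
      print(f"{name}: risk score {score}")   # CV-screening tool reviewed first (score 20)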

The Cultural Shift: From Compliance to Confidence

Ultimately, an AI-aware approach to Duty of Care is not just about avoiding liability. It is about building organizational resilience in an age of algorithmic decision-making.

The organizations that will thrive are those that view AI governance not as a legal burden but as a competitive advantage. When workers trust that AI tools genuinely enhance their safety and wellbeing rather than merely monitoring them, engagement increases. When customers see evidence of responsible AI governance, confidence grows. When regulators observe proactive compliance, enforcement actions become less likely.

This requires cultural change. Legal and risk teams must move from gatekeepers to enablers, helping the organization deploy AI responsibly rather than simply saying no. Technical teams must embrace transparency and explainability as design principles, not afterthoughts. Leadership must model accountability by asking hard questions about AI systems and accepting that not every efficiency gain is worth the risk.

Looking Ahead: The 2030 Vision

In five years, AI-driven Duty of Care will be shaped by three trends:

  1. Regulation: Expect expanding, harmonized global AI frameworks and case law defining "reasonable" AI oversight.
  2. Technology: Explainable AI, privacy-preserving techniques, and automated bias detection will become commercial standards, driven by regulation.
  3. Societal Expectations: Workers will demand AI transparency; investors will include AI governance in ESG; and insurance markets will adjust based on AI risk management.

Regardless of these changes, the fundamental duty remains: organizations deploying AI are responsible for its impacts and must take reasonable care to prevent foreseeable harm.

Duty of Care in the Algorithmic Age

AI offers huge potential for safer, more resilient organizations, but requires extending the Duty of Care to algorithms. This means understanding how AI systems work, ensuring human oversight for high-stakes applications, monitoring for harms, being transparent with workers, and documenting governance.

Regulators make clear that organizations are accountable for AI's impact; "the AI did it" is not a defense. For security, risk, HR, and safety professionals, AI governance is urgent and central to modern Duty of Care. Going beyond compliance demands rethinking how we govern systems affecting human wellbeing.

Organizations integrating AI governance with Duty of Care will build the trust and resilience needed to thrive. The future of Duty of Care is about deploying powerful AI with the same care, accountability, and commitment to human wellbeing it has always demanded.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Available at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj
  2. OECD, "Recommendation of the Council on Artificial Intelligence," as amended on 3 May 2024. Available at: https://oecd.ai/en/ai-principles
  3. ISO 31000:2018, Risk management — Guidelines; International Organization for Standardization, February 2018. Available at: https://www.iso.org/standard/65694.html
  4. ISO 45003:2021, Occupational health and safety management — Psychological health and safety at work — Guidelines for managing psychosocial risks; International Organization for Standardization, June 2021. Available at: https://www.iso.org/standard/64283.html
  5. US Case Law - Caremark; In re Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996). Available at: https://law.justia.com/cases/delaware/court-of-chancery/1996/13670-3.html
  6. US Case Law - McDonald's; In re McDonald's Corporation Stockholder Derivative Litigation, C.A. No. 2021-0324-JTL (Del. Ch. Jan. 25, 2023). Available at: https://courts.delaware.gov/Opinions/Download.aspx?id=343130
  7. English High Court judgment, [2025] EWHC 1383 (Admin). Available at: https://www.bailii.org/ew/cases/EWHC/Admin/2025/1383.html
  8. Damien Charlotin, AI Hallucination Cases database (India). Available at: https://www.damiencharlotin.com/hallucinations/?q=&sort_by=-date&states=India&period_idx=0; https://www.damiencharlotin.com/documents/925/KMG-Wires-Private-Limited.pdf; https://indiankanoon.org/doc/198343396/
  9. Holland & Knight LLP, "Federal Court Allows Collective Action Lawsuit Over Alleged AI Hiring Bias," Holland & Knight Alert, 27 May 2025.
  10. European Commission, "Commission publishes the Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act," Shaping Europe's digital future, 4 February 2025. Available at: https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act