Agentic AI and the Boundaries of Contractual Liability: A (Hypothetical) Bangladesh Perspective


Every age has a buzzword that sells to the masses; since 2023, that buzzword has been Artificial Intelligence. What started out as a consumer-grade chatbot answering questions has evolved into systems capable of running basic to mid-tier business tasks autonomously via consumer-grade agentic AI. Agentic AI refers to highly autonomous systems that can independently plan, reason, decide, and execute multi-step tasks with minimal human input: somewhat like Jarvis from Iron Man. Unlike conventional consumer-grade AI such as ChatGPT or Gemini, agentic AI mirrors the concept of human agency by autonomously handling real-life tasks delegated to it.

So, with more and more businesses facing the choice of adopting emerging AI technologies into their workflows, who becomes liable for the actions of these "agentic AI" systems? The question becomes all the more important given that more and more entry-level and basic mid-tier tasks are being delegated to agentic AI models. In our legal jurisdiction of Bangladesh, the position is even vaguer, as there is a huge gap between the pace at which technologies emerge and the pace at which laws and regulations are adopted to govern them.

Bangladesh does not have any law or regulation specifically addressing AI liability. Unfortunately, the draft National AI Policy 2026-2030, which aims to regulate AI by 2030, does not address this issue.

For now, AI liability under Bangladesh's contract law remains pinned on the human operating the tool, treating AI tools as extensions of the user. If the user deploys content produced by an AI tool, the user is liable for it. This effectively absolves AI companies of any form of legal liability, regardless of what their tools produce.

Automation via agentic AI may not cause irremediable losses when used internally within an organisation; however, it may result in major problems when deployed at scale to handle external tasks on behalf of an organisation.

Under Bangladeshi contract law, organisations cannot delegate core tasks requiring personal judgment, legal capacity, or authority to AI tools. Section 190 of the Contract Act, 1872 bars agents from delegating acts they must perform themselves unless trade custom or the necessity of the business permits a sub-agent. AI qualifies neither as a human sub-agent nor as a recognised entity holding any mandate.

It can be argued that the Companies Act 1994 permits boards, by resolution, to delegate to directors, managers, or human agents. AI, however, lacks legal capacity: it is not a "person", which s.3(39) of the General Clauses Act, 1897 defines as "shall include any company or association or body of individuals, whether incorporated or not". Any decision-making act by an agentic AI tool is therefore invalid.

Read alongside the draft AI Policy 2026-2030, courts are likely to view AI as a non-delegable tool, holding the delegating human or organisation liable while treating AI decisions as void and unenforceable.

Another important aspect of Bangladeshi law is capacity: Section 11 of the Contract Act requires that parties to a contract have legal capacity, meaning they have attained the age of majority (18), are of sound mind, and are not otherwise disqualified. This flags AI as a non-person lacking legal capacity. The requirement is to be read with s.10 of the Act, which demands free consent from competent parties; AI tools lack intent, rendering their output void. Furthermore, AI tools only execute what the user instructs, so liability falls on the operator.

On a wide interpretation of Chapter X (Sections 182-238) of the Contract Act 1872, which sets out the principles of agency, the operator of the AI tool is the principal and the AI tool the agent. Essentially, this makes the operator fully liable for any errors. Can the AI company which developed the AI tool be held accountable for such errors? Unfortunately, no: under s.73 of the Contract Act, the suffering party can receive compensation from the breaching party, and, as discussed above, the breaching party in this scenario is the operator, not the AI developer.

As of now, no precedent-setting case law exists; however, Bangladesh, being a common law jurisdiction, will draw on precedents from other common law jurisdictions. In this space, the precedents most commonly cited by lawyers abroad involve crypto and stock market trades executed by AI bots following the instructions of their users (operators).

Recent U.S. case law reinforces the idea that AI-generated records are treated as admissible evidentiary material rather than privileged communications. In United States v. Heppner (2026), Judge Jed S. Rakoff held that a defendant's AI-generated documents and chat history with a consumer-grade AI tool (Anthropic's Claude) were not protected by attorney-client privilege and were admissible as evidence in the criminal proceeding. The court reasoned that the AI platform is not an attorney, owes no fiduciary duties, and operates under a privacy policy that explicitly allows data collection and disclosure in litigation, so the user could have no reasonable expectation of confidentiality. This decision signals that, in leading common-law jurisdictions, AI-tool conversations are likely to be treated as discoverable and admissible where relevant, especially to show knowledge, planning, or state of mind.

Given the rapid adoption of this technology, it suffices to say that similar claims will emerge more frequently as cloud computing gets cheaper and more AI developers enter the market offering their own products.

The Cyber Security Act 2023 and the draft National AI Policy 2026-2030 regulate data and ethics but ignore contractual personhood for AI, leaving the matter to a 150-year-old Contract Act as the sole guidepost. Some may argue that, for now, the Act can be applied and interpreted as needed; however, given the pace of these emerging technologies and other ancillary developments, it may soon prove not fit for purpose, requiring a more modern, accessible approach to handling such disputes.

First published by BD Opinion Jurists.

Written by

Shafqat Aziz

Barrister (of Lincoln's Inn)

LLM Corporate Law, NTU

Industry & Alumni Fellow, NTU

PGDL, UWE Bristol

LLB, BPP University

Accredited Civil-Commercial Mediator (ADR-ODR International)

https://www.linkedin.com/in/shafqat-aziz-29a3a5171/
