Picture this: you're a corporate lawyer racing against a deadline to draft a time-sensitive, legally binding document, or a financial accountant analysing balance sheets for a merger deal. Your client, a high-profile family-owned business, is on the brink of a major merger or acquisition. They share intricate details of their subsidiary assets, family disputes, and strategic vulnerabilities, trusting you as their go-to professional.
To shave hours off the task, you input a trove of confidential details into an AI platform for a summary, or maybe even a first draft. In a data centre thousands of miles away, a polished draft is generated in seconds; done manually, it would have taken hours, if not days.
But what if, unbeknownst to you, that AI company absorbs every word and later inadvertently reproduces fragments of your client's confidential data when a rival entity queries a similar scenario or asks for strategy simulations? While not guaranteed, this risk of data memorisation and disclosure is a real, non-trivial threat in every professional's workflow as we rush to embrace AI without questioning its risks.
At the core of this crisis lies the principle of fiduciary duty. Across numerous professions, lawyers, doctors, accountants, bankers, and therapists are bound by an obligation of confidentiality. This duty constitutes a relationship of trust, wherein the private concerns of clients are safeguarded with the utmost discretion, transforming personal disclosures into protected and sacred confidences.
Clients don't just share facts; they bare their souls, from a doctor's medical diagnosis to an accountant unearthing a potential fraud trail. Yet generative AI models, these cloud-based marvels, operate more like sponges than safes, absorbing every strand of data available to improve their future outputs. When you upload a patient's chart or a divorce settlement, the data travels to distant servers for processing, often fuelling model improvements and training unless explicitly barred.
Many consumer-grade tools, particularly free tiers, do not guarantee zero retention or prohibit training on user data, meaning your client's confidences can end up woven into a distant data centre's algorithms. Professionals who engage with such tools without safeguards risk borderline malpractice, outsourcing a non-delegable duty to a machine that neither understands ethics nor swears oaths.
The Hidden Dangers of AI
"Learning" and Hallucinations
The real horror? Two distinct risks:
(a) "hallucinations," where AI fabricates plausible-sounding but false content, potentially misleading courts or clients, and
(b) data memorisation, where models trained on user inputs can inadvertently reproduce snippets of confidential information submitted by other users. Imagine a forensic accountant inputting an embezzlement ledger; in rare but documented cases, such details have later surfaced in responses to other users querying similar fraud patterns. Real-world precedents chill the spine: tech engineers once fed proprietary code into public chatbots, only to watch fragments of it resurface in other users' queries.
For fiduciaries, this isn't a quirky bug; it's a disclosure event, piercing confidentiality with surgical precision. Professional regulatory bodies issue stark warnings: professionals must audit AI tools, anonymise data, and supervise outputs. Yet in the heat of deadlines, many gloss over terms of service, mistaking "helpful assistant" for "sworn confidant."
Layer on data protection laws,
and the noose tightens. Under the EU's GDPR and UK's Data Protection Act,
professionals are clearly designated as "data controllers," liable
for every byte processed by "processors" like AI vendors or cloud
giants. Bangladesh's Personal Data Protection Ordinance 2025 (PDPO 2025) is
adopting a similar controller-processor architecture, though detailed
regulations and enforcement practice are still emerging. Principles of purpose
limitation, minimisation, and accountability demand risk assessments and
appropriate safeguards before any input. Pasting sensitive personal data into
public AI interfaces without robust due diligence, lawful basis, and
contractual protections is likely to breach data protection obligations and
leave professionals exposed, particularly where cross-border transfers and vendor disclaimers create enforcement gaps.
If a confidentiality breach or fabricated content appears in a court filing due to AI use, regulators will scrutinise your competence: Did you foresee the risks? Secure consents? Document impact assessments? Such incidents can trigger coverage disputes with professional indemnity insurers, disciplinary sanctions from ethics boards, and client lawsuits, potentially becoming career-defining. The margin for error is razor thin.
Untangling the Web of
Accountability
When the vault cracks, who pays the price? You, the professional, sit at the apex. Fiduciary duty doesn't evaporate behind a "the AI did it" excuse. Just as you're liable for a paralegal's slip, you're the gatekeeper for the tools you unleash.
AI companies hide behind beta disclaimers and "use at your own risk" and "AI can make mistakes, please double-check" clauses; cloud hosts plead infrastructure neutrality. Shared liability glimmers: misrepresented privacy practices could ensnare providers under consumer-protection laws, but enforcement falters across jurisdictions.
Multinational behemoths wield
opaque terms, rendering litigation a fool's errand. Clients, the true victims,
foot the emotional bill: shattered trust in their hour of need. We've danced
this tango before with tech leaps: telephones, emails, the internet, each
birthing new ethics. AI demands we rediscover human custodianship amid the
code.
As a corporate lawyer navigating Bangladesh's Companies Act, where board governance and contract precision rule, AI tempts with sifting case law or drafting notices. Doctors in overcrowded clinics crave symptom triage; accountants chase faster audits. Geopolitical and economic uncertainty adds urgency: leaked strategies in tense regions invite exploitation. Efficiency seduces, but trust is eternal. One breached resolution could torch careers, families, futures.
Drawing the Line: Safeguards
for a Secure Future
The ethical line? Draw it ruthlessly:
- Anonymise without mercy: strip identifiers and generalise before input; names, dates, and figures out.
- Use enterprise-grade AI subscriptions: paid enterprise tiers typically commit to retaining data for only a limited number of days before deleting it from their servers, minimising the risk of information leaks via model memorisation.
- Vet vendors, secure consents: Demand audited
privacy warranties; inform clients explicitly of AI's role.
- Embed human oversight: Triple-check outputs;
no autopilot for deliverables.
- Formalize assessments: Conduct and log
data-protection impact checks, per regulatory mandates.
- Build firm resilience: Train teams, craft
incident plans, consult insurers early and at regular intervals.
- Push for policy: lobby for AI transparency, including data-flow disclosures, mandatory breach reports, and crisp liability rules.
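The anonymisation step above can be sketched programmatically. Here is a minimal, illustrative Python example of a redaction pass run before any text leaves your machine; the regex patterns and placeholder tokens are assumptions for demonstration, not a production-grade redaction tool, and real-world redaction would also need named-entity recognition and human review.

```python
import re

# Illustrative patterns only: dates, money amounts, and email addresses.
# A real workflow would cover far more identifier types.
PATTERNS = {
    "[DATE]":   re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[AMOUNT]": re.compile(r"[$£€]\s?\d[\d,]*(?:\.\d+)?"),
    "[EMAIL]":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str, client_names: list[str]) -> str:
    """Replace known client names and common identifiers with placeholders."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

# Hypothetical prompt a professional might otherwise paste in verbatim:
prompt = "Rahman Holdings paid $2,400,000 to its subsidiary on 12/03/2024."
print(redact(prompt, ["Rahman Holdings"]))
# -> [CLIENT] paid [AMOUNT] to its subsidiary on [DATE].
```

The generalised text still lets the AI summarise structure and suggest drafting language, while the mapping from placeholders back to real names, dates, and figures stays inside the firm.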
Policymakers must step up, crafting risk-based rules that foster innovation without fragility. In Bangladesh, aligning with global standards could safeguard our growing digital economy.
AI isn't the villain; it's a force multiplier, sifting volumes of case law or medical journals in the blink of an eye. But confidentiality isn't a relic to automate away; it's the profession's soul. The client eyes you across the desk, not the screen. Efficiency kneels to trust; betray it, and no algorithm absolves. Guard that human bond fiercely: our future depends on it.
Written by:
Shafqat Aziz
Barrister (Lincoln's Inn)
LLM Corporate Law, NTU
Industry & Alumni Fellow, NTU
PGDL, UWE Bristol
LLB, BPP University
Accredited Civil-Commercial Mediator (ADR-ODR International)
https://barristershafqataziz.blogspot.com/
https://www.linkedin.com/in/shafqat-aziz-29a3a5171/