District of Columbia AI Meeting Recording Laws (2026)

The District of Columbia's one-party consent law makes it one of the more straightforward jurisdictions for AI meeting recorder compliance. Under D.C. Code § 23-542, a person who is a party to a conversation may record it without the other participants' knowledge, and that same principle extends to AI tools activated by a consenting participant. But straightforward does not mean risk-free. DC's position as the hub of the federal government introduces complications that no other jurisdiction shares: federal agency recording policies, the Privacy Act of 1974, and classified information protocols can all override the baseline consent framework for meetings involving government employees.
The penalties for getting it wrong are substantial. Criminal violations carry up to five years in prison and $12,500 in fines, while civil claims allow statutory damages that accumulate at $100 per day with a $1,000 floor. As of April 2026, the growing wave of litigation against AI meeting tools nationally, including the Otter.ai class action and the Ambriz v. Google ruling, means that even one-party consent jurisdictions like DC require careful attention to how AI recording tools are deployed. Consult an attorney for advice specific to your situation.
DC's One-Party Consent Framework
The Core Statute: D.C. Code § 23-542
D.C. Code § 23-542 prohibits the willful interception of any wire or oral communication. The statute makes it unlawful to intentionally intercept, disclose, or use the contents of any wire or oral communication obtained through unauthorized interception.
The critical exception for AI meeting recorders appears in subsection (b)(3). A person not acting under color of law may intercept a wire or oral communication where that person is a party to the communication, or where one of the parties to the communication has given prior consent to the interception. This exception has one important limitation: the interception cannot be carried out for the purpose of committing any criminal or tortious act, or for committing any other injurious act.
What "One-Party Consent" Means for AI Tools
Under DC law, if a meeting participant activates an AI recording tool, that participant's own consent satisfies the statute. The tool does not need separate permission from every other person in the meeting. This applies equally to in-person meetings, telephone calls, and virtual meetings conducted over platforms like Zoom, Microsoft Teams, or Google Meet.
The consent must come from a party to the communication. An AI tool that records a conversation without any participant's knowledge or authorization would violate § 23-542 because no party has consented. The distinction matters for automated recording features: a tool that auto-joins meetings based on a calendar integration still requires that the account holder (who is a meeting participant) has authorized the recording for that specific meeting or category of meetings.
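The authorization requirement for auto-joining tools can be expressed as a simple gate: record only when the account holder is actually a participant and has opted that meeting (or its category) into recording. A hypothetical sketch, where the function and field names are invented for illustration and do not reflect any specific vendor's API:

```python
from dataclasses import dataclass, field

@dataclass
class RecordingPolicy:
    # Meeting categories the account holder has authorized for recording.
    authorized_categories: set = field(default_factory=set)
    # Specific meeting IDs the account holder has authorized individually.
    authorized_meetings: set = field(default_factory=set)

def may_auto_record(policy: RecordingPolicy, meeting_id: str,
                    category: str, holder_is_participant: bool) -> bool:
    """One-party consent works only if the consenting account holder
    is a party to the communication AND has authorized this recording."""
    if not holder_is_participant:
        return False  # no party has consented: a § 23-542 risk
    return (meeting_id in policy.authorized_meetings
            or category in policy.authorized_categories)

policy = RecordingPolicy(authorized_categories={"internal-standup"})
print(may_auto_record(policy, "m-42", "internal-standup", True))   # True
print(may_auto_record(policy, "m-42", "internal-standup", False))  # False
```

The key design point is the `holder_is_participant` check: a bot that joins a meeting its account holder never attends has no consenting party at all.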
The "Criminal or Tortious Purpose" Exception
DC's one-party consent exception contains a carve-out that can trip up AI meeting recorder users. If the recording is made for the purpose of committing a criminal act, a tortious act, or any other injurious act, the one-party consent exception does not apply. Recording a meeting to gather evidence for insider trading, to facilitate blackmail, or to engage in corporate espionage would fall outside the consent exception and constitute a criminal interception.
For typical business uses of AI meeting recorders (generating transcripts, action items, and meeting summaries), this exception is unlikely to apply. But organizations using AI meeting tools to conduct covert competitive intelligence or to build legal cases against employees should consider whether the purpose behind the recording could invalidate the consent exception.

AI Meeting Recorders Under DC Law
How AI Recording Tools Operate
AI meeting recording tools like Otter.ai, Fireflies.ai, Microsoft Copilot, and Zoom AI Companion work by capturing audio from virtual or in-person meetings, transmitting that audio to remote servers, and processing it through speech recognition models to produce transcripts, summaries, and other outputs. Some tools join meetings as visible bot participants. Others operate in the background through platform integrations.
Under DC's one-party consent framework, the legality of these tools hinges on whether a consenting party activated them. If the meeting host or a participant enabled the AI recorder, that person's consent satisfies § 23-542 for DC purposes.
When One-Party Consent Is Not Enough
DC's permissive consent standard does not override the laws of other jurisdictions. When a DC-based participant records a virtual meeting that includes participants in all-party consent states like California, Connecticut, or Illinois, the stricter consent requirements of those states apply to protect their residents. An AI meeting tool activated by a DC participant in a multi-state meeting must comply with the most restrictive applicable law.
This is the most common compliance pitfall for DC-based organizations. A government contractor headquartered in Washington, DC, conducting a Zoom meeting with team members in California (all-party consent under Cal. Penal Code § 632) and Illinois (all-party consent under 720 ILCS 5/14-2, plus BIPA voiceprint protections) cannot rely solely on DC's one-party standard.
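In practice, the multi-state check reduces to taking the strictest rule across all participants' jurisdictions. A minimal sketch, assuming a simplified two-tier classification; the state list below is an illustrative subset, not an exhaustive or authoritative survey, and real determinations belong with counsel:

```python
# Illustrative subset of states generally treated as all-party consent.
ALL_PARTY_STATES = {"CA", "CT", "IL", "FL", "MD", "MA", "MT", "NH", "PA", "WA"}

def required_consent(participant_states: set) -> str:
    """Return the consent standard a multi-state meeting must satisfy.
    If any participant sits in an all-party state, everyone must consent."""
    if participant_states & ALL_PARTY_STATES:
        return "all-party"
    return "one-party"

# DC host with California and Illinois participants:
print(required_consent({"DC", "CA", "IL"}))  # all-party
# DC-only meeting:
print(required_consent({"DC"}))              # one-party
```

Because the strictest applicable law controls, adding even one all-party-state participant flips the whole meeting to an all-party requirement.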
Data Use and AI Training Concerns
Several AI meeting tools use recorded audio and transcripts to train and improve their machine learning models. The Otter.ai class action (Brewer v. Otter.ai, Inc., N.D. Cal., No. 5:25-cv-06911, filed August 2025) alleges that Otter retains conversational data indefinitely and uses it to refine its speech recognition technology without participant consent.
Even in one-party consent jurisdictions like DC, using intercepted communications for purposes beyond the original recording (such as AI model training) may implicate additional legal theories. The capability test established in Ambriz v. Google (N.D. Cal., Feb. 2025) held that a company's technical capability to use intercepted data for its own purposes was sufficient to state a wiretap claim under California law. While that ruling arose under California law, it highlights the expanding legal theories that could affect AI meeting tool providers and users regardless of jurisdiction.
However, the Ninth Circuit's August 2025 ruling in Popa v. Microsoft took the opposite approach, requiring plaintiffs to demonstrate that actual harm occurred rather than relying on the mere capability to cause harm. This tension between the Ambriz and Popa rulings remains unresolved as of April 2026, creating uncertainty for AI meeting tool providers.
Popular AI Meeting Tools and DC Compliance
DC's one-party consent standard makes most AI meeting tools legally compliant when activated by a participating party. The compliance differences between tools matter more for transparency and participant trust than for strict legal compliance in DC.
Otter.ai / OtterPilot
Otter's AI notetaker can auto-join meetings via calendar integration and appears as a visible participant named "Otter.ai." Under DC law, if the Otter account holder is a party to the meeting, that person's consent is sufficient. However, Otter's automatic joining without per-meeting authorization raises questions when the account holder forgets the tool is enabled or when Otter joins meetings where the account holder is absent.
Microsoft Teams / Copilot
Microsoft Teams provides built-in recording notifications and requires the meeting organizer to enable recording. When Copilot is active, Teams displays an indicator to all participants. This transparency exceeds DC's legal requirements, where no notification is necessary under one-party consent. The notifications serve as best practice rather than legal obligation in DC.
Zoom AI Companion
Zoom provides both visual and audio indicators when recording or AI features are active. Meeting hosts control whether AI Companion features are enabled, and participants see clear indicators. Like Teams, Zoom's notification approach exceeds what DC law requires.
Fireflies.ai
Fireflies joins meetings as a visible bot participant. The tool's presence in the participant list serves as de facto notification, though DC law does not require even this level of transparency. Users should still verify that their own consent to the recording satisfies the one-party requirement.
Google Meet Recording
Google Meet displays a recording indicator and announces when recording starts. Participants who object can leave the meeting. While Google Meet's approach is designed for all-party consent jurisdictions, it functions smoothly in DC's one-party framework as well.

Penalties for Violating DC Recording Laws
Criminal Penalties
D.C. Code § 23-542 imposes criminal penalties for three categories of violations:
| Violation | Maximum Imprisonment | Maximum Fine |
|---|---|---|
| Willful interception | 5 years | $12,500 |
| Willful disclosure of intercepted contents | 5 years | $12,500 |
| Willful use of intercepted contents | 5 years | $12,500 |
The fine amounts are set by D.C. Code § 22-3571.01, which establishes a fine schedule based on the maximum imprisonment for the underlying offense. For offenses carrying up to 5 years imprisonment, the maximum fine is $12,500.
Civil Remedies
D.C. Code § 23-554 creates a private right of action for any person whose wire or oral communication is intercepted, disclosed, or used in violation of the wiretap statute. Recoverable damages include:
| Damage Type | Amount |
|---|---|
| Actual damages | Amount suffered |
| Statutory damages | $100 per day of violation |
| Minimum recovery | $1,000 (if statutory damages would be less) |
| Punitive damages | Available at court discretion |
| Attorney fees | Recoverable by prevailing plaintiff |
The statute awards whichever is greater: actual damages, or statutory damages calculated at $100 per day of violation with a $1,000 minimum. The $100 per day calculation means a recording that persists for 30 days could generate $3,000 in statutory damages per affected individual, even without proof of specific harm.
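The greater-of calculation can be illustrated with a short sketch. This is a simplified model of the § 23-554 damages formula for illustration only (it ignores punitive damages and attorney fees, and the helper name is invented):

```python
def dc_statutory_recovery(actual_damages: float, days_of_violation: int) -> float:
    """Illustrative model of D.C. Code § 23-554 damages:
    the greater of actual damages or statutory damages,
    where statutory damages are $100/day with a $1,000 floor."""
    statutory = max(100 * days_of_violation, 1000)
    return max(actual_damages, statutory)

# A 30-day violation with no provable actual harm:
print(dc_statutory_recovery(0, 30))   # 3000
# Even a 5-day violation hits the $1,000 floor:
print(dc_statutory_recovery(0, 5))    # 1000
```

Note how the floor matters for short violations: below 10 days, the $1,000 minimum exceeds the per-day accumulation.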
Government Liability
D.C. Code § 23-554 explicitly defines "person" to include the District of Columbia government and prohibits the District from asserting governmental immunity to avoid liability. This provision means DC government agencies that illegally intercept communications face the same civil liability as private actors.
Device Confiscation
Under D.C. Code § 23-544, any device used in violation of the wiretap statute is subject to seizure and forfeiture. For AI meeting recorders, this could theoretically extend to the computers, phones, or other hardware running the recording software.
Employer and Workplace Considerations
DC Private Sector Employers
DC does not have a standalone employee electronic monitoring statute comparable to Connecticut's § 31-48d or Delaware's 19 Del. C. § 705. Private employers in DC are governed by the general wiretap statute (§ 23-542), meaning one-party consent applies to workplace recordings. An employer who is a party to a workplace conversation can record it without notifying the employee, provided the recording is not for a criminal, tortious, or injurious purpose.
Despite the permissive legal standard, employment attorneys in DC generally recommend that employers adopt clear recording policies and provide notice to employees before deploying AI meeting tools. The absence of a legal requirement does not eliminate the practical risks of secret recording: employee morale, trust, and potential wrongful termination claims tied to how recorded information is used.
Federal Government Workplace Meetings
DC's unique position as the seat of the federal government creates workplace recording challenges found nowhere else. Many meetings in DC involve federal employees, and federal agencies maintain their own recording and surveillance policies that often exceed DC law's requirements.
Federal agencies generally prohibit employees from secretly recording workplace conversations, even in one-party consent jurisdictions like DC. Agency-specific policies frequently require express authorization before any recording occurs. An employee who secretly records a meeting using an AI tool may face disciplinary action up to and including termination, even if the recording itself is lawful under DC's wiretap statute.
The Privacy Act of 1974 (5 U.S.C. § 552a) adds another layer. Federal agencies that maintain systems of records containing personal information must comply with the Privacy Act's notice, access, and amendment requirements. AI meeting tools that create transcripts and store them in searchable databases could create Privacy Act obligations for the agency deploying the tool.

Government Contractors and Meetings with Federal Employees
Government contractors working in DC frequently participate in meetings with federal employees. These meetings may involve sensitive but unclassified information, controlled unclassified information (CUI), or classified material. AI meeting recorders are generally prohibited in meetings involving classified information, and CUI handling requirements may restrict the use of AI tools that transmit data to third-party servers.
Even for unclassified meetings, contractors should verify that their AI recording practices comply with the terms of their government contracts. Many federal contracts include clauses restricting the use of recording devices and requiring specific data handling protocols.
Classified and Sensitive Compartmented Information (SCI) Settings
AI meeting recording tools are categorically prohibited in Sensitive Compartmented Information Facilities (SCIFs) and other classified environments. Bringing any internet-connected recording device into a SCIF violates security protocols regardless of consent. This restriction applies to all smartphones, laptops, and other devices capable of running AI meeting recorders.
DC's AI Governance Framework
Mayor's Order 2024-028
On February 8, 2024, Mayor Muriel Bowser signed Mayor's Order 2024-028, establishing DC's Artificial Intelligence Values and creating strategic benchmarks for AI use in District government. The order articulates six core AI values: Clear Benefit to the People, Safety and Equity, Accountability, Transparency, Sustainability, and Privacy and Cybersecurity.
The order established an AI Taskforce within the executive branch and an Advisory Group on AI Values Alignment with public membership. DC agencies must submit AI strategic plans on a rolling schedule through fiscal year 2026, identifying how AI tools may improve performance and what safeguards will mitigate risks.
While Mayor's Order 2024-028 governs DC government's own use of AI rather than private sector AI use, it signals the District's policy direction. AI meeting recorders used by DC government agencies must align with these values, including privacy and transparency requirements that exceed the baseline wiretap statute.
Stop Discrimination by Algorithms Act (Proposed)
The DC Council has considered the Stop Discrimination by Algorithms Act in various forms since 2021. The proposed legislation would prohibit companies from using algorithms that produce discriminatory results in housing, education, employment, credit, and insurance decisions. It would require annual algorithmic audits and increase transparency about how personal data is used in automated decision-making.
As of April 2026, the Act has not been enacted. If passed, it could affect AI meeting tools that generate sentiment analysis, performance scores, or other outputs used in employment decisions. The proposed penalties include civil fines of up to $10,000 per violation and a private right of action allowing courts to award between $100 and $10,000 per violation.
Federal Law Context
The Federal Wiretap Act (18 U.S.C. § 2511)
18 U.S.C. § 2511 establishes the federal baseline of one-party consent. DC's wiretap statute mirrors this standard. Under D.C. Code § 23-556, the DC wiretap provisions supplement federal law and do not supersede or limit the federal statute except in cases of irreconcilable conflict.
This supplementary relationship means both DC and federal wiretap laws apply simultaneously. A recording that violates either the DC statute or the federal statute (or both) exposes the recorder to liability under each applicable law.
The Ambriz and Popa Decisions
Two 2025 federal court decisions frame the current legal landscape for AI meeting tools.
In Ambriz v. Google (N.D. Cal., Feb. 2025), the court ruled that Google's technical capability to use customer call data for AI training was sufficient to state a claim under California's Invasion of Privacy Act. The court denied Google's motion to dismiss, allowing the class action to proceed based on the "capability test," which asks whether the vendor had the ability to use intercepted data for its own purposes.
Six months later, the Ninth Circuit took the opposite approach in Popa v. Microsoft (9th Cir., Aug. 2025). The court held that the plaintiff lacked standing because she did not allege a concrete injury, rejecting the notion that a company's mere capability to misuse data constituted sufficient harm. The Ninth Circuit found the alleged injury was not analogous to recognized common-law privacy torts.
These conflicting rulings create uncertainty for AI meeting tool providers and users in DC. While DC applies one-party consent, the data handling practices of AI tools (particularly using meeting data for model training) could trigger federal claims regardless of state-level consent compliance.
The Otter.ai Class Action
The Otter.ai class action (Brewer v. Otter.ai, Inc., N.D. Cal., No. 5:25-cv-06911, filed August 2025) alleges that Otter's AI notetaker secretly records private conversations and repurposes them to train its machine learning models. The complaint asserts violations of the Electronic Communications Privacy Act (ECPA), the Computer Fraud and Abuse Act (CFAA), and California state privacy laws.
For DC users, the Otter.ai lawsuit illustrates that one-party consent to the initial recording does not necessarily authorize all downstream uses of the recorded data. Even if a DC participant lawfully consented to an Otter recording, the tool's alleged retention and AI training use of that data raises separate legal questions about data use, purpose limitation, and consent scope.

More DC Laws
Explore other District of Columbia law topics on Recording Law:
- DC Recording Laws
- DC AI Laws
- [DC Data Privacy Laws](/us-laws/data-privacy-laws/district-of-columbia-data-privacy-laws)
- DC Surveillance Camera Laws
This article provides general legal information about District of Columbia recording laws as they apply to AI meeting tools. Recording laws and their interaction with federal workplace policies are subject to change. Consult a DC-licensed attorney for advice specific to your situation.
Sources and References
- D.C. Code § 23-542 - Interception, disclosure, and use of wire or oral communications prohibited (code.dccouncil.gov)
- D.C. Code § 23-554 - Authorization for recovery of civil damages (code.dccouncil.gov)
- D.C. Code § 22-3571.01 - Fines for criminal offenses (code.dccouncil.gov)
- D.C. Code § 23-556 - Relation to Federal law on wire interception (code.dccouncil.gov)
- D.C. Code § 23-544 - Confiscation of intercepting devices (code.dccouncil.gov)
- Mayor's Order 2024-028 - DC AI Values and Strategic Plan (mayor.dc.gov)
- 18 U.S.C. § 2511 - Federal Wiretap Act (uscode.house.gov)
- AG Racine - Stop Discrimination by Algorithms Act (oag.dc.gov)
- Brewer v. Otter.ai Class Action (N.D. Cal., No. 5:25-cv-06911)(npr.org)
- Popa v. Microsoft - Ninth Circuit Ruling on AI Privacy Standing(wlf.org)