
The Hidden Dangers of AI Note-Taking Assistants: What Leaders Need to Know

Adrian Missy

March 19, 2025

In an era where technology is rapidly reshaping how we work across every industry, AI-powered tools promise efficiency and innovation. As with all advancements, however, there are hidden risks.

The other day I came across an incident in which a team encountered a startling situation involving an AI notetaker. It underscored the potential dangers these tools pose to company security and prompted me to share my insights with you.

The way this particular AI behaved reminded me of the ’90s Pokémania that swept the globe, so I sprinkled a little Clefairy dust into this article, purely for fun.



A wild AI Notetaker appears!

It began innocently enough during an internal Zoom meeting on a Tuesday afternoon. An AI notetaker (let’s call it Gary) joined the call. Despite the initial confusion, the participants decided to proceed, knowing that some departments were testing the capabilities of different AI tools like this internally. After the meeting, Gary sent out a detailed meeting summary report to every participant and reminded everyone that this report was created on behalf of a teammate, let’s call them Misty.

Woah, how nice of you, Gary!

Misty.
Misty?
Misty, who was not part of the aforementioned AI testing squad?
Misty, who hadn’t even attended the call, but was simply invited?

This circumstance raised eyebrows and red flags, and prompted a deeper investigation.



It’s time to do some exploring!

When we reached out to Misty, they were quite confused: they had neither set up nor sent out any AI on a note-taking journey, and most definitely not Gary!

They remembered attending a client meeting where this note-taking AI was also present. After the meeting, Misty received an email from Gary offering a summary of the meeting. When they followed the link to access the report, the AI company asked them to create an account and offered Single Sign-On (SSO) options to do so. Since this came both from a client and from a meeting Misty had just attended, they used their company Google SSO account to access the post-meeting reports.

This simple action granted the AI sweeping permissions: access to Misty’s calendar, plus a default setting that unobtrusively added Gary to all future Zoom calls.
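
To make the mechanics concrete, here is a minimal sketch of what such a consent request can look like under the hood. This is not Gary’s actual integration; the client ID and redirect URI below are placeholders, but the scope URLs are real Google OAuth scopes, and the calendar one is exactly the kind of permission that can ride along with a simple “Sign in with Google” button.

```python
# Hypothetical example: how a "Sign in with Google" request can bundle
# calendar access alongside basic profile scopes. The client ID and redirect
# URI are placeholders; the scope URLs are real Google OAuth scopes.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

params = {
    "client_id": "notetaker-app.apps.googleusercontent.com",   # placeholder
    "redirect_uri": "https://notetaker.example.com/oauth/cb",  # placeholder
    "response_type": "code",
    "access_type": "offline",  # requests a refresh token for ongoing access
    "scope": " ".join([
        "openid",
        "email",
        "profile",
        # The part most users never read: read access to the whole calendar.
        "https://www.googleapis.com/auth/calendar.readonly",
    ]),
}

print(f"{AUTH_ENDPOINT}?{urlencode(params)}")
```

One click on a consent screen built from a URL like that, and the vendor can read every upcoming meeting on the calendar, which is all an integration needs to start joining calls automatically.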

Woah, what a move, Gary!

The implications were alarming. The AI was now passively attending Misty’s meetings, recording the proceedings, and generating notes that were distributed to participants, who were then compelled to create accounts themselves.

This snowball effect, which uses known contacts as references and highly contextual relevance (a recent meeting) to generate trust in order to spread, reminded me of computer virus behavior combined with a clever social engineering strategy.



Be careful! There’s a time and place for everything.

Now, until this point, you might say:

But I get effortless, compelling meeting notes in no time, and everyone is in the loop automatically. Why is this concerning, Adrian?

Let’s look at the implications of this case from a couple of different angles:

1. Third-Party Access:

Gary gained unchecked entry to most of Misty’s video calls, including sensitive discussions, posing significant security risks without Misty’s awareness.

2. Data Usage Policies:

A review of this particular AI service’s privacy policy revealed that collected data can be used to enhance Gary’s models. Since participants only agreed to the Zoom call being recorded, not to their data being evaluated by a third party, this raised ethical and confidentiality concerns. The derived data includes ethnic features, attention rate, hierarchy (rank, department), and more.

3. Client Confidentiality:

With an AI notetaker participating in meetings without its user even attending, client information could be exposed, inadvertently breaching NDAs and damaging trust.
Unlikely? Think about how many people in your company are added to meetings as optional attendees. If anything confidential is mentioned during such a meeting (maybe by accident), that information has now run through a third-party service, is potentially stored on third-party servers, and was sent out as an overview report to every invited participant of that meeting.



Raise it well, and it will grow strong.

To safeguard your company and mitigate these risks, consider the following strategies:

Secure SSO Access

Tighten permissions for third-party services linked to your SSO provider to prevent unauthorized access to confidential parts of your business, such as calendars or cloud storage.
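
If your company runs on Google Workspace, you can also audit which third-party apps people have already authorized. Below is a rough sketch using the Admin SDK Directory API; it assumes a service account with domain-wide delegation and the user-security scope, and the email addresses are placeholders.

```python
# Rough sketch: list third-party OAuth grants for one user in Google Workspace.
# Assumes a service account key with domain-wide delegation; email addresses
# are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")  # impersonate a Workspace admin

directory = build("admin", "directory_v1", credentials=creds)

# Every token this user has granted to a third-party app, with its scopes.
tokens = directory.tokens().list(userKey="misty@example.com").execute()
for token in tokens.get("items", []):
    print(token.get("displayText"), token.get("scopes"))
```

Any entry holding calendar or drive scopes that nobody recognizes is worth a conversation.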

Block Note-Taking Bots

This can be as simple as making sure every single company meeting has a waiting room in place and the host has to admit each participant, reducing the risk of unwanted bots. You can also utilize available resources, like this guide published by the California College of the Arts, to harden security even further and prevent bots from sneaking into Zoom meetings.
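
If you manage a larger roster, you probably don’t want to toggle that setting by hand. Here is a hedged sketch of how the waiting-room requirement could be enforced through Zoom’s user settings API; the token handling is simplified, and the exact field name is worth double-checking against Zoom’s current API reference.

```python
# Hedged sketch: enable the waiting room for a user via Zoom's REST API.
# Assumes you already hold a Server-to-Server OAuth access token; the user ID
# is a placeholder and the in_meeting.waiting_room field should be verified
# against Zoom's API docs.
import requests

ZOOM_API = "https://api.zoom.us/v2"
ACCESS_TOKEN = "your-server-to-server-oauth-token"  # placeholder

def enable_waiting_room(user_id: str) -> None:
    """Require the host to admit every participant of this user's meetings."""
    response = requests.patch(
        f"{ZOOM_API}/users/{user_id}/settings",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"in_meeting": {"waiting_room": True}},
        timeout=10,
    )
    response.raise_for_status()

enable_waiting_room("misty@example.com")
```

Zoom admins can also apply and lock settings like this at the account or group level, which is the sturdier option if you want the waiting room to stay on.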

Implement AI Policies

Most importantly, sit down and develop clear AI guidelines that align with your venture, then educate your team on the responsible use of AI tools, ensuring alignment with your company’s security standards. Even if you are not using those tools (yet), your partners, clients, or vendors could be, which in turn affects your business faster than you might think.



Gotta check ’em all!

In our fast-paced tech landscape, companies must be vigilant. While AI tools offer massive productivity gains, their risks simply can’t be overlooked. Proactive measures such as AI policies, tightening your SSO security, and checking third-party permissions are essential to protect sensitive information and maintain your clients’ trust.
To, sort of, quote Professor Oak again:

A world of dreams and adventures with AI awaits! Let’s go!

Adrian is taking robotic notes of note-taking robots at Livefront