Do you truly know who is joining your Teams calls?
With over 320 million users worldwide, Microsoft Teams is a dominant force in the communication and collaboration industry. But among the millions of Teams calls happening each day, do senior leaders truly know who they’re speaking to?
When the world shifted almost overnight to remote work in 2020, platforms like Microsoft Teams became indispensable. Usage skyrocketed: Teams alone saw a 70% increase in users in April 2020, cementing its role as the digital meeting room for organisations of all sizes. What began as a stopgap has since become a permanent transformation in how businesses communicate.
Yet with this shift came a subtle but significant new vulnerability. In a physical meeting room, identity is rarely in doubt: you can see who walks through the door. In virtual spaces, however, leaders rely almost entirely on screen names, login credentials, and video, none of which reliably confirm who is actually behind the camera. Accounts can be shared, profiles impersonated, and credentials compromised. In highly sensitive conversations involving confidential information, strategic plans, or financial decisions, confirming the identity of the user behind the screen is crucial to protecting your organisation.
This blog explores why this risk matters, and how integrating FARx into your organisation can instantly verify every user who joins your meetings, providing leaders with clear, real-time assurance about who they are truly talking to.
The problem
Identity fraud has become one of the most significant and costly threats facing organisations today. According to the UK’s leading fraud prevention service, 2024 saw a record-breaking 421,000 cases reported to the National Fraud Database, of which more than 249,000 involved identity fraud.
This upward trend highlights the growing sophistication of online fraud. Cybercriminals are continuously seeking new ways to infiltrate businesses and gain access to their operations and sensitive information. Thanks to advances in AI, fraudsters can now generate entirely fake identities and bypass traditional verification systems. Deepfake technology has accelerated the threat further, enabling highly convincing video and audio impersonations that outpace legacy biometric systems and make verifying who is really behind the screen harder than ever, particularly in virtual environments.
In early 2024, an employee at UK engineering firm Arup authorised what appeared to be a routine transfer of $25 million after joining a video call with senior management. In reality, the employee had been speaking to an AI-generated deepfake impersonating Arup executives.
A similar attempt targeted the world’s largest advertising group, WPP. Fraudsters created a WhatsApp account using a publicly available image of CEO Mark Read, then used it to set up what looked like a legitimate Microsoft Teams meeting with him and another senior executive. During the call, attackers deployed both a voice clone and repurposed YouTube footage to impersonate leadership, encouraging an employee to set up a new business entity in an effort to extract money and sensitive information.
Fortunately, this attack was unsuccessful, but the incident prompted a company-wide warning. In an email to staff, the CEO stressed the growing threat, writing: “We all need to be vigilant to the techniques that go beyond emails to take advantage of virtual meetings, AI and deepfakes.”
Yet despite this evolving risk, most organisations rely on basic cues to verify who has joined a virtual meeting: login credentials, a name appearing on the screen, or a request for cameras to be turned on. But these methods were never designed to confirm identity; they only confirm access. Login credentials can be shared across a team or compromised through phishing, profile names and photos can be changed instantly, and although turning on a camera creates a sense of visibility, it offers no real validation that the person on screen is the legitimate owner of the account being used.
This AI-enhanced threat landscape makes one thing clear: traditional methods of identity verification, such as passwords, PINs, one-time codes, or a voice or face alone, are no longer enough. Organisations must adopt stronger verification, such as AI-powered fused-biometrics technology, to provide continuous, multi-layered protection that outpaces a new era of AI-generated threats like deepfakes and synthetic voice.
The solution
The strongest form of defence lies in continuous biometric verification of the user’s identity. At FARx, we’re revolutionising the delivery of secure online services. Our proprietary AI biometric technology is designed to recognise humans the way humans recognise each other, protecting against fraudsters by combining voice and face recognition to verify who is really behind each sentence.
A one-time verification is no longer sufficient. Continuous, multifactor biometric verification ensures that the same real human remains on the call for its entire duration. FARx not only verifies that you are the right person, but also that you are a real person and not a recording or a deepfake.
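To make the idea of continuous, fused verification concrete, here is a minimal sketch of the general pattern: identity is re-checked on sampled audio and video throughout the call, with the two modality scores fused before a decision. This is purely illustrative; the function names, fusion weights, and threshold are assumptions for the sketch, not FARx’s actual API or models.

```python
import random

# Illustrative stand-ins: a real system would call trained face and voice
# recognition models. These stubs just return a plausible similarity score.
def face_match_score(frame, enrolled_face) -> float:
    """Similarity in [0, 1] between a video frame and the enrolled face."""
    return random.uniform(0.8, 1.0)

def voice_match_score(clip, enrolled_voice) -> float:
    """Similarity in [0, 1] between an audio clip and the enrolled voice."""
    return random.uniform(0.8, 1.0)

def fused_score(frame, clip, enrolled) -> float:
    # Simple weighted fusion of the two modalities (weights are assumed).
    return 0.5 * face_match_score(frame, enrolled["face"]) + \
           0.5 * voice_match_score(clip, enrolled["voice"])

def verify_continuously(samples, enrolled, threshold=0.7):
    """Re-check identity on every sampled (frame, clip) pair during a call.

    Returns the index of the first sample that fails verification,
    or None if the same verified person is present throughout.
    """
    for i, (frame, clip) in enumerate(samples):
        if fused_score(frame, clip, enrolled) < threshold:
            return i  # identity no longer verified; alert the meeting host
    return None
```

The key design point is the loop: unlike a one-time login check, verification runs for the whole meeting, so a verified participant who is later replaced by a recording or deepfake is caught at the next sample.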
Fused biometric authentication can protect organisations from impersonation, safeguarding sensitive information and preventing fraudulent transactions. In short, fused biometrics provide the trust and resilience businesses need in an age where synthetic identities are just a click away.
Here at FARx, we’re the future of human and computer interaction. To learn more about how fused voice-face biometrics can be used to know who is really joining your Teams call, get in touch with our expert team here.
