Time

June - Sep 2024

Industry

Hybrid Working

Team

Cross Team Collaboration

Role

Researcher, Product Designer

Loccus.ai, an independent software vendor (ISV) affiliated with the HP Future of Work Incubation, specializes in providing AI models for detecting voice deepfakes.

Introducing the Alliance

Loccus.ai, an independent software vendor (ISV), specializes in developing AI models for detecting deepfake voice scams.

The HP Future of Work Incubation facilitates the integration of ISVs like Loccus.ai into the HP ecosystem, enhancing hybrid work solutions.

Zoom partners with HP Inc. and HP Poly to build a seamlessly integrated hybrid work ecosystem.


[Figure: live confidence scores (0.23, 0.12, 0.03, 0.32) over the audio stream; a score of 0.98 triggers the "Deepfake Detected" state]

How the Model Works

The Loccus.ai model monitors the audio stream and delivers a confidence score for deepfake detection, ranging from 0.0 to 1.0.
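As a concrete illustration, here is a minimal sketch of consuming such a score stream; getNextScore and the 0.9 threshold are assumptions for illustration, not the actual Loccus.ai API or its recommended cutoff.

// Minimal sketch of consuming the model's confidence stream.
// getNextScore and the 0.9 threshold are illustrative assumptions,
// not the real Loccus.ai API or its recommended cutoff.
const DEEPFAKE_THRESHOLD = 0.9;

async function monitorStream(
  getNextScore: () => Promise<number | null> // null signals end of stream
): Promise<void> {
  let score: number | null;
  while ((score = await getNextScore()) !== null) {
    if (score >= DEEPFAKE_THRESHOLD) {
      // e.g. confidence = 0.98 would surface as "Deepfake Detected"
      console.log(`Deepfake detected (confidence = ${score.toFixed(2)})`);
    }
  }
}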

Desk Research

I carried out desk research on use cases of deepfake voice technology. It highlighted the two domains where deepfakes cause the most harm and where Loccus.ai could be crucial for detection, so I concentrated on these two scenarios to identify challenges and apply the technology.

Impersonation Scams

Attackers use deepfake technology to mimic voices or appearances, impersonating executives, employees, or individuals to deceive and commit fraud.

Target

Corporate environments, financial institutions, and internal communication systems.

Damage

Financial loss (millions of dollars in scams).

Loss of trust within organizations.

Potential legal and reputational harm.

Example: job interview scams

Misinformation

Deepfakes are used to create false media, such as videos or audio, to spread fake news and propaganda or to manipulate public opinion.

Target

Social media platforms, news websites, and political campaigns.

Damage

Erosion of public trust in media and institutions.

Social and political unrest.

Amplification of harmful narratives, disrupting public order.

Distilled into Two Pain Points

Meeting safety

The lack of real-time identity verification in virtual meetings allows deepfake attackers to exploit trust, bypass security measures, and manipulate decision-making.

Online Browsing Safety

Rapid content sharing amplifies deepfake misinformation, making it difficult to control the spread of deceptive narratives before they influence public perception.

Given the frequent exposure to deepfake content online, we need clear criteria to distinguish between safe and unsafe content, minimizing unnecessary interruptions to the user experience while aligning these risk levels with the HP AI assistant’s warning system.

Risk Management

I referenced the ‘CIS Alert Level Information’ system, which rates cybersecurity threat severity from low (Green) to severe (Red), to design an alert system tailored for social engineering attacks, aligned with the HP AI Assistant.

Alert System (Social Engineering)

Aligned with HP AI Assistant Alert
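As an illustration of this mapping, the sketch below converts the model's 0.0 to 1.0 confidence score into the four CIS-style levels; the cut points are assumed for illustration and are not the thresholds used in the final design.

// Hypothetical mapping from deepfake confidence (0.0 to 1.0) to
// CIS-style alert levels; the cut points are illustrative only.
type AlertLevel = "Green" | "Yellow" | "Orange" | "Red";

function toAlertLevel(confidence: number): AlertLevel {
  if (confidence < 0.25) return "Green";  // low: no interruption
  if (confidence < 0.5) return "Yellow";  // guarded: passive notice
  if (confidence < 0.75) return "Orange"; // high: prominent warning
  return "Red";                           // severe: interruptive alert
}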

Prototype

The AI Assistant detects the risk level and displays alerts appropriate to the situation. Deeper integration with software like Zoom enhances the user experience by providing seamless security measures.


I designed two versions of the alert system, a Zoom Plugin (meeting-app integration) and an HP AI Assistant Alert (system integration), to better support different use cases.

Feature 01

Meeting Safety

Zoom Plugin

Voice ID confirms the identity of participants in the discussion

Easily remove and report suspicious participants

Users receive system alerts when the application window is minimized (see the sketch below)
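The behaviors above could be wired together roughly as in the sketch below; every identifier here (ParticipantScore, showInMeetingAlert, showSystemAlert, handleScore) is a hypothetical stand-in rather than a real Zoom SDK API.

// Illustrative event flow for the meeting-safety plugin; all
// identifiers are hypothetical stand-ins, not real Zoom SDK calls.
interface ParticipantScore {
  participantId: string;
  confidence: number;       // deepfake confidence from the model
  windowMinimized: boolean; // whether the meeting window is minimized
}

function showInMeetingAlert(id: string): void {
  console.log(`In-meeting alert for participant ${id}`);
}

function showSystemAlert(id: string): void {
  console.log(`System-level alert for participant ${id}`);
}

// Route severe detections to the OS notification layer when the
// meeting window is minimized; otherwise alert inside the app.
function handleScore(event: ParticipantScore): void {
  if (event.confidence < 0.75) return; // below "severe": no interruption
  if (event.windowMinimized) {
    showSystemAlert(event.participantId);
  } else {
    showInMeetingAlert(event.participantId);
  }
}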

Feature 02

Online Browsing Safety

HP AI Assistant System Integration Alert

The AI assesses the content and triggers alerts at varying levels.

A glowing outline clearly notifies users of content containing deepfakes.

Impact & Learning

I collaborated with teams from Loccus.ai (an HP-incubated startup), Zoom, and HP AI Companion to design a Zoom plugin that detects deepfake scams in meetings, enhancing user security and trust in virtual collaboration.


From a UX perspective, I learned that balancing security alerts is crucial to avoid user fatigue while still effectively communicating risks. Visual cues like glowing outlines proved valuable for quickly conveying deepfake threats without disrupting workflow. Additionally, users favored customizable alert sensitivity and transparency in AI decisions to build trust and engagement.
