For quite some time, discussion around the dangers of deepfakes was mostly rooted in the hypothetical, focusing on how these tools could be used to cause harm rather than on real-world instances of misuse.
However, it wasn't long before some of those fears became realities. In January 2024, a number of New Hampshire residents received a robocall featuring a deepfaked voice simulation of President Biden urging them to skip voting in the state's Democratic primary.
In a year in which nearly 40% of the world’s nations are holding elections, this AI-enabled technology is increasingly being seized upon as a means of manipulating the masses and tipping the scales of public opinion in service of particular political parties and candidates.
The Most Immediate Threats
That said, perhaps the most overlooked threat posed by deepfake technology operates almost entirely outside the political realm: cybercrime. Worse still, it may well be the most mature application of the technology to date.
In a recent report from the World Economic Forum, researchers reported that in 2022, some 66% of cybersecurity professionals had experienced deepfake attacks within their organizations. In one noteworthy attack, the likenesses of several senior executives were deepfaked and used in live video calls to manipulate a junior finance employee into wiring $25 million to an offshore account under the fraudsters' control.
In an interview with local media, the victim of the attack was adamant that the deepfaked executives were practically indistinguishable from reality, with pitch-perfect voices and likenesses to match. And who could blame a junior employee for not questioning the demands of a group of executives?
Whether through voice, video, or a combination of the two, AI-generated deepfakes are quickly proving to be game-changing weapons in the arsenals of today's cybercriminals. Worst of all, we don't yet have a reliable means of detecting or defending against them, and until we do, we will surely see many more such attacks.
The Only Viable Remedies (for Now)
Given the current state of affairs, the best defense against malicious deepfakes for organizations and individuals alike is awareness and an abundance of caution. Deepfakes are receiving more media coverage today, but given how quickly the technology is advancing and proliferating, we should be all but screaming warnings from the rooftops. Unfortunately, that will likely happen only after more serious societal damage is done.
However, at the organizational level, leaders have the ability to get in front of this problem by rolling out awareness campaigns, simulation training programs, and new policies to help mitigate the impact of deepfakes.
Looking back at the $25 million wire fraud case, it's not difficult to imagine policies, especially those that enforce separation of duties and clear chains of command, that could have prevented such a loss. No matter its size, profile, or industry, every organization today should begin instituting policies that build checks and failsafes against such attacks, along the lines of the sketch below.
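As one illustration, here is a minimal Python sketch of a dual-approval control for outgoing wires. Everything in it, the threshold, the `WireRequest` shape, the approver model, is a hypothetical simplification; the point is the control structure, not a production payments system.

```python
from dataclasses import dataclass, field

# Hypothetical policy: wires at or above this amount need two independent approvers.
APPROVAL_THRESHOLD = 100_000

@dataclass
class WireRequest:
    amount: float
    destination: str
    requested_by: str
    approvals: set = field(default_factory=set)

def approve(request: WireRequest, approver: str) -> None:
    # Separation of duties: the requester can never approve their own wire.
    if approver == request.requested_by:
        raise PermissionError("Requester cannot approve their own transfer")
    request.approvals.add(approver)

def can_execute(request: WireRequest) -> bool:
    # Small transfers need one independent approval; large ones need two,
    # forcing an attacker to fool multiple people, not one junior employee.
    required = 2 if request.amount >= APPROVAL_THRESHOLD else 1
    return len(request.approvals) >= required
```

Under a control like this, even a flawless deepfake of one executive on a video call cannot move money on its own; the attacker must also defeat an independent, out-of-band approval from a second person.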
Know Your Enemy Today, Fight Fire with Fire Tomorrow
Beyond the political and the criminal, we also need to consider the existential implications of a world in which reality can't be readily discerned from fiction. In the same report from the World Economic Forum, researchers predicted that as much as 90% of online content may be synthetically generated by 2026. That raises the question: when nearly everything we see is fake, what becomes the barrier for belief?
Thankfully, there is still reason to hope that more technologically advanced solutions are on the way.
Already, innovative companies are working on ways to fight fire with fire against AI-generated malicious content and deepfakes, and early results are promising. We're already seeing companies roll out solutions of this sort for the education sector, flagging AI-generated text submitted as original student work. So it's likely only a matter of time before the market sees viable solutions targeting the media sector that use AI to quickly and reliably detect AI-generated content.
Ultimately, AI's greatest strength is its ability to recognize patterns and detect deviations from them. So it's not unreasonable to expect that the innovation already taking shape in other industries will be applied to the world of media, producing tools that analyze content across millions of parameters to catch signs of synthesis too subtle for humans to perceive. While AI-generated content may have crossed the uncanny valley for us humans, it likely faces a much wider, deeper, and more treacherous valley when it comes to convincing its own kind.
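To make the pattern-deviation idea concrete, here is a toy sketch in Python. The feature extractor, the synthetic training data, and the decision threshold are all placeholder assumptions; a real detector would learn from labeled real and synthetic media and far richer features (spectral artifacts, blink rates, lip-sync error, and so on).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature extractor. A real one might measure frequency-domain
# artifacts, compression fingerprints, blink rate, or lip-sync error.
def extract_features(signal: np.ndarray) -> np.ndarray:
    return np.array([
        signal.mean(),                       # placeholder statistic
        signal.std(),                        # placeholder statistic
        np.abs(np.fft.rfft(signal)).mean(),  # crude spectral summary
    ])

# Toy stand-ins for labeled media: 0 = real, 1 = synthetic.
rng = np.random.default_rng(0)
real = [extract_features(rng.normal(0.0, 1.0, 1000)) for _ in range(200)]
fake = [extract_features(rng.normal(0.0, 1.2, 1000)) for _ in range(200)]
X = np.vstack(real + fake)
y = np.array([0] * 200 + [1] * 200)

# Learn what "real" patterns look like and where synthetic samples deviate.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def looks_synthetic(sample: np.ndarray, threshold: float = 0.5) -> bool:
    # Flag the sample if the model scores it as deviating toward the synthetic class.
    prob_fake = clf.predict_proba(extract_features(sample).reshape(1, -1))[0, 1]
    return prob_fake > threshold
```

The design point mirrors the article's claim: detection reduces to learning the statistical fingerprint of authentic media and flagging deviations from it, a task machines are far better suited to than human eyes and ears.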