The probability that businesses and high-profile public figures will experience…the adverse effects of deepfakes represents a formidable reputation risk. For most people, businesses, and notables, the deepfake risk manifests as an inability to distinguish images and video that have been technologically altered. For the most part, the news-consuming public still presumes that what it sees and hears is the original reality. In today’s fractious socioeconomic and political environment, what constitutes original reality, and how one defines it, is likely to be influenced by and filtered through one’s political leanings.
Numerous celebrities and politicians have already experienced the consequences…of the simpler, first-generation versions of deepfake software, which businesses would be obliged to recognize as a predictor of what is likely on the horizon. In light of the burgeoning phenomenon of deepfakes, the probability, indeed inevitability, of their occurrence should not be met with dismissiveness or a ‘wait and see’ posture. Far too much is at stake.
A ‘deepfake’ is a technique which algorithmically combines (synthesizes) and superimposes…existing images and/or video onto source images or videos, using artificial intelligence and machine learning in the form of ‘generative adversarial networks’, i.e., GANs.
Conceptually and practically, as I understand it, GANs were developed…in their present form and introduced (open source) in mid-2014 by Ian Goodfellow and Yoshua Bengio, among other researchers at the University of Montreal. There are numerous inferences and assumptions about other origins of GANs, i.e., the technology and code used to produce present-day deepfakes.
It’s not a role I wish to pursue here and now, to either give credence to or dispute…such inferences, as that information, i.e., assigning responsibility or fault, seems somewhat irrelevant at this point. Deepfakes are here to stay, with little if any probability of turning the clock back, regulating their use, or placing limitations on their (adverse) capabilities.
Admittedly, I am aware of very few positives that may arise from deepfake software…aside from specific U.S. intelligence initiatives, presumably ‘new age art’, or as a tool to liven parties and family gatherings, i.e., a ‘technologized game of charades’. Of course, various forms of what could legitimately be considered deepfakes have been utilized in film since the late 1980s by the likes of Steven Spielberg and George Lucas, et al.
Can the application of deepfakes become a life-threatening (dangerous) offensive weapon?…Perhaps not, but their application can, as we have already witnessed in numerous instances…
- sow immediate disinformation, distrust, and personal and institutional havoc that erroneously affects, influences, and changes people’s perceptions of the original reality, and
- impose adverse costs which, as yet, are largely incalculable, because deepfake video, audio, and images ‘look and sound like the real thing’.
The actualization of deepfakes presents additional layers of challenge to reputation risk…mitigation when compared to more conventional (non-deepfake) reputation risks. We can presume that reputation risks borne of deepfakes will likely be more costly and time-consuming to address, as victims try to reverse the adverse narrative that inevitably follows.
Some victims of deepfakes presume the prudent and most effective way to mitigate…deepfake-borne reputation risk, be it video, audio, or image, is to…
- try to dominate and counter the subsequent adverse narrative through conventional strategies and methods, e.g., some form of fact checking, or
- identify and ‘out’ the economic, competitive, political, or special-interest adversary which funded or promoted the deepfake.
Still, it’s worth noting again that most reputation risk mitigation should include understanding how difficult it can be…for humans to reverse a perception or opinion that is tethered to an experienced reality. After all, that is precisely what deepfakes, by design, are intended to achieve.
Deepfakes have already gone far beyond the (exclusive) bailiwick of…government intelligence services producing propaganda, amateur hobbyists putting celebrities’ faces on porn performers’ bodies, or political pranksters purposefully manipulating a politician’s speech.
To the chagrin of those already adversely affected…and the humans, businesses, and institutions likely to be adversely affected by deepfakes at some point, most anyone can download deepfake software today and create variously convincing fakes in their spare time in the proverbial basement. It’s just not rocket science.
Forward-looking reputation risk (mitigation) professionals…will have considered and factored in the ease with which deepfakes can be produced and publicly emerge, e.g., (a.) a fake national security or emergency (alert) warning that an attack by an adversary is imminent, (b.) a deepfake that targets a political candidate, timed to ‘go public’ only days before voters go to the polls, or, (c.) on a different spectrum, malicious attempts to provoke adverse reactions against a political adversary’s or business executive’s marriage and family relationships by producing and publicizing a deepfake extra-marital liaison or sexual orientation.
Tim Hwang, director of the Ethics and Governance of Artificial Intelligence Initiative at the Berkman Klein Center and MIT Media Lab…told CSO (chief security officer) Magazine recently, “I think that certainly the demonstrations (of deepfakes) that we’ve seen are disturbing and I think they are concerning, and they raise a lot of questions, but I’m skeptical they change the game in a way which a lot of people are suggesting.” Respectfully, I do not wholly agree with Hwang’s perspective.
Through my lens, a broader, more worrisome, and problematic challenge underlying deepfakes…has to do with the ‘human reality’ that (1.) seeing and hearing is believing, and (2.) one’s truth lies in believing what is seen and/or heard. To this, most psychologists and psychiatrists concur that humans…
- are innately inclined to seek information, perspective, and speech which supports a pre-existing point of view and/or what they want to believe, and
- are likely to ignore or dismiss opposing or differing perspectives,
- be it about the legitimacy of, or conspiracy theories related to, the existence of UFOs (unidentified flying objects), man landing and walking on the moon’s surface, the Loch Ness monster, ‘Bigfoot’, whether Speaker of the U.S. House of Representatives Nancy Pelosi actually slurred her speech during a televised Q&A at a national conference, or whether the body (head to toe) in a photograph actually belongs to the person it claims to.
Arguably, either human inclination can be vulnerable to…technological alterations of reality, i.e., deepfakes. Less arguably, those inclined to use (deepfake) technologies in a malicious manner can (a.) quickly dominate the ‘always on’ news cycles, and (b.) acquire significant power to wrongfully influence opinion through deliberate (technologically manufactured) falsehoods which…
- spread at keystroke speed, and
- do so under the guise of truthful representations of reality.
Any presumption that a business can (just as quickly) counter or reverse fake…machinations through conventional, well-publicized pronouncements of fakery and fact checking is likely to be subordinate to the human phenomenon of ‘seeing and hearing is believing’, of which there are countless sad and tragic reminders, e.g., ‘Pizzagate’, Sandy Hook Elementary School, and the conspiracy theories underlying much of the daily content delivered at InfoWars.
Deepfakes exploit these human tendencies by…pitting two ‘opposite thinking and doing’ ML (machine learning) models against each other in a GAN (generative adversarial network). One model focuses its attention on specific data sets and then creates image or video forgeries; meanwhile, the other (opposing) model examines specific aspects of the image or video being developed, to detect the presence of fake or forged features.
The so-called forgery model continues to generate fake features until…the opposing model is less able to distinguish the forgery from the original reality. The larger the data set from which the forgery model extracts and applies relevant features, the less challenging it becomes for the forgery model to create a ‘believable’ (undetectable, indistinguishable) deepfake.
This is a primary reason why images and videos of former presidents and celebrities, etc…have frequently been targets of the still-early, first generation of deepfake software, i.e., there is an abundance of publicly available (open source) video footage and/or images with which to ‘train’ either model.
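For readers who want to see the adversarial back-and-forth concretely, the sketch below is a deliberately minimal, hypothetical toy in Python/NumPy, not any production deepfake tool. ‘Real’ footage is reduced to single numbers drawn from one distribution; the forgery model has one learnable parameter and keeps adjusting it until an idealized detector (written here in closed form, since both distributions are simple Gaussians) can no longer tell fake from real any better than a coin flip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy: "real" data are numbers from N(4, 1); the forgery
# model draws from N(mu, 1) and must learn mu so its fakes pass as real.
real_mean = 4.0
mu = 0.0  # the forgery model's single learnable parameter

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def optimal_detector(x, fake_mean):
    # For two unit-variance Gaussians, the best possible detector is the
    # sigmoid of the log-likelihood ratio log(p_real / p_fake).
    w = real_mean - fake_mean
    b = (fake_mean**2 - real_mean**2) / 2.0
    return sigmoid(w * x + b), w

lr = 0.5
for step in range(200):
    fake = mu + rng.standard_normal(256)
    d_fake, w = optimal_detector(fake, mu)
    # Forgery-model step: nudge mu to raise the detector's "real" score
    # on fakes (the standard non-saturating generator update).
    mu += lr * np.mean((1.0 - d_fake) * w)

fake = mu + rng.standard_normal(256)
d_fake, _ = optimal_detector(fake, mu)
print(round(float(mu), 2), round(float(np.mean(d_fake)), 2))  # → 4.0 0.5
```

The final detector score of roughly 0.5 is the point the text describes: the opposing model can no longer distinguish forgery from original. In real GANs the detector is itself a learned network rather than a formula, which is also why larger training data sets make the forgery model’s job easier.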
Business leadership is obliged not to be quick to dismiss or disregard deepfakes…even when one appears initially to be a low-tech doctored video or image, i.e., a ‘shallow fake’, because these too can be an effective medium for disinformation compared to more technologically sophisticated deepfakes, e.g.,
- as the controversy surrounding the now-proven doctored video of President Trump’s confrontation with CNN reporter Jim Acosta at a November 2018 (White House) press conference makes clear. In this instance, the real (original) video clearly shows a White House intern attempting to take the microphone from Acosta. Subsequent editing of the video (presumably with authorization from within the White House) made it appear that the CNN reporter physically pushed the intern away as she reached to grasp the microphone.
This very public incident should accentuate concerns that…video can be relatively easily manipulated to discredit most any target of choice, be it a reporter, politician, business, institution, executive, or a specific brand.
However, unlike the more technologically sophisticated products of deepfakes…wherein machine learning software can ‘put words in people’s mouths’, low-tech doctored video can be, and not infrequently is, close enough to representing a reality that it blurs most conventional lines between what’s true and what’s patently false.
Generative adversarial networks (GANs) are deep network architectures…comprised of two networks pitted against one another (hence the term ‘adversarial’). GANs were introduced (open source) by Ian Goodfellow and Yoshua Bengio, among other researchers at the University of Montreal, in 2014.
The potential for GANs is perhaps infinite…in no small part because these ‘robot artists’ can learn (be taught) to mimic the distribution of designated data to create images, music, speech, and/or prose which is virtually indistinguishable from our individual realities.
For example, in the context of reputation risk, should an image or video suddenly emerge…of a corporate executive ‘appearing’ to be engaged in (a.) acts, behaviors, or speech contrary to business principles, mission, or relevant law, or (b.) unfavorable to an existing product, service, or client, but (c.) subsequently determined to be a deepfake…
- it surely doesn’t require substantial imagination to recognize how this very probable scenario could have an immediate and adverse impact on investors, consumers, and vendors,
- followed by cascading effects that reverse or adversely alter future orders, sales, R&D, marketing, and certainly business image and reputation.
An important component of mitigating (the vulnerability, probability, and criticality of) reputation risk…in the AI (artificial intelligence and machine learning) era is understanding GANs. Reputation risk practitioners are obliged to know the basics of how generative algorithms work, and for that, contrasting them with discriminative algorithms is also instructive. Discriminative algorithms try to classify input data; that is, given the features of an instance of data, they predict a label or category to which that data belongs. Generative algorithms, by contrast, model how the data itself is distributed, which is precisely what allows them to produce new, synthetic instances.
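The contrast above can be made concrete with a small, hypothetical Python/NumPy sketch (the data, class labels, and one-feature setup are all invented for illustration): the discriminative model learns p(label | features) and can only assign labels, while the generative model estimates p(features | label) and can therefore synthesize brand-new instances, the capability deepfakes exploit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data: one numeric feature per instance, two classes
# (class 1 might stand in for "authentic" footage, class 0 for the rest).
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Discriminative: learn p(label | features) directly, here a logistic
# regression fit by gradient ascent; all it can do is predict labels.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w += 0.1 * float(np.mean((y - p) * x))
    b += 0.1 * float(np.mean(y - p))
label = int(1.0 / (1.0 + np.exp(-(w * 2.5 + b))) > 0.5)  # classify x = 2.5

# Generative: model p(features | label) itself -- here just the class-1
# sample mean -- which is already enough to synthesize new instances.
mean_1 = float(x[y == 1].mean())
synthetic = mean_1 + rng.standard_normal(5)  # five brand-new "class 1" values
```

A GAN is a generative model in exactly this second sense, except that the distribution it learns to imitate is estimated by a network trained against an adversary rather than by a simple sample mean.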
Michael D. Moberly, St. Louis, June 25, 2019, email@example.com. The ‘Business Intangible Asset Blog’, since May 2006: 650+ published (long-form) blog posts, ‘where one’s attention span, business realities, intangible assets, and solutions converge’.
Readers are invited to explore other posts, video, position papers, and books at https://kpstrat.com/blog