Company reputations are increasingly vulnerable to asymmetric risks emanating from BOTs…exciting BOT research is being conducted at Germany’s Friedrich-Alexander University (FAU), Department of Computer Science, that, among other things, can verify or disprove the authenticity of the reputation risk posed by BOT-premised video.
Specifically, FAU’s research focuses on detecting bogus video content…this, of course, carries the potential to distinguish and mitigate reputation risks emanating from video sources, à la YouTube, etc.
BBC Radio has done a fine job of taking the proverbial ‘deep dive’ into the BOT phenomenon…particularly the program ‘Click’, hosted by Gareth Mitchell, in an episode titled ‘Technology and Fake News’ which aired earlier this month (https://www.bbc.co.uk/programmes/p002w6r2). The program featured subject matter experts, one of whom is a researcher at FAU. His remarks, in particular, attracted my attention, primarily due to a demo video which can be viewed at ‘Face2Face: Real-time Face Capture and Re-enactment of RGB Videos’ (https://www.lgdv.cs.fau.de).
BOTs and reputation risk…are variously rooted in an adverse act or characteristic of a company product, service, or c-suite personality. (See ‘Reputation Risk, Once It Materializes, It’s No Longer Merely A Public Relations Issue’ at https://kpstrat.com/2016/10/25/reputation-risk-once-materialized-is-not-a-public-relations-challenge/)
With more frequency, reputation risks are initiated by…ideological, economic, and/or competitive-advantage adversaries. From a reputation risk mitigation perspective, the obvious objective is to identify and mitigate risks at the earliest stage of materialization. It is at this point that the FAU research delivers real-time relevance.
The reputation risk BOTs being developed at FAU…have the capability to detect bogus (video-based) reputation risks. When fully operational, this technology will allow targets to repudiate reputation risk in real time. It will also reduce time-consuming fact checking, which evidence suggests has little, if any, effect on dislodging or reversing previously formed opinions.
The BOT research developed at FAU is a relevant tool for foiling illicit video-based reputation risks…this technology is close to becoming operational for ‘enterprise risk management’ units, which are now routinely charged with mitigating reputation risk.
The new (FAU) technology may also have the capability to…prognosticate, assess, mitigate, and possibly foil video-based reputation risks before they materialize publicly.
This is achieved through real-time facial capture and reenactment…using what are referred to as commodity (presumably off-the-shelf) webcams. One of this research project’s goals is to determine the feasibility of animating specific and/or all facial expressions presented by a target, i.e., a human. Each human facial feature can be re-rendered and manipulated in a photo-realistic fashion from YouTube footage, etc.
Initially, the FAU researchers sensed their work would benefit…patients who have experienced a cleft lip and/or palate disorder, by tracking head movement pre- and post-surgery and through rehabilitation. Now it is capable of re-rendering a (synthesized) human (target) face to correspond to suspicious video while accommodating existing levels of illumination, obviously an important element.
BOT-generated authentication is a novel contribution to mitigating reputation risk…with its capacity to detect even edits to video footage designed to adversely affect reputation. This technology can also verify or disprove video authenticity.
Preferably, these features can operate at ‘keystroke speed’…because reputation risks can materialize and magnify throughout social and conventional media at keystroke speed.
This (research) product will also serve as a robust and authoritative rejoinder to the…asymmetric nature of intentionally manifested reputation risks. It’s reasonable to assume this capability will become ‘standard equipment’ for companies’ enterprise risk management units.
As this and comparable technologies mature, and reputation risk comes to be dominated by ideological, economic, and competitive-advantage adversaries, the challenge of mitigation will surely grow and become more complicated.
Potential for reputation risk targeting…techniques and technologies already exist which can capture and indistinguishably transfer human facial expressions and mouth movement. In other words, a human face, the lighting, the skin, and the expressions can be modified and altered, in addition to live, simultaneous dubbing in teleconferencing scenarios. Savvy readers of this post have probably already surmised these technologies and techniques have many potential uses and adaptations aside from those addressed here.
Reconstructing facial parameters, such as expressions and lighting conditions…can be used to detect inconsistencies in video, i.e., fakes. One example is the set of expressions a person exhibits and the unique, person-specific transitions between those facial expressions.
Fraudulent (fake) videos can be detected by…analyzing expressions which have been tracked in a video and comparing them to a reference video, similar to the art and science of handwriting analysis. Arguably, in the field of human-machine interaction, detection and tracking of physiological movements are becoming increasingly important.
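To make the ‘handwriting analysis’ analogy concrete, the comparison described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only, not the FAU pipeline: it assumes each video frame has already been reduced to a list of numeric facial-expression parameters (by whatever tracker one uses), and the function names and threshold are my own illustrative assumptions.

```python
# Hypothetical sketch: flag a suspect video by comparing its tracked
# facial-expression dynamics against a reference video of the same person.
# Assumes frames are already reduced to lists of expression parameters.

def transition_profile(frames):
    """Frame-to-frame deltas of expression parameters: a crude stand-in
    for the person-specific 'transitions between facial expressions'."""
    return [
        [cur - prev for prev, cur in zip(a, b)]
        for a, b in zip(frames, frames[1:])
    ]

def mean_abs_difference(profile_a, profile_b):
    """Average absolute difference between two transition profiles,
    compared over their common length."""
    n = min(len(profile_a), len(profile_b))
    total = sum(
        abs(x - y)
        for pa, pb in zip(profile_a[:n], profile_b[:n])
        for x, y in zip(pa, pb)
    )
    count = sum(len(pa) for pa in profile_a[:n])
    return total / count if count else 0.0

def looks_inconsistent(reference_frames, suspect_frames, threshold=0.5):
    """True if the suspect video's expression dynamics deviate from the
    reference beyond an (assumed, illustrative) tolerance."""
    ref = transition_profile(reference_frames)
    sus = transition_profile(suspect_frames)
    return mean_abs_difference(ref, sus) > threshold
```

In practice one would use far richer features and a learned decision rule rather than a fixed threshold; the point here is only the shape of the comparison, i.e., reference dynamics versus suspect dynamics.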
Michael D. Moberly August 22, 2017 [email protected], the ‘Business Intangible Asset Blog, since May 2006, 650+ published blog posts, ‘where one’s attention span, businesses intangible assets, and solutions converge’.
Readers are also invited to explore other relevant blog posts, video, position papers, and books, at https://kpstrat.com/blog