Ensuring Authenticity With the Dawning of Artificial Intelligence (AI)
Securities.io is not an investment adviser, and this does not constitute investment advice, financial advice, or trading advice. Securities.io does not recommend that any security should be bought, sold, or held by you. Conduct your own due diligence and consult a financial adviser before making any investment decisions.

The adoption of AI across all walks of life is surging. Estimates suggest the market is set to roughly triple over the next seven years.
From a size of nearly US$242 billion, it is expected to grow to almost US$740 billion by 2030. Different industries have begun incorporating AI into their operations; those that have leveraged it the most so far include healthcare, finance, manufacturing, and business and legal services.
To a lesser degree, AI has started altering operational paradigms in industries such as media and entertainment, security, retail, and energy.
With increased AI adoption, ensuring authenticity has emerged as a significant challenge that needs to be addressed immediately. We start today's discussion by looking at what these challenges are.
Click here to learn all about investing in artificial intelligence.
Authenticity Challenges in AI Adoption
AI's greatest strength has also emerged as its weakness. AI algorithms can generate realistic images, videos, articles, and other multimedia content that an average viewer may find immensely difficult to distinguish from the original.
To the average eye, it may appear to be authentic information from a reputable source, while in reality, it could be the diametric opposite: fake content generated for malicious propaganda. Several recent events lay bare this side of growing AI adoption.
AI-generated Fake Pentagon Explosion Image
In May 2023, news broke of a blast near the Pentagon in the United States. The image showed thick black smoke rising from the lawn beside the Pentagon building. It was shared by a verified Twitter handle, Bloomberg Feed, which carried a blue tick, and many other verified accounts soon shared it as well. Yet it was an AI-generated image, and it fooled many. X's recent policy of making blue ticks available for a monthly charge of US$8 further aggravated the problem.
Since anyone could now hold a verified account, and verified accounts get more visibility, the news spread fast, raising concerns about impersonated public figures, government officials, and news sites and fueling chaos and an authenticity crisis in the digital realm.
Although a typical user of X could hardly recognize the image as AI-generated, some AI researchers managed to identify its flaws and irregularities.
According to Nick Waters, one such AI researcher:
"The 'unusual' melding of the fence of the building into the crowd barriers showed that the image could have been artificially made or manipulated."
However, the forgery achieved its purpose, if only for a brief moment, by creating chaos and confusion on social media and beyond.
Thousands of Fake Images on Adobe Stock
According to a report published in The Washington Post, AI-concocted stock images are rampant. A search on Adobe Stock for the term 'Ukraine War' returned over 15,000 fake images of the conflict. The report also noted hundreds of AI images of people at Black Lives Matter protests that never happened.
Dozens of AI-generated images were also available for the Maui wildfires. Many of them resembled real-life photos captured by photojournalists so closely that the fakes could hardly be told apart from the real ones.
The Case of Seemingly Real-Life News Clips
According to another report on AI-generated fakes published by Forbes, TikTok and YouTube star Krishna Sahay is one of several social media users employing generative AI to create seemingly real news clips featuring top anchors from major mainstream news outlets whose work carries significant credibility.
From CBS to CNN to the BBC, anchors from many of these news agencies appear on the list. Such videos are known as deepfakes: powerful generative AI algorithms help create audiovisual content that looks and sounds like the anchors themselves.
According to the report, Sahay alone has an audience of millions. His videos feature AI-manipulated fake anchors delivering problematic commentary on sensitive issues such as school shootings, terrorist attacks, and other crimes.
According to Hany Farid, a deepfake expert and professor at UC Berkeley, these videos exploit the popularity and credibility of well-known news anchors as a "compelling vessel for delivering disinformation."
According to Prof. Farid: “In many cases, the anchors are known to viewers and trusted, and even if not, the general news format is familiar and therefore more trusted.” He believes that stopping this menace would require us “to get more serious about protecting the rights of the people whose likeness and voice are being co-opted.”
Deepfakes from the Gaza War
The Internet has also seen a significant invasion of deepfake videos related to the Gaza war. In response, Jean-Claude Goldenstein, CEO of CREOpoint, a tech company specializing in AI, has been assessing the validity of such content. Recognizing the gravity of the situation, the company has built a database of the most viral deepfakes to emerge from Gaza, a crucial step in addressing this digital threat.
According to Goldenstein:
“Pictures, video and audio: with generative AI, it's going to be an escalation you haven't seen.”
These videos often repurpose content from older conflicts. They seek to generate a strong emotional response by accentuating the intensity of the disaster.
Apart from knowing generative AI inside out, these deepfake creation teams include people who understand human psychology and how to target people's deepest impulses and anxieties.
In the hands of propagandists, conspiracy theorists, terrorists, or scammer organizations, fake-content-generating AI tools have evolved into one of the deadliest weapons, and the need to keep them in check grows with each passing day.
One of the most effective ways to combat this growing menace is to ensure the authenticity of the source. Research and solution-building efforts toward that end are in full swing. The following are a few real-life examples.
How Do You Ensure Content Authenticity in the Age of Artificial Intelligence?
Sony's In-Camera Authentication Technology
Sony Electronics is about to launch a novel solution, having completed the second round of testing of its in-camera authenticity technology in collaboration with the Associated Press.
The purpose of this technology is to give captured images a 'birth certificate' that authenticates the content's origin. More specifically, a machine-based digital signature is produced inside the camera's hardware chipset at the very moment of image capture.
According to Neal Manowitz, the President and COO of Sony Electronics, the “in-camera authenticity technology has shown valuable results,” and the company “will continue to push its development towards a wider release.”
Sony Electronics and the AP's joint field test was completed in October of this year. The purpose of this round of testing was to assess the efficiency of capture authentication and the related workflow.
The third collaborator in the project was Camera Bits, known for creating the industry-standard workflow tool Photo Mechanic.
Camera Bits' role in the collaboration was to provide the technology that preserves the camera's digital signature through the metadata editing process.
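To make the principle concrete, here is a minimal sketch in Python of the general sign-at-capture idea, not Sony's actual chipset implementation: the camera signs only the pixel data with a device key, so downstream metadata edits (such as captioning in a newsroom workflow) leave the signature verifiable, while any pixel tampering breaks it.

```python
# A minimal sketch of sign-at-capture; the key handling and data layout
# are illustrative assumptions, not Sony's actual implementation.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()  # stands in for a per-camera hardware key
public_key = camera_key.public_key()       # published so anyone can verify

def capture(pixel_data: bytes) -> dict:
    """Issue the image's 'birth certificate' at the moment of capture."""
    return {
        "pixels": pixel_data,
        "metadata": {"caption": ""},       # remains freely editable downstream
        "signature": camera_key.sign(pixel_data),  # covers the pixels only
    }

def verify(image: dict) -> bool:
    """Check that the pixels still match the in-camera signature."""
    try:
        public_key.verify(image["signature"], image["pixels"])
        return True
    except InvalidSignature:
        return False

image = capture(b"...raw sensor bytes...")
image["metadata"]["caption"] = "Edited in the newsroom"  # metadata edit
print(verify(image))            # True: the pixels are untouched
image["pixels"] += b"\x00"      # pixel tampering
print(verify(image))            # False: the signature no longer matches
```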
According to David Ake, the Director of Photography of the Associated Press, “Fake and manipulated images are a major concern for news organizations. Not only do they contribute to mis- and disinformation, but ultimately, they erode the public's trust in factual, accurate imagery.”
The in-camera signature and authentication technology will be released as a firmware update for multiple camera models in the spring of 2024.
Sony Group Corporation (NYSE: SONY)
Sony Group Corporation has a market capitalization of $106.35 billion, a P/E ratio of 19.49, and in 2022, it posted $82.64 billion in revenue.
Technology to Combat Deep Fake Voice Manipulations
AI-based technologies have prompted rapid developments in realistic speech synthesis. In its originally intended form, the technology can do much good: it can power personalized voice assistants and other communication tools that enhance accessibility. On the flip side, however, many are using it to create voices for deepfake videos.
To respond to this menace, Ning Zhang, assistant professor of computer science and engineering at the McKelvey School of Engineering at Washington University in St. Louis, has developed a tool named AntiFake. It is not a mitigation tool that helps with detection after an attack; rather, it is a proactive tool aimed at nipping the problem in the bud.
According to Prof. Zhang:
“AntiFake makes sure that when we put voice data out there, it's hard for criminals to use that information to synthesize our voices and impersonate us.”
In an interesting move to combat sophisticated cybercriminals, Prof. Zhang has decided to pay them back in their own coin. According to him:
“The tool uses a technique of adversarial AI that was originally part of the cybercriminals' toolbox, but now we're using it to defend against them.”
The core operating principle of AntiFake is to distort the recorded audio signal slightly. The distortion is so minutely calibrated that while the audio still sounds right to human ears, it reads as something completely different to an AI model.
According to publicly available data, AntiFake has exhibited a protection rate of more than 95% and has proven effective across diverse speaker populations.
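As an illustration of the underlying technique, below is a minimal adversarial-perturbation sketch in PyTorch. The SpeakerEncoder here is a toy stand-in, not AntiFake's actual model or code: the sketch nudges the waveform within an inaudible bound so that its speaker embedding drifts away from the original, which is the general idea behind such protection tools.

```python
# A sketch of adversarial audio protection; the encoder is a toy model,
# and the loss and bounds are illustrative, not AntiFake's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    """Toy stand-in for a speaker-embedding network used by voice cloners."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 64),
        )

    def forward(self, wav):          # wav: (batch, 1, samples)
        return F.normalize(self.net(wav), dim=-1)

def protect(wav, encoder, eps=2e-3, steps=100, lr=1e-3):
    """Add a barely audible perturbation that pushes the clip's speaker
    embedding away from the original, so cloning models capture the
    wrong voice characteristics."""
    target = encoder(wav).detach()   # embedding of the clean voice
    delta = torch.zeros_like(wav, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = F.cosine_similarity(encoder(wav + delta), target, dim=-1).mean()
        opt.zero_grad()
        loss.backward()              # minimizing similarity = pushing away
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation inaudible
    return (wav + delta).detach()

encoder = SpeakerEncoder()
clean = torch.randn(1, 1, 16000)     # one second of audio at 16 kHz
protected = protect(clean, encoder)
print(torch.max(torch.abs(protected - clean)))  # bounded by eps
```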
Use of Blockchain in Ensuring Authenticity
OARO Media's solutions leverage the immutability of blockchain to fight the deepfake menace. They create an immutable data trail that lets businesses, governing authorities, and individual users authenticate any photo or video.
The way OARO Media works on a user's mobile phone is simple yet game-changingly effective. The user starts by accessing the phone's camera via an SMS web link or an insurer's app. A certificate is then issued for the visual content, comprising an unfakeable record of the user ID, content, timestamp, and GPS coordinates. This process helps the insurance industry authenticate claims and distinguish AI-generated fakes from the real thing.
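For illustration, here is a minimal sketch in Python of what such a certificate could look like; the field names and flow are assumptions, not OARO's actual schema. The content hash is bound to the user ID, timestamp, and GPS coordinates, and the certificate's own fingerprint is what would be anchored on the blockchain.

```python
# A sketch of a content certificate; field names are hypothetical,
# and the on-chain anchoring step is only described in comments.
import hashlib
import json
import time

def issue_certificate(image_bytes: bytes, user_id: str, gps: tuple) -> dict:
    """Bind the content's hash to who captured it, when, and where."""
    record = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "user_id": user_id,
        "timestamp": int(time.time()),
        "gps": gps,
    }
    # This fingerprint is what would be written to the blockchain; any
    # later change to the image or the record breaks it.
    record["certificate_id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def verify(image_bytes: bytes, cert: dict) -> bool:
    """Check a photo or video against its certificate."""
    return hashlib.sha256(image_bytes).hexdigest() == cert["content_hash"]

photo = b"...raw image bytes..."
cert = issue_certificate(photo, user_id="claimant-42", gps=(40.7128, -74.0060))
print(verify(photo, cert))         # True: the claim photo is authentic
print(verify(photo + b"x", cert))  # False: any edit invalidates the trail
```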
Stopping the Use of Deepfakes as Part of Information Warfare
Sentinel's anti-deepfake solutions and services help democratic governments, defense agencies, and enterprises safeguard their resources from maliciously generated AI content. It has partnered with the European Union and the Republic of Estonia's Ministry of Economic Affairs and Communications. Unlike AntiFake, however, it is a post-attack solution that detects deepfakes once they are already in circulation.
It works in a simple four-step flow. First, the user uploads the digital media through Sentinel's website or API. The Sentinel system then automatically analyzes the media to detect AI forgery and offers a verdict on whether the media is a deepfake. Finally, to make users aware of how the manipulation took place, it shows a visualization of what was detected.
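As a rough illustration of that flow, the sketch below posts a file to a hypothetical REST endpoint; the URL, parameters, and response fields are placeholders, not Sentinel's actual API.

```python
# A hypothetical client for an upload-analyze-verdict-visualize flow;
# the endpoint and response schema are illustrative assumptions.
import requests

API_URL = "https://api.example-detector.com/v1/analyze"  # placeholder URL

def check_media(path: str, api_key: str) -> None:
    # Step 1: upload the media file for analysis.
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"media": f},
            timeout=60,
        )
    resp.raise_for_status()
    report = resp.json()
    # Steps 2 and 3: the service analyzes the file and returns a verdict.
    print("Deepfake:", report.get("is_deepfake"))
    print("Confidence:", report.get("confidence"))
    # Step 4: a visualization shows where the manipulation was found.
    print("Visualization:", report.get("visualization_url"))

# check_media("clip.mp4", api_key="YOUR_KEY")  # hypothetical usage
```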
Ensuring Authenticity Amidst AI: Thriving Innovation Scenario for the Future
AI-generated inauthentic content is a potent tool for distorting reality and disseminating vicious propaganda. Such content, even when created for seemingly benign reasons like a joke or social media likes, can incite large-scale violence.
Its impact extends to increasing distrust in various areas, from diplomatic relations between states to interactions between service enterprises and their users and even personal relationships. Thus, it's a positive sign that tech researchers and scientists globally have acknowledged the dangers of unchecked, fake content. We have already highlighted several initiatives addressing this issue.
There are many more. Sensity, for instance, is developing a visual threat intelligence platform whose API can already detect the latest AI-based manipulation and synthesis techniques. Another startup, Quantum Integrity, has developed SaaS AI solutions that detect image and video forgery.
Cybercriminals will no doubt keep devising newer techniques to deceive us, but many enterprises will be capable of preempting and thwarting them. With innovation continuing at a steady pace, the fight is no longer David vs. Goliath.
Click here to learn all about investing in five converging technologies.