UNITE AI

Scientists are now tackling the AI problem with AI itself. Researchers from UC Riverside have created a model called UNITE to address the grave problem of deepfakes. “People deserve to know whether what they’re seeing is real,” said Rohit Kundu, a PhD candidate at UCR’s Marlan and Rosemary Bourns College of Engineering, who led the paper ‘Towards a Universal Synthetic Video Detector: From Face or Background Manipulations to Fully AI-Generated Content.’ “And as AI gets better at faking reality, we have to get better at revealing the truth.”

The researchers collaborated with scientists from Google, an Alphabet company, to develop a new AI model that detects video tampering and exposes fake content of the kind being used to spread disinformation and incite harm. The study noted: “The rapid spread of misinformation, particularly during critical periods such as elections, highlights the need for generalizable detection models capable of identifying diverse manipulations, including face, background, and fully AI-generated T2V/I2V content with/without human subjects.”

The model can detect both partially manipulated and fully synthetic videos. Rather than focusing just on the face, as most conventional detectors do, it analyzes entire frames, regardless of whether a human subject is present. This makes it a powerful tool for fact-checkers, educators, editors, social media platforms, and others working to prevent doctored videos from going viral.
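UNITE’s key departure from face-centric detectors is that it scores whole frames rather than face crops. The sketch below illustrates that idea only conceptually; it is not the authors’ implementation. The “backbone” and classifier here are random placeholders standing in for a trained model, and all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "backbone": projects a flattened full frame to a feature vector.
# A real detector would use a trained image/video foundation model instead.
FEAT_DIM = 64
W_backbone = rng.normal(size=(32 * 32 * 3, FEAT_DIM))
w_classifier = rng.normal(size=FEAT_DIM)

def frame_features(frame: np.ndarray) -> np.ndarray:
    """Embed one full RGB frame (H, W, 3) -- no face cropping involved."""
    flat = frame.reshape(-1)
    return (flat @ W_backbone) / flat.size  # normalize to keep logits small

def score_video(frames: list) -> float:
    """Average per-frame features over time, then score the whole clip.

    Because every pixel of every frame contributes, a full-frame detector
    can react to background edits or fully synthetic scenes even when no
    human subject appears in the video.
    """
    pooled = np.mean([frame_features(f) for f in frames], axis=0)
    logit = float(pooled @ w_classifier)
    return 1.0 / (1.0 + np.exp(-logit))  # pseudo-probability of "synthetic"

# Demo on a dummy 8-frame clip of 32x32 RGB noise.
clip = [rng.random((32, 32, 3)) for _ in range(8)]
print(round(score_video(clip), 3))
```

The point of the design is in `score_video`: pooling features from entire frames, rather than from detected face regions, is what lets a detector generalize to background manipulations and text-to-video output with no faces at all.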
The Rise of AI and the Resulting Synthetic Overload

Generative AI is expected to deliver the bulk of its economic value across four business functions:

- R&D
- Software Engineering
- Marketing and Sales
- Customer Operations
While the impact of the technology is forecast to be significant across all sectors, tech and banking could see the biggest impact as a percentage of their revenues from gen AI. Goldman Sachs takes a similar view, expecting AI to lift global GDP by 7%. The bank’s economists, Joseph Briggs and Devesh Kodnani, noted at the time: “Despite significant uncertainty around the potential for generative AI, its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects.” Yet the very capability that lets computer systems perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making, is also sowing chaos. The more sophisticated the technology gets, the blurrier the line between what’s real and what’s not becomes.
Why Old Deepfake Detectors No Longer Work
| Company | Tool | Detection Focus | Limitations |
|---|---|---|---|
| UC Riverside + Google | UNITE | Full-frame (face, background, T2V/I2V) | Still under development |
| Microsoft | Video Authenticator | Face-based manipulations | Outdated vs. modern generative AI |
| Intel | FakeCatcher | Authenticity via physiological signals | Requires high-quality facial footage |
| OpenAI | Text Watermarking | Text-based AI output | Limited for visual content |
| Google | SynthID | AI-generated watermark detection | Only works with Google AI models |
Over the past few years, advances in AI have led to an unprecedented surge in synthetic media. Estimates suggest that more than half of longer LinkedIn posts are now written by AI. Then there’s ‘AI slop’: low-quality, mass-produced AI-generated content. Most concerning of all, though, are deepfakes: images, videos, or audio recordings that have been generated or altered using AI to pass off a false representation as real. Today, this kind of content permeates every corner of the Internet. These hyper-realistic digital media sow confusion, spread misinformation, and threaten people’s privacy and security.

Cybercriminals are using AI to up their game, conducting phishing scams and identity theft with alarming precision. According to Kundu: “It’s scary how accessible these tools have become. Anyone with moderate skills can bypass safety filters and generate realistic videos of public figures saying things they never said.” In one such incident, cybercriminals posed as a company’s chief financial officer (CFO) during a Zoom meeting, resulting in a $25 million loss. And this is just the beginning: Deloitte predicts that fraud losses from such incidents will hit $40 billion in the US by 2027, up from $12.3 billion in 2023. A US Treasury report has likewise found that “existing risk management frameworks” adopted by firms “may not be adequate to cover emerging AI technologies.”

That’s not to say there are no tools to detect AI content and guard against the technology’s risks. Many are available on the market; indeed, the very companies launching new AI tools that make content creation easy are also introducing ways to spot synthetic data.
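Before turning to those tools, it is worth noting what the Deloitte projection above implies: growing from $12.3 billion in 2023 to $40 billion in 2027 works out to roughly 34% compound annual growth, as a quick calculation shows.

```python
# Implied compound annual growth rate (CAGR) of US fraud losses,
# from the two Deloitte endpoints cited above.
start, end = 12.3, 40.0    # $ billions, 2023 and 2027
years = 2027 - 2023

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 34% per year
```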
Back in 2020, tech giant Microsoft announced Video Authenticator, which analyzes a still photo or video and provides a confidence score indicating whether the media has been artificially manipulated. The tool works by detecting a deepfake’s blending boundary and subtle fading that the human eye may not be able to catch. At the same time, Microsoft introduced technology to identify forged content and confirm the authenticity of the media people interact with: a tool that lets a creator add digital hashes and certificates to their content, which travel with it as metadata, and a reader that checks the certificates and matches the hashes to verify authenticity. The company did warn of the tech’s limited shelf life in the AI-fueled age: since deepfakes are generated by AI that continuously learns, it is only a matter of time before they defeat traditional detection methods. Around the same time, Facebook, a Meta company, kicked off a competition to develop a deepfake detector using data that researchers had not previously had access to.

A few years later, Intel unveiled FakeCatcher, a real-time deepfake detector that it claims is 96% accurate. The tool uses OpenVINO to run AI models for face- and landmark-detection algorithms, with computer vision blocks optimized via Intel’s Integrated Performance Primitives and OpenCV. On the hardware side, the platform can run more than seventy different detection streams simultaneously on 3rd Gen Intel Xeon Scalable processors. Rather than hunting for artifacts of forgery, FakeCatcher looks for authentic clues: it assesses the physiological signals that make us human, has algorithms translate those signals into spatiotemporal maps, and then uses deep learning to instantly classify a video as real or fake. Last year, OpenAI also announced that it was researching tools to help with content authenticity.
OpenAI’s research includes text watermarking, which the company noted is effective against localized tampering but less so against globalized tampering. It also stated that the technique could “disproportionately impact” groups such as non-native English speakers. The update came after the Wall Street Journal reported that OpenAI had for some time had a tool that watermarks and detects ChatGPT-generated text with “high accuracy,” but had yet to decide whether to release it. Additionally, OpenAI has joined the steering committee of the C2PA (Coalition for Content Provenance and Authenticity), a widely used standard for digital content certification, and now adds C2PA metadata to all images created and edited by its services as part of its image detection efforts.

Then, this year, Google came up with its own detection tool for AI-generated text, images, audio, and video, called SynthID Detector. Google’s tool, however, only works on content generated with the tech behemoth’s own AI services, such as Gemini, Imagen, Veo, and Lyria. That’s because the tool essentially identifies the presence of a “watermark” that Google’s products embed in their output. A watermark is a unique, machine-readable element embedded in content: imperceptible to humans, it can be detected and extracted by algorithms built for the purpose.
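To make the watermarking idea concrete, here is a deliberately simple toy, not SynthID’s actual technique (which is statistical and far more robust): a fixed, hypothetical bit pattern is hidden in the least significant bits of an image’s pixels, where it is invisible to a viewer but trivially recoverable by a detector that knows the pattern.

```python
import numpy as np

# Hypothetical watermark ID bits; a real scheme derives these cryptographically.
WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(image: np.ndarray) -> np.ndarray:
    """Hide WATERMARK in the least significant bits of the first pixels."""
    marked = image.copy()
    flat = marked.reshape(-1)            # view into `marked`
    n = len(WATERMARK)
    flat[:n] = (flat[:n] & 0xFE) | WATERMARK  # overwrite only the lowest bit
    return marked

def detect(image: np.ndarray) -> bool:
    """Check whether the known watermark bits are present."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: len(WATERMARK)] & 1, WATERMARK))

rng = np.random.default_rng(42)
original = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
marked = embed(original)

print(detect(marked))  # True: the detector recovers the hidden bits
```

Each pixel changes by at most 1 out of 255 intensity levels, which is why the mark is imperceptible. It is also why this toy breaks under any re-compression or cropping; production systems like SynthID spread the signal statistically across the whole output precisely to survive such transformations.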
Inside the Tech Powering UNITE’s Breakthrough

Investing in AI-based Detection
In the AI realm, Palantir Technologies is known for its AI-powered data integration, pattern recognition, and anomaly detection. The company operates through four main software platforms: Gotham, Foundry, Apollo, and AIP. Apollo is a single control layer that coordinates configuration, security updates, and delivery of new features to ensure the continuous operation of critical systems. Gotham allows users to identify patterns hidden deep within datasets, while Foundry serves as the operating system for effective asset and risk management. AIP enables firms to run LLMs and other models with full control.
Palantir Technologies (PLTR)
Palantir boasts deep ties with the US government, military, and intelligence agencies. This year, it secured a $30 million contract to bring AI analysis to immigration records. With a market cap of $372 billion, PLTR shares currently trade at $157.72, up a whopping 109.35% YTD on AI demand, massive retail interest, and expanding government contracts. Its EPS (TTM) is $0.23, and its P/E (TTM) is 687.90.
Financially, Palantir reported a 39% YoY increase in revenue to $884 million in Q1 2025. Its US revenue grew 55% YoY to $628 million, comprising $255 million in US commercial revenue and $373 million in US government revenue. During the quarter, the company booked its highest quarter of US commercial total contract value, with remaining deal value at $2.32 billion. Palantir’s customer count in Q1 2025 rose 39% YoY. GAAP earnings per share were $0.08, and adjusted EPS was $0.13. Cash, cash equivalents, and short-term US Treasury securities stood at $5.4 billion at quarter-end. “We are delivering the operating system for the modern enterprise in the era of AI. We are in the middle of a tectonic shift in the adoption of our software, particularly in the U.S.,” said CEO Alexander C. Karp.
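The figures above are internally consistent, which a quick back-of-the-envelope check confirms (the prior-year numbers below are implied by the stated growth rates, not reported here):

```python
def implied_prior(current: float, yoy_growth: float) -> float:
    """Back out the year-ago figure implied by a year-over-year growth rate."""
    return current / (1 + yoy_growth)

total_q1_2025 = 884.0  # $M, total revenue
us_q1_2025 = 628.0     # $M, US revenue

prior_total = implied_prior(total_q1_2025, 0.39)  # ~$636M a year earlier
prior_us = implied_prior(us_q1_2025, 0.55)        # ~$405M a year earlier

# US commercial + US government revenue should add up to total US revenue.
print(255.0 + 373.0 == us_q1_2025)  # True
print(round(prior_total))           # 636
```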
Conclusion
The advent of artificial intelligence has changed the world completely, with individuals and organizations alike embracing the technology to improve productivity and enhance decision-making. Projected to contribute trillions of dollars to the world economy, AI isn’t without its perils, though. Deepfakes, and their use to misinform and defraud people, are among the most critical hazards of AI’s widespread adoption. As it becomes harder to differentiate between what’s real and what’s synthetic, tools like UNITE become all the more important and urgent. With generalizable AI models serving as a safeguard against forged content, we may be able to mitigate AI’s negative impact while enjoying its positive effects on our work and lives. Click here to learn all about investing in artificial intelligence.
References:
1. Kundu, R.; Xiong, H.; Mohanty, V.; Balachandran, A.; Roy‑Chowdhury, A. K.; et al. Towards a Universal Synthetic Video Detector: From Face or Background Manipulations to Fully AI‑Generated Content. arXiv preprint arXiv:2412.12278 (2024). https://doi.org/10.48550/arXiv.2412.12278