Opinion by: Roman Cyganov, founder and CEO of Antix
In the fall of 2023, Hollywood writers took a stand against AI's encroachment on their craft. The fear: AI would churn out scripts and erode authentic storytelling. Fast forward a year, and a public service announcement featuring deepfake versions of celebrities like Taylor Swift and Tom Hanks surfaced, warning against election disinformation.
We're only a few months into 2025, yet AI's intended promise of democratizing access to the future of entertainment already illustrates a rapid evolution, one that is part of a broader societal reckoning with distorted reality and mass misinformation.
Despite this being the "AI era," nearly 52% of Americans are more concerned than excited about its growing role in daily life. Add to this the findings of another recent survey: 68% of consumers globally hover between "somewhat" and "very" concerned about online privacy, driven by fears of deceptive media.
This is not just about memes or deepfakes. AI-generated media fundamentally alters how digital content is produced, distributed and consumed. AI models can now generate hyper-realistic images, videos and voices, raising urgent concerns about ownership, authenticity and ethical use. The ability to create synthetic content with minimal effort has profound implications for industries reliant on media integrity. Without a secure verification method, the unchecked spread of deepfakes and unauthorized reproductions threatens to erode trust in digital content altogether. This, in turn, affects the core base of users: content creators and businesses, who face mounting risks of legal disputes and reputational harm.
While blockchain technology has long been touted as a reliable solution for content ownership and decentralized control, it is only now, with the advent of generative AI, that its prominence as a safeguard has risen, particularly in matters of scalability and consumer trust. Consider decentralized verification networks, which enable AI-generated content to be authenticated across multiple platforms without any single authority dictating the algorithms that govern user behavior.
Getting GenAI onchain
Current intellectual property laws were not designed to handle AI-generated media, leaving critical gaps in regulation. If an AI model produces a piece of content, who legally owns it? The person providing the input, the company behind the model, or no one at all? Without clear ownership records, disputes over digital assets will continue to escalate. This creates a volatile digital environment where manipulated media can erode trust in journalism, financial markets and even geopolitical stability. The crypto world is not immune. Deepfakes and sophisticated AI-driven attacks are causing heavy losses, with reports highlighting how AI-driven scams targeting crypto wallets have surged in recent months.
Blockchain can authenticate digital assets and ensure transparent ownership tracking. Each piece of AI-generated media can be recorded onchain, providing a tamper-proof history of its creation and modification.
This acts as a digital fingerprint for AI-generated content, permanently linking it to its source. It allows creators to prove ownership, companies to track content usage, and users to validate authenticity. For example, a game developer could register an AI-crafted asset on the blockchain, ensuring its origin is traceable and protected against theft. Studios could use blockchain in film production to certify AI-generated scenes, preventing unauthorized distribution or manipulation. In metaverse applications, users could maintain full control over their AI-generated avatars and digital identities, with blockchain acting as an immutable ledger for authentication.
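The fingerprint-and-register flow described above can be sketched in a few lines. This is a minimal illustration only: the `registry` list is a hypothetical in-memory stand-in for an actual onchain ledger, and the function names are invented for this example.

```python
import hashlib
import time

# Hypothetical stand-in for an onchain, append-only ledger.
registry: list[dict] = []

def fingerprint(content: bytes) -> str:
    """Derive a content hash that serves as the asset's digital fingerprint."""
    return hashlib.sha256(content).hexdigest()

def register_asset(content: bytes, creator: str) -> dict:
    """Record the asset's fingerprint, creator and timestamp in the ledger."""
    record = {
        "fingerprint": fingerprint(content),
        "creator": creator,
        "timestamp": time.time(),
    }
    registry.append(record)
    return record

def verify_asset(content: bytes) -> bool:
    """Check whether this exact content was previously registered."""
    fp = fingerprint(content)
    return any(r["fingerprint"] == fp for r in registry)
```

Because the fingerprint is a hash of the content itself, any modification to the asset produces a different fingerprint and fails verification, which is what gives the record its tamper-evident quality.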
End-to-end use of blockchain can ultimately prevent the unauthorized use of AI-generated avatars and synthetic media by implementing onchain identity verification. This would ensure that digital representations are tied to verified entities, reducing the risk of fraud and impersonation. With the generative AI market projected to reach $1.3 trillion by 2032, securing and verifying digital content, particularly AI-generated media, through such decentralized verification frameworks is more pressing than ever.
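Tying an avatar to a verified entity can be sketched as follows. Note the simplification: real deployments would use public-key signatures anchored onchain, whereas this illustration uses an HMAC with a shared secret as a stand-in, and all names (`bind_avatar`, `verify_binding`) are hypothetical.

```python
import hashlib
import hmac

def bind_avatar(avatar_data: bytes, entity_id: str, secret: bytes) -> dict:
    """Bind an avatar's content fingerprint to a verified entity's identity."""
    fp = hashlib.sha256(avatar_data).hexdigest()
    # HMAC over "entity:fingerprint" stands in for an onchain signature.
    tag = hmac.new(secret, f"{entity_id}:{fp}".encode(), hashlib.sha256).hexdigest()
    return {"entity": entity_id, "fingerprint": fp, "tag": tag}

def verify_binding(record: dict, avatar_data: bytes, secret: bytes) -> bool:
    """Confirm both that the content is unmodified and that the binding is genuine."""
    fp = hashlib.sha256(avatar_data).hexdigest()
    if fp != record["fingerprint"]:
        return False  # content was altered
    expected = hmac.new(
        secret, f"{record['entity']}:{fp}".encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, record["tag"])
```

Verification fails both when the avatar data is altered and when someone attempts to claim the binding without the entity's key material, which is the impersonation-resistance property the paragraph describes.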
Such frameworks would further help combat misinformation and content fraud while enabling cross-industry adoption. This open, transparent and secure foundation benefits creative sectors like advertising, media and virtual environments.
Aiming for mass adoption amid existing tools
Some argue that centralized platforms should handle AI verification, as they control most content distribution channels. Others believe watermarking systems or government-led databases provide sufficient oversight. Watermarks, however, have already proven easy to remove or manipulate, and centralized databases remain vulnerable to hacking, data breaches and control by single entities with conflicting interests.
It is plain to see that AI-generated media is evolving faster than existing safeguards, leaving businesses, content creators and platforms exposed to growing risks of fraud and reputational damage.
For AI to be a tool for progress rather than deception, authentication mechanisms must advance in parallel. The strongest case for blockchain's mass adoption in this sector is that it offers a scalable solution matching the pace of AI progress, with the infrastructural support required to maintain transparency and the legitimacy of IP rights.
The next phase of the AI revolution will be defined not only by its capacity to generate hyper-realistic content but also by the mechanisms to get these verification systems in place in time, especially as crypto-related scams fueled by AI-generated deception are projected to hit an all-time high in 2025.
Without a decentralized verification system, it is only a matter of time before industries relying on AI-generated content lose credibility and face increased regulatory scrutiny. It is not too late for the industry to take decentralized authentication frameworks more seriously before digital trust crumbles under unchecked deception.
This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author's alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.