OpenGradient Says AI Needs Verifiability, Not Just Performance, to Earn Trust
Summary
- OpenGradient said it is building verifiable AI infrastructure that separates computation from verification to ensure the integrity of results.
- The company said applying AI models to a Uniswap V4 AMM to adjust dynamic fees reduced liquidity providers’ impermanent loss (IL) by about 16% to 17%.
- OpenGradient said it plans to introduce a token economic model using the OPG token to slash nodes that submit incorrect computation results, improving network reliability.
“As AI takes on more decision-making, what matters most is not performance but verifiability. Without verification, users have no way to know whether a cheaper model has been substituted or data has been distorted. We are building infrastructure that guarantees the integrity of results by separating computation from verification.”
BUIDL Asia 2026 was held at Sofitel Ambassador Seoul on April 16. In a keynote address, OpenGradient co-founder Matthew Wang said verifiability is becoming a core infrastructure requirement as AI becomes more deeply embedded in personal data and corporate decision-making.
He said users should be able to verify which model and data were used when an AI agent generates a research report. Without a verifiable structure, he added, the output is difficult to trust.
To address that, OpenGradient has introduced what it calls a “node specialization” architecture. Computation is performed on an infrastructure layer made up of GPU nodes and trusted execution environment, or TEE, nodes. Results are then verified on a blockchain layer.
Wang said the setup allows computation to run in high-cost environments while verification is handled on low-cost blockchains, delivering both efficiency and trust. A TEE is a secure, isolated environment that protects data while computation is performed.
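The article does not describe how the verification layer actually checks off-chain work, but the general pattern behind separating computation from verification can be sketched with a simple hash commitment: the expensive inference runs off-chain, and a cheap verifier only recomputes a hash over the (model, input, output) triple. Everything below, including the function names and the placeholder model, is a hypothetical illustration, not OpenGradient's protocol.

```python
import hashlib
import json

def run_inference(model_id: str, inputs: dict) -> dict:
    """Stand-in for off-chain model execution on a GPU or TEE node
    (hypothetical placeholder, returns a fixed result)."""
    return {"score": 0.87}

def commit(model_id: str, inputs: dict, output: dict) -> str:
    """Hash-commit the (model, input, output) triple so a low-cost
    verifier can later detect a swapped model or distorted data."""
    payload = json.dumps(
        {"model": model_id, "inputs": inputs, "output": output},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(model_id: str, inputs: dict, output: dict, commitment: str) -> bool:
    """On-chain-style check: recompute the hash and compare."""
    return commit(model_id, inputs, output) == commitment

# Off-chain: run the model and publish a commitment.
out = run_inference("risk-model-v1", {"asset": "ETH"})
c = commit("risk-model-v1", {"asset": "ETH"}, out)

# On-chain: anyone can verify the commitment without re-running the model.
assert verify("risk-model-v1", {"asset": "ETH"}, out, c)
```

The design point this sketch captures is the cost asymmetry Wang described: computation happens once in an expensive environment, while verification is a cheap hash comparison that can run on a blockchain.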
The company also shared a use case. OpenGradient said it applied AI models to a Uniswap V4 automated market maker, or AMM, to adjust dynamic fees, reducing impermanent loss for liquidity providers by about 16% to 17%. Impermanent loss, or IL, refers to the potential loss liquidity providers face from price swings compared with simply holding the assets.
In the Filecoin ecosystem, the technology is also being used to improve incentive allocation by evaluating node performance with AI models.
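For readers unfamiliar with the metric, impermanent loss in a standard constant-product pool (the x·y = k design Uniswap popularized) has a closed-form expression in terms of the price ratio. The snippet below computes that textbook formula; it says nothing about how OpenGradient's dynamic-fee models achieve the reported reduction.

```python
import math

def impermanent_loss(price_ratio: float) -> float:
    """IL for a constant-product (x*y = k) pool versus simply holding,
    where price_ratio = p_final / p_initial. Negative means a loss."""
    return 2 * math.sqrt(price_ratio) / (1 + price_ratio) - 1

# A 2x price move costs an LP about 5.7% relative to holding.
print(f"{impermanent_loss(2.0):.4f}")  # -0.0572
```

Because IL grows with price divergence, fee income is what compensates LPs; dynamic fees that rise during volatile periods are one lever for offsetting it.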
OpenGradient’s model hub has expanded into a platform offering more than 4,000 AI models. Wang said the company is building an environment in which developers can easily use a range of models and integrate them into decentralized applications in a verifiable format.
The company also introduced personalized AI services, including wallet-based portfolio management agents and services that combine user data to create a digital twin. Wang said TEE-based computation keeps personal data fully protected and accessible only to the user.
OpenGradient also unveiled a token economic model aimed at strengthening security. The company plans to use its OPG token to slash nodes that submit incorrect computation results. Wang said economic incentives that enforce accurate computation and verification will improve network reliability.
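The article gives no parameters for the OPG slashing mechanism, but the incentive logic is straightforward: nodes post stake, and stake is burned when a node's reported result disagrees with the verified one. The following is a minimal sketch of that logic; the 10% penalty and all names are illustrative assumptions, not published OpenGradient parameters.

```python
from dataclasses import dataclass

@dataclass
class Node:
    address: str
    stake: float  # OPG staked as collateral (hypothetical unit)

SLASH_FRACTION = 0.10  # illustrative penalty, not a published parameter

def settle(node: Node, reported: str, verified: str) -> Node:
    """Slash a node's stake when its reported computation result
    disagrees with the verified result; honest nodes keep full stake."""
    if reported != verified:
        node.stake *= 1 - SLASH_FRACTION
    return node

dishonest = settle(Node("0xabc", 1000.0), reported="0xdead", verified="0xbeef")
honest = settle(Node("0xdef", 1000.0), reported="0xbeef", verified="0xbeef")
print(dishonest.stake, honest.stake)
```

The point of such a scheme is that submitting incorrect results becomes economically irrational once the expected slash exceeds whatever a node could gain by cheating.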
“What matters more than making AI smarter is making AI trustworthy,” Wang said. “Verification and privacy need to be solved at the infrastructure layer for AI to be used safely across finance and industry.”

Minseung Kang
minriver@bloomingbit.io
Blockchain journalist | Writer of Trade Now & Altcoin Now