Why zk-ML?

The importance of zk-ML on Bittensor and Omron

Zero-knowledge machine learning (zk-ML) is core to the Omron subnet, and it is our main focus at Inference Labs as we develop and expand Proof-of-Inference across Bittensor and beyond.

A practical use case

The exploration of cryptographic techniques in deep neural networks dates back to at least 2017, a pioneering effort documented in academic works such as the paper available at https://dl.acm.org/doi/pdf/10.5555/3294996.3295220. Despite the early introduction of these methods, their practical applications have only recently started to crystallize. Implementing practical use cases is vital: it not only validates the technology but also demonstrates its relevance and potential to solve real problems.

A quintessential example provided by Omron is the optimization of staking and restaking processes. The Omron subnet generates predictions that drive external staking and restaking operations, and given the critical nature of these predictions, the integrity of the outputs must remain uncompromised. To make the outputs tamper-proof, zk-ML is used to transform models into verifiable circuits. This transformation allows outputs to be produced, proven, and verified using keys generated during the circuit's creation, so network validators and external entities can confirm that a prediction was produced by the designated model and is an authentic reflection of that model. These verifiable outputs significantly enhance the reliability of the data provided to users engaged in staking and restaking services, showcasing a practical and impactful application of cryptographic techniques in deep neural networks. A rough sketch of this flow is shown below.
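
For concreteness, the snippet below sketches what a circuitize, prove, and verify flow can look like with the ezkl Python bindings, a common zk-ML toolkit for this kind of workflow. The file names are placeholders, the SRS download and settings-calibration steps are omitted, and the exact signatures and sync/async behaviour of these calls vary between ezkl releases, so treat this as an outline of the workflow rather than Omron's implementation.

```python
import ezkl

MODEL = "model.onnx"         # exported prediction model (placeholder path)
INPUT = "input.json"         # inference inputs (placeholder path)
SETTINGS = "settings.json"
COMPILED = "model.compiled"  # the circuitized model
VK, PK = "model.vk", "model.pk"
WITNESS = "witness.json"
PROOF = "proof.json"

# 1. Circuitization: derive circuit settings and compile the model into a circuit.
ezkl.gen_settings(MODEL, SETTINGS)
ezkl.compile_circuit(MODEL, COMPILED, SETTINGS)

# 2. Key generation: the proving key stays with the prover; the verification
#    key is published so anyone can check proofs.
ezkl.setup(COMPILED, VK, PK)

# 3. Proving: run the inference inside the circuit to produce a witness, then
#    generate a proof binding the model, inputs, and outputs together.
#    (In recent ezkl releases gen_witness is async and must be awaited.)
ezkl.gen_witness(INPUT, COMPILED, WITNESS)
ezkl.prove(WITNESS, COMPILED, PK, PROOF, "single")

# 4. Verification: a validator or any third party checks the proof against the
#    settings and verification key produced at circuitization time.
assert ezkl.verify(PROOF, SETTINGS, VK)
```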

Proven Output

Proof-of-Inference is more than just proving the inference itself. Below are some of the key invariants Proof-of-Inference brings to the table.

An AI model exists

This may seem self-evident, but how does one know with certainty that an AI model exists unless it is open source and you have the weights and biases on your own machine? Proof-of-Inference confirms the existence of an AI model without disclosing all of its details publicly.

An AI model was run

At a base level, Proof-of-Inference proves that inferences are conducted using an AI model. Regardless of the inputs or outputs, Proof-of-Inference proves with extremely high certainty that the data was processed by an AI model and not looked up from another source, copied, or calculated by other logic.

The selected AI model completed the inference

Proof-of-Inference also proves that the model selected by the requestor was the exact model run against the input data. Verification inherently fails if a different model was run against the input data.

The model has not been tampered with

Along the same lines as verifying that the selected model was the model run, Proof-of-Inference provides inherent tamper-proofing. If any changes are made to the model after circuitization, proof verification will fail because the proofs are no longer valid, as illustrated below.
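
As a simplified illustration of this invariant, the toy Python snippet below uses a plain hash commitment over the model's parameters to stand in for the verification key fixed at circuitization time. This is not a zero-knowledge proof system, and the weights are made up, but it shows the property Proof-of-Inference relies on: any change to the model after the key is derived causes verification to fail.

```python
import hashlib
import json

def commit(params: dict) -> str:
    """Toy stand-in for the verification key derived at circuitization time."""
    return hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()

# "Circuitization": the verification key is derived from the model as it exists now.
weights = {"layer1": [0.12, -0.57], "layer2": [1.03]}  # made-up parameters
verification_key = commit(weights)

# An untampered model still matches the key generated at circuitization time.
assert commit(weights) == verification_key

# Tampering with the weights after circuitization breaks verification.
weights["layer2"] = [9.99]
assert commit(weights) != verification_key
```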

The correct data was provided to the model

Core to the security Proof-of-Inference provides is that input data must not be tampered with. During proof verification, the input (and output) data is checked against the proof produced by the circuit. If the input data does not match the requested inputs, proof verification will fail.

The correct output is being provided from the model

Beyond verifying the input data provided to the model, Proof-of-Inference also verifies the output data coming from the model, allowing end users to confirm that the value they are relying on is correct and was generated faithfully. A simplified sketch of this binding follows.
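
The toy snippet below illustrates this binding with hash commitments standing in for the public inputs and outputs of a real proof. The values are illustrative only; the point is that a proof tied to one set of inputs and outputs cannot be verified against any other.

```python
import hashlib
import json

def digest(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def make_proof(inputs: dict, outputs: dict) -> dict:
    # A real Proof-of-Inference proof also attests that the circuit was run;
    # this toy only models the binding of inputs and outputs.
    return {"inputs": digest(inputs), "outputs": digest(outputs)}

def verify(proof: dict, inputs: dict, outputs: dict) -> bool:
    return proof["inputs"] == digest(inputs) and proof["outputs"] == digest(outputs)

requested_inputs = {"stake": 1000, "epochs": 30}   # made-up request
model_output = {"predicted_yield": 0.0423}         # made-up prediction

proof = make_proof(requested_inputs, model_output)

assert verify(proof, requested_inputs, model_output)                    # honest response
assert not verify(proof, requested_inputs, {"predicted_yield": 0.09})   # forged output
assert not verify(proof, {"stake": 5}, model_output)                    # swapped inputs
```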

A multitude of applications

Though Omron has specifically targeted staking and restaking models as its first demonstration of the power and benefit zk-ML provides, the applications of zk-ML are limitless, especially when paired with the decentralized network provided by Bittensor.

High value operations

In addition to the high-value applications of staking and restaking strategy models, many further models, such as scoring or rating systems, would benefit from Proof-of-Inference. Backing any of these high-value operations with zk-ML verification removes the ability for bad actors to intercept or hijack predictions and craft malicious responses across all subnets.

Security agents

AI agents operating in the realm of cybersecurity need the integrity that zk-ML provides via Proof-of-Inference to ensure their models haven't been tampered with and are producing results that can be verified, either at generation time or later if the need arises.

Public facing inferences

To integrate AI into decentralized protocols such as DAOs, the AI must be run trustlessly and in the open to ensure fair and accurate outputs. Proof-of-Inference integrates perfectly with this use case because it provides a publicly verifiable proof that the inference was not tampered with and allows anyone to independently verify that the model was run correctly.

... and many more

Perfect integration with Bittensor

We believe that zk-ML and Proof-of-Inference, combined with Bittensor's incentive network, will create a new era of distributed, verifiable AI compute that incentivizes positive results for both AI and zk-ML. Through Omron, we plan to exercise this intersection to its fullest by applying provable AI to problems across Bittensor itself and for third parties, while promoting incentives that will advance both AI and zk-ML as a whole.
