Berkeley RDI & Polyhedra Unveil Production-Ready zkML: 4 Years After Pioneering the Concept

Trustlessness has always been the focus of research and development at Polyhedra. Whether through error or intent, humans are responsible for the majority of technology incidents. “We founded Polyhedra to build systems that can operate without human intervention, that are verified by math, and are cryptographically secure,” CTO Tiancheng Xie shared.

Four years ago, when artificial intelligence was just starting to become mainstream, our team was quick to acknowledge the power of AI. But with great power comes great responsibility, and we also realized that without proper precautions in place the opaque nature of AI could lead to disastrous outcomes as unverifiable outputs are used to make critical decisions.

That’s why, in 2020, as part of Berkeley RDI, our Chief Scientist Jiaheng Zhang, together with advisors Yupeng Zhang and Dawn Song, published “Zero Knowledge Proofs for Decision Tree Predictions and Accuracy.” In that paper we defined zkML for the first time: the ability to prove that a model computes its prediction on a specific data sample, or achieves a certain accuracy on a dataset, without revealing any information about the model itself.

[Image: verifiable AI]

At the time, our research was purely theoretical. Four years later, we are excited to announce a production-ready compiler that lets any AI developer use zkML without expertise in zero-knowledge proofs.

What is zkML?

zkML, a subset of verifiable computation, addresses trust in the context of AI by enabling proof of the correctness of model inference. Through zero-knowledge proofs, a service provider can demonstrate that a specific output was genuinely produced by running a given model on an input. Whether it’s a decision tree, neural network, or another machine learning model, zkML ensures verifiability without exposing underlying data or models.
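
To see what zkML buys, it helps to contrast it with the naive alternative, where the verifier simply re-runs the model. The sketch below uses a deliberately tiny linear "model"; the `commit`/`prove`/`verify` names in the trailing comments are an illustrative interface, not the actual API of Polyhedra's compiler.

```python
def infer(weights, x):
    """A deliberately tiny 'model': a linear function w*x + b."""
    return weights["w"] * x + weights["b"]

def naive_verify(weights, x, claimed_y):
    """Naive verification: the verifier re-runs the model itself.
    This works, but it reveals the weights and repeats the full
    computation -- the two costs zkML is designed to remove."""
    return infer(weights, x) == claimed_y

weights = {"w": 2, "b": 1}
y = infer(weights, 5)                # model owner computes y = 11
print(naive_verify(weights, 5, y))   # → True

# With zkML, the verifier instead holds only a commitment to the
# weights and checks a succinct proof (hypothetical interface):
#   commitment = commit(weights)                  # published once
#   y, proof   = prove(weights, x)                # run by model owner
#   ok         = verify(commitment, x, y, proof)  # cheap, weight-free
```

The key property is that `verify` in the zkML setting is cheap and never sees the weights, whereas `naive_verify` must hold the full model and pay the full inference cost.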

Extended Capabilities of zkML

The potential of zkML extends beyond simple inference verification. It can be applied to:

  • Data Origin Verification: Tracing and proving the lineage and authenticity of training data.
  • Authenticated Data Labeling: Ensuring that data labeling processes are genuine and unaltered.
  • Training Process Verification: Demonstrating that a model’s training adhered to predefined protocols and requirements.
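
Data origin verification in particular tends to start from a commitment to the dataset. Below is a minimal sketch of one common building block, a Merkle-tree commitment over dataset records; this is a generic construction for illustration, not Polyhedra's specific pipeline, which would combine such a commitment with a proof system like Expander.

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Merkle root over hashed dataset records. Publishing this single
    root commits the provider to the whole dataset; any record's
    membership can later be proven with a logarithmic-size path,
    without revealing the other records."""
    level = [sha(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

records = [b"sample-1,label=cat", b"sample-2,label=dog",
           b"sample-3,label=cat"]
root = merkle_root(records)
print(root.hex())  # a 32-byte commitment to the entire dataset
```

Because the root is binding, any later claim about lineage or labeling can be checked against the same commitment the provider published before training.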

Prior work on zkML remained purely theoretical because of its high computational demands; however, advances in zero-knowledge technology now make production-ready solutions a reality. The Expander proof system, which has broken many prover performance records, is the foundation of our zkML technology. This progress paves the way for zkML applications that build trust without compromising privacy or compliance.

“Berkeley RDI and Polyhedra are setting a new standard for trust and transparency in artificial intelligence with innovative zero-knowledge machine learning (zkML) technology, a groundbreaking approach combining machine learning with cryptographic verification,” Dawn Song, Director of Berkeley RDI, commented. “We are excited to work together with Polyhedra to push the AI industry forward, ensuring that the benefits brought by AI do not come at the expense of trust and safety.”

The Future of Verifiable AI

As we continue developing zkML alongside our Expander proving system, our vision is to make AI systems more transparent, accountable, and reliable. The synergy between zkML and blockchain technology unlocks opportunities for:

  • Secure AI Model Deployment
  • Verifiable Training Processes
  • Configurable Privacy Controls
  • Decentralized AI Ecosystems
  • Innovative AI Applications and Services

We are excited about the role zkML will play in shaping the next generation of AI applications, empowering trust and verifiability in ways that will redefine industry standards. This is just the beginning of what zkML can offer, and we look forward to sharing more as this technology evolves.