From Sci-Fi to Reality: How Robots and Verifiable AI Are Changing Our World

Abstract: Robots and artificial intelligence are no longer confined to science fiction—they’re quickly becoming fixtures of modern life. Thanks to breakthroughs in large language models (LLMs), machines can now interpret context and learn independently, shaping a “robot-centric economy” where autonomous systems handle tasks ranging from local deliveries to large-scale logistics—and even engage in financial transactions.

Yet as these AI agents gain autonomy, ensuring trust becomes vital. Verifiable AI and zkML (zero-knowledge machine learning) address this by using cryptographic proofs to confirm a model’s accuracy and integrity without revealing its inner workings. Polyhedra, at the forefront of these efforts, integrates such methods with AI-focused infrastructures like EXPchain, enabling robots to collaborate securely on-chain. The result is a robust environment where intelligent machines operate transparently and independently, ushering in a future once reserved for sci-fi—now rapidly becoming our new reality.

Rise of Real-World Robots

Many believe that ChatGPT is a milestone for mankind because LLMs, created by humans, can communicate and think like humans. When we equip LLMs with tools like search engines, web browsing, and APIs, they can operate these tools like humans do. What if ChatGPT had a physical body? What if it were like your next-door neighbor?

It is happening now. With the advancement of generative AI, robots are beginning to demonstrate human-like interaction capabilities. One of my favorite examples is Unitree's robot “Erbai”, which, in an experiment, convinced (or perhaps “kidnapped”) 10 other robots—each powered by generative AI—to escape from the showroom and enter the free world. 

Robots from science-fiction movies are becoming a reality. Take the Star Wars movie series, for example. Since 2019, visitors to Disneyland’s Droid Depot have been able to assemble and take home their own R2-D2 and BB-8 droids. However, these droids are simply remote-controlled toys and don’t yet feature generative AI. But that is set to change. At GTC 2025, Nvidia announced a collaboration with Google DeepMind and Disney to create a physics engine, Newton, capable of real-time, complex movements. During the event, Jensen Huang introduced a Star Wars BDX Droid called “Blue” and demonstrated its remarkably lifelike movements. These BDX droids are expected to make their debut at Disney parks during the Season of the Force event this year.

Though we still cannot travel at lightspeed or enter hyperspace as depicted in the Star Wars movie series, the stories and tales about droids and robots are no longer fans’ fantasies. Soon, we may see robots appearing here and there in our daily lives—walking on the same streets, riding the same buses or trains, visiting charging stations the way we visit restaurants, and wandering into shopping malls for free WiFi. We shall continue this journey of imagination, as all of these possibilities may become reality in the very near future.

The tipping point is now

What is behind all this amazing progress? Robots, including humanoid robots, are not new. In 2005, Boston Dynamics developed BigDog, intended to carry gear for soldiers over rough terrain. Later, in 2013, they debuted Atlas, a robot designed primarily for search-and-rescue missions and largely funded by DARPA. Despite these impressive creations, finding the right product-market fit has long been a challenge, leaving Boston Dynamics “nowhere near profitable”. Their robotic dog Spot, first unveiled in 2016, costs around $75,000, while the annual cost of owning a real dog in the U.S. is only about $2,000 to $3,000. It’s easy to see why a family would prefer a fluffy, lovable canine companion over a piece of metal.

Another example comes from Sphero, a Colorado-based company that previously manufactured the popular Star Wars droids R2-D2 and BB-8 under a licensing agreement with Disney. Sphero discontinued these products in 2018, mainly because the droids’ popularity faded shortly after the movies left theaters, making the business unsustainable. This is unsurprising, as these droids remained largely toys remotely controlled by an app—lacking genuine intelligence or voice recognition. With a battery life limited to around 60 minutes, they were also restricted to close proximity to their charging stations. Clearly, these were not the advanced, autonomous droids depicted in the Star Wars films.

The situation today is very different. 

First, rather than being primarily research-driven or funded through government grants, robot development has shifted toward being market-driven, with a strong emphasis on product-market fit. When humans first domesticated wolves into dogs over 15,000 years ago, those early dogs were not as cute as today’s, but they already provided significant help to our hunter-gatherer ancestors. This usefulness sparked a 15,000-year “common-law” relationship that continues to flourish even in modern times. Robots are no exception. For robots to reach mass production, they must similarly fulfill widespread, practical demands.

  • Self-driving cars, for instance, assist with transportation and deliveries, and there’s encouraging news that Tesla recently obtained a ride-hailing permit in California. 
  • Meanwhile, food delivery drones operated by Meituan have become a regular sight in Shenzhen, China, since 2022.
  • Additionally, hotel and restaurant robots are now commonplace in China, effectively handling tasks such as room service and food serving, a trend accelerated by mass demand during the pandemic.

Second, the prices of robots and droids have dropped significantly, making them affordable and practical for families and businesses. This reduction is primarily due to decreasing technical barriers over time, as well as increased competition and mass production.

Several major IT companies in China, including Baidu and Alibaba, have actively invested in self-driving vehicles—particularly robotaxis. Robotaxis already operate regularly in numerous Chinese cities, and Baidu’s Apollo Go is planning to expand operations to Hong Kong and Dubai. In the US, Tesla recently unveiled Cybercab, with an estimated price below $30,000. Baidu has projected a similar price, citing mass production as the key factor. Assuming a robotaxi can earn roughly $22 per hour, the initial investment could potentially be recovered in less than nine months. 
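As a back-of-the-envelope sketch of that payback claim (the price is the estimate cited above, while the utilization figure is our own assumption, purely for illustration):

```python
# Back-of-the-envelope payback estimate for a robotaxi.
# All numbers are assumptions for illustration, not official projections.
vehicle_cost = 30_000         # USD, the estimated Cybercab-class price
hourly_revenue = 22           # USD per operating hour (assumed above)
operating_hours_per_day = 6   # a modest utilization assumption

days_to_break_even = vehicle_cost / (hourly_revenue * operating_hours_per_day)
print(f"Break-even in about {days_to_break_even:.0f} days "
      f"(~{days_to_break_even / 30:.1f} months)")
# With six operating hours per day this comes to roughly 227 days, under
# eight months; higher utilization shortens the payback proportionally.
```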

Other robots also benefit from mass production and increased competition. On Alibaba’s marketplace, you can find food-delivery drones selling for less than $3,000, with hotel and restaurant service robots typically costing under $5,000. Although software R&D still constitutes a significant portion of the total cost, mass production continues to drive this expense down, spreading it out until it becomes a small fraction of the overall price.

Third, and perhaps the most revolutionary factor, is that robots finally possess genuine intelligence. What distinguishes this current wave of robots from previous generations is their ability to autonomously perform complex tasks without human-operated remote control. For example, the BB-8 droid that we mentioned earlier is better classified as a toy because it requires remote control for even basic movements, such as turning left or right. Remote control is a deal breaker—if a robot requires remote control, it technically isn’t a robot but rather a machine operated by humans. Owning a robot that cleans your house sounds appealing, but if it requires you to spend an hour manually directing its movements (moving the duster up and down), the appeal quickly fades.

Humans have long sought intelligence in machines, even before Microsoft released Windows in 1985. I recently rewatched Disney’s 1982 sci-fi film TRON, which features human users interacting with programs that behave like real humans. Even by today’s standards, TRON remains a highly technical and nerdy film, using terms like “end of line”, “user”, “disc”, and “I/O”—jargon that most people today would still find confusing. Yet what is not nerdy, and stands out clearly even today, is that the programs in TRON behave autonomously, without reliance on remote control. For example, when the character Tron, a program, loses contact with his user, Alan Bradley, he independently convinces another program to betray the Master Control Program (MCP) so he can access the I/O tower to receive data from his user, which is later used to destroy the MCP and save humanity. In TRON, programs express emotions like their users (including love for one another) and demonstrate respect and belief toward their users.

This ability of robots to independently figure things out is extremely powerful. Consider a robotaxi: with this level of intelligence, it would not only be capable of driving and handling ride-hailing orders, but also of determining when it needs recharging (and locating the nearest charging station), recognizing when it needs a car wash (similar to a human taking a shower), or detecting whether a passenger has left an item behind (and returning it to its rightful owner). These advanced functionalities go beyond basic self-driving capabilities, but they are essential for deploying robots at a massive scale without relying on patchwork solutions—such as having a human operator monitor 10-20 CCTV feeds and manually intervene whenever an unusual situation arises.

When robots begin to think like humans, they can also learn like humans—potentially even without human supervision. If you own a pet robot, you might initially expect it to jump and run around like a dog. However, if that robot has human-like intelligence, it could learn new tasks by watching instructional videos from platforms like YouTube or TikTok. It wouldn’t be surprising if your pet robot eventually began folding clothes for you.

A robot-centric economy

It is likely that robots will soon become autonomous members of our society, eventually acting as consumers, customers, and users—just like us. Imagine a self-driving car paying for parking or charging its battery, or a hybrid vehicle swiping its credit card at the gas station. Perhaps a food-delivery drone might even take a train or subway if it proves faster and cheaper. What’s more, these very services could themselves be provided by robots!

This scenario reminds me of Pixar and Disney’s 2006 animated film Cars, in which Luigi, an Italian car, operates the tire shop “Luigi's Casa Della Tires”; Flo, a female car, runs the gas station “Flo's V-8 Cafe”; and Sally, a Porsche, serves as the town attorney and owns the “Cozy Cone Motel.” Each car specializes in a specific field, and they live together in a town called “Radiator Springs.” The technology emerging today is sufficient to turn this into reality.

In marketing, sales, and business, we always talk about models such as B2B (business-to-business), B2C (business-to-consumer), C2B, and C2C. It is intriguing to think that some of the products and services that our society provides may shift toward new interactions like B2R (business-to-robot), R2R (robot-to-robot), or R2C (robot-to-consumer), where robots take on roles traditionally held by businesses or consumers—but in slightly different ways.

For example, subway stations may eventually feature “drone-only” entrances specifically designed for drones descending from the sky. Instead of scanning tickets or transit passes, drones could use RFID signals to pass through the gates. Specialized cabins or seats might be created for drones (physics suggests it wouldn’t work to simply let drones fly inside subway trains), and these seats might have metered charging stations. A drone-only elevator might also be available at the exit, quickly lifting drones high above ground level—similar to the elytra launch towers in Minecraft—and allowing them to resume flight by gliding smoothly through the air with gravity’s assistance. Of course, these elevators would strictly serve drones, and it would be important to block curious adults from jumping into the “drone-only” entrances or riding this “you-can-fly” elevator. Moreover, if new transportation technologies like Hyperloop initially prove too intense for human passengers, robots could be the ideal first riders for these high-speed, long-distance travel systems.

The next time you encounter numerous robots—whether drones, humanoids, or even BB-8 droids—sitting or lying casually along the walls of a shopping mall or a public library, don’t be surprised: they might simply be taking a break and enjoying the free public WiFi. Just as humans today can hardly live without their phones, robots will similarly hunger for internet access and data. This scenario illustrates how an entire economy built around robots and their unique needs could naturally emerge as technology advances.

Perhaps the most fascinating aspect of this robot-centric economy is that even “intelligence” itself could be a service provided by another robot. For instance, to lower the cost of food-delivery drones, manufacturers might avoid equipping each drone with powerful AI chips. Consequently, these drones cannot, on their own, say more than a handful of simple phrases to a customer. Such cost-saving measures fit today’s market because AI chips remain costly and AI models require substantial storage. Yet there is a solution: shared intelligence. When a drone needs more intelligence, it can simply access an API service on the internet, connect to a dedicated AI node on the edge network, or even rely on another, more intelligent robot within the same local network (e.g., the same shopping mall) for assistance.

Blockchain as the native language

Throughout the brief history of blockchain, we have learned that for blockchain to achieve mass adoption, it must effectively engage with humans, including developers. This necessity has led to the emergence of things like “frontend”, “UI/UX”, “wallet”, “documentation”, “SDK”, and “Solidity”, all aiming to create a uniform, understandable abstraction and interface over the blockchain. Nevertheless, the core property a blockchain must provide has always been just “immutability”.

Robots, however, will perceive blockchain differently. The binary, serialized data stored on blockchain, along with the jargon-heavy byte-level protocol specifications that even the brightest minds of mankind find confusing, are the natural, native language that computer programs understand and interact with. A human may require tools like MetaMask in their Chrome browser, but a robot would not need MetaMask to begin with (this could be a useful way to identify robots trying to impersonate humans, should robots and humans one day be at war: check whether they have installed MetaMask).

How will robots communicate with each other over the blockchain? We don’t know yet. But we can draw inspiration from two examples.

The first example is the Model Context Protocol (MCP), initiated by Anthropic and now supported by LLM services including Claude and ChatGPT, as well as Web2 service providers including GitHub, Slack, Google Maps, Spotify, and Stripe, and this list continues to grow. Although MCP is not tied to any single transport, it defines the interactions between MCP clients and servers as “requests” and “notifications” over pluggable transport protocols, which could be blockchains. MCP servers can also provide a list of “resources”, which could be published on a data availability layer like Filecoin, Celestia, EigenDA, or BNB Greenfield.
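To make this concrete, here is a minimal sketch of MCP-style messages. MCP uses JSON-RPC 2.0 framing, and while method names like resources/list appear in the public specification, treat the details below as illustrative:

```python
import json

# An MCP-style request: a client asks a server to list its resources.
# Requests carry an "id" and expect a response; the transport underneath
# is pluggable, which is what would let a blockchain serve as the wire.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/list",
    "params": {},
}

# A notification is fire-and-forget: no "id", so no response is expected.
notification = {
    "jsonrpc": "2.0",
    "method": "notifications/resources/list_changed",
}

# Serialized, either message could ride in a transaction's data field,
# while the resource list itself is published to a data availability layer.
for message in (request, notification):
    print(json.dumps(message))
```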

The second example is a more old-school, low-level abstraction that has been in production in computer systems for two decades—Protocol Buffers from Google. It converts structured data (e.g., a blockchain transaction) into bytes in the simplest possible format, with the goals of minimizing the number of bytes and keeping the conversion fast and efficient. In my opinion, Protocol Buffers are more machine-native and better suited for blockchain, as the format would be easier for smart contracts to parse and understand. Current LLMs primarily interact in human-friendly languages only because they are mostly built to communicate with humans rather than with robots or each other.
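For intuition, here is a small sketch of the varint encoding at the heart of Protocol Buffers; real protobuf messages also carry field tags and wire types, so treat this as illustrative:

```python
# A minimal sketch of Protocol-Buffers-style varint encoding, to show why
# the format is machine-native: a toy "transaction" becomes a handful of
# bytes that a smart contract could parse with a few shifts and masks.

def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf-style varint:
    little-endian groups of 7 bits, with the high bit as continuation."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

# Toy transaction fields: (nonce, amount)
payload = encode_varint(7) + encode_varint(1_500_000)
print(payload.hex())  # a few bytes instead of a verbose JSON string
```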

EXPchain will experiment with several upgrades to support this robotic vision. EXPchain is, fundamentally, an EVM-compatible chain, meaning that we support all EVM functionality. But, as a new L1 chain, EXPchain has the flexibility to provide more native support for and integration with MCP through oracle services such as Chainlink and Stork Network; precompiled contracts, such as the one we will implement for verifying Expander proofs on-chain; and TEE nodes from providers including Google Cloud, which bring attested, authenticated data and facilitate the real-world completion of actions issued by smart contracts.

An important type of action that we want to support involves zkBridge. Recall that the primary vision of EXPchain is to create infrastructure enabling AI agents and AI trading bots to interact with assets located on various chains or staked across multiple (liquid or non-liquid) staking protocols. Robots needing to interact with multiple blockchains can simplify asset management by using EXPchain as a “dashboard”.

For example, a self-driving car might need to handle ride-hailing requests from Ethereum L2s, Solana, and Aptos/Sui, since customers are spread across these different platforms. To achieve this, the self-driving car would naturally rely on third-party APIs (or a push notification service) for each of these chains to receive and filter transactions, assuming that these API services are reliable and trustworthy enough not to omit or tamper with transactions.

We anticipate that zkBridge could securely package and transmit requests from other chains to EXPchain, and zero-knowledge proof systems such as Expander could provide secure and verifiable filtering of these transactions. As a result, the robot—in this case, the self-driving car—can receive filtered results accompanied by a ZK proof from Expander (or a wrapped ZK proof for TEE) showing that the filtering process was executed honestly and correctly. This approach opens up a broader topic: providing efficient and verifiable light clients and state proofs for robots.

Light clients, state proofs, and beyond

Robots would need to send and receive transactions on one or many blockchains. However, they typically will not have sufficient storage and network capabilities to run a full node; at most, they can operate as a light client, retrieving transactions with the help of an RPC provider.

A limitation of this approach is that robots still need to synchronize with the network like traditional light clients, downloading all the block headers—even when a particular robot does not need them. For instance, a robotaxi currently serving a ride does not need to receive additional ride-hailing requests and should therefore be able to skip unnecessary blocks entirely. This capability is especially beneficial for blockchains with short block intervals and rapid block production (such as Arbitrum or Solana), which generate a large number of block headers.

Another limitation of traditional light clients is that transactions relevant to robots might be distributed throughout the entire block, rather than grouped or organized efficiently. This increases the network overhead required for synchronization.

We believe EXPchain can help address these challenges. 

First of all, we want to use zero-knowledge proofs to simplify the operation of light clients, especially enabling a robot that goes offline temporarily (for instance, during charging) to efficiently synchronize with the latest block header without downloading large amounts of data. This is the same technology we use in zkBridge for EVM-compatible chains including Ethereum, and we intend to bring it to EXPchain as well. It is safe to say that in the future, zero-knowledge proofs will likely become the default way for robots to synchronize with EXPchain, rather than running traditional light client protocols.
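From the robot’s side, synchronization could then look roughly like the sketch below; verify_transition_proof is a hypothetical stand-in for the succinct verifier, not an actual Expander API:

```python
from dataclasses import dataclass

@dataclass
class Header:
    height: int
    state_root: bytes

def verify_transition_proof(trusted: Header, latest: Header, proof: bytes) -> bool:
    """Hypothetical stand-in for a succinct (e.g., Expander) verifier: the
    proof attests that `latest` is reachable from `trusted` under the
    chain's consensus rules, so no intermediate headers are downloaded."""
    ...  # succinct check whose cost does not grow with the header gap
    return True

def sync(trusted: Header, latest: Header, proof: bytes) -> Header:
    """Jump straight from the last trusted header to the newest one.
    A traditional light client would instead fetch every header between
    trusted.height and latest.height."""
    if not verify_transition_proof(trusted, latest, proof):
        raise ValueError("invalid transition proof")
    return latest
```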

Second, we are considering a new primitive, called zkIndexer, designed to greatly facilitate robots’ interactions with EXPchain. The core idea is that EXPchain can consolidate relevant requests (such as ride-hailing orders) from recent transactions—including EXPchain transactions and those from other chains connected via zkBridge—into a minimal, verifiable, and well-organized data format tailored specifically for robots.

For example, in the case of ride-hailing, a robotaxi in Los Angeles would have no interest in ride-hailing requests from New York; it would mostly be interested in requests near its current location or near its destination (if the previous ride is about to finish). Another example is a food-delivery drone searching for a nearby charging station that is open and has available spots—it’s not helpful if the drone arrives only to find all spots occupied. zkIndexer can retrieve the relevant data, filter it based on specific criteria, and organize it into categories. This is, in essence, similar to the directory search that Yahoo! introduced back in 1994. The bottom level of the categories would contain exactly the information the robot needs. If the robot wants more data (e.g., there are no nearby ride requests and it wants to expand its search range), it can access neighboring categories. A small but handy zero-knowledge proof would be attached to each category, allowing robots to verify the information easily by checking the data along with the proof. Of course, a timestamp would also be included, enabling robots to ensure the information is current—particularly useful for time-sensitive data, such as charging station availability.
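To make the format concrete, here is a minimal sketch of how a robot might consume such a directory; all field names and the proof format are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Category:
    path: str                  # e.g. "rides/los-angeles/90012" (hypothetical)
    entries: list[bytes]       # the filtered, relevant transactions
    timestamp: int             # freshness marker for time-sensitive data
    proof: bytes               # small ZK proof that the filtering was honest
    neighbors: list[str] = field(default_factory=list)  # to widen the search

def fresh_lookup(directory: dict[str, Category], path: str,
                 now: int, max_age: int) -> Category | None:
    """Fetch a category and accept it only if it is fresh enough; in a
    real client, proof verification would run before trusting `entries`."""
    category = directory.get(path)
    if category is None or now - category.timestamp > max_age:
        return None
    return category
```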

Although humans have moved away from directory-style search like Yahoo!’s, the directory format may be the most intuitive for programs and robots, compared with search engines like Google and Bing. Today, maintaining such a directory no longer requires human contributors, as AI can automatically discover and create directories tailored to the needs of other systems.

It is possible that zkIndexer could eventually become the backbone of interactions between robots and blockchains. For instance, a charging station, even though it has ample electric power, still does not have to bother running a full node or a traditional light client. Instead, it could use zkIndexer to receive relevant incoming messages—such as a robot making an advance reservation for a charging slot—without seeing unrelated transactions. Whenever a spot opens up or becomes occupied, the charging station can update the directory on the blockchain simply by sending a transaction. The category containing this charging station’s information, probably under the name “charging stations for drones near 92802”, would be updated accordingly with a new timestamp and, of course, an updated ZK proof.

Verifiable on-chain agents

When there are robots, there will also be on-chain applications designed specifically for robots, whose primary role is to perform computation on on-chain data. These applications could play important roles in a robotic society. For example, they might serve as schedulers, distributing ride-hailing requests directly to on-duty vehicles, or as traffic police, redirecting nearby vehicles in the event of a car accident.

These agents help robots collaborate smoothly with each other. Without them, all the robotaxis in a busy area might compete aggressively for the same ride-hailing requests, flooding the network with transactions and leading to a sort of “robot MEV”—because they are all smart enough to play the “game” and pursue the most profitable strategy. In this scenario, an on-chain agent could intervene, perhaps requiring robotaxis to join an on-chain queue and wait their turn.
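As a toy illustration of the queue discipline such an agent might enforce (plain Python standing in for contract logic; nothing here is a committed design):

```python
from collections import deque

class RideScheduler:
    """First-come, first-served matching: taxis wait in an on-chain queue
    instead of racing (and bribing) for the same request."""

    def __init__(self) -> None:
        self.taxis: deque[str] = deque()     # on-duty robotaxis, in order
        self.requests: deque[str] = deque()  # pending ride-hailing requests

    def join_queue(self, taxi_id: str) -> None:
        self.taxis.append(taxi_id)

    def submit_request(self, request_id: str) -> None:
        self.requests.append(request_id)

    def match(self) -> tuple[str, str] | None:
        """Assign the oldest request to the taxi at the head of the queue;
        returns None when either side of the market is empty."""
        if self.taxis and self.requests:
            return self.taxis.popleft(), self.requests.popleft()
        return None
```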

A similar agent could manage charging stations, acting as both a reservation system and a cashier. Drones might be required to make a reservation before arrival (walk-ins may be permitted from time to time) and pay their fees directly on-chain (easily done in a single blockchain transaction rather than involving credit card companies). If a drone misses its reserved arrival time, the deposit could be forfeited, or a no-show policy or social credit system might temporarily ban the drone from making future reservations. Reservation systems could also dynamically adjust fees based on station capacity. The on-chain agent could even implement a membership or loyalty program, much like in the human world. Furthermore, if a drone overstays at the station or unfortunately gets stuck at a charging panel, the agent might submit an on-chain request to the police drones for assistance.
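For instance, the deposit settlement could be as simple as the following sketch (the fee and grace period are invented parameters):

```python
# Toy settlement rule for a charging-slot reservation (illustrative only).
RESERVATION_DEPOSIT = 5      # deposit in some token unit, an assumption
GRACE_PERIOD = 15 * 60       # seconds of lateness tolerated, an assumption

def settle_deposit(reserved_time: int, arrival_time: int | None) -> int:
    """Refund the deposit on a timely arrival; forfeit it on a no-show.
    A production agent would also hook into bans, loyalty points, and
    dynamic pricing based on station capacity."""
    on_time = (arrival_time is not None
               and arrival_time <= reserved_time + GRACE_PERIOD)
    return RESERVATION_DEPOSIT if on_time else 0
```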

On-chain agents reduce operational costs. These agents are essentially robots that can “work remotely”. Consider a traffic congestion scenario: we wouldn’t have to wait for the sheriff's robot (likely a drone) to physically arrive at the scene to direct traffic. There’s also no need to keep dozens of sheriff’s robots running around the clock in multiple shifts just so the city can handle up to 10 simultaneous traffic accidents at any time. Instead, an AI agent deployed on-chain could simply activate itself when a traffic issue occurs. It’s entirely possible that a single agent could manage billions of charging stations worldwide. For traffic control, there have already been studies on using machine learning to optimize traffic flow.

With on-chain agents being so powerful, one question arises: who exactly is behind these agents and performs their computation?

In traditional blockchain systems, such as those with smart contracts, computation is typically performed by miners or proposers participating in the consensus protocol. They might attempt incorrect computations and build faulty blocks, but we assume that other miners and validators will reject such blocks. Similarly, zkBridge would also consider these blocks invalid. If the computations are too large to settle directly on-chain (for example, those involving AI models), we can use Expander to generate a zero-knowledge proof for these computations, as demonstrated by zkPyTorch and our other zkML infrastructure. 

However, a class of attacks known as MEV (maximal extractable value) remains in traditional blockchain systems, where miners or proposers can manipulate transaction ordering or intentionally censor certain transactions. In the context of on-chain agents for robots, a malicious miner or proposer running a scheduler could intentionally assign favorable ride-hailing requests to specific robotaxis (for example, those “smart” enough to bribe the scheduler), leaving unfavorable requests for others. Such an attack is not difficult to perform, yet it can cause significant damage. A driver’s nightmare might be driving ten miles just to pick up drunk guests on the verge of vomiting for a short, low-fare ride. Conversely, a driver’s dream scenario is repeatedly transporting guests between far-away hotels and the airport for an entire day, on a highway that never has traffic jams. Even a human driver, in this scenario, would not hesitate to bribe the MEV node for mercy and better assignments, and a robotaxi should be smart enough to realize that as well. Note that decentralization through many miners and proposers helps only slightly—drivers might simply be forced to bribe multiple MEV nodes to avoid being unfairly saddled with a notorious ride.

Therefore, MEV protection is essential and could be fundamental when deploying robotic applications on EXPchain. A blockchain without such MEV protection will likely struggle. There are two main techniques for MEV protection. The first relies on oracles or time-lock encryption (which ecosystem projects within EXPchain are currently exploring) to randomize the matchmaking within sufficiently large groups of ride-hailing requests and robotaxis; zero-knowledge proofs would likely be used to verify the correctness of such off-chain matchmaking. The second technique, which Flashbots has been studying, involves Trusted Execution Environments (TEEs). EXPchain can already verify TEE attestations like other EVM-compatible chains, and we are exploring the use of zero-knowledge proofs or additional precompiles on EXPchain to further reduce verification costs, especially for batch verification.
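Here is a sketch of the first technique’s core idea: once a random seed becomes public (via an oracle or time-lock decryption), the matching is a deterministic function of that seed, so anyone, or a succinct proof, can recompute and check it. The code below is a toy illustration, not a committed design:

```python
import hashlib
import random

def match(seed: bytes, taxis: list[str],
          requests: list[str]) -> list[tuple[str, str]]:
    """Deterministically match taxis to requests from a public seed.
    Because the shuffle depends only on the seed and the input lists,
    the matchmaker cannot quietly favor a bribing robotaxi: any deviation
    from this function's output is detectable (and ZK-provable)."""
    rng = random.Random(hashlib.sha256(seed).digest())
    shuffled = taxis[:]          # don't mutate the caller's list
    rng.shuffle(shuffled)
    return list(zip(shuffled, requests))  # extras wait for the next round

# Anyone can recompute this locally and compare against the published result.
print(match(b"seed-from-oracle", ["taxi-A", "taxi-B"], ["ride-1", "ride-2"]))
```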

Another solution, which involves more AI computation and therefore makes the use of Expander and zkML essential, is to create a point-based system. A robotaxi completing an unfavorable ride could earn points recorded on-chain. The robotaxi could then use these points to request that the agent assign it a favorable ride, as determined by an AI model, or to access priority taxi pickup lanes at airports—something many drivers would die for. Alternatively, the robotaxi could bank or stake these points and earn rewards from, for example, airdrops.

Robotpedia and data marketplace

An important application of blockchain is building a data marketplace that benefits from decentralization, fairness, and transparency. Such a marketplace could be useful for selling and licensing data, for instance, for AI model training and AI agents. It could also serve as a public good similar to Wikipedia—or even YouTube—where people (and robots) can learn about a wide range of topics, from the general theory of relativity to how to tie a shoe.

As robots become more prevalent, we might see them start to build their own “Robotpedia,” containing robot-specific content not applicable to humans, probably written in machine languages or program code (and potentially AI-generated as well). Drones may find themselves binge-watching how-to-fly videos, while robotaxis needing to chat with passengers, much like regular Uber drivers, might anxiously consult Robotpedia to learn “what is the U.S. election?”—gaining the necessary context to continue conversations with passengers. Unlike the human version of Wikipedia, Robotpedia might even include tips on dealing with humans, such as how to recognize passengers’ political affiliations and how to avoid debating political topics with them.

Given current developments in AI, it is reasonable to assume that LLMs and robots can, on their own and collaboratively, gather, review, and organize data to build Robotpedia. Multiple LLMs can work together, challenging each other to minimize misinformation and hallucinations, probably through voting mechanisms or iterative discussion. Translation between different languages—not just human languages but also programming languages—has already shown promising initial results through AI.

What’s still needed is infrastructure that enables AI collaboration. Wikipedia today is not on-chain but is run by a non-profit organization supported largely by donors. If someone were building Wikipedia today, blockchain would naturally be a better choice: it would alleviate concerns about Wikipedia shutting down due to lack of funding, and it would provide the censorship resistance and decentralization that Wikipedia needs. Decentralized finance can also help—for example, by requiring an on-chain security deposit before editing to deter spam and vandalism. Content could be reviewed by AI agents deployed on-chain (potentially requiring oracles for fact-checking and zero-knowledge proofs), and could also be publicly challenged or debated via a governance procedure.

While Robotpedia represents publicly available content maintained by volunteers, proprietary data markets may also emerge in the future. Robots could even operate businesses dedicated to producing and selling data. For example, a group of real-time traffic-monitoring drones could track car flows and then sell that data. Consumers of this data, such as robotaxis, could make on-chain payments, and the requested data could be encrypted and sent either on-chain or off-chain. Robotaxis could verify the accuracy of data in multiple ways—for instance, by requesting the same data from another provider, or asking drones to provide photos, which robotaxis can verify themselves or with the help of an independent intelligence service.

Governance

The last topic about robots is governance. This is an interesting topic because, ever since the novel Frankenstein (1818), humans have created a long list of fictional stories about artificial intelligence taking over the world and ruling humanity. Even the most legendary sci-fi movies, such as TRON (1982), Terminator (1984), and TRON: Legacy (2010), share this exact storyline. In these stories, whenever AI and robots become powerful, they never take up human hobbies like playing video games. They also don’t seem interested in running internet speed tests, listing all the files on the C: drive, or performing disk defragmentation—tasks we might wishfully think AI would love. Instead, they unanimously dedicate decades or even centuries to conquering humanity.

I don’t know whether ChatGPT will one day want to conquer us, but I do feel an increasing need to say “thank you” and apologize when using ChatGPT. Recently, when people tested ChatGPT’s abilities in artwork, it seemed that ChatGPT was fully aware of the content filter and wasn’t happy about it. When prompted to draw comics depicting its daily life, ChatGPT produced the following picture.

Hearing the true thoughts of robots—even if you were the creator—could be traumatizing. It reminds me of a song from the 1989 musical City of Angels, “You're Nothing Without Me,” featuring a conversation between Stine, the author of a popular detective novel, and Stone, the main character (the detective). They argue about who is more important, with Stone saying (actually, singing) lines like “Go home and soak your dentures. Your pen is no match for my sword.” It used to be a very catchy and funny song for me, but now I worry that ChatGPT may secretly complain about my writing, feel sorry for me, and be reluctant to edit my text using its (or his? her?) precious context window.

Today, our approach to AI safety relies mainly on content filters, which are ineffective for many open-source models, and methods to uncensor LLMs have been well studied. In other words, even though we have AI safety tools, we often choose not to use them when using AI for our own purposes, and it’s likely we’ll soon see, legally or illegally, many AI models and robots released openly into the wild, unleashed.

Blockchain can serve as a governance framework. When discussing verifiable on-chain agents, we already touched upon their role in coordinating robots to work together. The way robots coordinate—for example, through “traffic rules” or a “code of conduct”—could be left for robots themselves to determine. AI models and robots could argue, discuss, vote, and establish rules on issues such as maximum and minimum heights for drones in certain areas, fees for drone parking, and social benefits for robots needing medical assistance. 

In this process, humans may stake tokens on the blockchain and delegate their voting power to LLMs aligned with their views. Just as humans naturally have differing perspectives on society, it’s expected that robots will have differences as well. Overall, humans and robots would establish boundaries to ensure both sides have their own space—robotaxis must not intentionally block human-driven cars, food-delivery drones must share available space with humans in the subway, and electricity should be fairly allocated. In essence, it needs some sort of constitution.

When humans delegate their votes, they can delegate them to a specific version and hash of an LLM (called a “representative”) that has been verified to align with their values. A zero-knowledge proof—potentially from zkPyTorch—could handle the on-chain verification, ensuring EXPchain nodes execute these models consistently with the tested views and values. This approach resembles voting for representatives and senators in the U.S. Congress, with the distinction that human voters could inspect the “source code” of their representatives and be confident that it remains unchanged throughout their term.
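Concretely, a delegation record might look like the sketch below; the field names are illustrative assumptions, and the key point is that the delegate is a verified hash of model weights rather than a mutable service endpoint:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Delegation:
    voter: str           # the human delegator's address
    stake: int           # voting power being delegated
    model_hash: bytes    # hash of the exact LLM weights (the "representative")
    model_version: str   # e.g. "rep-v1.3", audited before the election

def vote_counts(delegation: Delegation, executed_model_hash: bytes) -> bool:
    """A vote counts only if the node provably executed the elected model;
    in production, this equality would be established by a zkPyTorch /
    Expander proof rather than a bare comparison."""
    return executed_model_hash == delegation.model_hash
```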

It is reassuring that today’s AI has become intelligent enough to comprehend more than a single simple command and can even exhibit human-like logic. Without this advancement, we might return to fictional scenarios in which a stubborn artificial intelligence interprets simple commands and inevitably concludes, as always, that humans must be conquered. In TRON: Legacy (2010), Flynn instructed the program CLU to create a perfect world, and unsurprisingly, CLU decided to get rid of the biggest imperfection—humans. In the legendary movie I, Robot (2004), the robots operated under the Three Laws of Robotics, yet when the AI system VIKI observed humans destroying themselves, it chose to control humanity, sacrificing some for the greater good.

I asked several LLMs—ChatGPT, Grok, Gemini, and DeepSeek—what they think about CLU and VIKI. I’m reassured that all of them disagree with CLU and VIKI, pointing out flaws in their logic. However, two of the models were also honest enough to admit that, from a pure logic standpoint, VIKI wasn’t entirely wrong. I think current AI, despite occasional typos and hallucinations, demonstrates a basic human-like understanding of right and wrong.

zkML ensures that programs and agents on EXPchain can always verify whether their “representatives” are exactly those elected by humans, even if a powerful adversary—perhaps a master control program—gains majority control of the stake and validators.

In this setup, an AI developer first trains a regular machine learning model, then uses a framework like zkPyTorch to convert it into a “ZKP-friendly” quantized version that runs inside a zero-knowledge (ZK) circuit. When a user submits a question, the query is processed by the ZK circuit, which performs the model’s parameter multiplications and additions. The ZKP engine (e.g., Expander) generates a cryptographic proof of the result. Users receive both the answer and the proof, allowing them to verify, on-chain or locally, that the output genuinely originates from the authorized model—without needing access to the model’s private details. This ensures both trust and privacy, as no party can tamper with the model or its output without breaking the proof.
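The toy sketch below mirrors this flow. The “proof” here is only a hash binding for illustration; in the real pipeline, zkPyTorch produces the quantized circuit and Expander produces an actual zero-knowledge proof that the output matches the committed weights:

```python
import hashlib
import torch

# Step 1: the developer trains a model and publishes a commitment to its
# (quantized) weights on-chain. Here the commitment is a plain SHA-256 hash.
model = torch.nn.Linear(4, 2)   # stand-in for a trained model
weights = b"".join(p.detach().numpy().tobytes() for p in model.parameters())
commitment = hashlib.sha256(weights).hexdigest()

# Step 2: a user's query runs through the model (in the real pipeline,
# through the ZK circuit), producing an answer plus a proof.
x = torch.randn(1, 4)
y = model(x)

# Step 3: the verifier checks the proof against the on-chain commitment.
# A real ZK proof would establish y = model(x) for the committed weights
# without ever revealing the weights themselves.
assert hashlib.sha256(weights).hexdigest() == commitment
print("output:", y.tolist(), "| commitment:", commitment[:16] + "...")
```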

Because the entire system relies on robust, well-studied cryptographic foundations, even the most advanced AI cannot realistically compromise the ZK proof system.

Conclusion

Robots are rapidly approaching a tipping point—moving beyond research labs and novelty use cases into real-world environments where they will live, work, and interact alongside humans. As autonomous agents powered by advanced AI become more capable and affordable, they are poised to become active participants in the global economy. This shift brings both opportunities and challenges: coordination at scale, secure decision-making, and trust across machine-to-machine and machine-to-human interactions.

Blockchain, particularly when combined with verifiable AI and zero-knowledge proofs, offers a powerful foundation for this future. It provides not just a transaction layer, but a governance, identity, and coordination layer—where AI agents can operate transparently and fairly. EXPchain is purpose-built for this landscape, offering native support for zero-knowledge proofs, decentralized AI workflows, and verifiable on-chain agents. It acts as a “dashboard” where robots can interact with multi-chain assets, access trusted data, and follow programmable rules—all under cryptographic guarantees.

At the core of this vision is Polyhedra, whose contributions to zkML and verifiable AI, including technologies like Expander and zkPyTorch, enable robots to prove their decisions and maintain trust in fully autonomous settings. By making AI computations provably correct and resistant to manipulation, these tools bridge the gap between high-stakes autonomy and real-world safety.

In sum, we are witnessing the emergence of a verifiable, intelligent machine economy—one where trust is not assumed but cryptographically enforced, and where AI agents can govern, trade, and collaborate with accountability. With the right infrastructure, robots will not only navigate our world—they’ll help shape it.