diff --git a/.github/workflows/latexmk.yml b/.github/workflows/latexmk.yml index 54fe084..5569f27 100644 --- a/.github/workflows/latexmk.yml +++ b/.github/workflows/latexmk.yml @@ -9,11 +9,11 @@ jobs: - name: Compile LaTeX document uses: dante-ev/latex-action@latest with: - root_file: zk-vin-whitepaper.tex + root_file: zklayer-whitepaper.tex - name: Save artifact uses: actions/upload-artifact@v4 with: name: whitepaper.pdf - path: zk-vin-whitepaper.pdf + path: zklayer-whitepaper.pdf diff --git a/README.md b/README.md index d222de0..545da1a 100644 --- a/README.md +++ b/README.md @@ -1,11 +1,11 @@ -# Zero Knowledge Verified Inference Network +# zklayer.ai - Verified Inference Network ## Overview This repository contains the LaTeX source code for our whitepaper. This document presents a comprehensive analysis of our plans to build a network solely for verified artificial intelligence inferences and operations. ## File Structure -- `zk-vin-whitepaper.tex`: The main LaTeX file that compiles the entire whitepaper. +- `zklayer-whitepaper.tex`: The main LaTeX file that compiles the entire whitepaper. - `figures/`: Directory containing figures and images used in the whitepaper. ## Prerequisites diff --git a/whitepaper.pdf b/whitepaper.pdf deleted file mode 100644 index 442233b..0000000 Binary files a/whitepaper.pdf and /dev/null differ diff --git a/zklayer-whitepaper.pdf b/zklayer-whitepaper.pdf new file mode 100644 index 0000000..09f2cb8 Binary files /dev/null and b/zklayer-whitepaper.pdf differ diff --git a/zk-vin-whitepaper.tex b/zklayer-whitepaper.tex similarity index 73% rename from zk-vin-whitepaper.tex rename to zklayer-whitepaper.tex index 1c1b3ab..66097f6 100644 --- a/zk-vin-whitepaper.tex +++ b/zklayer-whitepaper.tex @@ -51,7 +51,7 @@ T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}} \begin{document} -\title{{Zero Knowledge Verified Inference Network} \\ +\title{{ZKLayer - A Verified Inference Network} \\ {\footnotesize} } @@ -62,7 +62,7 @@ {Inference Labs Inc. 
- \url{inferencelabs.com}}\\ %E-mails: investor@inferencelabs.com } -\IEEEauthorblockN{13 April 2024 (v1.0)}%, xyz i\IEEEauthorrefmark{1} and xyzi \IEEEauthorrefmark{1} and xyzz\IEEEauthorrefmark{1} and xyz\IEEEauthorrefmark{1}} +\IEEEauthorblockN{13 April 2024 (v1.1)}%, xyz i\IEEEauthorrefmark{1} and xyzi \IEEEauthorrefmark{1} and xyzz\IEEEauthorrefmark{1} and xyz\IEEEauthorrefmark{1}} } \maketitle @@ -80,7 +80,7 @@ In the Web3 environment, AI systems, like humans and other machines, should operate within a decentralized framework. The promising integration of AI capabilities within the decentralized framework of Web3 heralds the future, as this combination has the potential to unlock unprecedented levels of innovation, efficiency, and democratization. By leveraging this synergy, we can shape a more equitable and sustainable digital ecosystem for generations to come. However, while AI and Web3 each present their distinct challenges, their convergence brings about complex issues. Ensuring the responsible and ethical deployment of AI, preserving the integrity of inference, and safeguarding the intellectual properties of AI within the decentralized ecosystem of Web3 require addressing multifaceted concerns such as fairness and guarantees in payment, user privacy, and more. -This paper introduces zk-VIN, a decentralized network facilitating the deployment of AI systems on a Web3 infrastructure. In zk-VIN, we utilize cryptographic technologies such as zero-knowledge proofs (ZKP), fully homomorphic encryption (FHE), and multi-party computation (MPC) to safeguard the integrity and privacy of users or/and AI developers. Furthermore, a blockchain-based architecture ensures payment assurances and democratic governance within the system. While some challenges have been addressed, our team remains committed to continuously updating zk-VIN with new technologies to tackle emerging concerns. 
+This paper introduces ZKLayer, a decentralized network facilitating the deployment of AI systems on a Web3 infrastructure. In ZKLayer, we utilize cryptographic technologies such as zero-knowledge proofs (ZKP), fully homomorphic encryption (FHE), and multi-party computation (MPC) to safeguard the integrity and privacy of users and/or AI developers. Furthermore, a blockchain-based architecture ensures payment assurances and democratic governance within the system. While some challenges have been addressed, our team remains committed to continuously updating ZKLayer with new technologies to tackle emerging challenges in Decentralized AI. \end{abstract} @@ -100,21 +100,21 @@ \subsubsection{Artificial Intelligence (AI)} Artificial Intelligence (AI) stands at the forefront of modern technological advancements, revolutionizing industries and reshaping the way we interact with technology. With its ability to mimic human cognitive functions, AI enables machines to learn from data, adapt to new inputs, and perform tasks that traditionally required human intelligence. From personalized recommendation systems to autonomous vehicles, AI applications permeate various aspects of our daily lives, offering solutions to complex problems and unlocking new possibilities for innovation. -Machine Learning (ML), a subset of AI, focuses on the development of algorithms that allow computers to learn from and make predictions or decisions based on data. Through the iterative process of training on large datasets, ML algorithms can recognize patterns, extract insights, and improve their performance over time without being explicitly programmed. This capability is driving advancements in fields such as healthcare, finance, and cybersecurity, where ML techniques are being utilized to enhance diagnosis accuracy, optimize financial trading strategies, and detect anomalies in network traffic, among other applications. 
ML's versatility and effectiveness in handling large volumes of data make it a pivotal component of the AI landscape, propelling the evolution of intelligent systems and fueling the growth of data-driven decision-making across industries. +Machine Learning (ML), a subset of AI, focuses on the development of algorithms that allow computers to learn from and make predictions or decisions based on data. Through the iterative process of training on large datasets, ML algorithms can recognize patterns, extract insights, and improve their performance over time without being explicitly programmed. This capability is driving advancements in fields such as healthcare, finance, and cybersecurity, where ML techniques are being utilized to enhance diagnostic accuracy, optimize financial trading strategies, and detect anomalies in network traffic, among other applications. ML's versatility and effectiveness in handling large volumes of data make it a pivotal component of the AI landscape, propelling the evolution of intelligent systems and fueling the growth of data-driven decision-making across industries. \subsubsection{Web3} -Web3 represents the next phase of the internet, emphasizing decentralization, transparency, and user empowerment. Unlike its predecessor, Web2, which is characterized by centralized platforms and gatekeepers, Web3 aims to distribute power and control back to individual users through blockchain technology and decentralized protocols. In the Web3 ecosystem, users have greater ownership and control over their data and digital assets, facilitated by cryptographic principles and smart contracts. This shift towards decentralization not only reduces reliance on intermediaries, but also fosters trust and security by design, as transactions are recorded on a transparent and immutable ledger. 
As a result, Web3 enables new models of digital interaction, including decentralized finance (DeFi), non-fungible tokens (NFTs), and decentralized autonomous organizations (DAOs), which offer innovative ways for individuals to engage, transact, and collaborate online. +Web3 represents the next phase of the Internet, with an emphasis on decentralization, transparency, and user empowerment. Unlike its predecessor, Web2, which is characterized by centralized platforms and gatekeepers, Web3 aims to distribute power and control back to individual users through blockchain technology and decentralized protocols. In the Web3 ecosystem, users have greater ownership and control over their data and digital assets, facilitated by cryptographic principles and smart contracts. This shift towards decentralization not only reduces reliance on intermediaries but also fosters trust and security by design, as transactions are recorded on a transparent and immutable ledger. As a result, Web3 enables new models of digital interaction, including decentralized finance (DeFi), non-fungible tokens (NFTs), and decentralized autonomous organizations (DAOs), which offer innovative ways for individuals to engage, transact, and collaborate online. -Moreover, Web3 holds the potential to democratize access to information and resources, empowering individuals across the globe to participate in the digital economy on their own terms. By leveraging decentralized networks and peer-to-peer interactions, Web3 can circumvent censorship, promote financial inclusion, and facilitate cross-border transactions without the need for traditional intermediaries. Additionally, Web3 technologies such as decentralized storage and identity solutions provide avenues for individuals to regain control over their digital identity and secure their online presence.
As the Web3 ecosystem continues to evolve and mature, it has the capacity to redefine the internet as a more open, inclusive, and equitable space, where individuals have greater agency and autonomy in shaping their digital experiences and interactions. +Moreover, Web3 holds the potential to democratize access to information and resources, empowering individuals across the globe to participate in the digital economy on their own terms. By leveraging decentralized networks and peer-to-peer interactions, Web3 can circumvent censorship, promote financial inclusion, and facilitate cross-border transactions without the need for traditional intermediaries. Additionally, Web3 technologies such as decentralized storage and identity solutions provide avenues for individuals to regain control over their digital identity and secure their online presence. As the Web3 ecosystem continues to evolve and mature, it has the capacity to redefine the Internet as a more open, inclusive, and equitable space, where individuals have greater agency and autonomy in shaping their digital experiences and interactions. -\subsection{The rationale for combination of AI and Web3} +\subsection{The rationale for the combination of AI and Web3} The combination of AI and Web3 technologies holds immense potential to revolutionize various aspects of our digital landscape. By integrating AI with decentralized protocols and blockchain systems, we can leverage the strengths of both domains to create more efficient, secure, and transparent systems. One significant benefit lies in the realm of DeFi, where AI algorithms can analyze vast amounts of data from blockchain networks to optimize trading strategies, detect fraudulent activities, and provide personalized financial services. 
Additionally, AI-powered smart contracts can automate complex processes, such as loan approvals or asset management, without the need for intermediaries, enhancing the speed and accuracy of transactions while reducing costs and human errors. Furthermore, the combination of AI and Web3 technologies can foster greater privacy and data ownership for users. AI algorithms can be deployed within decentralized applications (dApps) to provide personalized experiences while preserving user anonymity and data sovereignty. For instance, AI-driven recommendation systems can suggest content or products based on user preferences without compromising privacy by processing data directly on the user's device or utilizing privacy-enhancing techniques. Moreover, integrating AI with decentralized identity solutions can enhance identity verification processes while ensuring user control over their personal data. This symbiotic relationship between AI and Web3 not only enhances the efficiency and security of digital interactions but also empowers individuals to reclaim ownership of their digital identities and data in an increasingly decentralized and interconnected world. -\subsection{Overvieweing all challenges} +\subsection{An Overview of the Challenges} Combining AI with Web3 technologies presents a complex and multifaceted challenge. Integrating AI algorithms into decentralized systems requires careful consideration of compatibility, interoperability, and scalability. AI models often rely on vast amounts of data for training and inference, posing significant challenges in terms of data privacy, storage, and access within decentralized networks. Moreover, ensuring the transparency and fairness of AI-driven processes in a decentralized environment is inherently difficult, as traditional centralized oversight mechanisms may not be applicable.
Additionally, the dynamic and rapidly evolving nature of both AI and blockchain technologies introduces complexities in maintaining compatibility and synchronization between the two domains. Addressing these challenges requires interdisciplinary expertise spanning AI, cryptography, blockchain, and distributed systems, making the integration of AI and Web3 a formidable endeavor that demands innovative solutions and collaborative efforts from experts across various fields. @@ -122,7 +122,7 @@ \subsection{Overvieweing all challenges} Similarly, Web3 introduces its own set of challenges, particularly concerning privacy and data protection. Decentralized networks aim to empower users with greater control over their data and digital identities. However, achieving privacy in a transparent and auditable blockchain environment presents inherent tensions. While cryptographic techniques such as zero-knowledge proofs offer promising solutions, they also introduce complexities in terms of implementation, scalability, and usability. Additionally, navigating regulatory frameworks and compliance requirements in decentralized ecosystems poses challenges in ensuring legal and regulatory compliance while preserving user privacy and data sovereignty. -Despite the individual challenges posed by AI and Web3, the combination of the two further compounds the complexity. Integrating AI algorithms into decentralized networks requires addressing the unique challenges of both domains while ensuring compatibility, security, and usability. The interoperability between AI and Web3 technologies necessitates novel approaches to data privacy, algorithmic transparency, and governance mechanisms. As such, achieving a seamless and effective fusion of AI and Web3 requires concerted efforts to overcome technical, regulatory, and societal challenges, making it a highly intricate and demanding undertaking. 
+Despite the individual challenges posed by AI and Web3, the combination of the two further compounds the complexity. Integrating AI algorithms into decentralized networks requires addressing the unique challenges of both domains while ensuring compatibility, security, and usability. The interoperability between AI and Web3 technologies necessitates novel approaches to data privacy, algorithmic transparency, and governance mechanisms. As such, achieving a seamless and effective fusion of AI and Web3 requires a concerted effort to overcome technical, regulatory, and societal challenges, making it a highly intricate and demanding undertaking. \subsection{Ensuring Practicality: A Comprehensive Solution} @@ -135,7 +135,7 @@ \subsection{Ensuring Practicality: A Comprehensive Solution} \item \textbf{Privacy Preservation:} Ensuring the protection of sensitive data and preserving user privacy through encryption, anonymization techniques, and privacy-enhancing technologies. - \item \textbf{Security Measures:} Incorporating comprehensive security protocols to safeguard against data breaches, cyber attacks, and unauthorized access, bolstering the overall resilience of the system. + \item \textbf{Security Measures:} Incorporating comprehensive security protocols to safeguard against data breaches, cyber-attacks, and unauthorized access, bolstering the overall resilience of the system. \item \textbf{Scalability Solutions:} Developing scalable architectures and protocols to accommodate the growing volume of data and transactions processed by AI-powered applications on Web3 platforms. @@ -165,16 +165,16 @@ \subsection{Ensuring Practicality: A Comprehensive Solution} \subsection{Our contribution} -Designing a system that effectively addresses all the mentioned challenges and provides all the capabilities outlined is undoubtedly a formidable task. 
However, we recognize the complexity of this endeavor and acknowledge that achieving the ideal solution may not be feasible from the outset. Instead, our approach entails starting with a system that addresses some of the pressing issues and then iteratively updating and enhancing it with new technologies and functionalities over time. By adopting this iterative approach, we aim to incrementally improve the system's capabilities and resilience, ultimately working towards the overarching goal of providing a comprehensive solution that fulfills all requirements and effectively integrates AI and Web3 technologies. This adaptive strategy allows us to navigate the complexities of technological evolution while ensuring that the system remains agile and responsive to emerging challenges and opportunities. +Designing a system that effectively addresses all of the mentioned challenges and provides all of the capabilities outlined is undoubtedly a formidable task. However, we recognize the complexity of this endeavor and acknowledge that achieving the ideal solution may not be feasible from the outset. Instead, our approach entails starting with a system that addresses some of the pressing issues and then iteratively updating and enhancing it with new technologies and functionalities over time. By adopting this iterative approach, we aim to incrementally improve the system's capabilities and resilience, ultimately working towards the overarching goal of providing a comprehensive solution that fulfills all requirements and effectively integrates AI and Web3 technologies. This adaptive strategy allows us to navigate the complexities of technological evolution while ensuring that the system remains agile and responsive to emerging challenges and opportunities. -Our proposed system, zk-VIN, aims to leverage blockchain technology to facilitate the exchange of inferences between model developers and users in a secure and transparent manner. 
Through the zk-VIN network, model developers can offer their trained models to users seeking specific predictions or analyses. Users, in turn, can access these inferences by paying with tokens, thereby creating a decentralized marketplace for AI services. To ensure the integrity and trustworthiness of the exchanged inferences, model developers utilize zero-knowledge proofs (ZKPs) to provide verifiable evidence that the requested computations were indeed performed correctly without revealing any sensitive information about the underlying model or data. By integrating ZKPs into the transaction process, our system enhances transparency and trust, enabling users to confidently engage with model developers while preserving the confidentiality of their data and ensuring the authenticity of the provided inferences. Through this innovative blockchain-based solution, we aim to empower both model developers and users to participate in a secure and efficient marketplace for AI services, fostering a decentralized ecosystem that promotes fairness, transparency, and collaboration. +Our proposed system, ZKLayer, aims to leverage blockchain technology to facilitate the exchange of inferences between model developers and users in a secure and transparent manner. Through the ZKLayer network, model developers can offer their trained models to users seeking specific predictions or analyses. Users, in turn, can access these inferences by paying with tokens, which creates a decentralized marketplace for AI services. To ensure the integrity and trustworthiness of the exchanged inferences, model developers utilize zero-knowledge proofs (ZKPs) to provide verifiable evidence that the requested computations were indeed performed correctly without revealing any sensitive information about the underlying model or data.
By integrating ZKPs into the transaction process, our system enhances transparency and trust, enabling users to confidently engage with model developers while preserving the confidentiality of their data and ensuring the authenticity of the provided inferences. Through this innovative blockchain-based solution, we aim to empower both model developers and users to participate in a secure and efficient marketplace for AI services, fostering a decentralized ecosystem that promotes fairness, transparency, and collaboration. \section{Technological Foundations: Exploring Key Technologies} -To integrate AI and Web3 and provide a comprehensive list of capabilities, we explore various technologies in this section. Some of these technologies are utilized in designing zk-VIN version 1, while others are not yet practical for implementation. However, it's essential to note that zk-VIN will continually update to incorporate any advancements in technologies to evolve into a more practical solution over time. +To integrate AI and Web3 and provide a comprehensive list of capabilities, we explore various technologies in this section. Some of these technologies are utilized in designing ZKLayer version 1, while others are not yet practical for implementation. However, it's essential to note that ZKLayer will continually update to incorporate any advancements in technologies to evolve into a more practical solution over time. -\subsection{Technologies used in zk-VIN 1.0} +\subsection{Technologies used in ZKLayer 1.0} -This subsection delves deeper into describing the technologies utilized in designing zk-VIN 1.0. +This subsection delves deeper into describing the technologies utilized in designing ZKLayer 1.0. \subsubsection{Blockchain} @@ -184,11 +184,11 @@ \subsubsection{Blockchain} \subsubsection{Smart Contract} -Smart contracts, a pivotal innovation enabled by blockchain technology, are self-executing contracts with the terms of the agreement directly written into code. 
Operating on decentralized networks like Ethereum, smart contracts automate and enforce the execution of contractual agreements without the need for intermediaries, thereby reducing reliance on traditional legal processes and enhancing efficiency. These programmable contracts can execute predefined actions automatically when specific conditions are met, facilitating a wide range of applications such as tokenization of assets, DeFi, supply chain management, and more. By leveraging cryptographic security and decentralization, smart contracts ensure trust and immutability, as transactions are recorded on a tamper-proof blockchain ledger. This transformative technology has the potential to revolutionize the way agreements are made and executed, offering increased transparency, speed, and reliability in various industries and sectors. +Smart contracts, a pivotal innovation enabled by blockchain technology, are self-executing contracts with the terms of the agreement directly written into code. Operating on decentralized networks like Ethereum, smart contracts automate and enforce the execution of contractual agreements without the need for intermediaries, thereby reducing reliance on traditional legal processes and enhancing efficiency. These programmable contracts can execute predefined actions automatically when specific conditions are met, facilitating a wide range of applications such as the tokenization of assets, DeFi, supply chain management, and more. By leveraging cryptographic security and decentralization, smart contracts ensure trust and immutability, as transactions are recorded on a tamper-proof blockchain ledger. This transformative technology has the potential to revolutionize the way agreements are made and executed, offering increased transparency, speed, and reliability in various industries and sectors.
\subsubsection{Blockchain interconnection} -Blockchain interconnection, as a critical component for designing zk-VIN, refers to the capability of various blockchain networks or platforms to communicate and interact seamlessly. This interoperability facilitates the smooth flow of data, assets, or transactions between disparate blockchain systems, fostering cross-chain functionality and collaboration. There are multiple approaches to achieving blockchain interconnection, including cross-chain communication protocols, atomic swaps, wrapped tokens, sidechains, and oracles. Such interconnection is pivotal for unlocking the complete potential of blockchain technology, enabling collaboration, scalability, and interoperability across diverse blockchain networks and platforms. +Blockchain interconnection, as a critical component for designing ZKLayer, refers to the capability of various blockchain networks or platforms to communicate and interact seamlessly. This interoperability facilitates the smooth flow of data, assets, or transactions between disparate blockchain systems, fostering cross-chain functionality and collaboration. There are multiple approaches to achieving blockchain interconnection, including cross-chain communication protocols, atomic swaps, wrapped tokens, sidechains, and oracles. Such interconnection is pivotal for unlocking the complete potential of blockchain technology, enabling collaboration, scalability, and interoperability across diverse blockchain networks and platforms. \subsubsection{ZKP} @@ -196,27 +196,27 @@ \subsubsection{ZKP} \subsubsection{ZKML} -In the realm of cutting-edge cryptographic technologies, Zero-Knowledge Machine Learning (ZKML) emerges as a game-changer, combining the power of zero-knowledge proofs with machine learning algorithms. Not all challenges in AI and ML but some of them could be addressed through ZKP. 
+In the realm of cutting-edge cryptographic technologies, Zero-Knowledge Machine Learning (ZKML) emerges as a game-changer by combining the power of zero-knowledge proofs with machine learning algorithms. Though ZKML cannot address all challenges in AI and ML, it provides a necessary layer of security and authenticity not found in today's AI and ML solutions. One significant challenge within the Machine Learning as a Service (MLaaS) industry pertains to the integrity of inferences, where clients seek assurance that the model developer has genuinely executed the requested model for their response. For example, consider a client purchasing a premium account from OpenAI to utilize ChatGPT 4. They may question whether OpenAI could opt to use the cheaper ChatGPT 3 instead, thereby saving costs but potentially compromising the quality of responses to a level that the customer cannot distinguish. Similarly, when a patient consults an AI doctor for health predictions, concerns arise about the authenticity of the executed model. Even if the AI doctor acts with honesty, the risk of system compromise, where a genuine model may be replaced with a poisoned one by hackers, poses a serious threat. In both instances, maintaining the integrity of the model remains a paramount concern for the customer. A naïve solution would involve the model developer sharing the model with the customers, who would then run the model locally to ensure its integrity. However, this approach is not practical due to the large size of models and the limited computational resources available to customers. As an alternative, Ghodsi et al.~\cite{Ghodsi2017SafetyNetsVE} proposed, for the first time, the use of ZKP to design a solution where the model developer, or a third party like a cloud service, runs the model but generates a proof using ZKP for the customer to ensure that the genuine model has been executed. 
-While Ghodsi's solution offers benefits, it necessitates the sharing of the model with customers. However, in certain scenarios, model developers may be reluctant to share their models due to concerns about protecting intellectual property. Consequently, in addition to ensuring the integrity of the model, preserving the privacy of the model becomes another significant concern. Thus, the overarching question arises: How can model developers assure customers that the model has been genuinely executed (ensuring the integrity of inference) without divulging any information about the model itself (preserving model privacy)? To address this new complicated problem, Lee et al.~\cite{Lee2020vCNNVC} proposed a ZKP-based solution which provide both concerns. Subsequently, other researchers have proposed more efficient ZKP-based solution to address these two concerns~\cite{Liu2021zkCNNZK},~\cite{Feng2021ZENAO},~\cite{Ju2021EfficientSP}. +While Ghodsi's solution offers benefits, it necessitates the sharing of the model with customers. However, in certain scenarios, model developers may be reluctant to share their models due to concerns about protecting intellectual property. Consequently, in addition to ensuring the integrity of the model, preserving the privacy of the model becomes another significant concern. Thus, the overarching question arises: How can model developers assure customers that the model has been genuinely executed (ensuring the integrity of inference) without divulging any information about the model itself (preserving model privacy)? To address this new complicated problem, Lee et al.~\cite{Lee2020vCNNVC} proposed a ZKP-based solution that aims to address both concerns. Subsequently, other researchers have proposed more efficient ZKP-based solutions to address these two concerns~\cite{Liu2021zkCNNZK},~\cite{Feng2021ZENAO},~\cite{Ju2021EfficientSP}.
-% FIXME: bad wording "provide both concern" -\section{zk-VIN} -Zero-Knowledge Verified Inference Network (zk-VIN) offers a transformative approach for AI operators seeking to transition their off-chain AI models onto blockchain networks while safeguarding their proprietary algorithms. This framework streamlines the intricate process of model conversion, enabling rapid deployment across multiple blockchain ecosystems. It serves as a seamless bridge between the off-chain world of AI and the on-chain realm, ensuring intellectual property remains veiled through the use of zero-knowledge cryptography. By providing a secure and efficient payment infrastructure, zk-VIN facilitates atomic value exchange for AI services, paving the way for a new era of autonomous AI agents interacting within the blockchain space. +\section{ZKLayer} -For consumers of AI predictions, zk-VIN offers an additional layer of assurance by eliminating trust assumptions. They confidently rely on the network to validate inputs are processed using the correct and intended AI model, with a cryptographic guarantee of faithful execution. As a result, consumers of AI services benefit from a transparent, trust-minimized environment where AI predictions are verified, reducing the need for blind trust in the operators' execution. +Zero-Knowledge Layer (ZKLayer) offers a transformative approach for AI operators seeking to transition their off-chain AI models onto blockchain networks while safeguarding their proprietary algorithms. This framework streamlines the intricate process of model conversion, enabling rapid deployment across multiple blockchain ecosystems. It serves as a seamless bridge between the off-chain world of AI and the on-chain realm, ensuring intellectual property remains veiled through the use of zero-knowledge cryptography. 
By providing a secure and efficient payment infrastructure, ZKLayer facilitates atomic value exchange for AI services, paving the way for a new era of autonomous AI agents interacting within the blockchain space.
+
+For consumers of AI predictions, ZKLayer offers an additional layer of assurance by eliminating trust assumptions. They confidently rely on the network to validate that the inputs are processed using the correct and intended AI model, with a cryptographic guarantee of faithful execution. As a result, consumers of AI services benefit from a transparent, trust-minimized environment where AI predictions are verified, reducing the need for blind trust in the operators' execution.
\subsection{Technical Architecture}
-To address current blockchain limitations and challenges of running on-chain Neural Networks, zk-VIN is designed to serve as a conduit between off-chain and on-chain architectures. Taking a forward-looking approach, the zk-VIN architecture is inherently modular, a design philosophy which allows each component of the system to be individually updated or replaced.
+To address current blockchain limitations and challenges of running on-chain Neural Networks, ZKLayer is designed to serve as a conduit between off-chain and on-chain architectures. Taking a forward-looking approach, the ZKLayer architecture is inherently modular, a design philosophy that allows each component of the system to be individually updated or replaced.
\begin{figure}[!ht] \centering @@ -226,7 +226,7 @@ \subsection{Technical Architecture}
\node[block] (mr) {Model Registry\\[2mm]Model A\\Model B\\Model C};
\node[block, right=1.7cm of mr] (np) {Inference Market\\(workload queue)};
-\node[block, right=1.7cm of np] (ua) {zk-VIN SDK\\$1:1$ verifier contract per\\model\\Verifying Keys\\Verified Output Data};
+\node[block, right=1.7cm of np] (ua) {ZKLayer SDK\\$1:1$ verifier contract per\\model\\Verifying Keys\\Verified Output Data};
\node[block, below=3cm of mr] (mr1) {Node Pool\\(workers)};
\node[block, right=1.7cm of mr1] (np1) {zk-ML proving circuits\\(model dependent)};
@@ -257,7 +257,7 @@ \subsection{Technical Architecture}
\draw[-triangle 45] (z)--(cy); } }
- \caption{zk-VIN Overview}
+ \caption{ZKLayer Overview}
\label{fig:Fig 1} \end{figure}
@@ -265,7 +265,7 @@ \subsection{Technical Architecture}
\subsection{Off-Chain Architecture}
-\textit{Node Pools: }The off-chain infrastructure and computational power of zk-VIN is based around node pools.
+\textit{Node Pools: }The off-chain infrastructure and computational power of ZKLayer are based around node pools.
\begin{figure}[!ht] \centering @@ -332,24 +332,24 @@ \subsection{Off-Chain Architecture}
\subsection{Persistent Storage}
-Due to the considerable size of input and outputs from AI models, external persistent storage is required. Depending on the ultimate end use case of the output, storage within zk-VIN may not be required. An example is an NFT image generated with a diffusion model. The hash of the image can be verified and stored onchain with the image being stored on Arweave~\cite{Arweave} or other decentralized storage networks.
+Due to the considerable size of inputs and outputs from AI models, external persistent storage is required. Depending on the ultimate end use case of the output, storage within ZKLayer may not be required. An example is an NFT image generated with a diffusion model.
The hash of the image can be verified and stored on-chain with the image being stored on Arweave~\cite{Arweave} or other decentralized storage networks.
\subsection{Aggregation Circuits}
-As the complexity of a model increases, so does the size of its associated zk-circuit, resulting in larger proofs. To manage this, aggregation circuits are utilized to amalgamate multiple proofs into a singular, more concise proof that can be submitted on-chain, along with the corresponding output data. This technique also permits the batching of related inferences, enhancing efficiency and reducing the on-chain data storage footprint.
+As the complexity of a model increases, so does the size of its associated zk-circuit, which results in larger proofs. To manage this, aggregation circuits are utilized to amalgamate multiple proofs into a singular concise proof submitted on-chain, along with the corresponding output data. This technique also permits the batching of related inferences, enhancing efficiency and reducing the on-chain data storage footprint.
\subsection{On-chain Architecture}
-The on-chain component of the zk-VIN system acts as the interface for end users and dApps. Users or dApps submit workloads, which include all necessary details like input, precommitment, and destination. This on-chain architecture consists of three main components: The Inference Market, Model Registry, and Verifier Contracts.
+The on-chain component of the ZKLayer system acts as the interface for end users and dApps. Users or dApps submit workloads, which include all necessary details like input, precommitment, and destination. This on-chain architecture consists of three main components: the Inference Market, the Model Registry, and Verifier Contracts.
\subsection{Inference Market}
-The ecosystem is anchored by an Inference Market. The protocol has a native queue of AI/ML workloads. A workload can be thought of as an end to end AI/ML Inference.
Each workload specifies all required details to complete it, such as input data, specific AI model for execution, output data requirements or on-chain execution. Workloads posted to the network are priced according to their computational complexity.
+The ecosystem is anchored by an Inference Market. The protocol has a native queue of AI/ML workloads. A workload can be thought of as an end-to-end AI/ML Inference. Each workload specifies all required details to complete it, such as the input data, the specific AI model for execution, output data requirements, or on-chain execution. Workloads posted to the network are priced according to their computational complexity.
\subsection{Model Registry}
-After circuitizing a model with the zk-VIN SDK, its creator will register it on the network. This defines the required input and output data format, computational cost of inferences on the model (proportional to cost of compute for an inference) and the verification key for use in a verification contract upon completion of each inference from the model.
+After circuitizing a model with the ZKLayer SDK, its creator will register it on the network. This defines the required input and output data format, the computational cost of inferences on the model (proportional to the cost of compute for an inference), and the verification key for use in a verification contract upon completion of each inference from the model.
\subsection{Model Node Pool Registration}
@@ -381,17 +381,17 @@ \subsection{Model Node Pool Registration}
\label{fig:Fig 3} \end{figure}
-The network implements sets of blocks, called epochs, in which a registered node must be available. Nodes which register in the current epoch are activated during the following epoch. Model nodes commit compute units per unit of time to the network. Since the compute units for a workload is known ahead of time (see “Transactional Cost” section) the network delegates workloads to fill but not exceed its compute capacity.
+The network implements sets of blocks, called epochs, in which a registered node must be available. Nodes that register in the current epoch are activated during the following epoch. Model nodes commit compute units per unit of time to the network. Since the compute units for a workload are known ahead of time, the network delegates workloads to fill but not exceed its compute capacity.
It is the expectation that a model node will complete delegated workloads during a registered epoch. Model nodes which fail to complete work while registered or otherwise shown to be unavailable will face a penalty.
\subsection{Model Vetting}
-As zk-VIN will be an open permission-less network, no party (or even Inference Labs) can decide which models should or shouldn’t be available on the network. Instead an economic system determines how “good” a model is. This is crucial to retain an open and fair censorship free network.
+As ZKLayer will be an open permissionless network, no party (or even Inference Labs) can decide which models should or shouldn’t be available on the network. Instead, an economic system determines how “good” a model is. This is crucial to retain an open, fair, censorship-free network.
-Verified backtesting is published by the model creator and made available to the public. Users get a guarantee the model will perform a certain way under set circumstances rather than relying on blind trust in published accuracy, precision and recall values. While the provided examples may not be representative of real world use cases as it is self published by the creator, this is clearly a move in the right direction. Users also submit inferences one at a time, with no upfront commitments or complicated setup to quickly verify the usefulness of the model for their application.
+Verified backtesting is published by the model creator and made available to the public.
Users get a guarantee the model will perform a certain way under set circumstances rather than relying on blind trust in published accuracy, precision, and recall values. While the provided examples may not be representative of real-world use cases as they are self-published by the creator, this is clearly a move in the right direction. Users also submit inferences one at a time, with no upfront commitments or complicated setup, to quickly verify the usefulness of the model for their application.
-Aggregating onchain historical usage of a model results in a proof of its usefulness. How “good” a model is can be answered by its frequency of use, inference by a diverse set of applications and users, and repeat use of a model by a user. In the same way an open source software package can be evaluated by the number of other projects which depend on it (and subsequently how “good” those packages are).
+Aggregating on-chain historical usage of a model results in a proof of its usefulness. How “good” a model is can be answered by its frequency of use, inference by a diverse set of applications and users, and repeat use of a model by a user, in the same way an open-source software package can be evaluated by the number of other projects that depend on it (and subsequently how “good” those packages are).
The network implements a non-zero registration fee for models to prevent flooding of the network with unusable or non-existent models.
@@ -525,9 +525,9 @@ \section{Current Execution and Deployment}
The evaluation of properties and performance across various ZKML projects has revealed that EZKL emerges as the most feature-complete and efficient framework for this purpose, which explains its widespread adoption within the community. As contributors to the EZKL project, Inference Labs strives to integrate EZKL with blockchain to explore new ideas. Meanwhile, our team also works on other ZKML projects to enhance accessibility and further refine the final product.
-ZKML uses Halo2 to generate proving and verification keys. While many proof systems need a trusted setup during the circuit creation process, Halo2 is a ZKP protocol that enables the construction of proofs without a trusted setup. It aims to facilitate the recursive composition of zk-SNARKS, allowing for more scalable and efficient proofs~\cite{ZcashHalo2GH}. Halo2 represents the next generation of zk-SNARK technology after the original Halo protocol.
+ZKML uses Halo2 to generate proving and verification keys. Halo2 aims to facilitate the recursive composition of zk-SNARKs, allowing for more scalable and efficient proofs~\cite{ZcashHalo2GH}. Halo2 represents the next generation of zk-SNARK technology after the original Halo protocol.
-EZKL is a library and command-line tool for doing inference for deep learning models and other computational graphs in a zk-snark (ZKML). EZKL works as follow~\cite{ZconduitEZKLGH}:
+EZKL is a library and command-line tool for doing inference for deep learning models and other computational graphs in a zk-snark (ZKML). EZKL works as follows~\cite{ZconduitEZKLGH}:
\begin{figure}[!ht] \centering @@ -577,26 +577,26 @@ \section{Current Execution and Deployment}
\begin{enumerate}
- \item Firstly a neural network is defined in form of a computational graph using pytorch or tensorflow.
+ \item Firstly, a neural network is defined in the form of a computational graph using PyTorch or TensorFlow.
\item Using training data, the defined model will be trained and the final model will be exported as an .onnx file.
- \item Point ezkl to the .onnx and input of the model (as .json file) to generate a ZK-SNARK circuit which will work as the following figure. From here a nearly 1:1 representation of the model is outlaid in a circuit.
+ \item Point ezkl to the .onnx file and the input of the model (as a .json file) to generate a ZK-SNARK circuit, which works as shown in the following figure. From here, a nearly 1:1 representation of the model is laid out in a circuit.
\end{enumerate}
-\section{Remained Challenges and future work}
+\section{Remaining Challenges and Future Work}
-This section first discusses remaining concerns such as IP risk and data privacy, then it explores future enhancements for zk-VIN. We anticipate zk-VIN's potential to support emerging technologies like FHEML, verifiable FHE, MPCML, and others. While these technologies currently pose computational challenges and are not yet practical, advancements in technology suggest that efficient solutions will become available over time. Consequently, zk-VIN remains adaptable for updates to accommodate new technologies and services.
+This section first discusses remaining concerns such as IP risk and data privacy, and then explores future enhancements for ZKLayer. We anticipate ZKLayer's potential to support emerging technologies like FHEML, verifiable FHE, MPCML, and others. While these technologies currently pose computational challenges and are not yet practical, advancements in technology suggest that efficient solutions will become available over time. Consequently, ZKLayer remains adaptable for updates to accommodate new technologies and services.
\subsection{Security evaluation}
-The following provides an overview of the cybersecurity concerns present in the current version of zk-VIN. However, these concerns are not significant enough to render zk-VIN useless. The aim is to offer a comprehensive understanding of the advantages and disadvantages of zk-VIN, providing potential customers with a clearer understanding.
+The following provides an overview of the cybersecurity concerns present in the current version of ZKLayer. However, these concerns are not significant enough to render ZKLayer useless. The aim is to offer a comprehensive view of the advantages and disadvantages of ZKLayer, providing potential customers with a clearer understanding.
\subsubsection{Reverse Engineering Risk}
-One of the most valuable aspects of zk-VIN is the aggregation of inferences. Having a clear picture of how often and by whom models are being utilized is a whole industry on its own. However this may create a new form of IP risk yet to be seen at scale. With a sufficient set of inputs to outputs from a particular model, a sophisticated 3rd party could train a similar or competing model using published data. Similar approaches have been seen by crowdsourcing prompt-to-response datasets from ChatGPT and then fine tuning GPTv2 to achieve surprisingly good results.
+One of the most valuable aspects of ZKLayer is the aggregation of inferences. Having a clear picture of how often and by whom models are being utilized is a whole industry on its own; however, this may create a new form of IP risk yet to be seen at scale. With a sufficient set of input-to-output pairs from a particular model, a sophisticated third party could train a similar or competing model using published data. Similar approaches have been seen in crowdsourcing prompt-to-response datasets from ChatGPT and then fine-tuning GPT-2 to achieve surprisingly good results.
\textbf{* IP Replication Risk}
@@ -610,7 +610,7 @@ \subsubsection{Security Risk}
\textbf{* Trusted Setup Risk}
-There are a few methods to mitigate this which are in early development.
+There are a few methods to mitigate this which are in early development.
Recently, ahead of the Unirep v2 launch, a call to the public was made to assist in a public trusted-setup generation process (in which Inference Labs proudly participated), and the tools to repeat this process are open source. This process can be replicated at scale and at the protocol level. When new models are registered, nodes contribute to the process and are incentivized for their participation. This also further increases the security of the setup process and the overall network.
\textbf{* Age of ZK}
@@ -622,17 +622,17 @@ \subsubsection{Security Risk}
\subsubsection{Data Privacy}
-When a user sends a query to a server and expects the inference of an AI model, they inevitably expose their data to the server. While the user may trust the model's integrity, as the server can generate proof that the requested model was indeed used, their data's privacy remains compromised. Additionally, the server gains access to the output of the AI model, which may concern the user. For example, imagine a patient sending their CT scan to an AI doctor for diagnosis. The patient may be uncomfortable with the server having access to their scan and knowing the resulting diagnosis and potential illnesses. This privacy concern is prevalent in all current ZKML solutions, including our product, zk-VIN.
+When a user sends a query to a server and expects the inference of an AI model, they inevitably expose their data to the server. While the user may trust the model's integrity, as the server can generate proof that the requested model was indeed used, their data's privacy remains compromised. Additionally, the server gains access to the output of the AI model, which may concern the user. For example, imagine a patient sending their CT scan to an AI doctor for a diagnosis; the patient may be uncomfortable with the server having access to their scan and knowing the resulting diagnosis and potential illnesses.
This privacy concern is prevalent in all current ZKML solutions, including our product, ZKLayer. \subsection{Potential Technologies for future versions} -This subsection offers an overview of potential technologies that could be integrated into future versions of zk-VIN. While these technologies may not be currently practical due to factors such as expensive computation or scalability issues, advancements are occurring rapidly. We are actively seeking updated technologies to enhance zk-VIN and improve its performance. +This subsection offers an overview of potential technologies that could be integrated into future versions of ZKLayer. While these technologies may not be currently practical due to factors such as expensive computation or scalability issues, advancements are occurring rapidly. We are actively seeking updated technologies to enhance ZKLayer and improve its performance. \subsubsection{More improvement in ZKML} -Current solutions in ZKML face various limitations, such as lengthy proof generation times, non-succinct proofs, and susceptibility to quantum attacks. While each scheme endeavors to mitigate some of these challenges, none have fully addressed all concerns comprehensively. However, numerous teams, groups, and startups are actively investing in addressing these issues. Despite these limitations, the current solutions remain practical and effective for addressing specific problems. We can continue to update the zk-VIN system to incorporate advancements in this field as they emerge. +Current solutions in ZKML face various limitations, such as lengthy proof generation times, non-succinct proofs, and susceptibility to quantum attacks. While each scheme endeavors to mitigate some of these challenges, none have fully addressed all of the concerns comprehensively. However, numerous teams, groups, and startups are actively investing in addressing these issues. 
Despite these limitations, the current solutions remain practical and effective for addressing specific problems. We can continue to update the ZKLayer system to incorporate advancements in this field as they emerge. \subsubsection{FHE and verifiable FHE} @@ -640,9 +640,9 @@ \subsubsection{FHE and verifiable FHE} The concept of homomorphic encryption dates back to the 1970s, with the foundational work of Rivest, Adleman, and Dertouzos on partially homomorphic encryption. Over the years, researchers including Craig Gentry made significant breakthroughs in the development of fully homomorphic encryption, culminating in Gentry’s groundbreaking work in 2009~\cite{Gentry2009FullyHE}. Since then, there has been ongoing research to improve the efficiency and practicality of FHE systems~\cite{Dijk2010FullyHE},~\cite{Brakerski2012LeveledFH},~\cite{Gentry2013HomomorphicEF},~\cite{Cheon2017HomomorphicEF}. Despite much improvement and research in FHE, computational complexity and overhead of the current FHE solutions prevent wide industry adoption of the scheme. However, science is improving daily and ongoing advancements in FHE algorithms and implementations hold the promise of making practical deployments of this technology increasingly feasible, unlocking new possibilities for secure and privacy-preserving data processing in different industries such as AI. -In addition to concerns regarding complexity and performance, there are other considerations in FHE-based systems, particularly when applied to MLaaS. Similar to the integrity inference concern previously discussed, what if the model developer does not execute the genuine model? This raises the concept of Verifiable Fully Homomorphic Encryption (VFHE). VFHE expands on the capabilities of FHE by allowing parties to verify the accuracy of computations performed on encrypted data without decryption~\cite{Viand2023VerifiableFH},~\cite{Chatel2022VerifiableEF}. 
This introduces an extra layer of trust and assurance in applications where the integrity and accuracy of computations are crucial. By merging the privacy-preserving attributes of FHE with the privacy-preserving and verifiability of cryptographic proofs (like ZKP), VFHE presents a potent tool for enhancing data security, integrity, and trust across a wide array of applications, spanning from secure outsourcing and MLaaS to decentralized finance and beyond.
+In addition to concerns regarding complexity and performance, there are other considerations in FHE-based systems, particularly when applied to MLaaS. Similar to the inference integrity concern previously discussed: what if the model developer does not execute the genuine model? This raises the concept of Verifiable Fully Homomorphic Encryption (VFHE). VFHE expands on the capabilities of FHE by allowing parties to verify the accuracy of computations performed on encrypted data without decryption~\cite{Viand2023VerifiableFH},~\cite{Chatel2022VerifiableEF}. This introduces an extra layer of trust and assurance in applications where the integrity and accuracy of computations are crucial. By merging the privacy-preserving attributes of FHE with the verifiability of cryptographic proofs (like ZKPs), VFHE presents a potent tool for enhancing data security, integrity, and trust across a wide array of applications, spanning from secure outsourcing and MLaaS to decentralized finance and beyond.
-However, this intriguing idea is not yet practical. While researchers strive to enhance both FHE and VFHE schemes, the computational requirements for the current solutions remain prohibitively high, rendering these schemes impractical~\cite{Atapoor2024VerifiableFV}. Nevertheless, our team consistently monitors advancements in new schemes to identify any potential improvements in this field and update zk-VIN accordingly.
This underscores the importance of ensuring that our design possesses the capability for updatability, allowing it to be seamlessly updated with any new advancements.
+However, this intriguing idea is not yet practical. While researchers strive to enhance both FHE and VFHE schemes, the computational requirements for the current solutions remain prohibitively high, rendering these schemes impractical~\cite{Atapoor2024VerifiableFV}. Nevertheless, our team consistently monitors advancements in new schemes to identify any potential improvements in this field and update ZKLayer accordingly. This underscores the importance of keeping our design updatable, so that it can seamlessly incorporate any new advancements.
In the future, we anticipate the introduction of practical and efficient VFHE solutions. Subsequently, it will become feasible to design a system wherein a user encrypts their own data and sends it to the server. The server will then execute the AI model to generate proof, with the entire process being verified using VFHE. This approach ensures that neither the input nor the output of the AI model is readable by the server, while also proving the integrity of the model. Finally, the verified and encrypted output will be sent back to the user, who can decrypt it and verify the proof. Such a theoretical solution would effectively address the privacy concerns that currently plague AI solutions.
@@ -651,18 +651,18 @@ \subsubsection{MPC}
Multi-party computation (MPC), also referred to as secure multi-party computation (SMPC), was pioneered by Andrew Yao in 1982~\cite{Yao1982ProtocolsFS}. This revolutionary concept, illustrated by Yao's Millionaires' Problem, enables two millionaires to ascertain which holds a greater value without divulging their actual wealth to each other.
In a broader sense, MPC facilitates multiple parties to collectively compute a function over their individual private inputs without revealing any information about those inputs to one another. Verifiable Multi-Party Computation (VMPC) extends the capabilities of MPC by introducing mechanisms to verify the correctness and integrity of computed results without compromising the privacy of the inputs~\cite{Schoenmakers2015UniversallyVM},~\cite{Laud2014VerifiableCI}. -The combination of MPC and ML offers various applications, one of which involves multiple financial institutions utilizing MPC techniques to collectively assess loan applicants' creditworthiness without compromising sensitive customer data. Each institution securely shares their trained ML model parameters, which are then aggregated through MPC to generate joint predictions on new loan applicants' data. This collaborative approach enables institutions to collectively make decisions regarding loan approvals or denials while upholding the privacy and security of customer data. Leveraging MPC allows for effective collaboration on inference tasks while ensuring data confidentiality and compliance with privacy regulations. Additionally, exploring the integration of ML and VMPC presents another promising avenue for future revisions of zk-VIN. +The combination of MPC and ML offers various applications, one of which involves multiple financial institutions utilizing MPC techniques to collectively assess loan applicants' creditworthiness without compromising sensitive customer data. Each institution securely shares their trained ML model parameters, which are then aggregated through MPC to generate joint predictions on new loan applicants' data. This collaborative approach enables institutions to collectively make decisions regarding loan approvals or denials while upholding the privacy and security of customer data. 
Leveraging MPC allows for effective collaboration on inference tasks while ensuring data confidentiality and compliance with privacy regulations. Additionally, exploring the integration of ML and VMPC presents another promising avenue for future revisions of ZKLayer.
\subsubsection{Integrated solutions}
-We anticipate the development of a sophisticated cryptographic system in the coming years, offering various capabilities. For instance, patients may encrypt their health data and transmit it to an AI doctor. Subsequently, the AI doctor will execute an AI model for inference and generate a proof, assuring the patient of the genuine execution of the model. Neither the patient nor the model developer will be required to divulge personal data or model information to each other. Such a system promises data and model privacy, coupled with model integrity. However, the design of such a system is intricate, and no solution currently exists. We foresee its development in the near future, prompting our team to prioritize the updatability capability for the zk-VIN system.
+We anticipate the development of a sophisticated cryptographic system in the coming years, offering various capabilities. For instance, patients may encrypt their health data and transmit it to an AI doctor. Subsequently, the AI doctor will execute an AI model for inference and generate a proof, assuring the patient of the genuine execution of the model. Neither the patient nor the model developer will be required to divulge personal data or model information to each other. Such a system promises data and model privacy, coupled with model integrity. However, the design of such a system is intricate, and no solution currently exists. We foresee its development in the near future, prompting our team to prioritize updatability in the ZKLayer system.
-\subsection{Potential Usages of zk-VIN}
+\subsection{Potential Usages of ZKLayer}
-The Inference Lab Inc.
is actively pursuing Privacy Enhancement Technologies (PETs) and investigating potential applications of zk-VIN. One evident application of zk-VIN, as discussed in the paper, is when a designer seeks assurance that a prompt was executed by the designated server. Moreover, zk-VIN holds promise in addressing numerous challenges related to responsible AI beyond this specific scenario. +Inference Labs Inc. is actively pursuing Privacy Enhancement Technologies (PETs) and investigating potential applications of ZKLayer. One evident application of ZKLayer, as discussed in the paper, is when a designer seeks assurance that a prompt was executed by the designated server. Moreover, ZKLayer holds promise in addressing numerous challenges related to responsible AI beyond this specific scenario. -The significance of responsible AI has come to the forefront in recent years, especially following the startling revelations about ChatGPT, Sora, and other AI models. These incidents have prompted widespread discussions among people and governments, raising concerns about the future implications of AI. With the increasing reliance on AI for decision-making and various tasks, there is a growing apprehension about the potential misuse of AI by companies and governments. Ensuring the correctness and reliability of AI systems becomes crucial in this context. While one simplistic solution may involve making all AI models publicly accessible for scrutiny, this approach could deter companies from investing in products that would be disclosed. Many prefer to safeguard the details of their models as intellectual property. Responsible AI emerges as a promising solution to address these concerns comprehensively. It encompasses principles and practices aimed at fostering ethical and accountable development, deployment, and use of AI systems, thereby promoting transparency, fairness, and trustworthiness in AI technologies. 
+The significance of responsible AI has come to the forefront in recent years, especially following the startling revelations about ChatGPT, Sora, and other AI models. These incidents have prompted widespread discussions among people and governments, raising concerns about the future implications of AI. With the increasing reliance on AI for decision-making and various tasks, there is growing apprehension about the potential misuse of AI by companies and governments. Ensuring the correctness and reliability of AI systems becomes crucial in this context. While one simplistic solution may involve making all AI models publicly accessible for scrutiny, this approach could deter companies from investing in products that would be disclosed. Many prefer to safeguard the details of their models as intellectual property. Responsible AI emerges as a promising solution to address these concerns comprehensively: it refers to the ethical and accountable development, deployment, and use of AI systems, encompassing principles and practices that ensure AI operates in a manner that respects human rights, diversity, fairness, transparency, and privacy, while minimizing potential biases and unintended consequences. Responsible AI involves robust governance frameworks, clear guidelines for ethical decision-making, ongoing monitoring and evaluation, and meaningful engagement with stakeholders throughout the AI lifecycle. By prioritizing responsible AI practices, organizations and developers can build trust with users, mitigate risks, and maximize the societal benefits of AI technologies.
@@ -674,26 +674,26 @@ \subsection{Potential Usages of zk-VIN}
To address this issue, companies should demonstrate the fairness of their AI models. A naïve solution would be for these companies to publicly disclose their algorithms. However, this approach conflicts with their intellectual property rights. Therefore, ZKML proposes a solution whereby companies can prove that they are using a specific algorithm for all users without revealing any information about their models. Kang et al.~\cite{TensorPlonkMedium}. have provided insights into the ZKML system, which operates using GPU acceleration (GPA). The use of GPA can accelerate the proof generation process by over 1000 times. Consequently, they suggest that Twitter could generate proofs for 1\% of the 500 million tweets per day from its users for approximately \$21,000 per day. Given that this cost represents less than 0.5\% of Twitter's annual infrastructure expenses, it is feasible for Twitter to demonstrate the fairness of its feed AI models.

-In the future, as AI models increasingly handle decision-making and various tasks, responsible AI will become even more critical than it is today. Simultaneously, with a shift towards decentralization, most communications and transactions are expected to occur on Web3. In such an environment, zk-VIN could play a vital role by enabling AI model operators to broadcast proofs of honesty on Web3 without compromising the confidentiality of their model details.
+In the future, as AI models increasingly handle decision-making and various tasks, responsible AI will become even more critical than it is today. Simultaneously, with a shift towards decentralization, most communications and transactions are expected to occur on Web3. In such an environment, ZKLayer will play a vital role by enabling AI model operators to broadcast proofs of honesty on Web3 without compromising the confidentiality of their model details.
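Editorial note on the cost figures quoted in the hunk above: the \$21,000/day and 500-million-tweet numbers come from the cited Kang et al. write-up, not from new measurements. A minimal back-of-envelope sketch, using only the figures stated in the prose, shows what they imply per proof and per year:

```python
# Back-of-envelope check of the ZKML proving-cost figures quoted in the text.
# All inputs are taken from the prose (Kang et al. estimate); nothing new is assumed.
tweets_per_day = 500_000_000   # daily tweet volume cited in the text
proved_fraction = 0.01         # proofs generated for 1% of tweets
daily_cost_usd = 21_000        # quoted daily proving cost with GPU acceleration

proofs_per_day = tweets_per_day * proved_fraction   # 5,000,000 proofs/day
cost_per_proof = daily_cost_usd / proofs_per_day    # 0.0042 USD per proof
annual_cost = daily_cost_usd * 365                  # 7,665,000 USD per year

# "Less than 0.5% of annual infrastructure expenses" implies an
# infrastructure budget of at least annual_cost / 0.005.
implied_infra_floor = annual_cost / 0.005           # 1,533,000,000 USD

print(f"proofs per day:        {proofs_per_day:,.0f}")
print(f"cost per proof:        ${cost_per_proof:.4f}")
print(f"annual proving cost:   ${annual_cost:,.0f}")
print(f"implied infra budget:  >= ${implied_infra_floor:,.0f}")
```

At roughly \$0.004 per proof, the quoted daily total is consistent with the claim that verified feed-ranking would be a sub-percent line item for a platform of that scale.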
\section{Conclusion}

-The Zero-Knowledge Verified Inference Network (zk-VIN) presents a comprehensive solution to the challenges of integrating AI and blockchain technology. It provides a de-centralized protocol that enables secure, off-chain AI model inferences while preserving intellectual property through zero-knowledge cryptography. This innovative approach not only enhances privacy and security but also ensures the integrity and authenticity of AI models. The zk-VIN architecture is designed to be modular and adaptable, supporting rapid deployment across multiple blockchain ecosystems. This work reflects a significant step towards realizing a decentralized, secure, and privacy-preserving foundation for AI-enhanced blockchain systems, potentially revolutionizing the way AI operates in the blockchain space and contributing to the broader adoption of web3 technologies.
+The Zero-Knowledge Layer (ZKLayer) presents a comprehensive solution to the challenges of integrating AI and blockchain technology. It provides a decentralized protocol that enables secure, off-chain AI model inferences while preserving intellectual property through zero-knowledge cryptography. This innovative approach not only enhances privacy and security but also ensures the integrity and authenticity of AI models. The ZKLayer architecture is designed to be modular and adaptable, supporting rapid deployment across multiple blockchain ecosystems. This work reflects a significant step towards realizing a decentralized, secure, and privacy-preserving foundation for AI-enhanced blockchain systems, potentially revolutionizing the way AI operates in the blockchain space and contributing to the broader adoption of Web3 technologies.

As we progress we will maintain a strong focus on upholding these fundamental principles:

\textbf{* Decentralization and Democratization of AI}
-zk-VIN aims to enable the decentralization and democra-tization of AI, aligning with core Web3 values. By facilitating privacy-preserving and verifiable AI services on public blockchains, zk-VIN makes advanced AI accessible beyond large tech firms with proprietary data silos. This expands opportunities for innovation, collaboration, and value creation with AI systems operated transparently on open networks.
+ZKLayer aims to enable the decentralization and democratization of AI, aligning with core Web3 values. By facilitating privacy-preserving and verifiable AI services on public blockchains, ZKLayer makes advanced AI accessible beyond large tech firms with proprietary data silos. This expands opportunities for innovation, collaboration, and value creation with AI systems operated transparently on open networks.

-\textbf{* Developer experience centric modular system design}
+\textbf{* Developer experience-centric modular system design}

-With a focus on simplicity and modular architecture, zk-VIN streamlines the integration of cryptographically verified AI into decentralized applications. The system design centers on enhancing the developer experience through abstraction of complex zero-knowledge cryptography and seamless blockchain interoperability (zk-ML). Cost-reduction and flexibility are built into the core framework to accommodate rapid evolution in the AI and blockchain landscape.
+With a focus on simplicity and modular architecture, ZKLayer streamlines the integration of cryptographically verified AI into decentralized applications. The system design centers on enhancing the developer experience through abstraction of complex zero-knowledge cryptography and seamless blockchain interoperability (zk-ML). Cost-reduction and flexibility are built into the core framework to accommodate rapid evolution in the AI and blockchain landscape.
-\textbf{* Open source protocol for secure and composable systems}
+\textbf{* Open-source protocol for secure and composable systems}

-As an open source protocol, zk-VIN fosters transparency, collective ownership, and community-driven development. Following the ethos of permissionless innovation, zk-VIN creates infrastructure for AI-enhanced dApps to compose securely with minimal trust. By combining verified AI and blockchain building blocks within an open ecosystem, zk-VIN aspires to be a public good facilitating the creation of services with embedded privacy, security and autonomy.
+As an open-source protocol, ZKLayer fosters transparency, collective ownership, and community-driven development. Following the ethos of permissionless innovation, ZKLayer creates infrastructure for AI-enhanced dApps to compose securely with minimal trust. By combining verified AI and blockchain building blocks within an open ecosystem, ZKLayer aspires to be a public good facilitating the creation of services with embedded privacy, security, and autonomy.

-In summary, zk-VIN implements the responsible and ethical application of AI within Web3 by making artificial intelligence both decentralized while protecting value creation. Through it’s innovative technical architecture and commitment to openness, zk-VIN seeks to lay the foundations for the next generation of AI-powered decentralized applications.
+In summary, ZKLayer implements the responsible and ethical application of AI within Web3, decentralizing artificial intelligence while protecting value creation. Through its innovative technical architecture and commitment to openness, ZKLayer seeks to lay the foundations for the next generation of AI-powered decentralized applications.