For the past 20 years, digital transformation has been one of the top priorities for C-suites in most forward-looking organizations. Yet, while many organizations claim to have succeeded on their path to digital glory and can produce incredibly detailed information about their assets and the efficiency of their operations, interacting with other organizations still tends to happen on very inglorious traditional rails and standards: email, PDF, and Excel.
Most cross-organization processes today are still highly manual, and the ones that are not involve strong vendor lock-in. This is partly due to the hurdle of documenting and justifying investment in process changes and infrastructure. These gaps between digital islands impose massive costs on enterprises, including back-office effort and data reconciliation. Most importantly, they represent a missed opportunity for competitive advantage and top-line growth. After spending millions of dollars and euros on consultants, IT, and technology gurus, why do so many of our organizations still operate as individual intranets that turn opaque and old-fashioned when the time comes to share data?
This post provides an overview of the current difficulties in digitizing our economy, the importance of interoperability in the context of supply chains, and the technologies that will enable the next qualitative jump. All three are empirically validated by the interoperability showcase sponsored by the US Department of Homeland Security.
A TRADE TRUST LAYER
Today’s Internet protocols allow us to confidently transport data from one place to another; however, they provide few guarantees as to:
- Whether the data is truthful and verifiable
- Whether the data is bound to specific identities
- Whether trade data can be interpreted in a programmable way
To be more explicit, we are missing on the one hand a general-purpose trust layer for verifying information independently across contexts, and on the other the ability to express business information for consumption by unknown future digital systems.
Documents like a Bill of Lading or a Certificate of Origin do not conform to any digital standard today: every organization has its own forms, and even when the information is sent in digital format, it is rarely consumable or verifiable without human intervention. The same happens with Mill Certificates in the steel industry, Delivery Tickets in the oil and gas industry, and other types of certificates of origin. This is not a new concept, and some industries have succeeded in implementing some of the fundamentals, as the automotive industry did to a certain extent in the 1990s, enabling faster processing of information and onboarding.
Today something similar is happening with digitization and scalable efficiency gains in supply chain traceability. Customs clearance and import brokerage are making great gains in their ability to understand and trust the history of a product. Products can be tracked with the equivalent of a digital passport, verifiably tracking, say, CO2 emissions end-to-end at the per-product level, or programming trade finance using the underlying products as collateral. Imagine how the capitalization process would change if you could give your clients verifiable insights into your supply chain, anticipating when a shipment will be late and automatically rerouting and load-balancing across multiple suppliers discreetly. Most of the innovations enabled by these standards have likely not been imagined yet, much as before the general adoption of TCP/IP very few people could have anticipated Google Maps, Uber, or the like.
So how do we enable a layer of digital trade interoperability at large scale? The answer is DIDs for as many actors as possible and VCs for any transactions that need to be verifiable and auditable.
THE RISE OF VERIFIABLE CREDENTIALS (VCs) AND DECENTRALIZED IDENTIFIERS (DIDs)
With the development of blockchain, a new kind of distributed PKI security infrastructure is emerging: Decentralized Identifiers (DIDs). DIDs are globally unique identifiers that are resolvable and allow the owner of an identifier to cryptographically prove ownership, interactively or programmatically. These identifiers enable verifiable digital identity without a centralized authority [1]. In essence, they allow a subject to provide an identity document attached to an identifier, resolvable in a variety of ways decoupled from centralized providers and platforms. The maturing specification for this kind of identifier system is currently at the W3C Candidate Recommendation stage.
Interactive control proofs allow an entity to segment identities and interaction domains, maintaining control over what organizational data or confidential business insights are revealed in each. A DID functions as an address that resolves to a “DID document” outlining authentication mechanisms and other routing information, as in the sketch below. For more information about Decentralized Identifiers, please refer to the W3C DID Core specification.
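To make this concrete, here is a minimal sketch of what a resolved DID document might look like, following the W3C DID Core data model. The `did:example` method, the key material, and the service entry are purely illustrative.

```typescript
// Illustrative only: a minimal DID document as a resolver might return it.
// The did:example method, keys, and service endpoint are hypothetical.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:example:org123",
  verificationMethod: [
    {
      id: "did:example:org123#key-1",
      type: "Ed25519VerificationKey2020",
      controller: "did:example:org123",
      publicKeyMultibase: "z6Mk...", // truncated for readability
    },
  ],
  // Which keys may be used to authenticate as this DID
  authentication: ["did:example:org123#key-1"],
  // Optional routing information, e.g., where to deliver credentials
  service: [
    {
      id: "did:example:org123#inbox",
      type: "CredentialInbox", // hypothetical service type
      serviceEndpoint: "https://example.org/inbox",
    },
  ],
};
```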
While organizational identities are secured by these portable and self-managed identifiers, transactions and data points are secured by “Verifiable Credentials” (VCs): machine-readable, ontologically anchored, and privacy-preserving representations of real-world credentials. In their simplest form, they are a set of claims made by an entity, presented in an “envelope” containing semantic references, identity references, and a digital signature, as in the example below. These claims are immutable and, depending on the signature scheme used, can even be disclosed selectively.
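As a minimal illustration (the issuer, subject, and signature values below are hypothetical placeholders), a credential attesting a product’s country of origin might look like this:

```typescript
// A minimal Verifiable Credential following the W3C VC data model.
// Issuer, subject, and proof values are illustrative placeholders.
const certificateOfOrigin = {
  "@context": ["https://www.w3.org/2018/credentials/v1"], // semantic reference
  type: ["VerifiableCredential"],
  issuer: "did:example:steel-mill",                       // identity reference
  issuanceDate: "2021-06-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:product-batch-42",
    countryOfOrigin: "CA",                                // the claim being made
  },
  proof: {
    // Digital signature binding the claims to the issuer's DID
    type: "Ed25519Signature2020",
    created: "2021-06-01T00:00:00Z",
    verificationMethod: "did:example:steel-mill#key-1",
    proofPurpose: "assertionMethod",
    proofValue: "z3FXQ...", // truncated
  },
};
```

A verifier resolves the issuer’s DID, fetches the referenced verification key, and checks the proof without ever contacting the issuer directly.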
DIDs and VCs were initially conceived as an answer to centralized ID, intended to give the identity owner control of their own data, creating a systems-design movement commonly referred to as self-sovereign identity (SSI). Builders of organizational applications for supply chain, product tracking, and commercial trade have found these decentralized building blocks useful for non-human use cases as well.
NEW OPPORTUNITIES EMERGE
The most immediate application is being able to identify specific individual products and their origination across a large network of stakeholders, essentially giving each product a digital passport that records its birth and all the major stations along its journey in a digitally verifiable way. This technology, together with immutable timestamps (for example, those recorded on a blockchain) and an ownership registry (avoiding the “double spending” of assets), makes it very hard for any supply chain actor to misreport, counterfeit, or misrepresent information.
In the medium term, this technology is expected to allow “digital twins” that span organizational and jurisdictional boundaries. Beyond just origination, VCs and DIDs bring many other clear process improvements for cross-border operations.
Specifically, Mavennet is using these technologies to verifiably track oil, gas, and electricity. Energy is the largest commodity market globally after currencies and is expected to transform substantially in the coming years. Digitizing this space is helping in many ways, such as the following (a rough data sketch follows the list):
- Verifying country of origin
- Prototyping an immutable “digital twin” system for commodity tracking
- Objectively documenting environmental footprint to lower the cost of capital
- Understanding true end-to-end environmental impact
- Streamlining settlements
- Encouraging a healthier market and lower barriers to entry by circumventing centralized tracking
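As a rough sketch of how such a commodity “digital twin” might be modeled (the field names and event structure below are hypothetical, not Mavennet’s production schema), each custody step is backed by a credential, and aggregate questions like total footprint become simple computations over the verifiable history:

```typescript
// Hypothetical sketch of a commodity "digital twin" record; field names
// are illustrative and do not reflect Mavennet's production schema.
interface CustodyEvent {
  credentialId: string; // the VC attesting this step
  actor: string;        // DID of the responsible party
  timestamp: string;    // ideally anchored on-chain for immutability
  co2eKg?: number;      // emissions attributed to this step, if reported
}

interface CommodityTwin {
  productId: string;       // e.g., a batch-level identifier or DID
  countryOfOrigin: string;
  history: CustodyEvent[]; // append-only chain of custody
}

// End-to-end environmental impact is a fold over the verifiable history.
function totalFootprint(twin: CommodityTwin): number {
  return twin.history.reduce((sum, e) => sum + (e.co2eKg ?? 0), 0);
}
```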
INTEROP TESTING OVERVIEW
So how do we make this technology mainstream sooner? The Silicon Valley Innovation Program (SVIP), an incubation program backed by the US Department of Homeland Security, has been sponsoring efforts not only to support the standardization path of these technologies but also to demonstrate provable interoperability between pioneering vendors. As part of this initiative, eight organizations gathered to test interoperability between their stacks in different verticals. This meant going beyond data-model standards and empirically validating actual interoperability across stacks.
This interoperability plugfest included organizations from all over the world, a truly global talent pool.
For the scope of this post, we will focus on the digital trade organizations, which are digitizing cross-border processes across the following industries:
- Steel (Transmute)
- Agriculture (Mesur.io)
- Oil and Gas (Mavennet Systems Inc.)
- eCommerce (Spherity)
The exercise on the digital trade side focused on proving interoperability at three levels:
- Industry-agnostic Verifiable Credential Interoperability: Show the exchange and proper handling of standards-conformant, confidentiality-preserving portable data across industry verticals, specifically testing common issuance and verification processing.
- Interoperability across common supply chain elements: Show semantic interoperability across low-level business information common across industries even if it is typically expressed differently, such as a bill of lading (BOL).
- Smooth data transfer between vendors and US Customs and Border Protection (CBP) systems. While CBP does not currently operate this kind of data infrastructure, small-scale pilots validate assumptions about availability, accuracy, and access for government systems.
Conducting this exercise was only the tip of the iceberg of the standards work that makes it possible. This work is being conducted in the open under the W3C umbrella through the Credentials Community Group (W3C-CCG), and the expectation is that many more organizations will continue to join and support the initiative as the standards mature. The bulk of Mavennet’s work has focused on the following standards:
- VC-HTTP-API: A minimal, vendor-neutral API that allows for the construction and verification of Verifiable Credentials and Presentations for general-purpose scenarios.
- A Traceability vocabulary: A Linked Data vocabulary used to construct Verifiable Credentials related to supply chain traceability of commodities and products.
- RevocationList2020/StatusList2021: A standard providing a mechanism for publishing status information about issued credentials (a verification sketch follows this list).
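As a small illustration of the status list mechanism, the sketch below (assuming a Node.js environment and the bit ordering described in the StatusList2021 draft, where the leftmost bit corresponds to index 0) decodes the published bitstring and checks a single credential’s status bit:

```typescript
import { gunzipSync } from "zlib";

// Check a credential's status bit in a StatusList2021-style bitstring.
// encodedList is the GZIP-compressed, base64url-encoded bitstring published
// in the status list credential; index comes from the subject credential's
// credentialStatus.statusListIndex field.
function isRevoked(encodedList: string, index: number): boolean {
  const compressed = Buffer.from(encodedList, "base64url");
  const bitstring = gunzipSync(compressed);
  const byte = bitstring[index >> 3]; // which byte holds the bit
  const mask = 0x80 >> (index % 8);   // leftmost bit is index 0
  return (byte & mask) !== 0;         // a set bit means revoked/suspended
}
```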
VC-HTTP-API
The VC HTTP API is a primarily internal/backend API standardizing basic functions for the construction and verification of VCs and VPs. It supports and tests for the following features (a sketch of calling such an API follows the list):
- Construction of W3C standard-conformant Verifiable Credentials
- Construction of W3C standard-conformant Verifiable Presentations (i.e., interactive holder proofs of one or more VCs)
- Verification of Credential/Presentation “proofs” (i.e., checking signatures and hashes against the resolution material of the issuer and/or holder)
- Support for multiple DID methods
- Issuance and verification of revocable credentials using the RevocationList2020 scheme (soon to be updated to StatusList2021)
- Selective Disclosure using BBS+ signatures.
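To give a feel for the API surface (the base URL is hypothetical, and the request/response shapes follow the draft specification, so treat this as a sketch rather than a definitive client), issuing and verifying a credential looks roughly like this:

```typescript
// Hedged sketch of a VC-HTTP-API client. The deployment URL is
// hypothetical; endpoint paths and payload shapes follow the draft spec.
const API = "https://issuer.example.org";

async function issueCredential(credential: object): Promise<object> {
  const res = await fetch(`${API}/credentials/issue`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ credential }), // an options object can select suites
  });
  if (!res.ok) throw new Error(`Issuance failed: ${res.status}`);
  const { verifiableCredential } = await res.json();
  return verifiableCredential;
}

async function verifyCredential(verifiableCredential: object): Promise<object> {
  const res = await fetch(`${API}/credentials/verify`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ verifiableCredential }),
  });
  return res.json(); // typically a result with checks, warnings, and errors
}
```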
As is always the case with rapidly evolving specifications, the VC-HTTP-API is undergoing changes, including an overhaul of its documentation, a more refined roadmap, and a rapidly expanding test suite to allow for wider and more robust interoperability.
The 2020 and 2021 Interoperability plugfests used the VC-HTTP-API as scaffolding through which to test each company’s conformance with the VC data model, i.e., the VC “core spec” incubating in the W3C. The tests were structured to allow each company a matrix of optional features to support beyond the core functionality.
There are a number of issues with the current test suite, including a lack of true end-to-end testing in which a credential issued by one company is automatically tested against other companies’ infrastructure; instead, the tests are structured to check each individual company’s compliance against static fixtures. The test harness also includes some mandatory tests that are not strictly in scope for the VC data model, such as support for various DID methods.
These issues are being addressed, and a more robust test suite is being built in order to achieve true enterprise end-to-end interoperability.
TRACEABILITY VOCABULARY
The traceability vocabulary bridges the gap between existing record-keeping systems and the verifiable exchange of supply chain information across organizations envisioned by proponents of these data portability technologies. The vocabulary has been built with the following defining characteristics (a simplified usage sketch follows the list):
- Built on existing Schema.org and GS1 vocabulary elements to maximize composability and reusability
- Currently focused on five main market segments: Agriculture, E-Commerce, Oil, Gas, and Steel and Metals
- Hosted transparently under the W3C-CCG umbrella and open to contributions
- GS1-compatible extensible representations of Organization, Bill of Lading, and Inspection Reports
- Standardizes the creation of Verifiable Credentials from JSON-LD semantic anchors and JSON-Schema validation schemata
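To illustrate how the vocabulary is used in practice, here is a simplified Bill of Lading credential referencing the traceability context alongside the core VC context; the specific field names are abbreviated for readability, so consult the vocabulary itself for the exact terms:

```typescript
// Simplified sketch of a Bill of Lading credential using the traceability
// vocabulary as a JSON-LD context. Field names are abbreviated/illustrative.
const billOfLading = {
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://w3id.org/traceability/v1",
  ],
  type: ["VerifiableCredential", "BillOfLadingCredential"],
  issuer: "did:example:carrier",
  issuanceDate: "2021-06-15T00:00:00Z",
  credentialSubject: {
    type: ["BillOfLading"],
    billOfLadingNumber: "BOL-0001", // illustrative value
    shipper: { type: ["Organization"], name: "Example Mill Inc." },
    consignee: { type: ["Organization"], name: "Example Importer LLC" },
  },
};
```

Because the terms resolve to shared JSON-LD definitions, a receiving system that has never seen this issuer can still interpret the fields unambiguously.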
The vocabulary is currently undergoing a number of proposed improvements, including more robust support for GS1’s EPCIS and more accessible documentation.
INTEROP OVERVIEW VIDEO
For those interested in more detail, you can watch a video of the interop fest showcasing interoperability across different technologies, using the Neoflow platform as a front-end.
Also, if you are interested in learning about the work performed in the broader SVIP program in more depth, refer to this presentation.
THE FUTURE
There are a number of exciting changes occurring in the space, and Mavennet is happy to be championing work on including standardized GS1 events in the Traceability Vocab, setting up a standardized timestamping service, and providing TypeScript implementations of standards-compliant libraries.
Most importantly however, we’re working on engaging with industry partners and community members to improve the state of the standards and work towards a more secure, private and efficient future.
Note: We would like to thank the SVIP program for the support and the long-term vision and Juan Caballero for his feedback on this article.