“On-chain Data Containers” (ODCs) are a series of interfaces used for indexing and managing data in Smart Contracts called “Data Objects” (DOs). Information stored in Data Objects can be accessed and modified through implementing smart contracts called “Data Managers” (DMs). This ERC defines a series of interfaces for separating the storage of data from the implementation of the logic functions that govern such data. We introduce interfaces for access management through “Data Index” (DI) implementations, the structures associated with “Data Points” (DPs) for abstracting storage, the Data Managers used to access or modify the data, and finally the “Data Point Registry” (DPR) interfaces for compatibility, which enable data portability (horizontal mobility) between different implementations of this ERC.
Motivation
As the Ethereum ecosystem grows, so does the demand for on-chain functionalities. The market encourages broader adoption through more complex systems, and there is a constant need for improved efficiency. We have seen times when market hype has driven an explosion of new standard token proposals. While each standard serves its purpose, most require more flexibility to manage interoperability with other standards.
The diversity of standards spurs innovation, and different projects implement their own bespoke solutions for interoperability. The absence of a unified adapter mechanism driving the interactions between assets issued under different ERC standards causes interoperability issues and, in turn, fragmentation.
We recognize there is no “one size fits all” solution to the standardization and interoperability challenges. Most assets (Fungible, Non-Fungible, Digital Twins, Real-World Assets, DePIN, etc.) have multiple mechanisms for representing them as on-chain tokens through the use of different standard interfaces. However, for those assets to be exchanged, traded, or interacted with, protocols must implement compatibility with those standards before accessing and modifying the on-chain data. Moreover, because smart contracts are immutable, future-proofing their implementations to support new tokenization standards is a challenge. A collaborative effort must be made to enable interaction between assets tokenized under different standards. This ERC provides the tools for developing such on-chain adapters.
We aim to abstract on-chain data handling from both the logical implementation and the ERC interfaces exposing the underlying data. This ERC proposes a series of interfaces for storing and accessing data on-chain, codifying underlying assets as generic “Data Points” that may be associated with multiple interoperable, and even concurrent, ERC interfaces. The proposal is designed to coexist with previous and future token standards, providing a flexible, efficient, and coherent mechanism to manage asset interoperability.
Data Abstraction: We propose a standardized interface for enabling developers to separate the data storage code from the underlying token utility logic, reducing the need for supporting and implementing multiple inherited -and often clashing- interfaces to achieve asset compatibility. The data (and therefore the assets) can be stored independently of the logic that governs such data.
Standard Neutrality: A neutral approach must enable the underlying data of any tokenized asset to transition seamlessly between different token standards. This will significantly improve interoperability among different standards, reducing fragmentation in the landscape. Our proposal aims to separate the storage of data representing an underlying asset from the standard interface used for representing the token.
Consistent Interface: A uniform interface of primitive functions abstracts the data storage from the use case, irrespective of the underlying token’s standard or the interface used for exposing such data. Both data and metadata can be stored on-chain and exposed through the same functions.
Data Portability: We provide a mechanism for the Horizontal Mobility of data between implementations of this standard, incentivizing the implementation of interoperable solutions and standard adapters.
Specification
Terms
Data Index Implementation: One or many Smart Contracts implementing the Data Index interface, used for data access management through the indexing of Data Objects.
Data Point: A uniquely identifiable unit of information indexed by a Data Index, managed by a Data Manager through a Data Object, and provided by a Data Point Registry.
Data Object: One or many Smart Contracts implementing the low-level storage management of Data Points.
Data Manager: One or many Smart Contracts implementing the high-level logic and end-user interfaces for managing Data Points.
Data Point Registry: One or many Smart Contracts that define a space of compatible or interoperable Data Points.
The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.
Data Index Interface
DataIndex SHOULD manage the access of Data Managers to Data Objects.
DataIndex SHOULD manage internal IDs for each user.
DataIndex SHOULD use the IDataIndex interface:
interface IDataIndex {
    /**
     * @notice Verifies if DataManager is allowed to write specific DataPoint on specific DataObject
     * @param dp Identifier of the DataPoint
     * @param dm Address of DataManager
     * @return if write access is allowed
     */
    function isApprovedDataManager(DataPoint dp, address dm) external view returns (bool);

    /**
     * @notice Defines if DataManager is allowed to write specific DataPoint
     * @param dp Identifier of the DataPoint
     * @param dm Address of DataManager
     * @param approved if DataManager should be approved for the DataPoint
     * @dev Function should be restricted to DataPoint maintainer only
     */
    function allowDataManager(DataPoint dp, address dm, bool approved) external;

    /**
     * @notice Reads stored data
     * @param dobj Identifier of DataObject
     * @param dp Identifier of the DataPoint
     * @param operation Read operation to execute on the data
     * @param data Operation-specific data
     * @return Operation-specific data
     */
    function read(address dobj, DataPoint dp, bytes4 operation, bytes calldata data) external view returns (bytes memory);

    /**
     * @notice Stores data
     * @param dobj Identifier of DataObject
     * @param dp Identifier of the DataPoint
     * @param operation Write operation to execute on the data
     * @param data Operation-specific data
     * @return Operation-specific data (can be empty)
     * @dev Function should be restricted to allowed DMs only
     */
    function write(address dobj, DataPoint dp, bytes4 operation, bytes calldata data) external returns (bytes memory);
}
The Data Index is a smart contract entrusted with access control. It is a gating mechanism for Data Managers to access Data Objects. If a Data Manager intends to access a Data Point (either by read(), write(), or any other method), the Data Index should be used for validating access to the data.
The mechanism for ID management determines a space of compatibility between implementations.
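To illustrate the gating mechanism, the following is a minimal, non-normative sketch of a Data Index that forwards write() calls only for approved Data Managers. The DataPoint value type, the internal approval mapping, and the unrestricted allowDataManager() are simplifying assumptions of this sketch, not requirements of this ERC.

// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.20;

// Assumed representation of a Data Point (see "Data Point Structure" below)
type DataPoint is bytes32;

interface IDataObject {
    function write(DataPoint dp, bytes4 operation, bytes calldata data) external returns (bytes memory);
}

// Non-normative sketch: a Data Index forwarding writes only for approved Data Managers
contract MinimalDataIndex {
    // DataPoint => DataManager => approved
    mapping(bytes32 => mapping(address => bool)) private _approvals;

    function isApprovedDataManager(DataPoint dp, address dm) public view returns (bool) {
        return _approvals[DataPoint.unwrap(dp)][dm];
    }

    // NOTE: a full implementation MUST restrict this to the DataPoint maintainer (e.g. via the Registry)
    function allowDataManager(DataPoint dp, address dm, bool approved) external {
        _approvals[DataPoint.unwrap(dp)][dm] = approved;
    }

    function write(address dobj, DataPoint dp, bytes4 operation, bytes calldata data) external returns (bytes memory) {
        require(isApprovedDataManager(dp, msg.sender), "DataManager not approved");
        return IDataObject(dobj).write(dp, operation, data);
    }
}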
Data Object Interface
Data Object SHOULD implement the logic directly related to handling the data stored on Data Points.
Data Object SHOULD implement the logic for transferring management of its Data Points to a different Data Index Implementation.
Data Object SHOULD use the IDataObject interface:
interface IDataObject {
    /**
     * @notice Reads stored data
     * @param dp Identifier of the DataPoint
     * @param operation Read operation to execute on the data
     * @param data Operation-specific data
     * @return Operation-specific data
     */
    function read(DataPoint dp, bytes4 operation, bytes calldata data) external view returns (bytes memory);

    /**
     * @notice Stores data
     * @param dp Identifier of the DataPoint
     * @param operation Write operation to execute on the data
     * @param data Operation-specific data
     * @return Operation-specific data (can be empty)
     */
    function write(DataPoint dp, bytes4 operation, bytes calldata data) external returns (bytes memory);

    /**
     * @notice Sets DataIndex Implementation
     * @param dp Identifier of the DataPoint
     * @param newImpl address of the new DataIndex implementation
     */
    function setDIImplementation(DataPoint dp, address newImpl) external;
}
Data Objects are entrusted with the management of transactions that affect the storage of Data Points.
Data Objects can receive read(), write(), or any other custom requests from a Data Manager requesting access to a Data Point.
As such, Data Objects respond to a gating mechanism given by a single Data Index. The function setDIImplementation() SHOULD enable the delegation of the management function to an IDataIndex implementation.
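As an illustration, the following non-normative sketch shows a Data Object that stores raw bytes per Data Point and honors a single Data Index per Data Point. The storage layout and the rule that only the currently active Data Index may migrate are assumptions of this sketch.

// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.20;

type DataPoint is bytes32;

// Non-normative sketch: a Data Object delegating access control to one Data Index per DataPoint
contract MinimalDataObject {
    // DataPoint => active Data Index implementation
    mapping(bytes32 => address) private _dataIndex;
    // DataPoint => operation => stored payload (illustrative storage layout)
    mapping(bytes32 => mapping(bytes4 => bytes)) private _store;

    modifier onlyDataIndex(DataPoint dp) {
        require(msg.sender == _dataIndex[DataPoint.unwrap(dp)], "Caller is not the active Data Index");
        _;
    }

    function setDIImplementation(DataPoint dp, address newImpl) external {
        address current = _dataIndex[DataPoint.unwrap(dp)];
        // First assignment is unrestricted in this sketch; later changes are a migration by the active Data Index
        require(current == address(0) || msg.sender == current, "Not allowed to migrate");
        _dataIndex[DataPoint.unwrap(dp)] = newImpl;
    }

    function read(DataPoint dp, bytes4 operation, bytes calldata) external view returns (bytes memory) {
        return _store[DataPoint.unwrap(dp)][operation];
    }

    function write(DataPoint dp, bytes4 operation, bytes calldata data) external onlyDataIndex(dp) returns (bytes memory) {
        _store[DataPoint.unwrap(dp)][operation] = data;
        return "";
    }
}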
Data Point Structure
Data Point SHOULD be a bytes32 storage unit.
Data Point SHOULD use a 4-byte prefix for storing information relevant to compatibility with other Data Points.
Data Point SHOULD use the last 20 bytes to identify which Data Point Registry allocated it.
The RECOMMENDED internal structure of the Data Point is as follows:
/**
* RECOMMENDED internal DataPoint structure on the Reference Implementation:
* 0xPPPPVVRRIIIIIIIIHHHHHHHHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
* - Prefix (bytes4)
* -- PPPP - Type prefix (i.e. 0x4450 - ASCII representation of letters "DP")
* -- VV - Version of DataPoint specification (i.e. 0x00 for the reference implementation)
* -- RR - Reserved
* - Registry-local identifier
* -- IIIIIIII - 32 bit implementation-specific id of the DataPoint
* - Chain ID (bytes4)
* -- HHHHHHHH - 32 bit of chain identifier
* - REGISTRY Address (bytes20)
* -- AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA - Address of Registry which allocated the DataPoint
**/
Data Points are the low-level structure abstracting information. Data Points are allocated by a Data Point Registry, and this information should be stored within its internal structure. Each Data Point should have a unique identifier provided by the Data Point Registry when instantiated.
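Following the RECOMMENDED layout, a registry could pack and unpack Data Points as sketched below. The bit offsets are derived from the layout comment above; the library name and helper functions are illustrative assumptions, not part of the standard.

// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.20;

type DataPoint is bytes32;

// Non-normative sketch: packing a DataPoint per the RECOMMENDED layout
// 0xPPPPVVRR IIIIIIII HHHHHHHH AAAA...A  (4-byte prefix, 4-byte local id, 4-byte chain id, 20-byte registry)
library DataPointLib {
    bytes2 internal constant TYPE_PREFIX = 0x4450; // ASCII "DP"
    bytes1 internal constant VERSION = 0x00;

    // Example: a registry could allocate its 42nd DataPoint as encode(42, uint32(block.chainid), address(this))
    function encode(uint32 localId, uint32 chainId, address registry) internal pure returns (DataPoint) {
        uint256 raw = (uint256(uint16(TYPE_PREFIX)) << 240)
            | (uint256(uint8(VERSION)) << 232)
            // reserved byte (RR) left as 0x00
            | (uint256(localId) << 192)
            | (uint256(chainId) << 160)
            | uint256(uint160(registry));
        return DataPoint.wrap(bytes32(raw));
    }

    // Recovers the Data Point Registry address from the 20-byte suffix
    function registryOf(DataPoint dp) internal pure returns (address) {
        return address(uint160(uint256(DataPoint.unwrap(dp))));
    }

    // Recovers the 32-bit chain identifier
    function chainIdOf(DataPoint dp) internal pure returns (uint32) {
        return uint32(uint256(DataPoint.unwrap(dp)) >> 160);
    }
}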
Data Point Registry Interface
Data Point Registry SHOULD store Data Point access management data for Data Managers and Data Objects.
Data Point Registry SHOULD use the IDataPointRegistry interface:
interface IDataPointRegistry {
    /**
     * @notice Verifies if an address has an Admin role for a DataPoint
     * @param dp DataPoint
     * @param account Account to verify
     * @return if the account has the Admin role
     */
    function isAdmin(DataPoint dp, address account) external view returns (bool);

    /**
     * @notice Allocates a DataPoint to an owner
     * @param owner Owner of the new DataPoint
     * @dev Owner SHOULD be granted Admin role during allocation
     */
    function allocate(address owner) external payable returns (DataPoint);

    /**
     * @notice Transfers a DataPoint to a new owner
     * @param dp DataPoint to be transferred
     * @param newOwner New owner of the DataPoint
     */
    function transferOwnership(DataPoint dp, address newOwner) external;

    /**
     * @notice Grants permission to grant/revoke other roles on the DataPoint inside a Data Index Implementation
     * This is useful if DataManagers are deployed during the lifecycle of the application.
     * @param dp DataPoint
     * @param account New admin
     * @return if the role was granted (otherwise account already had the role)
     */
    function grantAdminRole(DataPoint dp, address account) external returns (bool);

    /**
     * @notice Revokes permission to grant/revoke other roles on the DataPoint inside a Data Index Implementation
     * @param dp DataPoint
     * @param account Old admin
     * @dev If an owner revokes the Admin role from themselves, they can grant it again
     * @return if the role was revoked (otherwise account didn't have the role)
     */
    function revokeAdminRole(DataPoint dp, address account) external returns (bool);
}
The Data Point Registry is a smart contract entrusted with Data Point access control. Data Managers may request the allocation of Data Points from the Data Point Registry. Access control to those Data Points is also managed by the Data Point Registry.
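For example, a Data Manager might request its Data Point from the registry at deployment time. The following sketch assumes only the allocate() function of the IDataPointRegistry interface defined above; the constructor wiring is illustrative.

// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.20;

type DataPoint is bytes32;

interface IDataPointRegistry {
    function allocate(address owner) external payable returns (DataPoint);
}

// Non-normative sketch: a Data Manager allocating its DataPoint on construction
contract AllocatingDataManager {
    DataPoint private _dp;

    constructor(IDataPointRegistry registry) payable {
        // Any allocation fee sent at deployment is forwarded to the registry;
        // Admin rights over the new DataPoint are granted to this contract.
        _dp = registry.allocate{value: msg.value}(address(this));
    }

    function datapoint() external view returns (bytes32) {
        return DataPoint.unwrap(_dp);
    }
}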
Data Manager Contract
Data Manager MAY use read() or DataObject.read() to read data from Data Objects.
Data Manager MAY use write() to write data to Data Objects.
Data Manager MAY share Data Points with other Data Managers.
Data Manager MAY use multiple Data Points.
Data Manager MAY implement the logic for requesting Data Points from a Data Point Registry.
Data Managers are independent smart contracts that implement the business logic or “high-level” data management. They can read() either directly from a Data Object address or through the Data Index, and they write() through a Data Index Implementation that manages the delegated storage of the Data Points.
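The following non-normative sketch shows a Data Manager that reads directly from a Data Object and writes through a Data Index implementation. The operation selectors (BALANCE_OF, SET_BALANCE) and the ABI of their payloads are hypothetical; the Data Object is assumed to understand them.

// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.20;

type DataPoint is bytes32;

interface IDataIndex {
    function write(address dobj, DataPoint dp, bytes4 operation, bytes calldata data) external returns (bytes memory);
}

interface IDataObject {
    function read(DataPoint dp, bytes4 operation, bytes calldata data) external view returns (bytes memory);
}

// Non-normative sketch: a Data Manager exposing a balance stored in a Data Object
contract MinimalDataManager {
    // Hypothetical operation selectors understood by the Data Object
    bytes4 private constant BALANCE_OF = bytes4(keccak256("balanceOf(address)"));
    bytes4 private constant SET_BALANCE = bytes4(keccak256("setBalance(address,uint256)"));

    IDataIndex public immutable dataIndex;
    IDataObject public immutable dataObject;
    DataPoint private _dp;

    constructor(IDataIndex di, IDataObject dobj, DataPoint dp) {
        dataIndex = di;
        dataObject = dobj;
        _dp = dp;
    }

    function balanceOf(address account) external view returns (uint256) {
        bytes memory result = dataObject.read(_dp, BALANCE_OF, abi.encode(account));
        return abi.decode(result, (uint256));
    }

    function setBalance(address account, uint256 amount) external {
        // Access control over who may call this function is up to the Data Manager's business logic
        dataIndex.write(address(dataObject), _dp, SET_BALANCE, abi.encode(account, amount));
    }
}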
Rationale
The decision to encode Data Points as bytes32 data containers is primarily driven by flexibility and future-proofing. Using bytes32 allows for a wide range of data encodings, which gives the developer many options to accommodate diverse use cases. Furthermore, as Ethereum and its standards continue to evolve, encoding as bytes32 ensures that Standard Adapters can support future data types or structures without requiring significant changes to the standard adapter itself. The Data Point encoding should have a prefix so that the Data Object can efficiently identify compatibility issues when accessing the data storage. Additionally, the prefix should be used to find the Data Point Registry and verify admin access to the Data Point. The use of a suffix identifying the Data Point Registry is also required so that the Data Object can quickly discard badly formed transactions that attempt to use a Data Point from a mismatched Data Point Registry.
Data Manager implementations decide which Data Points they will be using. Their allocation is managed through a Data Point Registry, and the access to the Data Point is managed by passing through the Data Index Implementation.
The decision to make Data Objects independent, separate Smart Contracts that implement the same read/write interface for communicating with Data Managers is mainly driven by the scalability of the system. Offering a simple interface for this two-layer structure enables different applications to have their own addresses for the storage of data as well as assets. It is up to each implementation to manage access to this Data Point storage space. This enables a wide array of complex, dynamic, and interactive use cases to be implemented with multiple ERCs as well as other smart contracts.
Data Objects offer flexibility in storing mutable on-chain data that can be modified as per the requirements of each specific use case. This enables the Data Managers to hold mutable states in delegated storage and reflect changes over time, providing a dynamic layer to the otherwise static nature of storage through most other standardized interfaces.
As the Data Points can be set to respond to a specific Data Index implementation, Data Managers can decide to migrate the complete storage of a Data Object from one Data Index implementation to another.
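Under the interfaces defined above, such a migration maps to a single call on the Data Object, assuming the caller is currently authorized by the Data Object to perform it; the contract below is only an illustrative wrapper.

// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.20;

type DataPoint is bytes32;

interface IDataObject {
    function setDIImplementation(DataPoint dp, address newImpl) external;
}

// Non-normative sketch: pointing a Data Object to a new Data Index implementation
contract MigrationExample {
    function migrate(IDataObject dobj, DataPoint dp, address newDataIndex) external {
        dobj.setDIImplementation(dp, newDataIndex);
    }
}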
By leveraging multiple implementations of the IDataIndex interface, this standard delivers a powerful framework that amplifies the potential of all ERCs (present and future).
Backwards Compatibility
This ERC is intended to augment the functionality of existing token standards without introducing breaking changes. As such, it does not present any backward compatibility issues. Already deployed tokens under other ERCs can be wrapped as Data Points and managed by Data Objects, and later exposed through any implementation of Data Managers. Each interoperability integration will require a compatibility analysis, depending on the use case.
Reference Implementation
We present an educational example implementation showcasing two types of tokens (Fungible and Semi-Fungible) sharing the same storage. The abstraction of the storage from the logic is achieved through the use of Data Objects. A factory is used for deploying fungible token contracts that share storage with each semi-fungible NFT representing a collection of fractions. Note that when a transfer() is called through either interface (Fungible or Semi-Fungible), both interfaces emit their corresponding events.
This example has not been audited and should not be used in production environments.
Security Considerations
The access control is separated into three layers:
Layer 1: The Data Point Registry allocates Data Points for Data Managers and manages ownership (admin/write rights) of Data Points.
Layer 2: The Data Index smart contract implements Access Control by managing Approvals of Data Managers to Data Points. It uses the Data Point Registry to verify who can grant/revoke this access.
Layer 3: The Data Manager exposes functions that can perform write operations on the Data Point by calling the Data Index implementation.
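As a hedged illustration of how these three layers interact, a setup sequence under the interfaces defined above might look as follows; the contract and variable names are assumptions for this sketch only.

// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.20;

type DataPoint is bytes32;

interface IDataPointRegistry {
    function allocate(address owner) external payable returns (DataPoint);
}

interface IDataIndex {
    function allowDataManager(DataPoint dp, address dm, bool approved) external;
}

// Non-normative sketch of the three layers working together
contract ThreeLayerSetup {
    function setUp(
        IDataPointRegistry registry,
        IDataIndex dataIndex,
        address dataManager
    ) external payable returns (DataPoint dp) {
        // Layer 1: the registry allocates a DataPoint and grants the Admin role to this contract
        dp = registry.allocate{value: msg.value}(address(this));

        // Layer 2: the Data Index approves the Data Manager for this DataPoint
        // (allowed here because this contract holds the Admin role from Layer 1)
        dataIndex.allowDataManager(dp, dataManager, true);

        // Layer 3: the approved Data Manager can now write through the Data Index, e.g.
        // dataIndex.write(dataObject, dp, SOME_OPERATION, abi.encode(...));
    }
}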
No further security considerations are derived specifically from this ERC.