This standard introduces a flexible on-chain storage pattern that organizes data into structured tables that consist of records with fixed key and value schemas, similar to a traditional database. This storage pattern consists of a unified contract interface for data access, along with a compact binary encoding format for both static and dynamic data types. State changes are tracked through standardized events that enable automatic, schema-aware state replication by off-chain indexers. New tables can be dynamically registered at runtime through a special table that stores schema metadata for all tables, allowing the system to evolve without breaking existing contracts or integrations.
Motivation
The absence of consistent standards for on-chain data management in smart contracts can lead to rigid implementations, tight coupling between contract logic and off-chain services, and challenges in updating or extending a contract's data layout without breaking existing integrations.
Using the storage mechanism defined in this ERC provides the following benefits:
- Automatic Indexing: By emitting consistent, standardized events during state changes, off-chain services can automatically track on-chain state and provide schema-aware indexer APIs.
- Elimination of Custom Getter Functions: Any contract or off-chain service can read stored data through a consistent interface, decoupling smart contract implementation from specific data access patterns and reducing development overhead.
- Simpler Upgradability: This pattern leverages unstructured storage, making it easier to upgrade contract logic without the risks associated with using a fixed storage layout.
- Flexible Data Extensions: New tables can be added at runtime without breaking existing integrations with other data consumers.
- Reduced Gas Costs: Efficient data packing reduces gas costs for both storage and event emissions.
Specification
Definitions
Store
A smart contract that implements the interface proposed by this ERC and organizes data in Tables. It emits events for each data operation so that off-chain components can replicate the state of all tables.
Table
A storage structure that holds Records sharing the same Schema.
- On-chain Table: Stores its state on-chain and emits events for off-chain indexers.
- Off-chain Table: Does not store state on-chain but emits events for off-chain indexers.
Record
A piece of data stored in a Table, addressed by one or more keys.
ResourceId
A 32-byte value that uniquely identifies each Table within the Store.
```solidity
type ResourceId is bytes32;
```
Encoding:
| Bytes (from left to right) | Description |
| --- | --- |
| 0-1 | Table type identifier |
| 2-31 | Unique identifier |
Table Type Identifiers:
- `0x7462` ("tb") for on-chain tables
- `0x6f74` ("ot") for off-chain tables
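As an illustration, a ResourceId can be composed off-chain by concatenating the type identifier with the table's unique identifier. The TypeScript sketch below uses a hypothetical `encodeResourceId` helper that is not part of this standard:

```typescript
type Hex = `0x${string}`;

// Hypothetical helper: compose a ResourceId from the 2-byte table type and a
// unique identifier of up to 30 bytes (right-padded with zeros)
function encodeResourceId(tableType: "tb" | "ot", identifier: string): Hex {
  const toHex = (s: string) =>
    [...s].map((c) => c.charCodeAt(0).toString(16).padStart(2, "0")).join("");
  const id = toHex(identifier);
  if (id.length > 60) throw new Error("identifier exceeds 30 bytes");
  return `0x${toHex(tableType)}${id.padEnd(60, "0")}`;
}

const onchainTableId = encodeResourceId("tb", "MyTable"); // 0x74624d795461626c6500...00
```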
Schema
Used to represent the layout of Records within a table.
```solidity
type Schema is bytes32;
```
Each Table defines two schemas:
- Key Schema: the types of the keys used to uniquely identify a Record within a table. It consists only of fixed-length data types.
- Value Schema: the types of the value fields of a Record within a table, which can include both fixed-length and variable-length data types.
| Byte(s) (from left to right) | Value | Constraint |
| --- | --- | --- |
| 0-1 | Total byte length of static fields | |
| 2 | Number of static length fields | ≤ (28 - number of dynamic length fields) |
| 3 | Number of dynamic length fields | For the key schema, 0; for the value schema, ≤ 5 |
| 4-31 | Each byte encodes a SchemaType | Dynamic-length types MUST come after all static-length types |
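To make the layout concrete, here is a minimal TypeScript sketch that unpacks a Schema word into its header fields and per-field types. `decodeSchema` is a hypothetical helper, not defined by this standard:

```typescript
type Hex = `0x${string}`;

// Hypothetical helper: decode a Schema word into its header and per-field SchemaTypes
function decodeSchema(schema: Hex): {
  staticDataLength: number;
  numStaticFields: number;
  numDynamicFields: number;
  schemaTypes: number[];
} {
  const bytes = schema.replace(/^0x/, "").match(/.{2}/g)!.map((b) => parseInt(b, 16));
  const numStaticFields = bytes[2];
  const numDynamicFields = bytes[3];
  return {
    staticDataLength: (bytes[0] << 8) | bytes[1], // bytes 0-1: total static byte length
    numStaticFields,
    numDynamicFields,
    // bytes 4..31: one SchemaType per field, static types first
    schemaTypes: bytes.slice(4, 4 + numStaticFields + numDynamicFields),
  };
}
```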
SchemaType
Single byte that represents the type of a specific static or dynamic field.
```solidity
enum SchemaType { ... }
```
Type Encoding:
| Value Range | Type |
| --- | --- |
| 0x00 to 0x1F | uint8 to uint256 (in increments of 8 bits) |
| 0x20 to 0x3F | int8 to int256 (in increments of 8 bits) |
| 0x40 to 0x5F | bytes1 to bytes32 |
| 0x60 | bool |
| 0x61 | address |
| 0x62 to 0x81 | uint8[] to uint256[] |
| 0x82 to 0xA1 | int8[] to int256[] |
| 0xA2 to 0xC1 | bytes1[] to bytes32[] |
| 0xC2 | bool[] |
| 0xC3 | address[] |
| 0xC4 | bytes |
| 0xC5 | string |
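The byte length of each static type follows directly from its SchemaType value, as in this hypothetical TypeScript helper:

```typescript
// Hypothetical helper: byte length of a static SchemaType, or 0 for dynamic types
function staticByteLength(schemaType: number): number {
  if (schemaType <= 0x5f) return (schemaType % 0x20) + 1; // uintN, intN, bytesN
  if (schemaType === 0x60) return 1; // bool
  if (schemaType === 0x61) return 20; // address
  return 0; // 0x62 to 0xC5 are dynamic types
}
```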
FieldLayout
Encodes the concrete value Schema information: the total byte length of the static fields, the number of dynamic fields, and the individual byte length of each static field.
This encoding serves as an optimization for on-chain operations. By having the exact lengths readily available, the Store doesn’t need to repeatedly compute or translate the schema definitions into actual field lengths during execution.
```solidity
type FieldLayout is bytes32;
```
| Byte(s) (from left to right) | Value | Constraint |
| --- | --- | --- |
| 0-1 | Total byte length of static fields | |
| 2 | Number of static length fields | ≤ (28 - number of dynamic length fields) |
| 3 | Number of dynamic length fields | For the key schema, 0; for the value schema, ≤ 5 |
| 4-31 | Each byte encodes the byte length of the corresponding static field | |
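For illustration, a FieldLayout word could be assembled off-chain from the per-field static byte lengths. `encodeFieldLayout` below is a hypothetical helper, not defined by this standard, and omits the constraint checks listed above:

```typescript
type Hex = `0x${string}`;

// Hypothetical helper: build a FieldLayout word from per-field static byte
// lengths and the number of dynamic fields
function encodeFieldLayout(staticFieldLengths: number[], numDynamicFields: number): Hex {
  const totalStaticLength = staticFieldLengths.reduce((sum, len) => sum + len, 0);
  const bytes = new Uint8Array(32);
  bytes[0] = totalStaticLength >> 8; // bytes 0-1: total static byte length
  bytes[1] = totalStaticLength & 0xff;
  bytes[2] = staticFieldLengths.length; // byte 2: number of static fields
  bytes[3] = numDynamicFields; // byte 3: number of dynamic fields
  staticFieldLengths.forEach((len, i) => {
    bytes[4 + i] = len; // bytes 4..31: individual static field lengths
  });
  return `0x${[...bytes].map((b) => b.toString(16).padStart(2, "0")).join("")}`;
}
```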
EncodedLengths
Encodes the byte length of all the dynamic fields of a specific Record. It is returned by the Store methods when reading a Record, as it is needed for decoding dynamic fields.
```solidity
type EncodedLengths is bytes32;
```
| Bytes (from least to most significant) | Type | Description |
| --- | --- | --- |
| 0x00-0x06 | uint56 | Total byte length of dynamic data |
| 0x07-0x0B | uint40 | Length of the first dynamic field |
| 0x0C-0x10 | uint40 | Length of the second dynamic field |
| 0x11-0x15 | uint40 | Length of the third dynamic field |
| 0x16-0x1A | uint40 | Length of the fourth dynamic field |
| 0x1B-0x1F | uint40 | Length of the fifth dynamic field |
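A minimal TypeScript sketch of decoding such a word (a hypothetical `decodeEncodedLengths` helper, not part of the standard):

```typescript
type Hex = `0x${string}`;

// Hypothetical helper: decode an EncodedLengths word into the total dynamic data
// length and the five per-field lengths (least significant bytes first)
function decodeEncodedLengths(encodedLengths: Hex): { total: bigint; fieldLengths: bigint[] } {
  const value = BigInt(encodedLengths);
  const total = value & ((1n << 56n) - 1n); // lowest 7 bytes: uint56 total length
  const fieldLengths: bigint[] = [];
  for (let i = 0; i < 5; i++) {
    // each field length is a uint40, packed above the total
    fieldLengths.push((value >> (56n + 40n * BigInt(i))) & ((1n << 40n) - 1n));
  }
  return { total, fieldLengths };
}
```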
Packed Data Encoding
Record data returned by Store methods and included in Store events uses the following encoding rules.
Field Limits
- Maximum Total Fields: A record can contain up to 28 fields in total (static and dynamic fields combined). This limit follows from the Schema type structure, which uses 28 bytes (bytes 4 to 31) to define field types, with one byte per field (SchemaType).
- Dynamic Fields Limit: A record can have up to 5 dynamic fields. This is because a single 32-byte word (EncodedLengths) is used to encode the byte lengths of all dynamic fields, instead of encoding each length separately as Solidity's abi.encode would.
- Static Fields Limit: The maximum number of static fields is 28 minus the number of dynamic fields. For example, if there are 5 dynamic fields, the maximum number of static fields is 23 (28 - 5).
Encoding Rules
- Static-length fields are encoded without any padding and concatenated in the order they are defined in the schema, which is equivalent to using Solidity's abi.encodePacked.
- For dynamic-length fields (arrays, bytes, and strings):
  - If the field is an array, its elements are tightly packed without padding.
  - All dynamic fields are concatenated together without padding and without including their lengths.
  - The lengths of all dynamic fields are encoded into a single EncodedLengths value.
For example, consider a record with static fields `id` and `owner` and dynamic fields `description` and `scores`:

```solidity
bytes memory staticData = abi.encodePacked(id, owner);

// This is a custom function, as Solidity does not provide a way to tightly pack array elements
bytes memory packedScores = packElementsWithoutPadding(scores);

// abi.encodePacked concatenates description and packedScores without including their lengths
bytes memory dynamicData = abi.encodePacked(description, packedScores);

// Total length is encoded in the 56 least significant bits
uint256 lengths = dynamicData.length;

// Each individual length is encoded using 5 bytes
lengths |= description.length << 56;
lengths |= packedScores.length << (56 + 8 * 5);

EncodedLengths encodedLengths = EncodedLengths.wrap(bytes32(lengths));

// The full encoded record data is represented by the following tuple:
// (staticData, encodedLengths, dynamicData)
```
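Decoding reverses these rules. As a TypeScript sketch, assuming the hypothetical `decodeEncodedLengths` helper from the EncodedLengths section above, the dynamic data blob can be split back into individual field encodings:

```typescript
type Hex = `0x${string}`;

// Sketch: split a record's dynamicData into individual fields using the decoded
// per-field lengths; assumes well-formed input
function splitDynamicData(dynamicData: Hex, fieldLengths: bigint[]): Hex[] {
  const bytes = dynamicData.replace(/^0x/, "");
  const fields: Hex[] = [];
  let offset = 0;
  for (const length of fieldLengths) {
    const end = offset + Number(length) * 2; // two hex chars per byte
    fields.push(`0x${bytes.slice(offset, end)}`);
    offset = end;
  }
  return fields;
}
```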
Store Interface
All Stores MUST implement the following interface.
```solidity
interface IStore {
  /**
   * Get the full encoded record (all fields, static and dynamic data) for the given tableId and key tuple.
   */
  function getRecord(
    ResourceId tableId,
    bytes32[] calldata keyTuple
  ) external view returns (bytes memory staticData, EncodedLengths encodedLengths, bytes memory dynamicData);

  /**
   * Get a single encoded field from the given tableId and key tuple.
   */
  function getField(
    ResourceId tableId,
    bytes32[] calldata keyTuple,
    uint8 fieldIndex
  ) external view returns (bytes memory data);

  /**
   * Get the byte length of a single field from the given tableId and key tuple.
   */
  function getFieldLength(
    ResourceId tableId,
    bytes32[] memory keyTuple,
    uint8 fieldIndex
  ) external view returns (uint256);
}
```
The return values of both getRecord and getField use the encoding rules previously defined in the Packed Data Encoding section. More specifically, getRecord returns the fully encoded record tuple, and the data returned by getField is encoded using the encoding rules as if the field was being encoded on its own.
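As an illustration of consuming this interface off-chain, the following sketch uses viem (one possible client library) to read the Tables table's self-describing record (see the Tables Table section below); the store address and RPC URL are placeholders, and the user-defined value types surface as plain bytes32 in the ABI:

```typescript
import { createPublicClient, http, parseAbi } from "viem";

const client = createPublicClient({ transport: http("https://rpc.example.com") }); // placeholder RPC

const storeAbi = parseAbi([
  "function getRecord(bytes32 tableId, bytes32[] keyTuple) view returns (bytes staticData, bytes32 encodedLengths, bytes dynamicData)",
]);

// The Tables table describes itself, so its own id is also the key tuple
const tablesTableId = "0x746273746f72650000000000000000005461626c657300000000000000000000" as const;

const [staticData, encodedLengths, dynamicData] = await client.readContract({
  address: "0x0000000000000000000000000000000000000000", // placeholder store address
  abi: storeAbi,
  functionName: "getRecord",
  args: [tablesTableId, [tablesTableId]],
});
```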
Store Operations and Events
This standard defines three core operations for manipulating records in a table: setting, updating, and deleting. For each operation, specific events must be emitted. The implementation details of these operations are left to the discretion of each Store implementation.
The fundamental requirement is that for on-chain tables the Record data retrieved through the Store interface methods at any given block MUST be consistent with the Record data that would be obtained by applying the operations implied by the Store events up to that block. This ensures data integrity and allows for accurate off-chain state reconstruction.
Store_SetRecord
Setting a Record means overwriting all of its fields. This operation can be performed whether the record has been set before or not (the standard does not enforce existence checks).
The Store_SetRecord event MUST be emitted whenever the full data of a record has been overwritten.
Store_SpliceStaticData
Splicing the static data of a Record consists of overwriting bytes of the packed encoded static fields. The total length of static data does not change, as it is determined by the table's value schema.
The Store_SpliceStaticData event MUST be emitted whenever the static data of the Record has been spliced.
| Parameter | Type | Description |
| --- | --- | --- |
| tableId | ResourceId | The id of the table where data is spliced |
| keyTuple | bytes32[] | An array representing the composite key for the record |
| start | uint48 | The start position in bytes for the splice operation |
| data | bytes | Packed ABI encoding of a tuple with the value's static fields |
Store_SpliceDynamicData
Splicing the dynamic data of a Record involves modifying the packed encoded representation of its dynamic fields by removing, replacing, and/or inserting new bytes in place.
The Store_SpliceDynamicData event MUST be emitted whenever the dynamic data of the Record has been spliced.
| Parameter | Type | Description |
| --- | --- | --- |
| tableId | ResourceId | The id of the table where data is spliced |
| keyTuple | bytes32[] | An array representing the composite key for the record |
| dynamicFieldIndex | uint8 | The index of the dynamic field to splice data, relative to the start of the dynamic fields (dynamic field index = field index - number of static fields) |
| start | uint48 | The start position in bytes for the splice operation |
| deleteCount | uint40 | The number of bytes to delete in the splice operation |
| encodedLengths | EncodedLengths | The resulting encoded lengths of the dynamic data of the record |
| data | bytes | The data to insert into the dynamic data of the record at the start byte |

For example, given dynamic data `0xaaaabbbbcccc`, a splice with `start = 2`, `deleteCount = 2`, and `data = 0x11112222` removes the bytes `bbbb` and inserts `11112222`, resulting in `0xaaaa11112222cccc`.
Store_DeleteRecord
The Store_DeleteRecord event MUST be emitted whenever the Record has been deleted from the Table.
Tables Table
To keep track of the information of each table and support registering new tables at runtime, the Store implementation MUST include a special on-chain Tables table, which behaves the same way as other on-chain tables except for the special constraints mentioned below.
The Tables table MUST use the following Schemas:
Key Schema:
- tableId (ResourceId): ResourceId of the table this record describes.

Value Schema:
- fieldLayout (FieldLayout): encodes the byte length of each static data type in the table.
- keySchema (Schema): represents the data types of the (composite) key of the table.
- valueSchema (Schema): represents the data types of the value fields of the table.
- abiEncodedKeyNames (bytes): ABI encoded string array of key names.
- abiEncodedFieldNames (bytes): ABI encoded string array of field names.
Records stored in the Tables table are considered immutable:
- The Store MUST emit a single Store_SetRecord event for each table being registered.
- The Store SHOULD NOT emit any other Store events for a Table registered in the Tables table.
The Tables table MUST store a record that describes itself before any other table is registered, emitting the corresponding Store_SetRecord event. This record MUST use the following tableId:
```solidity
// The first two bytes indicate that this is an on-chain table
// The next 30 bytes are the unique identifier for the Tables table
// bytes32("tb") | (bytes32("store") >> (2 * 8)) | (bytes32("Tables") >> (2 * 8 + 14 * 8))
ResourceId tableId = ResourceId.wrap(0x746273746f72650000000000000000005461626c657300000000000000000000);
```
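As a sanity check, the same identifier can be reconstructed off-chain. A minimal TypeScript sketch, with `asciiHex` as a hypothetical helper:

```typescript
type Hex = `0x${string}`;

// Hypothetical helper: ASCII string to hex, without the 0x prefix
const asciiHex = (s: string) =>
  [...s].map((c) => c.charCodeAt(0).toString(16).padStart(2, "0")).join("");

// "tb" (bytes 0-1) | "store" padded into bytes 2-15 | "Tables" padded into bytes 16-31
const tablesTableId: Hex = `0x${(asciiHex("tb") + asciiHex("store")).padEnd(32, "0")}${asciiHex("Tables").padEnd(32, "0")}`;
// => 0x746273746f72650000000000000000005461626c657300000000000000000000
```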
By using a predefined ResourceId and Schema for the Tables table, off-chain indexers can interpret store events for all registered tables. This enables the development of advanced off-chain services that operate on structured data rather than raw encoded data, unlike the basic indexer shown in the Reference Implementation below.
Rationale
Splice Events
While the Store_SetRecord event suffices for tracking the data of each record off-chain, including Splice events (Store_SpliceStaticData and Store_SpliceDynamicData) allows for more efficient partial updates. When only a portion of a record changes, emitting a full SetRecord event would be inefficient because the entire record data would need to be read from storage and emitted. Splice events enable the store to emit only the minimal necessary data for the update, reducing gas consumption. This is particularly important for records with large dynamic fields, as the cost of updating them doesn’t grow with the field’s size.
Disallowing Arrays of Dynamic Types
Arrays of dynamic types (e.g., string[], bytes[]) are intentionally not included as supported SchemaTypes. This restriction enforces a flat data schema, which simplifies the store implementation and enhances efficiency. If users need to store such data structures, they can model them using a separate table with a schema like { index: uint256, data: bytes }, where each array element is represented as an individual record.
FieldLayout Optimization
Including the FieldLayout in the Tables schema provides an on-chain optimization by precomputing and storing the exact byte lengths of static fields. This eliminates the need to repeatedly compute field lengths and offsets during runtime, which can be gas-intensive. By having this information readily available, the store can perform storage operations more efficiently, while components reading from the store can retrieve it from the Tables table to decode the corresponding records.
Special Tables table
Including a special Tables table provides significant benefits for off-chain indexers. While emitting events for table registration isn’t strictly necessary for basic indexers that operate on raw encoded data, doing so makes indexers aware of the schemas used by each table. This awareness enables the development of more advanced, schema-aware indexer APIs (e.g., SQL-like query capabilities), enhancing the utility and flexibility of off-chain data interactions.
By reusing existing Store abstractions for table registration, we also simplify the implementation and eliminate the need for additional, specific table registration events. Indexers can leverage the standard Store events to access schema information, ensuring consistency and reducing complexity.
Reference Implementation
Store Event Indexing
The following example shows how a simple in-memory indexer can use the Store events to replicate the Store state off-chain. Note that this indexer operates on raw encoded data, which is of limited use on its own; a schema-aware indexer can additionally decode records using the information in the Tables table.
We use TypeScript for this example, but it can easily be replicated in other languages.
```typescript
type Hex = `0x${string}`;

type Record = {
  staticData: Hex;
  encodedLengths: Hex;
  dynamicData: Hex;
};

// Minimal decoded-log shape for the Store events (in practice produced by an ABI decoder)
type StoreEvent =
  | {
      eventName: "Store_SetRecord";
      args: { tableId: Hex; keyTuple: Hex[]; staticData: Hex; encodedLengths: Hex; dynamicData: Hex };
    }
  | {
      eventName: "Store_SpliceStaticData";
      args: { tableId: Hex; keyTuple: Hex[]; start: number; data: Hex };
    }
  | {
      eventName: "Store_SpliceDynamicData";
      args: {
        tableId: Hex;
        keyTuple: Hex[];
        dynamicFieldIndex: number;
        start: number;
        deleteCount: number;
        encodedLengths: Hex;
        data: Hex;
      };
    }
  | {
      eventName: "Store_DeleteRecord";
      args: { tableId: Hex; keyTuple: Hex[] };
    };

const store = new Map<string, Record>();

// Create a key string from a table ID and key tuple to use in our store Map above
function storeKey(tableId: Hex, keyTuple: Hex[]): string {
  return `${tableId}:${keyTuple.join(",")}`;
}

// Like `Array.splice`, but for strings of bytes
function bytesSplice(data: Hex, start: number, deleteCount = 0, newData: Hex = "0x"): Hex {
  const dataNibbles = data.replace(/^0x/, "").split("");
  const newDataNibbles = newData.replace(/^0x/, "").split("");
  // `start` and `deleteCount` are byte offsets; each byte is two hex nibbles
  dataNibbles.splice(start * 2, deleteCount * 2, ...newDataNibbles);
  return `0x${dataNibbles.join("")}`;
}

function bytesLength(data: Hex): number {
  return data.replace(/^0x/, "").length / 2;
}

function processStoreEvent(log: StoreEvent) {
  if (log.eventName === "Store_SetRecord") {
    const key = storeKey(log.args.tableId, log.args.keyTuple);
    // Overwrite all of the Record's fields
    store.set(key, {
      staticData: log.args.staticData,
      encodedLengths: log.args.encodedLengths,
      dynamicData: log.args.dynamicData,
    });
  } else if (log.eventName === "Store_SpliceStaticData") {
    const key = storeKey(log.args.tableId, log.args.keyTuple);
    const record = store.get(key) ?? { staticData: "0x", encodedLengths: "0x", dynamicData: "0x" };
    // Splice the static field data of the Record
    store.set(key, {
      staticData: bytesSplice(record.staticData, log.args.start, bytesLength(log.args.data), log.args.data),
      encodedLengths: record.encodedLengths,
      dynamicData: record.dynamicData,
    });
  } else if (log.eventName === "Store_SpliceDynamicData") {
    const key = storeKey(log.args.tableId, log.args.keyTuple);
    const record = store.get(key) ?? { staticData: "0x", encodedLengths: "0x", dynamicData: "0x" };
    // Splice the dynamic field data of the Record
    store.set(key, {
      staticData: record.staticData,
      encodedLengths: log.args.encodedLengths,
      dynamicData: bytesSplice(record.dynamicData, log.args.start, log.args.deleteCount, log.args.data),
    });
  } else if (log.eventName === "Store_DeleteRecord") {
    const key = storeKey(log.args.tableId, log.args.keyTuple);
    // Delete the whole Record
    store.delete(key);
  }
}
```
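Continuing the snippet above, replaying a Store_SetRecord followed by a Store_SpliceStaticData (with a hypothetical table id and key) leaves the store holding the spliced record:

```typescript
const exampleTableId: Hex = "0x7462000000000000000000000000000000000000000000000000000000000000"; // hypothetical
const exampleKey: Hex = "0x0000000000000000000000000000000000000000000000000000000000000001"; // hypothetical

processStoreEvent({
  eventName: "Store_SetRecord",
  args: {
    tableId: exampleTableId,
    keyTuple: [exampleKey],
    staticData: "0xaaaabbbb",
    encodedLengths: "0x0000000000000000000000000000000000000000000000000000000000000000",
    dynamicData: "0x",
  },
});

// Overwrite the last two static bytes
processStoreEvent({
  eventName: "Store_SpliceStaticData",
  args: { tableId: exampleTableId, keyTuple: [exampleKey], start: 2, data: "0xcccc" },
});

console.log(store.get(storeKey(exampleTableId, [exampleKey])));
// => { staticData: "0xaaaacccc", encodedLengths: "0x00...00", dynamicData: "0x" }
```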
Security Considerations
Access Control
This standard only defines functions to read from the Store (getRecord, getField, and getFieldLength). The methods for setting or modifying records in the store are left to each specific implementation. Therefore, implementations must provide appropriate access control mechanisms for writing to the store, tailored to their specific use cases.
On-Chain Data Accessibility
All data stored within a store is accessible not only off-chain but also on-chain by other smart contracts through the provided read functions (getRecord, getField, and getFieldLength). This differs from the typical behavior of smart contracts, where internal storage variables are private by default and cannot be directly read by other contracts unless explicit getter functions are provided. Thus, developers must be mindful that any data stored in the store is openly accessible to other smart contracts.