March was another busy month of coding, with an additional 68,000 lines of code written and the release of the Tritium testnet. The development team has also been holding weekly Zoom meetings, and we have provided some of the highlights below.
The team has run several successful mining/sync/fork-recovery tests. On the back of this, the Tritium testnet is now open to public connections (we were previously whitelisting connections to developer IPs only). You are welcome to connect to the testnet to test mining and basic account / API use. Please join #tritium-testnet if you want to participate, as you will need to know the current testnet number to set testnet=xx in your nexus.conf. At the time of writing we are using testnet 11.
To test the Tritium core in beta mode on the legacy mainnet, please use the ‘Staging’ branch on GitHub.
To test the Tritium features (signature chains, APIs) on the Tritium testnet, please use the ‘Merging’ branch on GitHub.
The final improvements to the legacy wallet code are complete, and the Tritium wallet with legacy back-end now syncs in under one hour. We’re closing off some performance issues with the legacy wallet, specifically the rescan function. In order to do this we are making changes to the way we access data in the LLD to perform better serial access, as opposed to random access for which it was designed.
A new LISP-Trace monitoring tool has been written called ‘ltr’ which shows the path a packet takes from source EID to destination EID, as well as the return path. This is a very useful tool for debugging the LISP connectivity and messaging issues.
The encrypted pointer encrypts the memory location using AES-128. This makes it very difficult for a virus to ‘eavesdrop’ and potentially steal sensitive data, such as your sigchain login or PIN, by reading process memory. It is also useful for developing applications that rely on critical information in memory, and is available to use in the LLL utilities.
Nodes on the Tritium Protocol keep track of global indexes, meaning that you don’t need to rescan a node if you are logging in to it for the first time. This makes managing notifications (transactions that require your acceptance such as debits and transfers) much more efficient and optimized.
Argon2, a memory-hard password hashing algorithm with variable complexity arguments, is now being used for key and username generation; the arguments control how many seconds it takes to generate a key or username. The time it takes an external ‘hacker’ to brute-force a sigchain offline is now computationally bound by memory latency, leveling the playing field between devices: an FPGA, ASIC, or even a GPU farm has far less of a competitive advantage over a CPU.
Our current Argon2 settings require at least 0.3 seconds to generate a new key, meaning one is only able to ‘try’ roughly three passwords per second. Combined with a minimum requirement of at least 8 alphanumeric [a-Z, 0-9] characters per password, even if the username and PIN were compromised, the time required to crack the password would be on the order of 2.3 million years. The use of biometric username generation will be another step in strengthening your credentials and sigchain access, by further increasing the physical requirements to gain access.
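As a back-of-envelope check of that figure, here is a short Python sketch. The guess rate (three tries per second) and the 8-character alphanumeric keyspace are the assumptions stated above, not measured values:

```python
# Worst-case offline brute-force time for a sigchain password, assuming
# Argon2 settings that limit an attacker to ~3 guesses per second and a
# minimum 8-character alphanumeric [a-zA-Z0-9] password.

GUESSES_PER_SECOND = 3           # assumed Argon2-bound attack rate
ALPHABET = 26 + 26 + 10          # a-z, A-Z, 0-9
MIN_LENGTH = 8

keyspace = ALPHABET ** MIN_LENGTH                    # 62^8 possible passwords
worst_case_seconds = keyspace / GUESSES_PER_SECOND
years = worst_case_seconds / (365.25 * 24 * 3600)

print(f"keyspace: {keyspace:,}")
print(f"worst-case crack time: {years / 1e6:.1f} million years")  # ~2.3
```

This matches the 2.3-million-year estimate quoted in the update.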
Falcon is a very compact lattice-based cryptographic algorithm and a second-round candidate in NIST’s Post-Quantum competition. Its computational requirements are about 1/40th of ECDSA’s, which means signatures can be verified very quickly. The downside is that the public key and signature together are about 1.5 KB. Though Falcon is based on aged and proven mathematics (NTRU lattices), it has not undergone as much cryptanalysis as ECC or RSA. Falcon is now running on the testnet, and more information can be read about it here:
These APIs also provide functionality for an asset to be owned by a token, to create what is known as ‘Tokenized Ownership’. Your token balance represents your partial ownership in the underlying asset. Therefore, tokens can allow the function of automatic dividend payouts (split revenue) without the requirement of a third party custodian.
The team has implemented support for sessionless API use. This simplifies things for users who would like to interact with their sigchain and use the various APIs from the CLI (command-line interface), without having to keep track of and supply a session ID with each API call, making usage more akin to the legacy RPC CLI. The API defaults to sessionless, though it can be switched to session-based by adding -apisessions=1 to your config.
We have been working closely with the seed node operators and block explorer developers to shape the requirements for the Ledger API, thanks to @mercuryminer, @psipherious, and @danialsan for their input. A new getblocks method has been added to allow batches of up to 1000 blocks and their transactions to be retrieved in a single call (taking about 3 seconds in testnet), which is crucial for block explorers and other data aggregators.
Next week work will start on the Network and Legacy APIs, which will serve as drop-in replacements for many of the frequently-used RPC commands, so that users/integrators only have to use the new API rather than having to switch between it and the legacy RPC.
We have static / unit tested the underlying functionality for most of the APIs, so we are now finishing and testing the individual API methods. These APIs are lower level, without schema or format specified. APIs are in development for Licensing & Royalties, Dividends and Voting.
Jack is working on a universal mining application that can be used on both the prime and hash channels by GPU and CPU. The immediate priority of this work is to increase the efficiency of prime channel mining using GPUs in order to compete with privately developed mining farms that currently dominate this channel. We have made good progress on this, and hope to release an updated miner with a significant speed increase next month.
The team has made significant progress on the validation scripts that will be used to drive more complex contract behaviour. Essentially, a validation script is a set of rules that must evaluate to ‘true’ for a transaction to execute or to be claimed. These rules can include data from global state variables such as unified time, block height, and coin supply, as well as data from the sender / recipient signature chain and the registers that they own.
For example, this opens up the possibility to encode rules such as ‘transfer asset X from sig chain A to sig chain B, as long as 1000 ABC tokens have been deposited into A’s signature chain, and as long as this occurs before the date 01/01/2020’.
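A rule like that can be pictured as a small predicate that must evaluate to true before the transfer executes. The sketch below is purely illustrative (it is not the Nexus validation VM; the function and parameter names are made up):

```python
# Illustrative sketch of the rule described above: transfer asset X from
# sig chain A to sig chain B only if 1000 ABC tokens have been deposited
# into A's signature chain, and only before 01/01/2020. Not real Nexus code.

from datetime import datetime, timezone

def transfer_allowed(deposited_abc: int, now: datetime) -> bool:
    deadline = datetime(2020, 1, 1, tzinfo=timezone.utc)
    # Both sub-conditions must hold for the script to evaluate to 'true'.
    return deposited_abc >= 1000 and now < deadline

print(transfer_allowed(1000, datetime(2019, 6, 1, tzinfo=timezone.utc)))  # True
print(transfer_allowed(999,  datetime(2019, 6, 1, tzinfo=timezone.utc)))  # False
```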
To verify these results, please compile the source code with LIVE_TESTS=1 to run benchmarks and unit tests.
We are developing a set of API methods that will encapsulate the commonly used validation scripts that we expect people to use for ICOs / STOs, royalty payments, dividends and for the DEX. More advanced users will be able to create their own validation scripts by writing them in our virtual machine assembly, or a higher level domain specific language (DSL) when developed.
We have designed this aspect of the Operations Layer to be sensitive to common mistakes that developers may make, making it more difficult to introduce ‘bugs’ into a contract that could be exploited as security flaws.
The framework for the module market is near completion. Once complete, it will allow anyone to start developing modules for the Nexus wallet. The first official module under development is the internal wallet block explorer.
The foundations of the decentralized exchange (DEX) are validation scripts. Essentially, an asset can be put up for transfer under a validation script; for example, my order requires ‘1000 ABC tokens’ before you can claim ‘asset X’. Once a corresponding transaction fulfils this script, the token and asset transfers clear, allowing each party to claim their side of the exchange without the requirement of a central clearinghouse.
Running with -dex enabled will require more disk space for a full node, because of the necessary indexing of the orders. Currently, disk usage is 30% less for a Tritium node versus legacy mode. We don’t expect the DEX to require much computation, because it only depends on foreign indexes mapping transactions to an iterator number.
If enabled (by issuing the config flag -dex), you’ll be able to see all of the open orders and all of the orders that have ever been executed. From here, the front-end development team will have the data to populate graphs.
Paul attended the ADC Global Blockchain Summit in Adelaide earlier this month. The event brought together government, businesses, financiers, regulators, researchers, and innovators to discuss the strategies and practical applications of blockchain technology. Notable contacts were made with regulatory and research organizations such as OECD, CSIRO, and MainChain, in addition to various businesses and educational establishments looking for blockchain tech partners.
Discussions continue with our lawyers and the tax office over the tax treatment of the ambassador keys, and the general tax structure of the embassy and its subsidiary operating company. The decision has been made to apply to the ACNC to register the Australian Embassy as a charity which, if successful, will provide us with tax exemption and greatly simplify the financial and accounting process.
Nexus UK has attended a number of events such as:
London Blockchain Week
‘Law and Blockchain’ & ‘Blockchain Unchained’ Seminars
We also spent some time with our advisors, in particular Jeff Garzik, discussing how Nexus can increase adoption globally both with regards to enterprise solutions and crypto consumers. The UK Embassy has continued to explore a number of high profile business development opportunities with the goal of creating globally adopted use cases.
Dino and Colin will be at the IETF (Internet Engineering Task Force) on the 29th and 30th of March. Please message @jules if you are in Prague.
Alex and Colin will be hosting a meetup in London on Thursday 4th April at 18:00. Venue: The Chapel Bar, 29 Penton St, London, N1 9PX. Please come and join us to learn more about our recent developments.
First up, there’s a ton of new code. 50k+ new lines of it. Great job, Nexus devs!
Lower Level Crypto:
Nexus is monitoring new quantum-resistant signature schemes that use lattices to replace the Elliptic Curve Cryptography scheme currently used for private/public keys. Lattices will make your signature chain more secure against quantum computers, but require greater processing, which is already a throughput bottleneck at 4.3k TPS.
Lower Level Database:
A new keychain called a Binary Hash Map has been created as an alternative to the Binary File Map (another form of indexing). Essentially, if the blockchain is indexed by account, the account name is run through a hash function which determines its location in the index, called a bucket. A bucket collision occurs when two different inputs produce the same hash output.
When this occurs, the keychain searches through multiple hashmaps, from the largest index to the smallest, and places the information in the first empty bucket at the calculated position. When the key needs to be retrieved, it calculates the bucket location and then checks that bucket in each hashmap until the key is found. Greater bucket collision resistance makes the Binary Hash Map more efficient.
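The scheme described above can be sketched roughly as follows. This is a toy model, not the LLD implementation: bucket count, hash function, and the direction of the probe are simplified assumptions:

```python
# Simplified sketch of the Binary Hash Map's collision handling: each key
# has a fixed bucket position, and several stacked hashmap "files" are
# probed in order until an empty bucket (on write) or the key (on read)
# is found at that position.
import hashlib

NUM_BUCKETS = 8  # tiny on purpose, to force collisions in a demo

def bucket(key: str) -> int:
    # Hash the key to a fixed bucket index.
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big") % NUM_BUCKETS

class StackedHashMap:
    def __init__(self):
        self.files = [[None] * NUM_BUCKETS]          # start with one hashmap file

    def put(self, key, value):
        i = bucket(key)
        for f in self.files:
            if f[i] is None or f[i][0] == key:       # empty slot, or update in place
                f[i] = (key, value)
                return
        new = [None] * NUM_BUCKETS                   # bucket i full in every file:
        new[i] = (key, value)                        # add another hashmap file
        self.files.append(new)

    def get(self, key):
        i = bucket(key)
        for f in self.files:                         # check bucket i in each file
            if f[i] is not None and f[i][0] == key:
                return f[i][1]
        return None
```

Fewer collisions mean fewer files to probe, which is why better collision resistance makes the structure more efficient.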
The Binary LRU cache is pretty self-explanatory. Information that is used most frequently is retained whilst infrequently used data is replaced.
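For readers unfamiliar with the pattern, an LRU cache can be sketched in a few lines. This is a generic illustration of the eviction policy, not the Binary LRU cache's actual code:

```python
# Minimal LRU cache: a get or put "touches" the entry, and the least
# recently used entry is evicted once capacity is exceeded.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used
```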
The Transaction Journal is like a page file for the blockchain. Rather than holding every transaction that has been processed and is awaiting placement into a block in RAM, which can be corrupted after power failure, the LLD sets aside a portion of the hard drive in a file and stores the pending writes there. After reboot, it can recover if there was a failure at any point in the ACID transaction, due to checkpointing the journal file before events are committed to the main database.
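The journal-then-commit sequence can be sketched as below. This is a rough illustration of the write-ahead idea under stated assumptions (file name, JSON record format, and recovery flow are all invented for the example, not the LLD's actual on-disk layout):

```python
# Sketch of journal-before-commit: pending writes are flushed to a journal
# file first, so a crash mid-commit can be detected on restart and the
# journaled writes replayed into the main database.
import json
import os
import tempfile

class Journal:
    def __init__(self, path):
        self.path = path

    def checkpoint(self, pending: dict):
        with open(self.path, "w") as f:      # journal the writes first
            json.dump(pending, f)
            f.flush()
            os.fsync(f.fileno())             # ensure durability on disk

    def commit(self, db: dict, pending: dict):
        self.checkpoint(pending)
        db.update(pending)                   # apply to the main database
        os.remove(self.path)                 # clear journal once durable

    def recover(self, db: dict):
        if os.path.exists(self.path):        # journal present: crash mid-commit
            with open(self.path) as f:
                db.update(json.load(f))      # replay the checkpointed writes
            os.remove(self.path)
```

A leftover journal file on startup is the signal that the previous commit was interrupted, which is what lets recovery work "at any point in the ACID transaction".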
ACID is not just known for great visual effects and synesthesia, but is a well-known acronym from the world of databases. It stands for Atomicity, Consistency, Isolation and Durability. Meaning:
When a transaction is made, every part of the transaction must be valid or the transaction fails.
Transactions must be in order, for instance a DEBIT must happen before a CREDIT, or for that instant in time the ledger would be in an invalid state.
Once a transaction is made, it is irreversible. You cannot make another transaction that would invalidate a previous transaction. For instance, let’s say I have 1000 NXS and buy something worth 500 NXS. If I try to quickly go and transfer 501 NXS, it would fail.
The LLP handles the networking layer of Tritium, managing sockets and connections with other nodes. The benchmarks reported here are run on the underlay without LISP. As can be seen in the photo, and as Colin explained, this benchmark is not performed with any Ledger layer validation and is simply a load test on network capability alone. Under real-world conditions, this will not be the case.
A transaction object contains all the information needed to make a transaction, for instance sending account, receiving account, amount to be transferred, signature data, public key, tx ID etc. When a new transaction is received, the node performs pre-processing to validate the transaction. When a block is received, the node then checks to see if that transaction is included in the broadcast block, and then commits this collection of indexes to disk using an ACID transaction (remember, ‘all or nothing’). This benchmark of 647ms was for post-processing only. Pre-processing takes a little longer, but given an average block time of 50 seconds, there is plenty of time available to perform this processing.
Preprocessing tests received transactions to ensure that they are valid and do not violate ledger, register, or operations verification. For a limited period (I believe the stated duration was about 12 months), nodes will be able to process both Tritium transactions and legacy transactions. Tritium transactions use the new account model with the new signature chains, whereas legacy transactions still use the UTXO system with private/public key security. Legacy addresses can send to Tritium accounts, but Tritium accounts cannot send to legacy addresses. Any legacy addresses still containing coins after this period will be inaccessible.
All pre-processed transactions are retained in a mempool awaiting inclusion in a block. If a second transaction is received which contradicts or invalidates an already processed transaction, then that second transaction is rejected.
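The conflict rule can be pictured with a toy mempool. The "conflict key" here (a spent input identifier) is illustrative only, not Nexus's actual transaction format:

```python
# Toy mempool: a transaction that conflicts with an already accepted one
# (e.g. spends the same input) is rejected; first-seen wins.
class Mempool:
    def __init__(self):
        self.by_input = {}                   # input id -> accepted txid

    def accept(self, txid: str, inputs: list) -> bool:
        if any(i in self.by_input for i in inputs):
            return False                     # conflicts with a processed tx
        for i in inputs:
            self.by_input[i] = txid
        return True
```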
Tritium blocks do not contain all the information that was contained within the transaction object. They only contain a list of all the transaction IDs contained within the block. When the block is received, nodes compare every transaction ID which has passed pre-processing and if the block contains an unknown or unprocessed transaction ID, then that block is not accepted until that transaction is received and processed. This list of transaction IDs also serves to verify the merkle root that was calculated in the block header.
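Verifying a merkle root from a list of transaction IDs looks roughly like this. The hashing and pairing rules below are the generic textbook construction, not necessarily Nexus's exact serialization:

```python
# Generic merkle root over a list of transaction IDs: hash the leaves,
# then repeatedly hash adjacent pairs until one root remains.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(txids: list) -> bytes:
    level = [h(t) for t in txids]
    while len(level) > 1:
        if len(level) % 2:                   # odd count: duplicate the last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

A node holding all the block's transaction IDs can recompute this root and compare it against the one in the block header.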
Before Amine is implemented, all the pre and post processing is performed by the same nodes via PoS or PoW miners. Under Amine, they will be separated, with pre-processing or transaction validation performed via the PoS/Trust nodes, and post-processing or block verification performed by miners.
Obsidian will split the consensus process over 3 channels: the L1, L2 and L3 locks. The first 2 locks are the same as explained above. The L3 lock is a hardening stage using proof of work. Instead of miners racing to find a winning block hash, each miner works to find the most weighted hash within a 60-second period. Miners then submit this hash (which is based off the L2 locks, the previous block hash, their Signature Chain’s Genesis ID, and a randomly generated nonce) to the network, where the submissions are combined into a single merkle root hash. This way each miner gains a portion of the reward proportional to how much collective weight they contributed.
This explains that by including the register pre-state within the transaction object, it forms part of the transaction hash, which is part of the block header. Light nodes can then check block headers to see if the pre-state of a new transaction formed part of a recorded block. Because this information is recorded within the block header, older pre-states can be pruned and removed from the blockchain, decreasing blockchain bloat. If a node were to attempt to include an older pre-state, it would fail register verification on receipt of a new transaction, because the published pre-state would not be consistent with the current register state in the LLD instance that holds the register data.
This section goes on to point out that extravagant claims of “100k TPS” are unrealistic without some sort of blockchain pruning or sharding, and that network bandwidth and hard drive storage are bottlenecks to this sort of throughput.
The post-state is the new state of the register that is recorded in the database. Transacting nodes have to include a post-state checksum along with the transaction, which must match the one calculated by the validating nodes. This is important for the development of dapps built on Nexus: if a dapp makes a transaction and expects a certain result, the checksum provides the means to prevent the register moving to a state which the application did not intend. It’s a form of error handling.
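The checksum idea can be sketched as follows. The serialization, hash choice, and function names here are assumptions made for illustration; the real register format is different:

```python
# Sketch of the post-state checksum: the sender commits to the register
# state it expects after the operation; a validator recomputes the state,
# and the operation is rejected if the checksums disagree.
import hashlib
import json

def state_checksum(register: dict) -> str:
    # Canonical serialization so the same state always yields the same hash.
    return hashlib.sha256(json.dumps(register, sort_keys=True).encode()).hexdigest()

def apply_debit(register: dict, amount: int, expected_checksum: str) -> bool:
    post = dict(register, balance=register["balance"] - amount)
    if state_checksum(post) != expected_checksum:
        return False                         # state diverged from the app's intent
    register.update(post)
    return True
```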
This section is basically just reiterating the contents of the Tritium whitepaper.
State registers can be manipulated by primitive operations, which will be detailed further below. These registers contain state information for external applications shared across multiple instances. This might be the location of a sea container or the status of a driver’s license.
State registers come in three flavours, Raw, Append, and Read-Only.
Raw registers have no security parameters besides ownership; only the owner of the register can modify its content.
Append registers can only be added to, and retain their original data and state history within the register database. This is useful for tracking ownership of property, land titles, etc.
Read-only registers cannot be changed, even by the owner. They are useful for holding constants or information that will not change.
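A toy model of the three flavours makes the differences concrete. This is an illustration only (the class and flavour names are invented; real registers live in the LLD, not in Python objects):

```python
# Toy model of the three state-register flavours: raw (owner may
# overwrite), append (history preserved, no overwrites), read-only
# (immutable even for the owner).
class StateRegister:
    def __init__(self, owner, flavour, data=b""):
        self.owner, self.flavour = owner, flavour
        self.history = [data]

    def write(self, caller, data):
        if caller != self.owner:
            raise PermissionError("only the owner may modify a register")
        if self.flavour == "readonly":
            raise PermissionError("read-only registers cannot change")
        if self.flavour == "append":
            self.history.append(data)        # prior states are retained
        else:                                # raw: overwrite in place
            self.history[-1] = data
```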
Currently, there are two types of objects, accounts and tokens. These are specialized registers which are used internally within the Operation layer. As such, their data format is pre-set and their contents can only be manipulated with specialized operations.
The Register operation assigns a new memory address to a new register. Think of this as declaring a variable within a program. Eventually, when sharding is implemented, this will need to allow assigning and accessing memory addresses in remote shards through inter-shard communication.
The Write and Append operations are self-explanatory.
The Transfer operation changes ownership of State or Token registers from one signature chain to another. This operation can only be performed by the owner of said register.
A successful transaction of funds uses both a Debit and a Credit operation. These operations can only be applied to Account registers. In the event of a successful Debit operation that lacks a corresponding Credit operation, the issuing account is able to redeem the unclaimed funds.
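The Debit/Credit pairing, including the redeem path for unclaimed funds, can be sketched like this. It is a simplified in-memory model under stated assumptions (account and pending structures are invented for the example):

```python
# Sketch of the Debit/Credit pair: a Debit escrows funds out of the
# sender's account; a matching Credit claims them; an unclaimed Debit can
# be redeemed back by the issuer.
class Ledger:
    def __init__(self):
        self.accounts = {}
        self.pending = {}                    # txid -> (sender, amount)

    def debit(self, txid, sender, amount):
        if self.accounts[sender] < amount:
            raise ValueError("insufficient funds")
        self.accounts[sender] -= amount      # funds leave the account now
        self.pending[txid] = (sender, amount)

    def credit(self, txid, recipient):
        _, amount = self.pending.pop(txid)   # claim the escrowed debit
        self.accounts[recipient] = self.accounts.get(recipient, 0) + amount

    def redeem(self, txid):
        sender, amount = self.pending.pop(txid)
        self.accounts[sender] += amount      # issuer reclaims unclaimed funds
```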
The next two operations, Validate and Require, operate along similar lines to IF control statements within programs. These allow conditions to be placed on the execution of operations and form the basis of Nexus’s Advanced Contracts.
Using Colin’s example of a decentralized exchange of tokens, the user wishing to sell tokens (User A) executes the Require operation, which debits the funds and then waits for the conditions to be fulfilled. The buyer (User B) then performs a Validate operation, which triggers the evaluation of User A’s conditions and removes the funds from User B’s account. Both users are then able to execute a Credit operation to deposit the corresponding funds into their accounts.
January was an extremely busy month, culminating in the release of the open source code for Tritium. An additional 58,627 lines of code have been written since the end of last October. Meanwhile, the UK and Australian Embassies have continued to develop their outreach in their respective regions. All three Embassies have provided an update of their recent progress.
On the 8th of January Colin spoke at CES Las Vegas on the topic of “How blockchain is remaking the Media/Entertainment Business”.
Please watch the recording of his discussion here.
On the 31st of January, the team released the source code for Tritium. Colin published the next update of the TAO series explaining what the code contains and what you are currently able to do with it.
We have been hard at work preparing the Tritium Daemon ready for full release out of public beta testing. Our current data shows incredible optimizations with memory usage at 90 MB, syncing from scratch on Linux at just over 1.25 hours, and syncing on a Mac laptop at 3.5 hours from genesis. This is a drastic improvement from the current legacy synchronization which requires 30 hours or more.
The source code and public beta of the Tritium 0.8.7 Interface have been released, which still contain the legacy cores. The devs are busy with a branch called ‘TritiumCores’, which will contain the work in progress for the Tritium cores.
We are getting ready to integrate the interface with a Tritium daemon for all the features Tritium beta has to offer: fast synchronization, instant loading, wallet stability and minimum hardware requirements.
The API, as it stands, contains three main interfaces: Account, LISP, and Supply Chain. Each of these is tailored for industry-specific applications, such as supply chain management, account verification, and integration services for existing systems. As the API deployment continues, we will keep you up to date with new APIs that are ready for operation. Over the forthcoming weeks, we will continue to add more API calls and validation scripts as we get the testnet ready for deployment. If you would like to contribute to development, please submit pull requests and begin discussions on GitHub.
The Three Pillars
The technology of Nexus is a cornerstone to the development of the community and business adoption. We believe technology, community and enterprise form the three pillars of Nexus, and thus we are happy to see the technology advancing steadily, making strides that have become important for adoption and ease of use.
“Community is the foundation to the growth of Nexus, and the technology and adoption is what supports this.”
We would like to give a special thank you to Shea for writing a simplified version of the last TAO update.
Through our membership with TechUK we have attended/are due to attend a variety of events throughout February:
– Australian-UK Fintech Reception at the Australian High Commission
– TechUK Digital ID Paper launch in Houses of Parliament
– IoT Secure by Design
– Future of Payments in the UK
– DLT Working Group
The DLT working group is of a particular interest, as we have co-written a UK Blockchain White Paper with TechUK and other influential members, due to be published later this year. This paper will be distributed to the 900 businesses who are TechUK members, and will be presented to the UK government with the aim to help encourage blockchain adoption throughout the UK.
Through the efforts of our UK PR agency Nexus was mentioned in the Independent, the Financial Times (both national news outlets) and Yahoo Finance (International) in December of 2018. To be mentioned in such news sources improves our credibility within the digital currency space.
As always, we continue to take a number of business development related meetings.
Since its launch in late November, the Australian Embassy has been working hard to set the foundations for a busy 2019. PricewaterhouseCoopers (PwC) were engaged to assist in the establishment of the business entities and advise on the correct tax framework. They found that the tax treatment of the Ambassador keys did not fit within any of the current Australian Taxation Office (ATO) tax guidelines relating to cryptocurrencies, so this process continues. On the back of these discussions, Paul has been invited to join the Australian Chamber of Digital Commerce (ADCA) Tax Working Group, which directly advises the ATO on tax legislation.
The Embassy has been actively looking for a marketing and PR agency within the Australian and Asia-Pacific regions and several candidates have been found. Most recently we met with the Chief Editor of the new Blockchain Australia magazine, which circulates in the Australian Financial Review. These agencies will expose Nexus to industry leaders and key decision makers through specific publications and articles.
The Embassy has been attending the monthly ‘Crypto Sydney – Intelligence Traded’ networking events developing some great contacts.
If you would like to join the guys in Sydney you can sign up to the event here.
We are planning to have a presence at the ADC Global Blockchain Summit in Adelaide this March, as well as the APAC Blockchain Conference in Sydney in June, the largest blockchain event in the Asia Pacific region, to meet with high profile businesses and explore potential use cases for Nexus.
Paul joined the development team in November and has hit the ground running, working alongside the other developers around the globe to complete the Tritium core. The recent downturn in NXS price has curtailed our plans to expand the development team within the Australian Embassy, but we are confident we will get our plans back on track later in the year. Nicco and Mike continue to work as volunteer directors.
We would like to give a big thank you to all the dev team for their hard work and our community for their continued support, and look forward to all your feedback from the forthcoming testing of Tritium.
Dino and Colin will be at the IETF (Internet Engineering Task Force) in Prague in the last week of March. We will be organising a workshop on the afternoon of the 29th and the 30th. We will confirm the location of the venue at a later date on social media. We hope to meet many of you there.
In this edition of the TAO update series, I will explain what has been completed thus far, what is left to do, and what you can do with Tritium after you read this article. So, let’s get started with the usual git pull origin master.
As you can see, an additional 58,627 lines of code have been written since the last TAO update, which equates to roughly three months of solid coding since the last git pull. This averages out to around 651 lines of new code every day since the end of October. Anyhow, let’s begin by first taking a look at the acronym of our framework: TAO.
This word comes from a Chinese classical text, the Tao Te Ching, which has been studied by some of the greatest philosophers of our time. It represents an idea that contains the principles of balance and order in the greater concepts of the mind.
“Tao is hidden, and has no name; but it is the Tao which is skillful at imparting (to all things what they need) and making them complete”
Lower Level Library
The Lower Level Library (LLL) is the foundation of the TAO, comprising three components: Crypto, Database, and Protocol (the network layer).
Lower Level Crypto
There is not much to report here, other than cleaning up some memcpy calls in the Skein and Keccak functions, along with research into some promising candidates for a lattice-based signature scheme. Right now the NIST competition is in the first round of the review process. We will observe how this evolves over the next year to identify which candidates to pursue. In the future, we may try out hybrid signature schemes on a test network to see the effectiveness of lattice and elliptic curve hybrid signatures. The data and computational overhead would be higher, but the security parameters of our public keys would inherit a higher degree of quantum resistance compared to that provided by our current use of Skein and Keccak.
Lower Level Database
The following new components have been added to the Lower Level Database:
Binary Hash Map — This is a hashmap with a very low memory footprint and on-disk indexing, which handles bucket collisions in O(n) reverse iterations (linear time), and is designed for write-intensive applications. Write capacity has peaked at around 450k writes / second, with reads peaking at 25k reads / second from disk, and 1.4m reads per second if cached.
Binary LRU Cache — LRU stands for Least Recently Used, meaning the cache keeps only the elements that have been used recently and discards the oldest. This makes for a more efficient cache implementation compared to FIFO (First In, First Out).
Transaction Journal — This is an anti-corruption measure for handling an ACID transaction, which recovers the database from invalid states in the case of power failures, program crashes, or random restarts.
Each one of these components is part of the modular framework, which you can see if you go to the src/LLD/cache, src/LLD/keychain, and src/LLD/templates folders. The most exciting piece is the addition of the Transaction Journal. Before I give a deeper explanation of this, let us review what is meant by the term ‘ACID’.
Atomicity — All transactions are seen as individual units that together must complete as a whole.
Consistency — All transactions must bring the database from one consistent state to another.
Isolation — Transaction reads and writes with concurrent execution must leave the database in a valid state, as if they were being processed in series.
Durability — Once a transaction is committed, it must stay so even in the event of a power failure. This usually means committing the transaction to disk.
Huh? I can explain this some more. Think of a database transaction as a commitment of many pieces of data that synchronize together in an amalgamation of information. To understand this better, let us use a real-life example: Kim will only give John an Apple if Carry gives Sue a Peach. If this were a database transaction, it would mean that all the prerequisites must be committed together, and if any of them were to fail, the entire transaction would fail. Let’s combine this with an ACID expression.
Atomicity — Kim, John, Carry, and Sue (individual units) exchanging fruit (the whole).
Consistency — Carry -> gives Peach to Sue -> then Kim -> gives John an Apple.
Isolation — If Carry and Kim both execute the giving of their Peach and Apple at close to the same time, the ordering must be correct in the consistency sequence, which means the Peach must be given to Sue before the Apple is given to John.
Durability — If Carry and Kim agree to the exchange, but never fully execute it by exchanging the Peach and the Apple due to an error, such as the Apple being forgotten by Kim, then the Apple and the Peach may never reach John or Sue. In this case, the commitment existed (in memory), but it never obtained durability since the physical exchange did not complete.
I hope the above helps you understand the importance of an ACID transaction, of which one of the most important pieces is the ‘Durability’ component. When implemented with the proper logic, this can result in a database that cannot be corrupted, even under conditions of power failure. Let me explain how this is achieved.
The Transaction Journal
Before the implementation of the Transaction Journal, every sequence of a transaction was executed in memory; the database only recorded the state, accompanied by the pending disk write, once the transaction had committed. Transaction journaling introduces an on-disk checkpointing system that detects whether there was an interruption during the transaction commit process. When the database re-initializes, it is able to detect any corruption, allowing the journal to be used to restore the database to the most recent transaction checkpoint, even across many database instances. Therefore, at the sacrifice of a little speed, we achieve higher levels of durability for the database engine. The latest statistics in our 100k read and write test ran as low as 0.33 seconds with the Binary Hash Map, down from 0.86 seconds when using the Binary File Map.
Lower Level Protocol
As I’m sure many of you remember, our last test ran as high as 200,000 requests / second. I’m happy to report that the new numbers stand at:
A request is a message from one computer sent to another computer which is
generally a request for a piece of data on the remote computer, such as
a web page for a web server. Our latest test above shows the peak
performance of the Lower Level Protocol at 452,171 requests / s, more
than double the figure from the last test we reported.
The above demonstrates the capabilities of the Network layer, without Ledger
layer validation which confirms that the network can handle very large
workloads. It is important to have efficiency in all parts of a system
in order for it to scale effectively. The efficiency of an application
comes directly from the level of physical resources required to perform
the task at hand.
The ledger contains two components to its processing, the transaction
objects and the blocks that act to commit transactions to disk, and
therefore the database. Think of any blockchain as a verification
database system, where the data is required to be processed before it is
allowed to be written to the disk. Along with this pre-processing,
every single node in the network must agree on the outcome of the
process, arriving at the same state in a synchronized ACID transaction which is carried by a block. In the case of Nexus, we follow a similar model. However, we perform the Consistency preprocessing before allowing for synchronized Isolation and Atomicity, and perform the post-processing verification afterwards as the final block receipt, allowing for Durability.
The Tritium transaction object contains aspects of the ledger for pre and
post processing, the register pre-states and post-states, and
finally the operations payload that is responsible for mutating the
register states.
The software stack for Tritium has come a long way in recent months. Now that we have a foundation provided by the Lower Level Database and Lower Level Protocol, it has been fun to plug in some of the features that form the layers above. Below is a more recent stress test that verifies a block that is at full capacity (approximately 2MB). This block as you can see contained 32,252 transactions, and processed in 647 ms.
The test verified the time required for ‘post-processing’, which is the
processing required after a block is received and is then added to the
chain. The required time for ‘pre-processing’ which is the processing
required before a block is received, was not included in this benchmark
test. Let’s dig a little bit deeper into what all this means, and how
these specific elements are prerequisite to Amine.
Pre-processing is the processing required for an object before it becomes a
part of the ledger. This will generally be checking for conflicts within
the database system such as spends or register pre-states, and then
more complex pre-processing such as signature verification. It is
important to note that our tests have shown that signature verification
is the biggest bottleneck in the processing of any transaction or
contract in Tritium. Since we use a 512-bit standard for key sizes which
raises our security to around 2²⁵⁶ bits (2¹²⁸ for Bitcoin, since ECC
only retains about half of the key length in usable security due to
different types of attacks), we have more signature data that is
required to be processed when a transaction is received.
Pre-processing in Tritium will be performed through the memory pool. Since Tritium
blocks do not include the whole transaction object, they only contain
references to objects that they are committing to disk (think of a block
as a sort of ACID
transaction). This means that if a block is submitted that contains a
txid (transaction i.d.) that has never been publicly known by the nodes
on the network, this block will not be able to propagate until the
receiving nodes are able to run the pre-processing for that particular
txid. Consequently, if a miner tried to submit a malicious transaction
in a block as an attempt to double spend a transaction already accepted
in the memory pool, they would find it increasingly difficult to get it
added to the main chain (i.e. verified by validating nodes). This is
because none of the nodes would have the pre-processing data required to
accept this block, and their conflicting transaction would in most
cases fail to be accepted, since it has a direct conflict
with another transaction that had already passed pre-processing.
Preprocessing in Amine
will be aggregated into two processing layers, namely Trust and
Miners. This means that Trust nodes will be mainly providing
preprocessing to the network, and miners providing the post-processing.
Post-processing is the processing required when a block is received, in order to fully
commit components of the data, and change the register pre-states into
their post states with verified checksums. The example above was pure
post-processing which showed that our post-processing layers scale quite
nicely, with a maximum of around 40–50k tx/s if split into a two-tier
(pre/post) processing system. Our two-tier processing system will be the
main aspect of the Amine architecture upgrade, along with additional operations and registers, and deeper/more advanced LISP functionality (we will explain how LISP shards will function in a later update).
With Obsidian, the two-tier process will become a three-tier process, which when integrated will have pre-processing (L1 processing channels), post-processing (L2 trust channels), and hardening (L3 distributed mining).
It is important to understand how the present Tritium architecture is
setting the foundation for all that is to follow. As many of you will
know, as with any undertaking, once the foundation is set, it is not
easily changed unless one takes apart the entire system. This is why it
was so important to give Tritium the time it needed.
The pre-processing and post-processing are divided into two tiers as well,
the pre-state and the post-state. This is important to know, so that you
can understand how the registers act to modulate their states.
Understanding this will help discover some of the benefits of pre-states
in a chain, and how a node can prune prior pre-state data based on the
verification of a transaction in a block object. A register post-state
could be considered one individual unit of Atomicity.
Register pre-states contain the current database state of the given register
before the operations execute. It is packaged into a binary format
inside the transaction object as the means of verifying that the initial
claimed state is the same state that the network currently contains for
that register. The benefits of this come two-fold: one, that you are able to roll back the
chain without having to iterate back an unspecified number of blocks to
find the state mutation of the previous register, and two that you are
able to know the state of the register without having to calculate all
its previous states. This adds additional benefits, such as being able
to run nodes in ‘lighter’ mode, where nodes are only required to verify
chain headers (which contain references to all of the transactions in a
block), to know that a transaction with a given pre-state was included
in a block. This allows for ‘light’ verification of a pre-state, i.e
that the transaction was confirmed with the consensus of the network at a
given block height, and therefore is indeed valid.
With the growth of the network and size of the ledger (one aspect of scaling
to consider), we can prune the data held by the ledger by removing old
pre-states, which lowers the data requirement and creates a more
efficient and sustainable network over an extended period of time. By
implementing this architecture now, we won’t end up with an over-baked
architecture in the future that can’t handle the overwhelming volume of
data that has been processed in the past.
When you hear of projects boasting 100k tx/s, or even 1M tx/s, let’s look at what this really entails:
On average, a tritium transaction will be a minimum of 144 bytes, and a maximum of 1168 bytes.
Let us take a best case scenario, with a normal OP::DEBIT / OP::CREDIT being around 24 bytes, so an example of a transaction that is 166 bytes.
Let us now multiply this number by 100,000 transactions which equals 16,600,000 bytes per second, or 16 MB per second. This means your internet connection would need to support at a minimum 16MB per second, or a 128 Mbps connection.
Now beyond that, let us look at the damage as it compounds. 16MB per second multiplied by 86,400 seconds (1 day) is 1,382,400 MB, which is 1.3 TB per day. Multiply this by 365 for a one year period and we have 504 TB per year consumed. This is obviously not possible on consumer grade hardware.
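The arithmetic above can be checked directly; note that the daily and yearly totals in the text use the rounded 16 MB/s figure:

```python
tx_size = 166      # bytes: the 144-byte minimum plus a ~24-byte OP payload
tps = 100_000      # the claimed throughput

bytes_per_sec = tx_size * tps               # 16,600,000 B/s, roughly 16 MB/s
mbps = bytes_per_sec * 8 / 1_000_000        # raw bandwidth in megabits/s

mb_per_day = 16 * 86_400                    # 16 MB/s for one day, in MB
tb_per_year = mb_per_day * 365 / 1_000_000  # yearly storage, in TB
```

So 100k tx/s of even small transactions already implies roughly 133 Mbps of sustained bandwidth and around half a petabyte of ledger growth per year.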
The above proof shows that the claims of such grandiose scale are most
likely rooted in either folly or malarkey. For us, our pre-processing
and post-processing systems, LISP data shards, Lower Level Database, and
register Pre-States will help scaling significantly, but there is no
way of knowing the exact scale that will be able to be achieved until
demonstrated in real world conditions, over a long period of time. Right
now, our results are promising, seeing that we are achieving a
reasonable scale in post-processing, and developing architecture that is
able to shard the pre-processing to exceed the 4.3k tx/s bottleneck from signature verification.
Each register has a pre-state which is used by the operations layer for
execution to move the register into its post-state. A post-state is what
is recorded in the register database as the new state of the register
after the transaction has completed. In order to not weigh down the
register script (some of the binary data packed into a transaction), we
included what is called a post-state checksum at the end of a register
pre-state. Therefore, any validating node will compare their calculated
post-state to the post-state checksum that was included with the
transaction. One of the benefits of this is that a transacting node is required to do the
calculations themselves, to prove that they have done honest work. Other
validating nodes verify this calculation by comparing their new
register state checksum to the post-state checksum included in the
transaction. For those that are able to house extra data on their hard drive, their node
can be enabled to show the history of the registers without much
processing required. Since the keychain object that is used for the
register database is a binary hash map, you can enable it to operate in
APPEND mode, which will append new data to the end of the corresponding
database files, enabling a user to reverse iterate from the end of a
hashmap collision, which will show the sequence of the register history.
This is very useful for registers used in supply chains or other
‘history’ related chains, such as the transfer of ownership of titles
and deeds for example.
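The pre-state / post-state checksum verification described above can be sketched in a few lines of Python. The function names and the 8-byte SHA-256 checksum are illustrative assumptions, not the actual Nexus implementation:

```python
import hashlib

def checksum(state: bytes) -> bytes:
    """Illustrative stand-in for the register state checksum."""
    return hashlib.sha256(state).digest()[:8]

def execute(pre_state: bytes, operation) -> bytes:
    """Run the operations payload against the pre-state to produce the post-state."""
    return operation(pre_state)

def validate(pre_state, operation, claimed_post_checksum):
    """A validating node re-runs the operation on the claimed pre-state
    and compares its own post-state checksum against the one packaged in
    the transaction -- proof that the transacting node did honest work."""
    return checksum(execute(pre_state, operation)) == claimed_post_checksum
```

The transacting node bears the cost of computing the post-state; validators only need to recompute and compare, which is what keeps the register script light.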
There are a few different types of registers that determine what types of
operations can be executed on them. As you know from the tritium white
paper there are object registers and state registers. Let’s briefly
explain what each one is for:
A state register is one that holds the state for a component of an
external application, with no specification on the data format, which
means that specialized operations cannot be applied to these registers:
TYPE::RAW — A raw register is a register with a given number of bytes that can be
written to or appended to at any time. It is the most versatile type of
register, with no security parameters applied to it. Each WRITE is recorded immutably in the ledger, but the register being RAW, its contents can be overwritten by the owner at any time. A WRITE is only permissible if done from the signature chain that is the current owner.
TYPE::APPEND — An append register is similar to a raw register in that it is created
with a given number of bytes, but this type of register can only have an
APPEND operation applied to it to change the data state. This means that in the
database itself, the original data always exists, and so does
the history of all APPEND operations. A WRITE operation on this type of register will fail, even if done by the current owner. Therefore, an APPEND
register has security parameters associated with it that make it useful
for applications that would like to be able to update a register
without losing the data that existed before. This makes every APPEND immutable, while still allowing the register to be extended.
TYPE::READONLY — This type of register is useful for a ‘write once’ type of register. It is only possible to use the ‘OP::REGISTER’
operation for this type, since it can only be written to once. This
would be similar to a ‘const’ type in any language, and contains
security properties that are useful for certificates of authenticity,
titles, deeds, or contracts that the creator/publisher would like never
to be modified.
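The write rules for the three state-register types can be summarized in a toy model. The class and method names are illustrative, not the Nexus register implementation; READONLY data is modelled as written once at creation:

```python
class Register:
    """Toy model of the three state-register types and their write rules."""

    RAW, APPEND, READONLY = "raw", "append", "readonly"

    def __init__(self, rtype, owner, data=b""):
        # For READONLY, this initial write is the only write it ever gets.
        self.rtype, self.owner, self.data = rtype, owner, data

    def write(self, who, data):
        if who != self.owner:
            raise PermissionError("only the current owner may WRITE")
        if self.rtype != self.RAW:
            raise TypeError("WRITE fails on APPEND and READONLY registers")
        self.data = data          # full overwrite: RAW only

    def append(self, who, data):
        if who != self.owner:
            raise PermissionError("only the current owner may APPEND")
        if self.rtype != self.APPEND:
            raise TypeError("APPEND is only valid on TYPE::APPEND registers")
        self.data += data         # the original bytes are never lost
```

Ownership gating plus per-type operation rules is all the "security parameters" amount to in this sketch: RAW is owner-overwritable, APPEND preserves history, READONLY is a const.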
Object registers are more specialized, as it is necessary for the operations
layer to be able to recognize the data type that they contain. This is
useful for specialized operations that require knowledge of the format
of the data that the register contains. The following Objects are
defined and useable in the current source code.
TYPE::ACCOUNT — This is a specialized register that contains details regarding
someone’s account. An account can contain the balance of any type of
token, as it is denoted by a token identifier. Token identifier 0 is a
reserved identifier and is used for the native NXS token.
TYPE::TOKEN — This is a specialized register that contains the details of a token,
and claims that token identifier for use of the specific token. This
register contains information regarding the significant figures of a
token, and other parameters to define the total supply, and the total
supply that has been made available to the public.
The operations layer now contains a foundational set of processes, which
act as the ‘Primitive’ operations. These together allow the creation of
records, history, tokens, transfers, and non-fungible tokens. Let us go
through each operation one by one, to explain what each one is capable of.
OP::REGISTER — This operational code creates a new register with a memory address assigned
to it. The memory address must be unique, and will index the data of the
register. Think of it as an abstracted memory address that comes from
getting the memory location of a variable (in the programming language
C/C++, this would be with the symbol ‘&’ which is an abstract of a
machine address), but it lives in the Nexus Blockchain. This will be
further abstracted towards Amine, when addresses will not only be ‘locally accessible’, but will be ‘network accessible’.
Though replicating the exact same state across the system does provide
added levels of redundancy, it evidently limits the potential of the
system to scale. Creating shards of the data work load into ‘network accessible’ groups
is therefore necessary, where specialized processing is performed by
different groups and types of nodes, whilst retaining the levels of
redundancy that replication provides.
Network-specific addressing is one of the innovations that is designed to solve
the data overhead problem outlined in the above section regarding
scaling. The two most notable bottlenecks that limit scaling are
signature verification and the increasing amount of data overhead that
compounds very quickly as volume increases. A scalable system is not one
that can simply ‘process’ X many transactions per second, but one that
can still function after processing X many transactions per second for
years on end. Even if one were to use conventional data structures that
go as low as O(log n), when the system scales to billions of keys, the
processing can still become quite large, especially when indexing from disk.
OP::WRITE — This primitive operation initiates a ‘write’ on a register, which overwrites all the data of the pre-state with the new data of the post-state. It has certain limitations, such as the register must be a TYPE::RAW
type, and the total number of bytes being written must be the same as
what it had prior. This type of operation is generally best suited for
applications that are submitting raw data into the ledger, to enable the
immutable storage of certain events such as submitting a proof hash
into the public ledger from a hybrid system, or having their application
require certain JSON to be submitted into a register.
OP::APPEND — This primitive operation acts on a register of TYPE::APPEND,
and adds data to the end of the register, without modifying the
original data. Useful examples of this operation would be flagging a
title that is claimed by an insurance company, or updating specifics
about an item along a supply chain. Since the original data is always
retained in the append sequence, updates to a register via OP::APPEND
provide a useful audit and history mechanism.
OP::TRANSFER — This operation allows the ownership of a register to be transferred from one signature
chain to another. A transfer can also be instantiated to another
register such as a TYPE::TOKEN
if someone would like a token to govern the ownership of a register.
This is how joint ownership can be provided between individuals, as the TYPE::TOKEN
then represents the ownership. This can also be useful for showing the
chain of custody between parties of a supply chain. If one wants to
create non-fungible tokens, this would be the method that is used to
transfer the ownership of the non-fungible token, with the non-fungible
token generally being a TYPE::READONLY
register with an identifier specifying parameters regarding an object.
This could be a simple digital item with JSON specifications, and the
transfer operation would be the proof of ownership of that digital item
or non-fungible token.
OP::DEBIT — This operation is responsible for the commitment of funds from one account
to another. It is quite like the ‘authorize’ of a debit card
transaction. When this operation is instantiated, the funds do not move
to the receiving account until the other user (the receiver) issues
their credit. The acceptance of the transaction by the receiver
completes the commitment. This operation works only on a TYPE::ACCOUNT object register, and can handle the debiting from any type of token by any identifier.
OP::CREDIT — This operation is responsible for the final commitment of funds from one
account to another. Together the debit and credit produce a ‘two-way
signature’, which reduces the chance of funds being lost due to the use
of an incorrect address. If the funds are not accepted by the receiver
within a specified time-window, they are then redeemable by the OP::DEBIT
issuer. Therefore, funds will never be lost if sent to an invalid
address. Another additional benefit of this is allowing a user to reject
funds sent to their account if there is question of who the funds came
from. It also provides the option to generate a whitelist of addresses
from which the user will automatically accept transactions. This is
important for monetary safety, as if you receive a mysterious deposit
in your account, there is no way of knowing who sent it or why.
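A minimal sketch of the debit/credit ‘two-way signature’, including the reclaim path for funds that are never accepted. All names, the dict-based ledger, and the time-window mechanics here are illustrative, not the Nexus operations layer:

```python
import time

class Account:
    def __init__(self, balance=0):
        self.balance = balance

def op_debit(sender: Account, amount: int, window: int = 3600):
    """Commit funds out of the sender. Nothing arrives anywhere yet --
    this is only the 'authorize' half of the two-way signature."""
    if sender.balance < amount:
        raise ValueError("insufficient funds")
    sender.balance -= amount
    return {"amount": amount, "issuer": sender,
            "expires": time.time() + window, "claimed": False}

def op_credit(debit, receiver: Account):
    """The receiver accepts the debit, completing the commitment."""
    if debit["claimed"]:
        raise ValueError("already claimed")
    debit["claimed"] = True
    receiver.balance += debit["amount"]

def op_reclaim(debit):
    """After the time window, unclaimed funds return to the issuer,
    so a debit to an invalid address is never permanently lost."""
    if debit["claimed"] or time.time() < debit["expires"]:
        raise ValueError("not reclaimable")
    debit["claimed"] = True
    debit["issuer"].balance += debit["amount"]
```

The key property the sketch shows: funds leave the sender on the debit, but they only ever land in the receiver's account through an explicit credit, or back with the issuer after the window expires.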
What are the next operations?
The next two operations are very important, as they unlock the ‘validation
scripts’ which act as small computer programs that define the movement
of NXS. Validation scripts enable the full potential of the operations
layer, allowing functions such as the decentralized exchange of assets
to tokens, tokens to tokens, irrevocable trusts, programmable accounts,
and more.
The validate function will execute the corresponding OP::REQUIRE
with the necessary parameters. If the validation executes to true, then
the requirement will be satisfied. The OP::REQUIRE sets a boolean
expression that must evaluate to true
in order for a transaction to be claimable. An example would be OP::REQUIRE TIMESTAMP GREATER_THAN 1549220657, meaning that the corresponding transaction would not be able to execute until that timestamp has been reached.
Introducing the DEX
The DEX will work as a native extension of the OP::REQUIRE and OP::VALIDATE operations. It can be thought of as this:
User A wishes to sell 55 of Token Identifier 77. They want to sell it for Token Identifier 0 (NXS).
They choose their price: OP::DEBIT <from-account> <claim-account> 55 OP::REQUIRE TIMESTAMP LESS_THAN 1549220657 AND OP::DEBIT <my-account> 10.
In this above script <my-account> will be an account with identifier 0, and <from-account> will be of token identifier 77.
User B wishes to buy the 55 of Token ID 77. They send a transaction such as: OP::VALIDATE <txid> OP::DEBIT <from-account> <to-account> 10
Since this includes an OP::VALIDATE, it triggers the validation of the corresponding OP::REQUIRE submitting the parameters it is verifying. Since the OP::DEBIT was one of the parameters to the OP::REQUIRE, this will evaluate to true, satisfying the validation script.
User A can now submit a transaction: OP::CREDIT <txid> <claim-account> 55
User B can now submit a transaction OP::CREDIT <txid> <claim-account> 10
In the above sequence, 4 transactions are executed to facilitate the
decentralized exchange between two different types of tokens. This
process can also be programmed for the decentralized exchange of an
asset to a token, or even an asset to an asset. I will explain more on
how this works and how we see the growth of the DEX in the next TAO
update.
The API as it stands contains two types, Accounts and Supply. The
implementation details for now are therefore for the purpose of
demonstration only, using only a simple combination of operations such
as OP::APPEND, OP::TRANSFER, and OP::REGISTER, for example.
Please keep your eyes peeled for additional API calls that will be
shown in the API documentation. I will explain how to interact with the API below.
Use a web browser to access the JSON responses.
You can use a web browser to make API requests to your Tritium node. This
is achieved by submitting a GET request to the API endpoint. This will
always be the IP address of the node and port 8080, followed by the method being invoked.
An example would be:
The above request will log you into the API and return a session
identifier. The session identifier should be included in all subsequent
requests to the API for methods that require authorization. Your PIN is
required for any transaction requiring authorization to ensure that even
in a case where your username and password were compromised, your PIN
will still be required in order to access your account. This gives
similar properties to 2FA that most login systems utilize today.
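As a sketch, such a GET request could also be issued from Python. The endpoint and parameter names below are assumptions for illustration; take the exact method names from the API documentation:

```python
import json
import urllib.parse
import urllib.request

NODE = "http://127.0.0.1:8080"   # local Tritium node, default API port

def build_url(method, **params):
    """Compose the GET URL exactly as you would type it in a browser."""
    return f"{NODE}/{method}?{urllib.parse.urlencode(params)}"

def api_get(method, **params):
    """Issue a GET request to the node and decode the JSON response."""
    with urllib.request.urlopen(build_url(method, **params)) as resp:
        return json.load(resp)

# Hypothetical usage -- method and parameter spellings are placeholders:
# session = api_get("accounts/login", username="you", password="...", pin="1234")
# Subsequent authorized calls would then pass the returned session identifier.
```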
Create a login page in your website powered by the Tritium daemon
You can embed a custom HTML form into your website to use a Tritium daemon
as a secondary login system that gives verification properties to your
web service. In the future, a login over the API will also trigger a
unique EID that is coupled with the login, making your service immune
to IP spoofing. The API handles application/x-www-form-urlencoded, so make sure to include your parameters in your form as follows:
The page you are sent to afterwards will include the JSON response data
that includes the genesis ID and the session identifier to be used for
all subsequent calls to the API that require authorization. This way you
can give a user secure access to their signature chain through your
service node in your online service. Importantly this gives users a way
to access their sig chain without needing to run a full node, and
without giving up custodianship of their funds and account information.
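The same login can also be performed programmatically by POSTing form-encoded fields, exactly as an HTML form does. The method and field names are again placeholders to be checked against the API documentation:

```python
import json
import urllib.parse
import urllib.request

def encode_form(**fields):
    """Encode fields exactly as a browser submits an HTML form."""
    return urllib.parse.urlencode(fields).encode()

def api_post(node, method, **fields):
    """POST application/x-www-form-urlencoded fields to a Tritium node
    and decode the JSON response."""
    req = urllib.request.Request(
        f"{node}/{method}",
        data=encode_form(**fields),
        headers={"Content-Type": "application/x-www-form-urlencoded"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Hypothetical usage against a local node:
# session = api_post("http://127.0.0.1:8080", "accounts/login",
#                    username="you", password="...", pin="1234")
```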
Embed contracts into your web application.
Since the API supports application/x-www-form-urlencoded,
you are able to embed any contract functionality into your existing web
application, either by forwarding forms through the API and applying a
forwarding url to pass through, or by making custom forms that use the POST aspect of the API to process webforms. The above HTML example is a basic webform which can be integrated with your existing login system. To extend this, you can make calls to the API via AJAX
or more complex forms inside your system. This means that to build with
Nexus Advanced Contracts, all you need is to hire a web developer who
is able to ‘plug and play’ the correct sequence of API calls into your existing system.
Use contracts or tokens in your regular desktop application
The API also supports application/json to make requests to the API via any of our provided Software Development Kits (SDKs), so that your native application can take advantage of the API. Currently, we provide a Python SDK for use in any external python application, which can be found in the repository in the folder named ‘SDK’.
We would like to encourage developers to build software development
kits in their languages of choice for the API and contribute to the open
source development of Nexus.
Please refer to the following API documentation for up-to-date documentation on all APIs and calls that are available:
As any new call is implemented for -testnet or -private
mode, the corresponding documentation will be included. Please give
feedback if you find any information difficult to understand, and we
will modify the documentation to communicate it in a clearer manner.
Logical / Interface
We are making progress on the App Store, which will be a developer
friendly area to buy, sell and share Nexus apps. Our current design is
‘module’ based. However, this is only the first iteration of the App
Store. We will give more details on how the App Store will develop, and
how we will provide security to the applications supported by the App Store.
Request for new standards
Standards in the API and requests for new calls can be formally submitted and discussed on this mailing list here: [email protected].
Requests to lower layers such as new register types or operations can
be submitted to the same location. Please do give feedback if you find
anything you believe could be improved.
Command-line Flags Available
The following flags are available for use with the Tritium Daemon. Some are
experimental and are undergoing debugging, while others are hardened
and are ready for use.
-fastsync (experimental) — this flag will reduce your required synchronization time by a factor of 2.
-beta — sync your Tritium node on the mainnet with legacy rules. This will
allow you to run a Tritium node on the mainnet, which gives you access
to all the nice Tritium features such as sub-second load time, quick
synchronization time, and database stability.
-private — run your node in private mode to access the API functionality and
build local contracts. Post-processing is done via a private block, and
clears in sub-second intervals.
-legacy — use legacy specific RPC formatting for nodes that need to retain backwards compatible formatting
— add foreign indexes for all blocks by height as well as by hash. This allows
the indexing of blocks by height from disk, but requires extra disk space.
-testnet — run your node in testnet mode over LISP or the regular underlay. This
will synchronize you to the test network, and requires mining to produce
valid blocks and commit post-processing data from your API calls.
The repository has specific semantics for each branch. The following list
will briefly describe the purpose of each branch, and what each means for
what you are testing:
Personal — any branch that is named after a user such as viz, jack, scottsimon, paul, or dino. We recommend NOT building from a personal branch, as the code you pull will be incomplete or in development.
Merging — this branch is used to merge code between developers. Any code that exists on merging is still considered ‘unstable’, so if you decide to test off of the merging branch, do so with a debugger (debug instructions below). We recommend NOT using this code unless you are a qualified tester or developer.
Staging — this branch is used for pre-releases. This means that code is in Beta,
and is ready for wider public testing. Once code reaches staging, we
will periodically include pre-release candidates and binaries with
revisions and stability fixes. This branch is for public testing before
the release of official binaries.
Master — this branch will be the least updated, so if you are looking for the
most recent code, any of the aforementioned branches will keep you up to
date. Code is only pushed to master when a FULL
release is made, accompanied by a release candidate, binaries, and a
change log and description. The code on master can only be merged from
pre-releases in staging.
First, you will need to have a debugger handy. If you are on Linux, make sure to have gdb installed. This can be installed via: sudo apt-get install gdb
For OSX, the debugger is included with your Xcode command-line tools, named lldb.
Next, make sure to build the source clean by issuing this command: make -f makefile.cli clean
Next compile it with: make -j 8 -f makefile.cli ENABLE_DEBUG=1
Once this completes, you will need to start Tritium up with your debugger such as: gdb nexus
This will then enter you into a new command-line console, in which you want to type: run -beta -fastsync -gdb
Without the -gdb flag, the daemon will close if you press the return key, due
to the debugger generally catching all the signals before the daemon does.
If you ever run across a point where the program crashes, get the backtrace by issuing the following command: bt
Take this backtrace and submit it to the #dev channel in slack for assessment.
If you have already been testing or are looking to start helping test the
core, I would like to extend a big thank you for all your help!
Check out docker if you want to deploy nodes over LISP. You can find docker documentation here:
Well, that is about all I have to report as of now. I hope that you continue
to watch the progress on our repositories, continue to give us feedback,
and of course, have fun doing it! Remember, if you’re not having fun,
you’re not doing what you love, so on that note, I will leave you to
ponder on what it is that brings you the greatest joy. In the meantime:
Enterprise adoption is instrumental to blockchain technology becoming mainstream, and Nexus Advanced Contracts are the next step in leading this progression. Existing Smart Contracts have experienced issues in relation to ease of use and scalability due to a Turing complete system. Addressing these issues, Nexus has produced what is in essence a ‘Register-based Virtual Machine’, set for release in January 2019 with the Tritium upgrade. Tritium will allow developers to access the technology of Advanced Contracts simply through an API set. Before an explanation of Advanced Contracts is given, some context will be provided as to how conventional Smart Contracts function.
Smart Contracts are self-executing. Their design is to enforce the terms and conditions of a contract through programmable logic, reducing the need for third party intermediaries such as brokers and banks. Smart Contracts are an additional layer of processing above the ledger layer, i.e what is known as ‘the blockchain’, and are comparable to small computer programs that hold a state of information. The calculations of the contract are carried out by the processing nodes of a blockchain, which change the state of the information. Given that the calculations or processing is carried out by distributed consensus, the state of a Smart Contract is immutable.
Bitcoin was the first cryptocurrency with built-in Smart Contract capabilities, which it calls ‘scripts’. Scripts are not Turing complete and consist of byte code. Ethereum extended these capabilities into its ‘Turing Complete Smart Contracts’, which can be adapted to developers’ needs. Ethereum gives developers more access to contract functionality on a blockchain through a custom programming language called Solidity, which is compiled into assembly language that runs on the Ethereum Virtual Machine (EVM). The EVM is a ‘Stack-based Virtual Machine’ that processes each instruction in turn.
Though very capable, Ethereum has experienced some issues with regard to security, performance, and ease of use, predominantly because of its Turing complete design. Some notable cases include the $75m DAO hack on Ethereum, and the $286m Parity bug. These vulnerabilities existed due to the large complexity of a Turing complete system, and the resulting difficulty of resolving bugs in a protocol written in immutable code. The complexity of operations that support universal computation or Turing complete designs also limits scalability. A universal system has a higher degree of complexity, and therefore cannot compete with technology that is designed for more specialized tasks. An example of this observation is the comparison between a CPU (Central Processing Unit) and an ASIC (Application Specific Integrated Circuit) in the mining of cryptocurrency. A CPU can’t compete against a SHA256 miner, as its complexity and design are geared to support universal general computation, not specialized computation. A similar conclusion can be drawn when comparing the system design of Ethereum (universal) with that of Nexus (specialized).
Nexus Advanced Contracts
Nexus has developed a ‘Register-based Virtual Machine’, a specialized contracting engine with greater capabilities than the EVM. Unlike the EVM, which is defined by only two distinct layers of processing and is dependent on a Turing complete system, the Nexus contract engine is facilitated through the seven individual layers of the Nexus Software Stack, each designated to carry out specialized processes.
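To make the stack-versus-register distinction concrete, here is a toy contrast (illustrative only; these instruction formats are hypothetical and not actual EVM or Nexus bytecode) showing the same addition expressed for a stack machine, which shuffles values through a stack, and for a register machine, which names its operands directly.

```python
# Toy comparison of the two VM styles discussed above. Not real bytecode.

def run_stack(program):
    """Stack machine: operands are pushed, operators pop and push."""
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

def run_register(program, regs):
    """Register machine: each instruction names its source and destination."""
    for op, dst, a, b in program:
        if op == "ADD":
            regs[dst] = regs[a] + regs[b]
    return regs

# 2 + 3 in each style
assert run_stack([("PUSH", 2), ("PUSH", 3), ("ADD",)]) == 5
assert run_register([("ADD", "r2", "r0", "r1")], {"r0": 2, "r1": 3})["r2"] == 5
```

The register form needs no intermediate stack traffic, which is one reason register designs can be simpler to analyse and optimise for specialized workloads.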
The third layer of processing is called the Register Layer. Here, the states of the individual pieces of information contained by Advanced Contracts are recorded in architectural components called registers. Registers are used by typical computer processors to provide fast access to frequently used values, and the same principle applies here. With respect to Nexus Advanced Contracts, each register is owned by a Signature Chain, so the ownership and write access of a register are validated by the second layer, the Ledger Layer. The fourth layer is the Operation Layer, which defines the rules for state changes to a register, called ‘operations’. The operations are carried out by validating nodes that change the state of the registers by distributed consensus. This design provides the required functionality of a contract engine, without the complexity and complications of a Turing complete system.
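The interaction between the Register Layer and the ownership check can be sketched as follows. This is a minimal model with hypothetical names, not the Nexus codebase: a register carries a state and an owning Signature Chain, and a write operation is rejected unless the caller is that owner.

```python
from dataclasses import dataclass

@dataclass
class Register:
    owner: str      # identifier of the owning Signature Chain
    state: bytes    # the raw state held by the register

def apply_write(reg: Register, caller: str, new_state: bytes) -> bool:
    """A 'write' operation: succeeds only for the register's owner."""
    if caller != reg.owner:
        return False          # ownership check fails -> operation rejected
    reg.state = new_state     # consensus-approved state change
    return True

reg = Register(owner="sigchain-alice", state=b"v1")
assert apply_write(reg, "sigchain-bob", b"stolen") is False  # rejected
assert apply_write(reg, "sigchain-alice", b"v2") is True     # accepted
assert reg.state == b"v2"
```

In the real system the ownership proof and the resulting state change are verified by every validating node, so the check above stands in for distributed consensus rather than a single local test.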
The ownership of a register can be transferred, enabling many proof-of-ownership use cases. Examples include titles, deeds, digital certificates and records, agreements, or any other digital means of representing tangible assets or time-stamped events. A register can also be owned and governed by another register, creating a relationship between many users. Relations can be used as proofs on the Operation Layer to provide additional functionality. An example would be a register that holds metadata representing the ownership of an item, itself owned by another ‘token register’. Token ownership then signifies partial ownership of the item, which opens up further use cases such as royalty payments with split ownership.
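The two ideas above, transferring a register and splitting ownership through a token register, can be sketched in a few lines. The names are hypothetical and the model is deliberately simplified: a transfer only succeeds when requested by the current owner, and a royalty payment is divided pro rata across token holders.

```python
from dataclasses import dataclass

@dataclass
class Register:
    owner: str    # current owner: a Signature Chain or a token register
    state: bytes

def apply_transfer(reg: Register, caller: str, new_owner: str) -> bool:
    """Transfer of ownership succeeds only for the current owner."""
    if caller != reg.owner:
        return False
    reg.owner = new_owner
    return True

def split_royalty(balances: dict, payment: float) -> dict:
    """Divide a payment pro rata across token-holder balances."""
    total = sum(balances.values())
    return {holder: payment * bal / total for holder, bal in balances.items()}

# A deed register is transferred to a token register...
deed = Register(owner="sigchain-alice", state=b"title-deed")
assert apply_transfer(deed, "sigchain-alice", "token-register-1") is True

# ...whose 60/40 holder split yields a 60/40 royalty split on 100 NXS.
assert split_royalty({"alice": 600, "bob": 400}, 100.0) == {"alice": 60.0, "bob": 40.0}
```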
Conditions or stipulations can also be coded into Advanced Contracts by validation scripts or Boolean logic. Validation scripts require a transaction to fulfill a certain set of conditions to execute, which allows a user to program in stipulations on the exchange of NXS, tokens or any other digital asset. This allows a user to void transaction orders, place time locks on funds, or exchange any digital asset without a central intermediary.
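A validation script of this kind can be thought of as a set of boolean conditions that must all hold before the transaction executes. The sketch below is a hypothetical illustration of that idea, using a time lock as the example condition; it is not the actual Nexus scripting interface.

```python
import time

def validate(conditions) -> bool:
    """A transaction executes only if every condition evaluates to True."""
    return all(cond() for cond in conditions)

# Example condition: a time lock that frees funds after a unix timestamp.
unlock_at = time.time() - 1                       # already expired here
expired_lock = lambda: time.time() >= unlock_at

# A lock one hour in the future still blocks execution.
future_lock = lambda: time.time() >= time.time() + 3600

assert validate([expired_lock]) is True
assert validate([expired_lock, future_lock]) is False
```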
Advanced Contracts, which will be accessible through an API set, will be able to improve many existing processes, including digital ownership, tokenization of assets and enterprises, digital rights, royalty payments, supply chain management, escrow services, financial applications, legal documentation with digital signatures, and many more.
The standards for object registers, operation codes, and API methods will be defined through working-group consensus, to ensure a consistent connection between developers and users. Nexus borrows a model similar to that of the Internet Engineering Task Force (IETF), which provides the working groups for all RFC (Request for Comments) standards. This is important to drive a vibrant ecosystem forward. Just as we have seen with the success of the internet, we hope to continue this success in the next era of global connection: blockchain, artificial intelligence, and satellite communication.
It is clear that organisations can only operate effectively with easy access to products and services. Likewise, no organisation can continue to grow if late payments and poor procurement processes remain in place. This is where blockchain technology can play a crucial role, in both the modernisation and improvement of the logistics and operations which are vital to the performance of supply chain systems.