April Newsletter

Since early March, the Tritium testnet has been running with the primitive core APIs and operations, and we are now engaging a security firm to perform a code audit. The audit will cover the Staging branch (Tritium beta running over the legacy rules) and the wallet.

Tritium Staking

The Proof-of-Stake system is almost complete. The reward earned now goes to your spendable balance, whereas your stake remains locked up. If you make a transaction that withdraws from your stake account, it will result in a partial loss of trust, depending on the percent withdrawn.

Recovery of Signature Chains

We have designed a method for password recovery by building into the signature chain a ‘master seed’ or ‘recovery’ phrase, which will be 20 words (128 bits of entropy), similar to hierarchical deterministic wallets for Bitcoin. The master seed phrase provides extra assurance, enabling one to recover a signature chain in the event of a forgotten password or a compromised account.

Reversible Transactions

With the implementation of signature chains, transactions are sent to your sig chain and must then be claimed by you. This prevents people from losing funds sent to non-existent accounts. Every transaction will have a timeout, so that if the recipient does not accept it within the set timeframe, it can be returned to the sender. This is facilitated through validation scripts, which are appended to the end of a transaction and specify its behaviour and the conditions under which it can be claimed. Because transactions are by nature reversible, one can debit and then credit back to oneself; the validation script specifies the timeframe after which the transaction can return to the sender.
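The claim-and-timeout flow described above can be modelled roughly as follows; the class and field names are illustrative only, not the actual Tritium operation set:

```python
from dataclasses import dataclass

@dataclass
class PendingDebit:
    """A debit awaiting a CREDIT claim by the recipient (hypothetical model)."""
    sender: str
    recipient: str
    amount: int
    sent_at: float   # when the debit was made (seconds)
    timeout: float   # seconds the recipient has to claim it

    def can_claim(self, claimant: str, now: float) -> bool:
        # Before the timeout, only the recipient may claim;
        # after it, the sender may credit the funds back to themselves.
        if claimant == self.recipient:
            return now - self.sent_at < self.timeout
        return claimant == self.sender and now - self.sent_at >= self.timeout

tx = PendingDebit("alice", "bob", 100, sent_at=0.0, timeout=86400.0)
assert tx.can_claim("bob", now=3600.0)        # recipient claims in time
assert not tx.can_claim("alice", now=3600.0)  # sender cannot reverse yet
assert tx.can_claim("alice", now=90000.0)     # after timeout, returns to sender
```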

Object Registers

Object registers are now complete and running in the core. They allow user-defined, type-safe objects that combine data members that are either mutable or immutable, and that can be accessed or modified through different formats in the API, such as JSON or XML.

Validation Scripts

Validation scripts operate on a 64-bit register virtual machine, whereas Ethereum operates a 256-bit stack virtual machine. Ethereum has gone beyond the hardware’s limits, which has led to major slowdowns, since a processor can only operate natively on 64 bits at a time. To add to this, an Ethereum account balance uses 256 bits, where a balance is unlikely ever to need more than 64 bits. This creates a massive bottleneck in the processing of every account on Ethereum, which is one reason it has had trouble scaling. Our register virtual machine performs at 30 to 50 nanoseconds per instruction, compared to an average instruction time on Ethereum on the order of 1,767,258 nanoseconds. You can read in the article below how moving gas computation to 64 bits increased Ethereum virtual machine performance by roughly 70%. This shows that using 64-bit CPUs to synthesize 256-bit arithmetic produces significant performance losses for no apparent advantage.
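As a rough illustration of why a 64-bit register machine maps cleanly onto commodity CPUs, here is a minimal register-VM sketch; the op names echo the benchmark output further down, but this is not the actual Nexus instruction set:

```python
MASK64 = (1 << 64) - 1  # the native word size: all arithmetic wraps at 64 bits

def run(program, registers):
    """Execute a tiny register program; each op writes to a destination register."""
    ops = {
        "ADD": lambda a, b: (a + b) & MASK64,
        "SUB": lambda a, b: (a - b) & MASK64,
        "MUL": lambda a, b: (a * b) & MASK64,
        "DIV": lambda a, b: a // b,
        "INC": lambda a, _: (a + 1) & MASK64,
    }
    for op, dst, src in program:
        registers[dst] = ops[op](registers[dst], registers.get(src, 0))
    return registers

regs = run([("ADD", "r0", "r1"), ("MUL", "r0", "r1"), ("INC", "r0", None)],
           {"r0": 10, "r1": 5})
assert regs["r0"] == 76  # ((10 + 5) * 5) + 1
```

Each operation here is a single machine-word computation, which is why such a design avoids the synthesized 256-bit arithmetic the paragraph above describes.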

Optimising the Ethereum Virtual Machine


SDKs

SDKs (software development kits) are written in a native programming language and abstracted on top of the API, allowing one to make native calls to access the functionality of the actual API. Dino has been working on the Python SDK, and Daniesan on the C# SDK; we hope to see more community involvement in the development of SDKs. This approach makes it simple to port into any language without having to handle the HTTP protocol. The API will also contain an ‘events’ processor that will automatically respond to notifications, to accept and claim new transactions.

API Cookbook

Anyone who is keen to learn can join the testnet and use a new cookbook Dino created, which we hope will give better insight into the use of the API. The link is as follows:

API Cookbook

The Module Framework & Wallet

The module framework is an exciting feature of the wallet. It has now been integrated into the main UI code. The core modules are complete and awaiting final security audits. We are in the process of obtaining developer certificates for both Apple and Microsoft for the auto updater, which means that the binaries themselves will be signed by a certificate that is approved through each of these vendors. This will also solve some of the flagging issues when downloading the application.

In the last Zoom meeting, Kendal gave a demonstration on the Binance module to show the functionality of a module:

Watch the Zoom 05/01/19 here

Benchmark Results

The following benchmark results were produced on a consumer-grade Apple laptop:

[13:47:57.791] ADD::Processed 34.7858 million ops / second

[13:47:57.933] MUL::Processed 35.1867 million ops / second

[13:47:58.174] EXP::Processed 20.7175 million ops / second

[13:47:58.313] SUB::Processed 35.9769 million ops / second

[13:47:58.418] INC::Processed 38.1832 million ops / second

[13:47:58.523] DEC::Processed 38.1486 million ops / second

[13:47:58.663] DIV::Processed 35.8513 million ops / second

[13:47:58.805] MOD::Processed 35.1786 million ops / second

[13:47:59.211] SUBDATA::Processed 9.85513 million ops / second

[13:47:59.358] UNIFIED::Processed 20.3705 million ops / second

[13:48:00.353] Parse::5.02705 million values / second

[13:48:00.363] Write::102.072 million uint8_t / second

[13:48:00.374] Write::89.6941 million uint64_t / second

[13:48:00.384] Write::97.8953 million strings / second

[13:48:00.394] Write::98.5319 million vectors / second

[13:48:00.405] Read ::97.1723 million uint8_t / second

[13:48:00.415] Read ::92.584 million uint64_t / second

[13:48:00.425] Read ::100.746 million strings / second

[13:48:00.436] Read ::93.633 million vectors / second

Signature Chain Generation with Argon2

In the first Zoom meeting of April, Colin showed the speed of signature chain generation. We have designed it to be slow, to limit offline password attacks. Generating a Genesis ID for a signature chain took 891 milliseconds. This means that if someone tried to brute-force your account, Argon2 would limit the number of passwords that can be guessed in a given period of time. Users will also be able to increase the duration of Argon2 to heighten security, and enterprises will be able to reduce it to increase transaction speed.
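To see what an 891 ms derivation time buys, a back-of-the-envelope calculation (the password alphabet below is an assumption chosen for illustration, not a Nexus requirement):

```python
attempt_time = 0.891               # seconds per Genesis ID, as demonstrated
keyspace = 26 ** 8                 # e.g. an 8-character, lowercase-only password
worst_case_seconds = keyspace * attempt_time
worst_case_years = worst_case_seconds / (365 * 24 * 3600)
print(f"{worst_case_years:,.0f} years")  # prints "5,900 years"
```

Even for a deliberately weak alphabet, exhausting the keyspace at ~1.1 guesses per second takes millennia; a realistic mixed-case alphanumeric password pushes this far higher.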

Fee Model

Ethereum charges a fee per transaction called ‘gas’, which is based on the computational complexity of a contract. In the case of Nexus, we can choose which transactions will carry a fee. DEBITS and CREDITS will be free on Nexus, though other functions, such as creating a token or registering a vanity name, will carry a cost to prevent these resources from being consumed frivolously. Our current fee model reduces the supply of NXS as demand increases.

Nexus Ecosystem

In the first Zoom meeting of April, Colin presented a live demonstration of a split revenue payment, to exemplify the distribution of royalties between three parties.

Watch the Zoom 04/11/19 here

The below article introduces Fungible and Non-Fungible Tokens, and Hybrid and Sister networks, which will form the basis of the Nexus Ecosystem.

The Nexus Ecosystem


We have released a roadmap that outlines the core features of the TAO Framework, which will be updated to reflect ongoing development.

TAO Framework Roadmap

Educational Articles

The following articles describe how reputation or trust embedded in a consensus mechanism can drastically improve its tolerance to malicious actors.

Reputation Based Consensus – Advanced

Reputation Based Consensus – Simplified

UK Embassy

We attended two conferences in April:

  • The BlockPass Security Token (STO) event which discussed the future of tokenized securities and blockchain applications.
  • The Olympia London expo – a multi-feature event focusing on Blockchain, IoT, Cyber Security and AI/Big Data.

We also took part in our quarterly TechUK DLT Working Group – the paper we have been working on will be published in June.


Dino and Colin participated in the IETF (Internet Engineering Task Force) meeting in Prague at the end of March. They presented the fourth revision of draft-farinacci-lisp-decent-03, showing screenshots of demos for the push-based and the new pull-based decentralized LISP mapping systems. Once the main LISP RFCs are accepted as proposed standards, they will request working group adoption for LISP-Decent.

Thanks for reading,


The Nexus Ecosystem

Over recent years there has been a growing interest in the possible application of tokens. Generally, this has been limited to the raising of capital in the form of Initial Coin Offerings (ICOs) on the basis of the ideas of a whitepaper. Conversely, Nexus is building different token functionality to power applications with utility that extends beyond a store of value. Our technology is essential to creating many diverse and vibrant, interconnected networks to form the Nexus ecosystem.

Non-Fungible Tokens (NFT)
Nexus ‘non-fungible tokens’ are designed so that each token represents total ownership of a physical or digital asset. The metadata that underlies the token forms a digital certificate representing that asset.

Fungible Tokens
Nexus ‘fungible tokens’ hold identical information and are interchangeable, and can either represent a store of value such as a native token, or enforce partial ownership or a ‘right’ to an underlying digital or physical asset. Any digital or physical asset can be licensed by attributing ownership of the underlying asset through token issuance. ‘Proof of rights’ to this license, are thus determined by the total number of tokens a certain user has. These rights can represent shares of a company, asset, or copyright, and therefore can facilitate the automatic dispersal of revenue streams for payments such as royalties and dividends, for example. Token distribution allows the network to enforce the automatic splitting of license payments, providing users free exchange of their rights to subsequent revenue streams.

In the above diagram, we show how the flow of a split payment functions, assuming that the asset has been created by the owner and the tokens (TKNs) have been created and distributed to the token accounts. In this example, the split is 50-25-25. First, a user pays a license fee (here, 1,000 NXS) for use of an asset represented by metadata. The network then detects that the asset is owned by the token account holders. The token holders are notified to claim their percentage of the payment (DEBIT), which is their token balance divided by the total token supply. Each token holder is then able to CREDIT their account, proving their right to this payment with their TKN balance.
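The 50-25-25 split above can be verified with a small calculation; the account names and token supply are placeholders:

```python
TOKEN_SUPPLY = 1000
balances = {"owner_a": 500, "owner_b": 250, "owner_c": 250}  # the 50-25-25 split
license_fee = 1000  # NXS paid by the licensee

# Each holder's claimable share is (their balance / total supply) * payment
claims = {h: license_fee * b // TOKEN_SUPPLY for h, b in balances.items()}
assert claims == {"owner_a": 500, "owner_b": 250, "owner_c": 250}
assert sum(claims.values()) == license_fee  # the full payment is distributed
```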

Thus, fungible tokens issued on Nexus are far more than a store of value; they have the ability to represent a right to a revenue stream of an underlying asset represented by an NFT. This empowers many people and organisations, including artists, musicians, inventors, scientists, enterprises, agricultural producers, schools, charities and gamers.

Decentralized Exchange
Under development is the Nexus Decentralized Exchange (DEX), which will allow the buying and selling of any registered asset represented by an NFT, through its certificate of authenticity, in exchange for any token. All Fungible and Non-Fungible Tokens will be tradable between buyers and sellers without a third party, custodian, or any ‘permissions’. Therefore, the entire network itself is a native DEX for items that exist on the Nexus blockchain. No matter the value of the item being traded, orders will be fulfilled as soon as they receive enough confirmations.

Wallet Modules
Currently in Beta, the Nexus interface is capable of running third party modules which extend the standard logical and interface layers provided through the wallet. We believe it is important to allow the customization of the interface, to allow developers to build modules that support other coins, exchange trading dashboards, or custom Tritium applications such as online games.

“If an asset or token has to be listed by anyone other than the owner of the asset or token, it’s not a decentralized exchange.”

The Nexus Ecosystem
Permissionless ‘Sister’ networks and permissioned ‘Hybrid’ networks can be run, so that services and applications do not need to develop their own blockchains, but can instead leverage the modular Nexus Software Stack. The two types of networks will be able to choose their own consensus rules, and the Nexus blockchain will record intermittent block hashes from them as proofs of immutability. Hybrid networks will have similar properties to sister networks, though they will be less open and designed for managing private services. Both types of networks will use PoS (Proof of Stake) based consensus with their own native tokens. We expect that the growth of services and applications, as part of the Nexus ecosystem, will begin to flourish in the following sectors:

Arts, Music & Science – Patents, Trademarks, Licences, Copyrights, Royalty payments

Education – Certificates, Badges, and Scholarship Credits

Supply chains – Shipping, Logistics

Fund/Capital Raising – Securitised Token Offerings (STOs)

Voting – Token Voting Structures for organisations and companies

Working Groups
Nexus has adopted the Internet Engineering Task Force’s (IETF) time-tested open process through ‘Working Groups’. Our Working Group model connects a decentralized collection of people who work together to set standards or develop new components of our technology.

Coders-wg: Develop code and set higher level standards for the Nexus Architecture

Use cases-wg: Discuss functionality for different sectors

Communications-wg: Write content for the website, WIKI, and social media

Website-wg: Develop the standards and design of the official Nexus Website

Graphics-wg: Develop graphics to support the Nexus Brand

Social-wg: Discuss and design Decentralized Voting Structures

Translations-wg: Translate content for the website

If you are interested in joining any of the working groups please email [email protected]. We welcome the community to set up additional working groups not listed above if you feel it would contribute to the standards and development process.

Read more:

TAO update – simplified



Reputation based Consensus ~ Simplified

The notion of a “trustless” ledger continues to gain in popularity. A public blockchain is often described in this way because it does not require a “trusted” third party. The responsibility of third parties such as banks, payment or credit card providers is to act as the authority that prevents double spending. Blockchains, on the other hand, use various types of consensus mechanisms to reach agreement on the ledger and to authenticate each transaction. Ironically, trust is still a factor in trustless systems, derived not from a central system but from decentralized consensus.


The cost to attack a decentralized consensus mechanism is directly related to its security. To increase security, Nexus implements reputation as a cost, related to how consistently a staker contributes resources over time. A mechanism called “Trust” records past work to create a reputation system, which is currently implemented on the Nexus Proof of Stake channel.

Proof of Stake is a form of mining based on ownership of a digital currency. This ownership represents a “stake” in the sense of an interest in something. By staking, NXS holders earn a NXS Stake Reward. NXS can only be staked inside the official Nexus wallet. In return, stakers are rewarded for operating the wallet (a Nexus node) which provides security to the network.

Trust adds another weighting to the security of the Nexus protocol. A Trust score is defined as the total time a specific user has contributed weight, or real-time resources, to the network. A Trust score is gradually built as one consistently operates a node in an honest, trustworthy, and timely manner, validating transaction data by running a wallet on a computer with a continuous internet connection (24 hours a day, 7 days a week). In some circumstances, such as when a node goes offline for a significant period of time, the node’s Trust score will be reduced. Therefore, one has an incentive to operate a node continually and consistently, providing security to the Nexus network.

The Stake Reward rate depends on the node’s Trust Score. The Stake Reward rate is a value that represents your current annual NXS rate of return (%). Unlike most other PoS systems, the Nexus reward rate isn’t constant. The rate starts at 0.5%, and can increase to 3.0% after 12 months of consistent staking. The rate increase is nonlinear, slowing in terms of its increase over time. It takes several weeks of consistent staking to reach 1.0%, and around four months to reach 2.0%. With this rate, you can calculate the average amount of NXS you can expect to receive each day for staking.
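For example, the expected daily reward follows directly from the annual rate (the balance and rate below are purely illustrative figures):

```python
balance = 20000          # NXS held at stake (illustrative)
annual_rate = 0.02       # a 2.0% Stake Reward rate, reached after ~4 months
daily_expected = balance * annual_rate / 365
assert round(daily_expected, 2) == 1.1  # about 1.1 NXS per day on average
```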

The key to a good reputation system lies in the effort required in gaining a reputation versus the comparative ease of losing it. By coupling an economic incentive with greater trust, such as higher returns on verification, there is a non-trivial cost incurred by loss of reputation. Trust in our implementation is gained by consistent block production within a three day moving window. If this time is exceeded, the value of trust decays at a rate of 3x, which means if a node misses one day of staking, it receives a penalty of three days worth of lost trust. This mechanism forms a basic foundation for the discernment of the quality of nodes.
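The 3:1 gain/loss ratio can be modelled roughly as follows; this is an illustrative sketch, not the consensus code itself:

```python
DECAY = 3  # trust is lost three times faster than it is gained

def update_trust(trust_days: float, hours_since_block: float) -> float:
    """Apply the 3x decay once the three-day moving window is exceeded
    (an illustrative model of the rule described above)."""
    window = 72.0  # three-day moving window, in hours
    if hours_since_block <= window:
        return trust_days
    missed_days = (hours_since_block - window) / 24.0
    return max(0.0, trust_days - DECAY * missed_days)

assert update_trust(100.0, 48.0) == 100.0  # inside the window: no penalty
assert update_trust(100.0, 96.0) == 97.0   # one day late: three days of trust lost
```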

It is possible to stake with a single NXS at a rate of 0.5%. However, as of 15th April 2019, it takes at least ten thousand NXS to be able to reach the maximum stake reward of 3.0%. The amount of NXS required to achieve a higher stake reward depends on its fiat price, as an increase in the price of NXS increases the required amount to attain the maximum stake reward. Trust supports the viewpoint that “not everyone has money, but everyone has time,” implying that anyone can build trust if they have some time, as only a minimal amount of hardware and NXS is required to begin building a trust score.

Similar to other forms of mining, Proof of Stake mining has a level of difficulty. As more people successfully stake on the network, the difficulty of staking increases. This results in an increasing amount of NXS required to increase the stake reward. Furthermore, a larger balance of NXS in the wallet will increase the frequency of NXS rewards.

Trust adds a layer of protection against attacks that further increases the Nexus network’s Byzantine Fault Tolerance. Together with the other two Nexus mining channels, limits on block frequency, and ten-minute decentralized checkpoints, a successful 51% attack is very unlikely, because it would take an enormous amount of resources and time to gain enough trust from the network to take control of all three channels. Not only does Trust increase the security of the protocol, it also increases its efficiency, and therefore its potential to scale.

Extending Trust

We believe reputation is an important mechanism to take into consideration when discerning the mathematical truth of a decentralized consensus. Reputation in Tritium will extend beyond just Trust, by implementing signature chains. A signature chain is comparable to having a personal blockchain, which can be accessed by a username, password and pin, and augmented with various hardware password managers or biometric usernames. The result is a transparent ledger of events associated with a given user, that can provide the data set to form more complex reputation systems interpreted from this series of events. We plan on extending our current reputation systems into many more areas, such as our multidimensional chain.

For more information please read

“Reputation based Consensus”

“Tritium Trust White Paper”

“Signature Chains”

Reputation based Consensus

Today, there are a handful of consensus mechanisms that have been designed to create decentralized networks. Though all of these mechanisms serve to protect against Sybil attacks and double spending, many have a limited ability to capture the reputation of the nodes in the network. Most, such as Cosmos or Casper, follow pBFT (Practical Byzantine Fault Tolerance) with stake-based weighting. Though these consensus mechanisms are BFT below 33%, this can be improved through a reputation system that utilizes time as an equally available weight, extending the security of conventional consensus mechanisms.


Ironically, trust is still a factor in trustless systems, with trust derived not from a central system but from decentralized consensus. Although decentralized consensus mechanisms are resistant to manipulation, they become vulnerable when one party begins to control at least 33% of nodes for pBFT, or 51% of network computational power for PoW (Proof of Work). By studying these threats, we have found that including reputation as a part of the consensus process can improve the byzantine fault tolerance (BFT). Further reading on this topic can be found in the link below, which proposes a reputation protocol that claims a 20% increase in fault tolerance.

Guru: Universal Reputation Module for Distributed Consensus Protocols

Nexus Proof of Stake

Nexus currently implements a reputation or trust-based proof of stake protocol that maintains random selection inherent in pure Nakamoto consensus, but also overlays a reputation to each validating node. The reputation of a node combined with their stake produces a weight that determines their probability of finding the next block. In order to provide the proper incentives for validators to gain trust, the rate of return ranges from 0.5% to 3.0% after a time period of 1 year. Trust in our implementation is gained by consistent block production within a three day moving window. If this time is exceeded, the value of trust decays at a rate of 3x, which means if a node misses one day of staking, it receives a penalty of three days worth of lost trust. This mechanism forms a basic foundation for the discernment of the quality of nodes.
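As a sketch of how trust maps onto that 0.5%–3.0% reward range, here is an illustrative curve in integer basis points (the real rate increase is nonlinear; this linear version only illustrates the floor, cap, and one-year ramp):

```python
def reward_rate_bp(trust_days: int) -> int:
    """Annual stake-reward rate in basis points, rising from the 0.5% floor
    to the 3.0% cap over one year (an illustrative linear ramp only)."""
    floor_bp, cap_bp = 50, 300
    return floor_bp + (cap_bp - floor_bp) * min(trust_days, 365) // 365

assert reward_rate_bp(0) == 50      # a new staker starts at 0.5%
assert reward_rate_bp(365) == 300   # 3.0% after a year of consistent staking
assert reward_rate_bp(1000) == 300  # capped thereafter
```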

Reputation and Relationships

The system tolerates byzantine faults through the distribution of validators and the implementation of relationships between nodes. In our context, reputation is designed as a public indicator of a node’s history, whereas a relationship is a private indicator of the quality of a node’s interactions with its peers. In this respect, it is easier to prevent a byzantine fault if the probability of an assumed fault is known. In simple terms, one can more easily discern the difference between a byzantine fault and an honest node based on previously experienced faults.

Extending Reputation

We believe reputation is an important resource to take into consideration when discerning the mathematical truth of a decentralized consensus. With the knowledge we have gained through our current implementations, we plan on extending our current reputation systems into many more areas. Through our architectural development named the “TAO” (Tritium, Amine and Obsidian), we are deploying reputation into our multidimensional chain primitives, as part of the immutability and authenticity (Y) axis.

Extending Relationships

We have noticed several benefits of nodes keeping a history of their subjective relationships with one another, that is a private indicator of the quality of data and communication between nodes. This is not suitable for consensus critical rules, but rather for the detection of malicious actors in a system. The result of this, through some of our basic implementations, is an ability for the network to discourage dishonest behavior without experiencing consensus failures. This allows imperfection in detecting qualities of good and bad, while detecting potential byzantine faults in advance and the option not to propagate them. These concepts have been tested, where dishonest blocks could be detected and not relayed by a consistent set of rules. If other nodes on the network still propagated these blocks and built upon them, they would then be seen as a valid part of the blockchain and a false positive realized.

Reputation in Tritium

Reputation in Tritium will extend beyond just trust keys, which are the basis for the legacy client, by implementing signature chains. Signature chains are a hybrid signature scheme that use hash-linking, and asymmetric cryptography to form a primitive user-level blockchain. This chain contains all the actions invoked by a specific user, without revealing their actual identity. The result is a transparent ledger of events associated with a given user, that can provide the dataset to form more complex reputation systems interpreted from this series of events. The enforcement of reputation on the ledger layer is through the 1:3 ratio for staking currently implemented, and the aforementioned relationship system on the network layer.

Mining Reputation

Miners will see their reputation improve through consistent actions performed on the mining network as Amine and Obsidian approach release. This will give a variable reward model similar to nPoS, but with the requirement being mining power to produce consistent blocks over time. These reputation models will favor nodes with a consistent history, and will penalize nodes that hop from blockchain to blockchain in pursuit of profit. As reputation will be a factor in mining profitability, incentives will align miners to contribute more consistent power to the network consensus, providing better security properties. Miner reputation could provide greater resistance to 51% attacks, similar to how reputation can improve the pBFT model by 20% or more.

For more information please read

“Tritium Trust White Paper”

“Signature Chains”

Nexus Newsletter March 2019

March was another busy month of coding, with an additional 68,000 lines of code written and the release of the Tritium testnet. The development team have also been holding weekly zoom meetings, for which we have provided some of the highlights.


Zoom Meeting 27/02/19

Zoom Meeting 06/03/19

Zoom Meeting 20/03/19 Part 1

Zoom Meeting 20/03/19 Part 2


Tritium Testnet

The team has run several successful mining/sync/fork recovery tests. On the back of this, the tritium testnet is now open to public connections (we were previously whitelisting connections to developer IPs only). You are welcome to connect to the testnet to test mining and basic account / API use. Please join #tritium-testnet if you want to participate, as you will need to know the current testnet number to set testnet=xx in your nexus.conf. At the time of writing we are using testnet 11.


To test the tritium core in beta mode on the legacy mainnet, please use the ‘Staging’ branch in github.

To test the tritium features (signature chains, APIs) on the tritium testnet, please use the ‘Merging’ branch in github.



The final improvements to the legacy wallet code are complete, and the Tritium wallet with legacy back-end now syncs in under one hour. We’re closing off some performance issues with the legacy wallet, specifically the rescan function. In order to do this we are making changes to the way we access data in the LLD to perform better serial access, as opposed to random access for which it was designed.



A new LISP-Trace monitoring tool has been written called ‘ltr’ which shows the path a packet takes from source EID to destination EID, as well as the return path. This is a very useful tool for debugging the LISP connectivity and messaging issues.

Encrypted Pointers

An encrypted pointer encrypts the data at its memory location using AES-128. This makes it very difficult for a virus to ‘eavesdrop’ and potentially steal sensitive data, such as your sigchain login or PIN, by reading process memory. It is also useful for developing applications that rely on critical information in memory, and is available in the LLL utilities.


The code for this can be read here.

Signature Chain Indexes

Nodes on the Tritium Protocol keep track of global indexes, meaning that you don’t need to rescan a node when logging in to it for the first time. This makes managing notifications (transactions that require your acceptance, such as debits and transfers) much more efficient.



Argon2 is now being used for key and username generation. It is a memory-hard password hashing algorithm with variable complexity arguments, meaning we can control how many seconds it takes to generate a key or username. The time it takes an external ‘hacker’ to brute-force a sigchain offline is now computationally bound by memory latency, leveling the playing field between devices: an FPGA, ASIC, or even a GPU farm has little competitive advantage over a CPU.

Our current Argon2 settings require at least 0.3 seconds to generate a new key, meaning one is only able to ‘try’ about three passwords per second. Combined with a minimum requirement of eight alphanumeric [a-Z, 0-9] characters per password, even if the username and PIN were compromised, the time required to crack the password would be on the order of 2.3 million years. Biometric username generation will be a further step in strengthening your credentials and sigchain access, by increasing the physical requirements to gain access.
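The 2.3-million-year figure can be checked directly from the numbers above:

```python
keyspace = 62 ** 8               # 8 alphanumeric characters [a-zA-Z0-9]
guesses_per_second = 3           # bounded by ~0.3 s per Argon2 key attempt
seconds = keyspace / guesses_per_second
years = seconds / (365 * 24 * 3600)
assert 2.2e6 < years < 2.4e6     # matches the ~2.3 million year figure
```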


The code for this can be read here.


Falcon is a very compact lattice-based cryptographic algorithm and a second-round candidate in NIST’s Post-Quantum competition. Its computational requirements are around 1/40th of ECDSA’s, which means signatures can be verified very quickly. The downside is that the public key and signature together total about 1.5 KB. Though Falcon is based on aged and proven mathematics (NTRU lattices), it has not undergone as much cryptanalysis as ECC or RSA. Falcon is now running on the testnet, and more information can be read about it here:



Our wrapper and integration of FALCON can be read here.

Tritium API 

The Accounts, Tokens, and Assets APIs are now available for people to test. A recent demo shows how to use some of these commands, and can be found here:

Zoom Meeting 20/03/19 Part 2

These APIs also provide functionality for an asset to be owned by a token, to create what is known as ‘Tokenized Ownership’. Your token balance represents your partial ownership in the underlying asset. Therefore, tokens can allow the function of automatic dividend payouts (split revenue) without the requirement of a third party custodian.

The team has implemented support for sessionless API use. This simplifies the process for users who would like to interact with their sigchain and use the various APIs with the CLI (command-line interface), without having to keep track of, and supply their session ID with each API call. This makes the usage more akin to the legacy RPC CLI. The API will default to sessionless, though can be switched to session-based by adding -apisessions=1 to your config.

We have been working closely with the seed node operators and block explorer developers to shape the requirements for the Ledger API, thanks to @mercuryminer, @psipherious, and @danialsan for their input. A new getblocks method has been added to allow batches of up to 1000 blocks and their transactions to be retrieved in a single call (taking about 3 seconds in testnet), which is crucial for block explorers and other data aggregators.
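A consumer of getblocks might drain the chain in batches along these lines; the fetch function is stubbed here, and the real Ledger API method signature may differ:

```python
def sync_chain(fetch_batch, start_hash, batch_size=1000):
    """Retrieve the whole chain in getblocks-style batches (illustrative)."""
    blocks, cursor = [], start_hash
    while True:
        batch = fetch_batch(cursor, batch_size)
        if not batch:
            return blocks
        blocks.extend(batch)
        cursor = batch[-1]["hash"]  # resume after the last block received

# Stub: a toy chain of 2500 blocks served in slices, standing in for the API
chain = [{"hash": f"h{i}", "height": i} for i in range(2500)]

def fetch_batch(after_hash, limit):
    start = 0 if after_hash is None else \
        next(i for i, b in enumerate(chain) if b["hash"] == after_hash) + 1
    return chain[start:start + limit]

assert len(sync_chain(fetch_batch, None)) == 2500  # three batched calls
```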

Next week, work will start on the Network and Legacy APIs, which will serve as drop-in replacements for many of the frequently used RPC commands, so that users and integrators only have to use the new API rather than switching between it and the legacy RPC.

We have statically and unit tested the underlying functionality for most of the APIs, so we are now finishing and testing the individual API methods. These APIs are lower level, without a schema or format specified. APIs are in development for Licensing & Royalties, Dividends, and Voting.

Universal Miner

Jack is working on a universal mining application that can be used on both the prime and hash channels by GPU and CPU. The immediate priority of this work is to increase the efficiency of prime channel mining using GPUs in order to compete with privately developed mining farms that currently dominate this channel. We have made good progress on this, and hope to release an updated miner with a significant speed increase next month.

Validation Scripts

The team has made significant progress on the validation scripts that will be used to drive more complex contract behaviour. Essentially, a validation script is a set of rules that must evaluate to ‘true’ for a transaction to execute or to be claimed. These rules can include data from global state variables such as unified time, block height, and coin supply, as well as data from the sender / recipient signature chain and the registers that they own.

For example, this opens up the possibility to encode rules such as ‘transfer asset X from sig chain A to sig chain B, as long as 1000 ABC tokens have been deposited into A’s signature chain, and as long as this occurs before the date 01/01/2020’.
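
A toy version of that rule, written in Python purely to illustrate the logic (real validation scripts execute in the Nexus virtual machine, not Python):

```python
# Toy evaluation of the rule: the transfer is claimable only if at least
# 1000 ABC tokens have been deposited and the deadline has not passed.
import datetime

def can_transfer(tokens_deposited, now):
    deadline = datetime.date(2020, 1, 1)
    return tokens_deposited >= 1000 and now < deadline

can_transfer(1000, datetime.date(2019, 6, 1))  # True: deposit met, before deadline
can_transfer(500, datetime.date(2019, 6, 1))   # False: deposit condition not met
```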

To execute these validation scripts, we have built our own 64-bit register-based virtual machine which, in our latest tests of memory and computation bottlenecks, processed 15 million scripts per second on a single thread (~75 million operations per second). The script tested can be observed at the following link: https://github.com/Nexusoft/LLL-TAO/blob/merging/tests/unit/TAO/Operation/validate.cpp#L41

To verify these results, please compile the source code with LIVE_TESTS=1 to run benchmarks and unit tests.

We are developing a set of API methods that will encapsulate the commonly used validation scripts that we expect people to use for ICOs / STOs, royalty payments, dividends and for the DEX. More advanced users will be able to create their own validation scripts by writing them in our virtual machine assembly, or a higher level domain specific language (DSL) when developed.

We have designed this aspect of the Operations Layer to be sensitive to common mistakes that developers may make, making it more difficult to introduce ‘bugs’ into a contract that could be exploited as security flaws.

Module Market

The framework for the module market is near completion. Once complete, it will allow anyone to start developing modules for the Nexus wallet. The first official module under development is the internal wallet block explorer.

Decentralized Exchange (DEX)

The foundations of the decentralized exchange (DEX) are validation scripts. Essentially, an asset could be put up for transfer as a validation script. For example, my order requires ‘1000 ABC tokens’ before you can claim ‘asset X’. Once a corresponding transaction fulfils this script the token and asset transfer clears, allowing each party to claim both sides of the exchange without the requirement of a central clearinghouse.

Running with -dex enabled will require more disk space for a full node, because of the necessary indexing of the orders. Currently, disk usage is about 30% lower for a Tritium node than for a legacy node. We don’t expect the DEX to require much computation, because it only depends on foreign indexes mapping transactions to an iterator number.

 If enabled (by issuing the config flag -dex), you’ll be able to see all of the open orders and all of the orders that have ever been executed. From here, the front-end development team will have the data to populate graphs.


AUS Embassy

Paul attended the ADC Global Blockchain Summit in Adelaide earlier this month. The event brought together government, businesses, financiers, regulators, researchers, and innovators to discuss the strategies and practical applications of blockchain technology. Notable contacts were made with regulatory and research organizations such as OECD, CSIRO, and MainChain, in addition to various businesses and educational establishments looking for blockchain tech partners.

 Discussions continue with our lawyers and the tax office over the tax treatment of the ambassador keys, and the general tax structure of the embassy and its subsidiary operating company. The decision has been made to apply to the ACNC to register the Australian Embassy as a charity which, if successful, will provide us with tax exemption and greatly simplify the financial and accounting process.    


UK Embassy

 Nexus UK has attended a number of events such as: 

  • London Blockchain Week
  • Finovate Europe
  • ‘Law and Blockchain’ & ‘Blockchain Unchained’ Seminars

 We also spent some time with our advisors, in particular Jeff Garzik, discussing how Nexus can increase adoption globally both with regards to enterprise solutions and crypto consumers. The UK Embassy has continued to explore a number of high profile business development opportunities with the goal of creating globally adopted use cases.


Dino and Colin will be at the IETF (Internet Engineering Task Force) on the 29th and 30th of March. Please message @jules if you are in Prague.

Alex and Colin will be hosting a meetup in London on Thursday 4th April at 18:00. Venue: The Chapel Bar, 29 Penton St, London, N1 9PX. Please come and join us to learn more about our recent developments.


TAO Update #3 – Simplified


First up, there’s a ton of new code. 50k+ new lines of it. Great job, Nexus devs!


Lower Level Crypto:

Nexus is monitoring new quantum-resistant signature schemes that use lattices to replace the Elliptic Curve Cryptography scheme currently used for private/public keys. Lattices will make your signature chain more secure against quantum computers, but require greater processing; signature verification is already a throughput bottleneck at 4.3k TPS.

Lower Level Database:

A new keychain called a Binary Hash Map has been created as an alternative to the Binary File Map (another form of indexing). Essentially, if the blockchain is indexed via account, then the account name is run through a hash function which determines its location in the index, called a bucket. A bucket collision is when the hash output is the same for another input.

When this occurs, the writer searches through the multiple hashmap files, from the largest index to the smallest, and places the information in the first empty bucket at the calculated position. When the key needs to be retrieved, the reader calculates the bucket location and then checks that bucket in each hashmap until the key is found. The fewer bucket collisions there are, the more efficient the Binary Hash Map is.
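
A minimal sketch of the multi-hashmap idea, with the disk-backed hashmap files reduced to in-memory lists (illustrative only, not the LLD implementation):

```python
# Each key hashes to a fixed bucket position; on collision, the writer
# falls through to another hashmap file and uses the same position there.

NUM_BUCKETS = 8

def bucket(key):
    return hash(key) % NUM_BUCKETS

class BinaryHashMap:
    def __init__(self):
        self.hashmaps = [[None] * NUM_BUCKETS]  # list of hashmap "files"

    def put(self, key, value):
        pos = bucket(key)
        for hm in self.hashmaps:
            # take the first hashmap whose bucket is free (or already ours)
            if hm[pos] is None or hm[pos][0] == key:
                hm[pos] = (key, value)
                return
        # every existing hashmap collides at this position: open a new one
        hm = [None] * NUM_BUCKETS
        hm[pos] = (key, value)
        self.hashmaps.append(hm)

    def get(self, key):
        pos = bucket(key)
        for hm in self.hashmaps:  # check the same bucket in each hashmap
            if hm[pos] is not None and hm[pos][0] == key:
                return hm[pos][1]
        return None
```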

The Binary LRU cache is pretty self-explanatory. Information that is used most frequently is retained whilst infrequently used data is replaced.
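
For reference, the eviction policy can be sketched in a few lines (the LLD implements this in C++; this Python version is only to show the idea):

```python
# Least-recently-used cache: an ordered dict doubles as the recency list.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)          # newest entry goes to the back
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used
```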

The Transaction Journal is like a page file for the blockchain. Rather than holding every transaction that has been processed and is awaiting placement into a block in RAM, which can be corrupted after power failure, the LLD sets aside a portion of the hard drive in a file and stores the pending writes there. After reboot, it can recover if there was a failure at any point in the ACID transaction, due to checkpointing the journal file before events are committed to the main database.
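
A rough sketch of the journal idea, assuming a simple key-value store and a JSON journal file (both assumptions; the LLD’s on-disk format differs):

```python
# Pending writes are checkpointed to a journal file before being applied.
# If power fails before commit, the journal is replayed on the next start.
import json, os

JOURNAL = "journal.log"

def stage(writes):
    """Checkpoint pending writes to the journal before committing."""
    with open(JOURNAL, "w") as f:
        json.dump(writes, f)
        f.flush()
        os.fsync(f.fileno())   # make sure the journal survives a crash

def commit(db):
    """Apply journaled writes to the database, then clear the journal."""
    if not os.path.exists(JOURNAL):
        return db              # nothing pending: nothing to recover
    with open(JOURNAL) as f:
        db.update(json.load(f))
    os.remove(JOURNAL)
    return db

# A crash between stage() and commit() is safe: on reboot, commit()
# replays the journal and the database ends up consistent.
```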

ACID is not just known for great visual effects and synesthesia, but is a well-known acronym from the world of databases. It stands for Atomicity, Consistency, Isolation and Durability. Meaning:

  • When a transaction is made, every part of the transaction must be valid or the transaction fails.
  • Transactions must be in order, for instance a DEBIT must happen before a CREDIT, or for that instant in time the ledger would be in an invalid state.
  • Once a transaction is made, it is irreversible. You cannot make another transaction that would invalidate a previous transaction. For instance, let’s say I have 1000 NXS and buy something worth 500 NXS. If I try to quickly go and transfer 501 NXS, it would fail.

For further reading about ACID: https://vladmihalcea.com/a-beginners-guide-to-acid-and-database-transactions/

Lower Level Protocol (LLP):

The LLP handles the networking layer of Tritium, managing sockets and connections with other nodes. The benchmarks reported here are run on the underlay without LISP. As can be seen in the photo, and as Colin explained, this benchmark is not performed with any Ledger layer validation and is simply a load test on network capability alone. Under real-world conditions, this will not be the case.


A transaction object contains all the information needed to make a transaction, for instance sending account, receiving account, amount to be transferred, signature data, public key, tx ID etc. When a new transaction is received, the node performs pre-processing to validate the transaction. When a block is received, the node then checks to see if that transaction is included in the broadcast block, and then commits this collection of indexes to disk using an ACID transaction (remember, ‘all or nothing’). This benchmark of 647ms was for post-processing only. Pre-processing takes a little longer, but given an average block time of 50 seconds, there is plenty of time available to perform this processing. 

Preprocessing tests received transactions to ensure that they are valid and do not violate ledger, register, or operations verification. For a limited period (the stated duration was about 12 months), nodes will be able to process both Tritium transactions and legacy transactions. Tritium transactions use the new account model with the new signature chains, whereas legacy transactions still use the UTXO system with private/public key security. Legacy addresses can send to Tritium accounts, but Tritium accounts cannot send to Legacy addresses. Any Legacy addresses still containing coins after this period will be inaccessible.


All pre-processed transactions are retained in a mempool awaiting inclusion in a block. If a second transaction is received which contradicts or invalidates an already processed transaction, then that second transaction is rejected.

Tritium blocks do not contain all the information that was contained within the transaction object. They contain only a list of the transaction IDs included in the block. When a block is received, nodes check each listed transaction ID against those that have passed pre-processing; if the block contains an unknown or unprocessed transaction ID, the block is not accepted until that transaction is received and processed. This list of transaction IDs also serves to verify the merkle root that was calculated in the block header.
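
The merkle-root check can be sketched as follows, assuming Bitcoin-style double-SHA256 pairing (an assumption for illustration; Nexus’s exact hashing may differ):

```python
# Build a merkle root from a block's list of transaction IDs by hashing
# pairs level by level until a single root remains.
import hashlib

def sha256d(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    level = [bytes.fromhex(t) for t in txids]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate the last hash if odd
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

A node recomputes this root from the transaction IDs it has pre-processed and compares it to the root in the block header.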


Before Amine is implemented, all pre- and post-processing is performed by the same nodes (PoS or PoW miners). Under Amine, they will be separated, with pre-processing (transaction validation) performed by PoS/Trust nodes and post-processing (block verification) performed by miners.


Obsidian will split the consensus process over 3 channels: the L1, L2 and L3 locks. The first 2 locks are the same as explained above. The L3 lock is a hardening stage using proof of work. Instead of miners racing to find a winning block hash, each miner works to find the most weighted hash within a 60-second period. Miners then submit this hash (which is based off the L2 locks, the previous block hash, their signature chain’s Genesis ID, and a randomly generated nonce) to the network, where the submissions are combined into a single merkle root hash. This way, each miner gains a portion of the reward proportional to how much collective weight they contributed.

Register pre-states:

This explains that by including the register pre-state within the transaction object, it forms part of the transaction hash, which is part of the block header. Light nodes can then check block headers to see if the pre-state of a new transaction formed part of a recorded block. Because this information is recorded within the block header, older pre-states can be pruned and removed from the blockchain, decreasing blockchain bloat. If a node were to attempt to include an older pre-state, it would fail register verification on receipt of the new transaction, because the published pre-state would not be consistent with the current register state in the LLD instance that holds register data.

This section goes on to point out that extravagant claims of “100k TPS” are unrealistic without some sort of blockchain pruning or sharding, and that network bandwidth and hard drive storage are bottlenecks to this sort of throughput.

Register post-states:

The post-state is the new state of the register which is recorded in the database, and transacting nodes have to include a post-state checksum along with the transaction, which must match that calculated by the validating nodes. This is important for the development of dapps built on Nexus: if a dapp makes a transaction and expects a certain result, the checksum provides the means to prevent the register moving to a state which the application did not intend. It’s a form of error handling.
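
A toy model of the post-state checksum check (the field names and the use of SHA-256 over JSON are assumptions made for illustration):

```python
# The sender computes a checksum of the register state it expects after
# the operation; validators recompute it and reject on mismatch.
import hashlib, json

def checksum(state):
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def apply_debit(register, amount):
    post = dict(register)          # work on a copy of the pre-state
    post["balance"] -= amount
    return post

def validate(register, amount, expected_checksum):
    """A validating node recomputes the post-state and compares checksums."""
    post = apply_debit(register, amount)
    return checksum(post) == expected_checksum

pre = {"balance": 1000}
expected = checksum(apply_debit(pre, 500))   # the result the dapp intends
validate(pre, 500, expected)                 # True: states agree
```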

Register Types:

This section is basically just reiterating the contents of the Tritium whitepaper.

State registers can be manipulated by primitive operations, which will be detailed further below. These registers contain state information for external applications shared across multiple instances. This might be the location of a sea container or the status of a driver’s license.

State registers come in three flavours, Raw, Append, and Read-Only.

Raw registers have no security parameters besides ownership: only the owner of the register can modify its contents.

Append registers can only be added to, and retain their original data and state history within the register database. This is useful for tracking ownership of property, land titles etc.

Read-only registers cannot be changed, even by the owner. They are useful for holding constants or other information that will not change.

Register Objects:

Currently, there are two types of objects, accounts and tokens. These are specialized registers which are used internally within the Operation layer. As such, their data format is pre-set and their contents can only be manipulated with specialized operations.

Register Operations:

The Register operation assigns a new memory address to a new register. Think of this as declaring a variable within a program. Eventually, when sharding is implemented, this will need to allow assigning and accessing memory addresses in remote shards through inter-shard communication.

The Write and Append operations are self-explanatory.

The Transfer operation changes ownership of State or Token registers from one signature chain to another. This operation can only be performed by the owner of said register.

A successful transaction of funds uses both a Debit and a Credit operation. These operations can only be applied to Account registers. If a Debit succeeds but no corresponding Credit is issued, the debiting account is able to redeem those unclaimed funds.

The next two operations, Validate and Require, operate along similar lines to IF control statements within programs. These allow conditions to be placed on the execution of operations and form the basis of Nexus’s Advanced Contracts.

Using Colin’s example of a decentralized exchange of tokens, the seller (User A) would execute the Require operation, which debits the tokens and then waits for the conditions to be fulfilled. The buyer (User B) then performs a Validate operation, which triggers the evaluation of User A’s conditions and removes the funds from User B’s account. Both users are then able to execute a Credit operation to deposit the corresponding funds into their accounts.
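
The whole flow can be sketched as a toy walkthrough (illustrative only; the real operations execute in the Nexus Operations layer, and the account shapes here are invented):

```python
# Require → Validate → Credit, modelled with plain dictionaries.

def require(seller, amount_asset, price_tokens):
    """Seller debits the asset with a price condition attached."""
    seller["asset"] -= amount_asset
    return {"asset": amount_asset, "price": price_tokens, "claimed": False}

def validate(order, buyer):
    """Buyer fulfils the condition, escrowing the token payment."""
    if buyer["tokens"] < order["price"]:
        return None                    # condition not met: nothing happens
    buyer["tokens"] -= order["price"]
    order["claimed"] = True
    return {"tokens": order["price"], "asset": order["asset"]}

def credit(party, field, amount):
    """Each side claims its half of the exchange."""
    party[field] += amount

seller = {"asset": 1, "tokens": 0}
buyer = {"asset": 0, "tokens": 1000}

order = require(seller, 1, 1000)       # sell 1 asset for 1000 ABC tokens
escrow = validate(order, buyer)        # buyer meets the condition
credit(seller, "tokens", escrow["tokens"])
credit(buyer, "asset", escrow["asset"])
# both sides now hold the exchanged values, with no central clearinghouse
```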