March was another busy month of coding, with an additional 68,000 lines of code written and the release of the Tritium testnet. The development team has also been holding weekly Zoom meetings, some highlights of which we have provided below.
The team has run several successful mining, sync, and fork-recovery tests. On the back of this, the Tritium testnet is now open to public connections (we were previously whitelisting connections to developer IPs only). You are welcome to connect to the testnet to test mining and basic account/API use. Please join #tritium-testnet if you want to participate, as you will need to know the current testnet number to set testnet=xx in your nexus.conf. At the time of writing we are using testnet 11.
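For reference, a minimal nexus.conf sketch for joining the testnet (the testnet number shown is the one current at the time of writing; check #tritium-testnet for the latest value):

```
# nexus.conf — minimal testnet configuration
testnet=11
```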
To test the Tritium core in beta mode on the legacy mainnet, please use the ‘Staging’ branch on GitHub.
To test the Tritium features (signature chains, APIs) on the Tritium testnet, please use the ‘Merging’ branch on GitHub.
The final improvements to the legacy wallet code are complete, and the Tritium wallet with the legacy back-end now syncs in under one hour. We are closing off some remaining performance issues with the legacy wallet, specifically the rescan function. To do this, we are changing the way we access data in the LLD to perform better serial access, as opposed to the random access for which it was designed.
A new LISP-Trace monitoring tool called ‘ltr’ has been written, which shows the path a packet takes from source EID to destination EID, as well as the return path. This is a very useful tool for debugging LISP connectivity and messaging issues.
The encrypted pointer encrypts the memory location using AES-128. This makes it very difficult for a virus to ‘eavesdrop’ and potentially steal sensitive data, such as your sigchain login or PIN, by reading process memory. It is also useful for developing applications that rely on critical information in memory, and is available to use in the LLL utilities.
The code for this can be read here.
Signature Chain Indexes
Nodes on the Tritium Protocol keep track of global indexes, meaning that you don’t need to rescan a node when logging in to it for the first time. This makes managing notifications (transactions that require your acceptance, such as debits and transfers) much more efficient.
Argon2, a memory-hard password hashing algorithm with tunable complexity arguments, is now being used for key and username generation, meaning we can control how many seconds it takes to generate a key or username. The time it takes an external attacker to brute-force a sigchain offline is now bound by memory latency, leveling the playing field between devices: an FPGA, ASIC, or even a GPU farm has little competitive advantage over a CPU.
Our current Argon2 settings require at least 0.3 seconds to generate a new key, meaning an attacker is only able to ‘try’ about three passwords per second. Combined with a minimum password length of 8 alphanumeric characters [a-z, A-Z, 0-9], even if the username and PIN were compromised, the time required to crack the password would be on the order of 2.3 million years. The use of biometric username generation will be another step in strengthening your credentials and sigchain access, by further increasing the physical requirements to gain access.
The code for this can be read here.
Falcon is a very compact lattice-based cryptographic algorithm and a second-round candidate in NIST’s Post-Quantum competition. Its computational requirements are roughly 1/40th of ECDSA’s, which means signatures can be verified very quickly. The downside is that the public key and signature together total about 1.5 KB. Though Falcon is based on long-studied mathematics (NTRU lattices), it has not undergone as much cryptanalysis as ECC or RSA. Falcon is now running on the testnet, and more information can be read about it here:
Our wrapper and integration of FALCON can be read here.
The Accounts, Tokens, and Assets APIs are now available for people to test. A recent demo shows how to use some of these commands, and can be found here:
These APIs also provide functionality for an asset to be owned by a token, creating what is known as ‘tokenized ownership’: your token balance represents your partial ownership of the underlying asset. Tokens can therefore enable automatic dividend payouts (revenue splits) without the need for a third-party custodian.
The team has implemented support for sessionless API use. This simplifies things for users who would like to interact with their sigchain and use the various APIs from the CLI (command-line interface), without having to keep track of, and supply, a session ID with each API call, making usage more akin to the legacy RPC CLI. The API defaults to sessionless, though it can be switched to session-based by adding -apisessions=1 to your config.
We have been working closely with the seed node operators and block explorer developers to shape the requirements for the Ledger API; thanks to @mercuryminer, @psipherious, and @danialsan for their input. A new getblocks method has been added that allows batches of up to 1000 blocks and their transactions to be retrieved in a single call (taking about 3 seconds on testnet), which is crucial for block explorers and other data aggregators.
Next week, work will start on the Network and Legacy APIs, which will serve as drop-in replacements for many frequently used RPC commands, so that users and integrators only have to use the new API rather than switching between it and the legacy RPC.
We have written static and unit tests for the underlying functionality of most of the APIs, so we are now finishing and testing the individual API methods. These APIs are lower level, with no schema or format specified. APIs for Licensing & Royalties, Dividends, and Voting are in development.
Jack is working on a universal mining application that can be used on both the prime and hash channels by GPU and CPU. The immediate priority of this work is to increase the efficiency of prime channel mining using GPUs in order to compete with privately developed mining farms that currently dominate this channel. We have made good progress on this, and hope to release an updated miner with a significant speed increase next month.
The team has made significant progress on the validation scripts that will be used to drive more complex contract behaviour. Essentially, a validation script is a set of rules that must evaluate to ‘true’ for a transaction to execute or to be claimed. These rules can include data from global state variables such as unified time, block height, and coin supply, as well as data from the sender / recipient signature chain and the registers that they own.
For example, this opens up the possibility to encode rules such as ‘transfer asset X from sig chain A to sig chain B, as long as 1000 ABC tokens have been deposited into A’s signature chain, and as long as this occurs before the date 01/01/2020’.
To execute these validation scripts, we have built our own 64-bit register-based virtual machine which, in our latest tests of memory and computation bottlenecks, processed 15 million scripts per second (~75 million operations per second) on a single thread. The script tested can be seen at the following link: https://github.com/Nexusoft/LLL-TAO/blob/merging/tests/unit/TAO/Operation/validate.cpp#L41
To verify these results, please compile the source code with LIVE_TESTS=1 to run benchmarks and unit tests.
We are developing a set of API methods that will encapsulate the commonly used validation scripts that we expect people to use for ICOs / STOs, royalty payments, dividends and for the DEX. More advanced users will be able to create their own validation scripts by writing them in our virtual machine assembly, or a higher level domain specific language (DSL) when developed.
We have designed this aspect of the Operations Layer to guard against common mistakes that developers may make, making it more difficult to introduce ‘bugs’ into a contract that could be exploited as security flaws.
The framework for the module market is nearing completion. Once complete, it will allow anyone to start developing modules for the Nexus wallet. The first official module under development is the internal wallet block explorer.
The foundations of the decentralized exchange (DEX) are validation scripts. Essentially, an asset can be put up for transfer guarded by a validation script; for example, an order might require ‘1000 ABC tokens’ before ‘asset X’ can be claimed. Once a corresponding transaction fulfils this script, the token and asset transfers clear, allowing each party to claim their side of the exchange without the requirement of a central clearinghouse.
Running with -dex enabled will require more disk space for a full node, because of the necessary indexing of the orders. Even so, a Tritium node currently uses about 30% less disk space than a legacy node. We don’t expect the DEX to require much computation, because it only depends on foreign indexes mapping transactions to an iterator number.
If enabled (by adding the -dex config flag), you’ll be able to see all of the open orders and all of the orders that have ever been executed. From here, the front-end development team will have the data they need to populate graphs.
Paul attended the ADC Global Blockchain Summit in Adelaide earlier this month. The event brought together government, businesses, financiers, regulators, researchers, and innovators to discuss the strategies and practical applications of blockchain technology. Notable contacts were made with regulatory and research organizations such as OECD, CSIRO, and MainChain, in addition to various businesses and educational establishments looking for blockchain tech partners.
Discussions continue with our lawyers and the tax office over the tax treatment of the ambassador keys, and the general tax structure of the embassy and its subsidiary operating company. The decision has been made to apply to the ACNC to register the Australian Embassy as a charity which, if successful, will provide us with tax exemption and greatly simplify the financial and accounting process.
Nexus UK has attended a number of events such as:
- London Blockchain Week
- Finovate Europe
- ‘Law and Blockchain’ & ‘Blockchain Unchained’ Seminars
We also spent some time with our advisors, in particular Jeff Garzik, discussing how Nexus can increase adoption globally both with regards to enterprise solutions and crypto consumers. The UK Embassy has continued to explore a number of high profile business development opportunities with the goal of creating globally adopted use cases.
Dino and Colin will be at the IETF (Internet Engineering Task Force) on the 29th and 30th of March. Please message @jules if you are in Prague.
Alex and Colin will be hosting a meetup in London on Thursday 4th April at 18:00. Venue: The Chapel Bar, 29 Penton St, London, N1 9PX. Please come and join us to learn more about our recent developments.