TAO Update #3 – Simplified

First up, there’s a ton of new code. 50k+ new lines of it. Great job, Nexus devs!

 

Lower Level Crypto:

Nexus is monitoring new quantum-resistant signature schemes that use lattices to replace the Elliptic Curve Cryptography scheme that is currently used for private/public keys. Lattices will make your signature chain more secure against quantum computers, but they require greater processing, and signature verification is already the throughput bottleneck at around 4.3k TPS.

Lower Level Database:

A new keychain called a Binary Hash Map has been created as an alternative to the Binary File Map (another form of indexing). Essentially, if the blockchain is indexed by account, then the account name is run through a hash function which determines its location in the index, called a bucket. A bucket collision occurs when two different inputs produce the same hash output.

When this occurs, it searches through multiple hashmaps, from the largest index to the smallest, and places the information in the first empty bucket at the calculated position. When the key needs to be retrieved, it calculates the bucket location and then checks that bucket in each hashmap until the key is found. The fewer collisions there are, the fewer hashmaps need to be checked, which is why greater bucket collision resistance makes the Binary Hash Map more efficient.
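To make the probing scheme concrete, here is a minimal C++ sketch of the idea described above. It is not Nexus’s actual implementation; the class and member names are hypothetical:

#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Each 'hashmap file' is modelled as a fixed-size vector of slots.
struct Slot { bool fUsed = false; std::string strKey; uint64_t nValue = 0; };

class MultiHashMap
{
    std::vector<std::vector<Slot>> vMaps; // ordered from largest to smallest

    static size_t Bucket(const std::string& strKey, size_t nBuckets)
    {   // bucket = hash(key) mod table size
        return std::hash<std::string>{}(strKey) % nBuckets;
    }

public:
    explicit MultiHashMap(const std::vector<size_t>& vSizes)
    {
        for(size_t nSize : vSizes)
            vMaps.emplace_back(nSize);
    }

    // Walk the hashmaps in order, placing the key in the first empty bucket
    // at its calculated position. (A real keychain grows a new hashmap when
    // every bucket collides; this sketch simply reports failure.)
    bool Put(const std::string& strKey, uint64_t nValue)
    {
        for(auto& vBuckets : vMaps)
        {
            Slot& slot = vBuckets[Bucket(strKey, vBuckets.size())];
            if(!slot.fUsed)
            {
                slot = {true, strKey, nValue};
                return true;
            }
        }
        return false;
    }

    // Recalculate the bucket and check it in each hashmap until found.
    bool Get(const std::string& strKey, uint64_t& nValue) const
    {
        for(const auto& vBuckets : vMaps)
        {
            const Slot& slot = vBuckets[Bucket(strKey, vBuckets.size())];
            if(slot.fUsed && slot.strKey == strKey)
            {
                nValue = slot.nValue;
                return true;
            }
        }
        return false;
    }
};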

The Binary LRU cache is pretty self-explanatory: LRU stands for Least Recently Used, so the most recently used information is retained, whilst the data that has gone longest without being used is replaced first.
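As a rough illustration of the least-recently-used policy, here is a toy C++ sketch (not the Binary LRU source):

#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>

// Toy LRU cache: a list ordered by recency, plus an index into it.
class LRUCache
{
    size_t nCapacity;
    std::list<std::pair<uint64_t, std::string>> listItems; // front = most recent
    std::unordered_map<uint64_t, std::list<std::pair<uint64_t, std::string>>::iterator> mapIndex;

public:
    explicit LRUCache(size_t nCapacityIn) : nCapacity(nCapacityIn) {}

    void Put(uint64_t nKey, const std::string& strData)
    {
        auto it = mapIndex.find(nKey);
        if(it != mapIndex.end())
            listItems.erase(it->second);        // drop the stale entry

        listItems.emplace_front(nKey, strData); // newest goes to the front
        mapIndex[nKey] = listItems.begin();

        if(listItems.size() > nCapacity)
        {   // evict the least recently used element from the back
            mapIndex.erase(listItems.back().first);
            listItems.pop_back();
        }
    }

    bool Get(uint64_t nKey, std::string& strData)
    {
        auto it = mapIndex.find(nKey);
        if(it == mapIndex.end())
            return false;

        // splice moves the element to the front, marking it most recent
        listItems.splice(listItems.begin(), listItems, it->second);
        strData = it->second->second;
        return true;
    }
};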

The Transaction Journal is like a page file for the blockchain. Rather than holding every transaction that has been processed and is awaiting placement into a block in RAM, which can be lost or corrupted after a power failure, the LLD sets aside a portion of the hard drive in a file and stores the pending writes there. After a reboot, it can recover from a failure at any point in the ACID transaction, because the journal file is checkpointed before events are committed to the main database.

ACID is not just known for great visual effects and synesthesia, but is a well-known acronym from the world of databases. It stands for Atomicity, Consistency, Isolation and Durability. Meaning:

  • When a transaction is made, every part of the transaction must be valid or the transaction fails.
  • Transactions must be in order, for instance a DEBIT must happen before a CREDIT, or for that instant in time the ledger would be in an invalid state.
  • Once a transaction is made, it is irreversible. You cannot make another transaction that would invalidate a previous transaction. For instance, let’s say I have 1000 NXS and buy something worth 500 NXS. If I then try to quickly transfer 501 NXS, it will fail.

For further reading about ACID: https://vladmihalcea.com/a-beginners-guide-to-acid-and-database-transactions/

Lower Level Protocol (LLP):

The LLP handles the networking layer of Tritium, managing sockets and connections with other nodes. The benchmarks reported here are run on the underlay without LISP. As can be seen in the photo, and as Colin explained, this benchmark is not performed with any Ledger layer validation and is simply a load test on network capability alone. Under real-world conditions, this will not be the case.

Ledger:

A transaction object contains all the information needed to make a transaction: for instance the sending account, receiving account, amount to be transferred, signature data, public key, tx ID, etc. When a new transaction is received, the node performs pre-processing to validate the transaction. When a block is received, the node then checks whether that transaction is included in the broadcast block, and commits this collection of indexes to disk using an ACID transaction (remember, ‘all or nothing’). This benchmark of 647 ms was for post-processing only. Pre-processing takes a little longer, but given an average block time of 50 seconds, there is plenty of time available to perform this processing.
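For illustration, the fields listed above might be laid out roughly like this. This is a hypothetical shape, not the actual Tritium serialization:

#include <cstdint>
#include <vector>

// Hypothetical transaction object layout, mirroring the fields named above.
struct TransactionObject
{
    std::vector<uint8_t> hashFrom;  // sending account (register address)
    std::vector<uint8_t> hashTo;    // receiving account
    uint64_t nAmount = 0;           // amount to be transferred
    std::vector<uint8_t> vchPubKey; // public key used for this transaction
    std::vector<uint8_t> vchSig;    // signature data
    std::vector<uint8_t> hashTx;    // tx ID: a hash over the fields above
};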

Preprocessing tests received transactions to ensure that they are valid and do not violate ledger, register, or operations verification. For a transition period (I believe about 12 months was the stated duration), nodes will be able to process both Tritium transactions and legacy transactions. Tritium transactions use the new account model with the new signature chains, whereas legacy transactions still use the UTXO system with private/public key security. Legacy addresses can send to Tritium accounts, but Tritium accounts cannot send to Legacy addresses. Any coins remaining in Legacy addresses after this period will be inaccessible.

Tritium:

All pre-processed transactions are retained in a mempool awaiting inclusion in a block. If a second transaction is received which contradicts or invalidates an already processed transaction, then that second transaction is rejected.

Tritium blocks do not contain all the information that was contained within the transaction objects; they only contain a list of the transaction IDs included in the block. When the block is received, nodes compare these against every transaction ID that has passed pre-processing, and if the block contains an unknown or unprocessed transaction ID, that block is not accepted until the transaction is received and processed. This list of transaction IDs also serves to verify the merkle root that was calculated in the block header.
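Here is a small C++ sketch of that last step, recomputing a merkle root from the block’s txid list and comparing it against the header. Hash256 here is a stand-in, not the real hash function:

#include <functional>
#include <string>
#include <vector>

// Stand-in for the real 256-bit hash; illustration only.
std::string Hash256(const std::string& strData)
{
    return std::to_string(std::hash<std::string>{}(strData));
}

// Pairwise-hash the txid list upwards until a single root remains.
std::string MerkleRoot(std::vector<std::string> vLayer)
{
    while(vLayer.size() > 1)
    {
        if(vLayer.size() % 2 != 0)
            vLayer.push_back(vLayer.back()); // duplicate the odd one out

        std::vector<std::string> vNext;
        for(size_t i = 0; i + 1 < vLayer.size(); i += 2)
            vNext.push_back(Hash256(vLayer[i] + vLayer[i + 1]));

        vLayer = std::move(vNext);
    }
    return vLayer.empty() ? std::string() : vLayer.front();
}

// A block is only consistent if the recomputed root matches the header.
bool VerifyMerkle(const std::vector<std::string>& vTxIDs, const std::string& hashMerkleClaimed)
{
    return MerkleRoot(vTxIDs) == hashMerkleClaimed;
}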

Amine:

Before Amine is implemented, all the pre- and post-processing is performed by the same nodes, whether PoS nodes or PoW miners. Under Amine, these roles will be separated, with pre-processing (transaction validation) performed by the PoS/Trust nodes, and post-processing (block verification) performed by miners.

Obsidian:

Obsidian will split the consensus process over 3 channels: the L1, L2 and L3 locks. The first 2 locks are the same as explained above. The L3 lock is a hardening stage using proof of work. Instead of miners racing to find a winning block hash, each miner works to find the most weighted hash within a 60 second period. Miners then submit this hash (which is based off the L2 locks, the previous block hash, their Signature Chain’s Genesis ID, and a randomly generated nonce) to the network, where the submissions are combined into a single merkle root hash. This way each miner gains a portion of the reward proportional to how much weight they contributed to the collective.

Register pre-states:

This explains that by including the register pre-state within the transaction object, it forms part of the transaction hash, which is part of the block header. Light nodes can then check block headers to see if the pre-state of a new transaction formed part of a recorded block. Because this information is recorded within the block header, older pre-states can be pruned and removed from the blockchain, decreasing blockchain bloat. If a node were to attempt to include an older pre-state, it would fail register verification on receipt of a new transaction, because the published pre-state would not be consistent with the current register state in the LLD instance that holds register data.

This section goes on to point out that extravagant claims of “100k TPS” are unrealistic without some sort of blockchain pruning or sharding, and that network bandwidth and hard drive storage are bottlenecks to this sort of throughput.

Register post-states:

The post-state is the new state of the register which is recorded in the database, and transacting nodes have to include a post-state checksum along with the transaction, which must be the same as that calculated by the validating nodes. This is important for the development of dapps built on Nexus, for if a dapp makes a transaction and expects a certain result, the checksum provides the means to prevent the register moving to a state which the application did not intend. It’s a form of error handling.
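A sketch of the check a validating node performs. ApplyOperations and Checksum are placeholders for the real operation execution and hashing, not Nexus functions:

#include <cstdint>
#include <functional>
#include <string>

// Placeholder for executing the operations payload against the pre-state.
std::string ApplyOperations(const std::string& strPreState, const std::string& strOps)
{
    return strPreState + strOps; // a real node runs the operations layer here
}

// Placeholder checksum over a register state.
uint64_t Checksum(const std::string& strState)
{
    return std::hash<std::string>{}(strState);
}

// The register may only move to the post-state the application intended.
bool VerifyPostState(const std::string& strPreState, const std::string& strOps,
                     uint64_t nChecksumClaimed)
{
    return Checksum(ApplyOperations(strPreState, strOps)) == nChecksumClaimed;
}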

Register Types:

This section is basically just reiterating the contents of the Tritium whitepaper.

State registers can be manipulated by primitive operations, which will be detailed further below. These registers contain state information for external applications shared across multiple instances. This might be the location of a sea container or the status of a driver’s license.

State registers come in three flavours, Raw, Append, and Read-Only.

Raw registers have no security parameters besides ownership; only the owner of the register can modify its contents.

Append registers can only be added to, and retain their original data and state history within the register database. This is useful for tracking ownership of property, land titles, etc.

Read-only registers are unable to be changed, even by the owner. They are useful for holding constants or information that will never change.

Register Objects:

Currently, there are two types of objects, accounts and tokens. These are specialized registers which are used internally within the Operation layer. As such, their data format is pre-set and their contents can only be manipulated with specialized operations.

Register Operations:

The Register operation assigns a new memory address to a new register. Think of this as declaring a variable within a program. Eventually, when sharding is implemented, this will need to allow assigning and accessing memory addresses in remote shards through inter-shard communication.

The Write and Append operations are self-explanatory.

The Transfer operation changes ownership of State or Token registers from one signature chain to another. This operation can only be performed by the owner of said register.

A successful transfer of funds uses both a Debit and a Credit operation. These operations can only be applied to Account registers. In the event of a successful Debit operation that lacks a corresponding Credit operation, the issuing account is able to redeem those unclaimed funds.

The next two operations, Validate and Require, operate along similar lines to IF control statements within programs. These allow conditions to be placed on the execution of operations and form the basis of Nexus’s Advanced Contracts.

Using Colin’s example of a decentralized exchange of tokens, the user wishing to sell tokens (User A) would execute the Require operation, which debits the funds and then waits for the conditions to be fulfilled. The buyer (User B) then performs a Validate operation, which triggers the evaluation of User A’s conditions and removes the funds from User B’s account. Both users are then able to execute a Credit operation to deposit the corresponding funds into their accounts.

Nexus Newsletter February 2019

January was an extremely busy month, culminating in the release of the open source code for Tritium. An additional 58,627 lines of code have been written since the end of last October. Meanwhile, the UK and Australian Embassies have continued to develop their outreach in their respective regions. All three Embassies have provided an update of their recent progress.

US Embassy

On the 8th of January Colin spoke at CES Las Vegas on the topic of “How blockchain is remaking the Media/Entertainment Business”.

Please watch the recording of his discussion here.

On the 31st of January, the team released the source code for Tritium. Colin published the next update of the TAO series explaining what the code contains and what you are currently able to do with it.

Please read the TAO update #3 here.

Tritium Daemon update

We have been hard at work preparing the Tritium Daemon for full release out of public beta testing. Our current data shows incredible optimizations: memory usage at 90 MB, syncing from scratch on Linux in just over 1.25 hours, and syncing on a Mac laptop in 3.5 hours from genesis. This is a drastic improvement over the current legacy synchronization, which requires 30 hours or more.

Interface update

The source code and public beta of the Tritium 0.8.7 Interface have been released, which still contain legacy cores. The devs are busy working on a branch called ‘TritiumCores’, which will contain the WIP for Tritium cores.

You can view the progress on GitHub here.

We are getting ready to integrate the interface with a Tritium daemon for all the features the Tritium beta has to offer: fast synchronization, instant loading, wallet stability and minimal hardware requirements.

Please download the 0.8.7 version here.

APIs update

The API, as it stands, contains three main interfaces, namely Account, LISP, and Supply Chain. Each of these is tailored for industry-specific applications, such as supply chain management, account verification, and integration services for existing systems. As the API deployment continues, we will keep you up to date with new APIs that are ready for operation. Over the forthcoming weeks, we will continue to add more API calls and validation scripts as we get the testnet ready for deployment. If you would like to contribute to development, please make sure to submit pull requests and begin discussions on GitHub.

The Three Pillars

The technology of Nexus is a cornerstone to the development of the community and business adoption. We believe technology, community and enterprise form the three pillars of Nexus, and thus we are happy to see the technology advancing steadily, making strides that have become important for adoption and ease of use.

 “Community is the foundation to the growth of Nexus, and the technology and adoption is what supports this.”

 Community Contributions

 We would like to give a special thank you to Shea for writing a simplified version of the last TAO update.

 Please read the simplified TAO update #3 here.

 UK Embassy

Events

Through our membership with TechUK we have attended/are due to attend a variety of events throughout February:

– Australian-UK Fintech Reception at the Australian High Commission

– TechUK Digital ID Paper launch in Houses of Parliament

– IoT Secure by Design

– Future of Payments in the UK

– DLT Working Group

The DLT working group is of particular interest, as we have co-written a UK Blockchain White Paper with TechUK and other influential members, due to be published later this year. This paper will be distributed to the 900 businesses who are TechUK members, and will be presented to the UK government with the aim of helping to encourage blockchain adoption throughout the UK.

PR

Through the efforts of our UK PR agency, Nexus was mentioned in the Independent and the Financial Times (both national news outlets), and in Yahoo Finance (international), in December of 2018. Being mentioned in such news sources improves our credibility within the digital currency space.

Financial Times (FT)

Independent

Yahoo Finance

Exchange listing

BC Bitcoin, a bespoke UK-based cryptocurrency brokerage, has listed NXS. NXS can now be traded against GBP, EUR and USD.

BC Bitcoin website

BC Bitcoin discuss Nexus

As always, we continue to take a number of business development related meetings.

AUS Embassy

Since its launch in late November, the Australian Embassy has been working hard to set the foundations for a busy 2019. PricewaterhouseCoopers (PwC) were engaged to assist in the establishment of the business entities and advise on the correct tax framework. They found that the tax treatment of the Ambassador keys did not fit within any of the current Australian Taxation Office (ATO) tax guidelines relating to cryptocurrencies, so this process continues. On the back of these discussions, Paul has been invited to join the Australian Chamber of Digital Commerce (ADCA) Tax Working Group, which directly advises the ATO on tax legislation.

The Embassy has been actively looking for a marketing and PR agency within the Australian and Asia-Pacific regions and several candidates have been found. Most recently we met with the Chief Editor of the new Blockchain Australia magazine, which circulates in the Australian Financial Review. These agencies will expose Nexus to industry leaders and key decision makers through specific publications and articles.

The Embassy has been attending the monthly ‘Crypto Sydney – Intelligence Traded’ networking events, developing some great contacts.

If you would like to join the guys in Sydney you can sign up to the event here.

We are planning to have a presence at the ADC Global Blockchain Summit in Adelaide this March, as well as the APAC Blockchain Conference in Sydney in June, the largest blockchain event in the Asia Pacific region, to meet with high profile businesses and explore potential use cases for Nexus.

Respective details can be found here:

https://adcforum.org/the-adc-global-blockchain-summit/

https://apacblockchain.com.au/

Paul joined the development team in November and has hit the ground running, working alongside the other developers around the globe to complete the Tritium core. The recent downturn in the NXS price has curtailed our plans to expand the development team within the Australian Embassy, but we are confident we will get our plans back on track later in the year. Nicco and Mike continue to work as volunteer directors.

We would like to give a big thank you to all the dev team for their hard work and our community for their continued support, and look forward to all your feedback from the forthcoming testing of Tritium.

Upcoming Events

Prague Workshop

Dino and Colin will be at the IETF (Internet Engineering Task Force) in Prague in the last week of March. We will be organising a workshop on the afternoon of the 29th and the 30th. We will confirm the location of the venue at a later date on social media. We hope to meet many of you there.

 

Cheers,

Nexus

TAO Update #3

For this edition of the TAO update series, I will explain what has been completed thus far, what is left to do, and what you can do with Tritium after you read this article. So, let’s get started with the usual git pull origin master.

From https://github.com/Nexusoft/LLL-TAO
* branch merging -> FETCH_HEAD
Updating 1c774b5..4358843
Fast-forward
sdk/nexus-sdk-primer.py | 174 ++
sdk/nexus-sdk-test.py | 122 +
sdk/nexus_sdk.py | 223 ++
src/LLC/bignum.cpp | 819 ++++++
src/LLC/hash/SK.h | 100 +-
src/LLC/hash/SK/KeccakDuplex.h | 12 +-
src/LLC/hash/SK/KeccakHash.h | 10 +-
src/LLC/hash/SK/KeccakSponge.h | 10 +-
src/LLC/hash/SK/skein.cpp | 576 +++--
src/LLC/hash/SK/skein.h | 71 +-
src/LLC/hash/SK/skein_block.cpp | 24 +-
src/LLC/hash/SK/skein_iv.h | 54 +-
src/LLC/hash/SK/skein_port.h | 27 +-
src/LLC/hash/Skein3Fish/include/skein.h | 34 +-
src/LLC/hash/Skein3Fish/include/skeinApi.h | 50 +-
src/LLC/hash/Skein3Fish/include/skein_iv.h | 2 +-
src/LLC/hash/Skein3Fish/include/skein_port.h | 30 +-
src/LLC/hash/Skein3Fish/include/threefishApi.h | 48 +-
src/LLC/hash/Skein3Fish/skein_block.c | 24 +-
src/LLC/hash/Skein3Fish/threefish1024Block.c | 2 +-
src/LLC/hash/Skein3Fish/threefish256Block.c | 2 +-
src/LLC/hash/Skein3Fish/threefish512Block.c | 2 +-
src/LLC/hash/Skein3Fish/threefishApi.c | 7 +-
src/LLC/hash/macro.h | 14 +-
src/LLC/include/key.h | 141 +-
src/LLC/include/random.h | 71 +-
src/LLC/key.cpp | 460 +++-
src/LLC/random.cpp | 12 +-
src/LLC/types/bignum.h | 1359 +++++-----
src/LLC/types/uint1024.h | 134 +-
src/LLD/cache/binary_lru.h | 472 ++++
src/LLD/cache/template_lru.h | 482 ++++
src/LLD/global.cpp | 175 +-
src/LLD/include/address.h | 76 +
src/LLD/include/enum.h | 55 +
src/LLD/include/global.h | 45 +-
src/LLD/include/journal.h | 52 -
src/LLD/include/ledger.h | 474 +++-
src/LLD/include/legacy.h | 156 ++
src/LLD/include/local.h | 103 +-
src/LLD/include/register.h | 211 +-
src/LLD/include/trust.h | 80 +
src/LLD/include/version.h | 6 +-
src/LLD/keychain/hashmap.h | 872 +++++++
src/LLD/keychain/hashtree.h | 392 +++
src/LLD/templates/hashmap.h | 240 --
src/LLD/templates/key.h | 165 +-
src/LLD/templates/pool.h | 337 ---
src/LLD/templates/sector.h | 1360 +++++---
src/LLD/templates/transaction.h | 120 +-
src/LLP/baseaddress.cpp | 564 ++++
src/LLP/corenode.cpp | 176 ++
src/LLP/ddos.cpp | 143 ++
src/LLP/hosts.cpp | 171 +-
src/LLP/include/baseaddress.h | 355 +++
src/LLP/include/global.h | 29 +
src/LLP/include/hosts.h | 147 +-
src/LLP/include/inv.h | 131 +-
src/LLP/include/legacy.h | 327 ---
src/LLP/include/legacyaddress.h | 92 +
src/LLP/include/manager.h | 269 ++
src/LLP/include/network.h | 263 +-
src/LLP/include/permissions.h | 32 +-
src/LLP/include/port.h | 99 +
src/LLP/include/tritium.h | 310 ---
src/LLP/include/trustaddress.h | 137 +
src/LLP/include/version.h | 12 +-
src/LLP/inv.cpp | 215 +-
src/LLP/legacy.cpp | 608 ++++-
src/LLP/legacyaddress.cpp | 85 +
src/LLP/manager.cpp | 433 ++++
src/LLP/miner.cpp | 318 +++
src/LLP/network.cpp | 545 ----
src/LLP/packets/http.h | 193 ++
src/LLP/packets/legacy.h | 185 +-
src/LLP/packets/packet.h | 101 +-
src/LLP/packets/tritium.h | 130 +-
src/LLP/rpcnode.cpp | 198 ++
src/LLP/socket.cpp | 194 +-
src/LLP/templates/connection.h | 329 ++-
src/LLP/templates/data.h | 342 ++-
src/LLP/templates/ddos.h | 227 +-
src/LLP/templates/events.h | 8 +-
src/LLP/templates/server.h | 742 +++++-
src/LLP/templates/socket.h | 103 +-
src/LLP/templates/types.h | 41 -
src/LLP/time.cpp | 179 ++
src/LLP/tritium.cpp | 168 +-
src/LLP/trustaddress.cpp | 184 ++
src/LLP/types/corenode.h | 90 +
src/LLP/types/http.h | 256 ++
src/LLP/types/legacy.h | 444 ++++
src/LLP/types/miner.h | 170 ++
src/LLP/types/rpcnode.h | 114 +
src/LLP/types/time.h | 98 +
src/LLP/types/tritium.h | 341 +++
src/Legacy/addressbook.cpp | 244 ++
src/Legacy/ambassador.cpp | 87 +
src/Legacy/basickeystore.cpp | 117 +
src/Legacy/create.cpp | 518 ++++
src/Legacy/crypter.cpp | 252 ++
src/Legacy/cryptokeystore.cpp | 380 +++
src/Legacy/db.cpp | 763 ++++++
src/Legacy/include/ambassador.h | 132 +
src/Legacy/include/constants.h | 62 +
src/Legacy/include/create.h | 122 +
src/Legacy/include/evaluate.h | 173 ++
src/Legacy/include/money.h | 103 +
src/Legacy/include/signature.h | 124 +
src/Legacy/keypool.cpp | 305 +++
src/Legacy/keystore.cpp | 42 +
src/Legacy/legacy.cpp | 607 +++++
src/Legacy/locator.cpp | 53 +
src/Legacy/mempool.cpp | 218 ++
src/Legacy/merkle.cpp | 58 +
src/Legacy/minter.cpp | 808 ++++++
src/Legacy/outpoint.cpp | 35 +
src/Legacy/reservekey.cpp | 74 +
src/Legacy/script.cpp | 391 +++
src/Legacy/signature.cpp | 225 ++
src/Legacy/transaction.cpp | 1080 ++++++++
src/Legacy/txin.cpp | 83 +
src/Legacy/txout.cpp | 91 +
src/Legacy/types/inpoint.h | 80 +
src/Legacy/types/legacy.h | 158 ++
src/Legacy/types/locator.h | 155 ++
src/Legacy/types/merkle.h | 147 ++
src/Legacy/types/minter.h | 332 +++
src/Legacy/types/outpoint.h | 146 ++
src/Legacy/types/transaction.h | 470 ++++
src/Legacy/types/txin.h | 176 ++
src/Legacy/types/txout.h | 171 ++
src/Legacy/wallet.cpp | 2165 ++++++++++++++++
src/Legacy/wallet/accountingentry.h | 115 +
src/Legacy/wallet/addressbook.h | 202 ++
src/Legacy/wallet/basickeystore.h | 149 ++
src/Legacy/wallet/crypter.h | 207 ++
src/Legacy/wallet/cryptokeystore.h | 242 ++
src/Legacy/wallet/db.h | 526 ++++
src/Legacy/wallet/keypool.h | 239 ++
src/Legacy/wallet/keypoolentry.h | 82 +
src/Legacy/wallet/keystore.h | 174 ++
src/Legacy/wallet/masterkey.h | 105 +
src/Legacy/wallet/output.h | 93 +
src/Legacy/wallet/reservekey.h | 136 +
src/Legacy/wallet/wallet.h | 1179 +++++++++
src/Legacy/wallet/walletaccount.h | 72 +
src/Legacy/wallet/walletdb.h | 585 +++++
src/Legacy/wallet/walletkey.h | 76 +
src/Legacy/wallet/wallettx.h | 570 +++++
src/Legacy/walletdb.cpp | 830 ++++++
src/Legacy/wallettx.cpp | 600 +++++
src/TAO/API/RPC/account.cpp | 1592 ++++++++++++
src/TAO/API/RPC/daemon.cpp | 97 +
src/TAO/API/RPC/info.cpp | 279 ++
src/TAO/API/RPC/network.cpp | 450 ++++
src/TAO/API/RPC/rpc.cpp | 218 ++
src/TAO/API/RPC/wallet.cpp | 486 ++++
src/TAO/API/accounts/accounts.cpp | 85 +
src/TAO/API/accounts/create.cpp | 133 +
src/TAO/API/accounts/login.cpp | 122 +
src/TAO/API/cmd.cpp | 291 +++
src/TAO/API/include/accounts.h | 163 ++
src/TAO/API/include/cmd.h | 45 +
src/TAO/API/include/ledger.h | 78 +
src/TAO/API/include/register.h | 129 +
src/TAO/API/include/rpc.h | 924 +++++++
src/TAO/API/include/rpcserver.h | 77 -
src/TAO/API/include/supply.h | 129 +
src/TAO/API/ledger/create.cpp | 96 +
src/TAO/API/rpcdump.cpp | 234 --
src/TAO/API/supply/supply.cpp | 321 +++
src/TAO/API/types/base.h | 122 +
src/TAO/API/types/exception.h | 62 +
src/TAO/API/types/function.h | 92 +
src/TAO/Ledger/block.cpp | 441 ++--
src/TAO/Ledger/chainstate.cpp | 154 ++
src/TAO/Ledger/checkpoints.cpp | 145 +-
src/TAO/Ledger/create.cpp | 341 +++
src/TAO/Ledger/difficulty.cpp | 958 +++----
src/TAO/Ledger/global.cpp | 287 ---
src/TAO/Ledger/include/block.h | 57 -
src/TAO/Ledger/include/chainstate.h | 80 +
src/TAO/Ledger/include/checkpoints.h | 107 +-
src/TAO/Ledger/include/constants.h | 104 +
src/TAO/Ledger/include/create.h | 55 +-
src/TAO/Ledger/include/difficulty.h | 169 +-
src/TAO/Ledger/include/enum.h | 35 +
src/TAO/Ledger/include/global.h | 370 ---
src/TAO/Ledger/include/prime.h | 145 +-
src/TAO/Ledger/include/supply.h | 224 +-
src/TAO/Ledger/include/time.h | 39 +-
src/TAO/Ledger/include/timelocks.h | 108 +
src/TAO/Ledger/include/trust.h | 226 +-
src/TAO/Ledger/mempool.cpp | 179 ++
src/TAO/Ledger/prime.cpp | 237 +-
src/TAO/Ledger/state.cpp | 743 ++++++
src/TAO/Ledger/supply.cpp | 359 +--
src/TAO/Ledger/transaction.cpp | 200 +-
src/TAO/Ledger/tritium.cpp | 655 +++++
src/TAO/Ledger/trust.cpp | 1080 +-------
src/TAO/Ledger/trustkey.cpp | 167 ++
src/TAO/Ledger/types/block.h | 449 ++--
src/TAO/Ledger/types/data.h | 78 -
src/TAO/Ledger/types/locator.h | 109 -
src/TAO/Ledger/types/mempool.h | 244 ++
src/TAO/Ledger/types/sigchain.h | 120 +-
src/TAO/Ledger/types/state.h | 410 ++-
src/TAO/Ledger/types/transaction.h | 166 +-
src/TAO/Ledger/types/tritium.h | 181 ++
src/TAO/Ledger/types/trustkey.h | 236 ++
src/TAO/Operation/append.cpp | 111 +
src/TAO/Operation/authorize.cpp | 41 +
src/TAO/Operation/coinbase.cpp | 68 +
src/TAO/Operation/credit.cpp | 377 +++
src/TAO/Operation/debit.cpp | 119 +
src/TAO/Operation/include/enum.h | 80 +-
src/TAO/Operation/include/execute.h | 541 ++--
src/TAO/Operation/include/operations.h | 190 ++
src/TAO/Operation/include/stream.h | 108 +-
src/TAO/Operation/include/validate.h | 10 +-
src/TAO/Operation/register.cpp | 148 ++
src/TAO/Operation/transfer.cpp | 83 +
src/TAO/Operation/trust.cpp | 31 +
src/TAO/Operation/write.cpp | 113 +
src/TAO/Register/include/enum.h | 87 +-
src/TAO/Register/include/object.h | 119 -
src/TAO/Register/include/rollback.h | 41 +
src/TAO/Register/include/state.h | 296 ++-
src/TAO/Register/include/stream.h | 100 +
src/TAO/Register/include/verify.h | 43 +
src/TAO/Register/objects/account.h | 35 +-
src/TAO/Register/objects/escrow.h | 36 +-
src/TAO/Register/objects/order.h | 49 -
src/TAO/Register/objects/token.h | 62 +-
src/TAO/Register/objects/trust.h | 130 +
src/TAO/Register/rollback.cpp | 251 ++
src/TAO/Register/verify.cpp | 256 ++
src/Util/args.cpp | 229 +-
src/Util/base58.cpp | 394 +--
src/Util/base64.cpp | 164 ++
src/Util/config.cpp | 414 +--
src/Util/debug.cpp | 348 +--
src/Util/filesystem.cpp | 201 +-
src/Util/include/allocators.h | 64 +-
src/Util/include/args.h | 203 +-
src/Util/include/base58.h | 319 ++-
src/Util/include/base64.h | 196 +-
src/Util/include/config.h | 108 +-
src/Util/include/convert.h | 311 ++-
src/Util/include/debug.h | 229 +-
src/Util/include/fifo_map.h | 547 ++++
src/Util/include/filesystem.h | 104 +-
src/Util/include/hex.h | 182 +-
src/Util/include/json.h | 4678 ++++++++++++++++++++++ 
src/Util/include/memory.h | 37 +
src/Util/include/mmaplib.h | 246 +-
src/Util/include/mutex.h | 118 +-
src/Util/include/parse.h | 148 --
src/Util/include/runtime.h | 274 +-
src/Util/include/signals.h | 79 +-
src/Util/include/sorting.h | 35 +-
src/Util/include/string.h | 295 +++
src/Util/include/strlcpy.h | 30 +-
src/Util/include/urlencode.h | 90 +
src/Util/include/version.h | 33 +-
src/Util/macro/header.h | 12 +-
src/Util/memory.cpp | 33 +
src/Util/signals.cpp | 66 +
src/Util/templates/basestream.h | 243 ++
src/Util/templates/containers.h | 147 +-
src/Util/templates/mruset.h | 204 +-
src/Util/templates/serialize.h | 1363 +++++-----
src/Util/version.cpp | 64 +
345 files changed, 58627 insertions(+), 25757 deletions(-)

As you can see, there have been an additional 58,627 lines of code added since the last TAO update, which equates to roughly 3 months of solid coding since the last git pull. This averages out to around 651 lines of new code every day since the end of October. Anyhow, let’s begin by first taking a look at the acronym of our framework: TAO.

This word comes from a classical Chinese text, the Tao Te Ching, which has been studied by some of the greatest philosophers of our time. It represents an idea that contains the principles of balance and order in the greater concepts of the mind.

“The Tao is hidden, and has no name; but it is the Tao which is skillful at imparting (to all things what they need) and making them complete”

Lower Level Library

The Lower Level Library (LLL) is the foundation of the TAO, and comprises three components: Crypto, Database, and Protocol (the network layer).

Lower Level Crypto

There is not much to report here, other than cleaning up some memcpy from the Skein and Keccak functions, along with the research of some promising candidates for a lattice based signature scheme. Right now the NIST competition is in the first round of the review process. We will observe how this evolves over the next year to identify which candidates to experiment with.

In the future, we may try out hybrid signature schemes on a test network to see the effectiveness of lattice and elliptic curve hybrid signatures. The data and computational overhead would be higher, but the security parameters of our public keys would inherit a higher degree of quantum resistance compared to that provided by our current use of Skein and Keccak.

Lower Level Database

The following new components have been added to the Lower Level Database:

  • Binary Hash Map  — This is a hashmap with very low memory footprint and on disk indexing, which handles bucket collisions at O(n) reverse iterations (linear time), and is designed for write intensive applications. The write capacity has peaked at around 450k writes / second, with reads peaking at 25k reads / second from disk, and 1.4m reads per second if cached.
  • Binary LRU Cache  — LRU stands for Least Recently Used, meaning that the cache keeps only the elements that have been used most recently and discards those that have gone the longest without being accessed. This makes for an efficient cache implementation compared to FIFO (First In, First Out).
  • Transaction Journal — This is an anti-corruption measure for handling an ACID transaction, which recovers the database from invalid states in the case of power failures, program crashes, or random restarts.

Each one of these components is part of the modular framework, which you can see in the src/LLD/cache, src/LLD/keychain, and src/LLD/templates folders. The most exciting piece is the addition of the Transaction Journal. Before I give a deeper explanation of this, let us review what is meant by the term ‘ACID’.

  • Atomicity — All transactions are seen as individual units that together must complete as a whole.
  • Consistency — All transactions must bring the database from one consistent state to another.
  • Isolation — Transaction reads and writes under concurrent execution must leave the database in a valid state, as if they were being processed in series.
  • Durability — Once a transaction is committed, it must stay so even in the event of a power failure. This usually means committing the transaction to disk.

Fun huh? I can explain this some more. Think of a database transaction as a commitment of many pieces of data that synchronize together in an amalgamation of information. To understand this better, let us use a real-life example: Kim will only give John an apple if Carry gives Sue a peach. If this were a database transaction, all the prerequisites would need to be committed together, and if any of them were to fail, the entire transaction would fail.

Let’s combine this with an ACID expression.

  • Atomicity — Kim, John, Carry, and Sue (individual units) exchanging fruit (the whole).
  • Consistency — Carry -> give Peach to Sue -> then Kim -> gives John an Apple
  • Isolation  — If Carry and Kim both execute the giving of their peach and apple at close to the same time, the ordering must still follow the consistency sequence, which means the peach must be given to Sue before the apple is given to John.
  • Durability  — If Carry and Kim agree to the exchange, but never fully execute it by exchanging the peach and the apple due to an error, such as Kim forgetting the apple, then the apple and the peach may never reach John or Sue. In this case, the commitment existed (in memory), but it never obtained durability, since the physical exchange did not complete.

I hope the above helps you understand the importance of an ACID transaction, of which one of the most important pieces is the ‘Durability’ component. When implemented with the proper logic, this can result in a database that cannot be corrupted, even under conditions of power failure. Let me explain how this is achieved.

The Transaction Journal

Before the implementation of the Transaction Journal, every sequence of a transaction was executed in memory, meaning the database only recorded the new state, along with its pending disk writes, once the transaction had committed. Transaction journaling introduces an on-disk checkpointing system that detects whether there was an interruption during the transaction commit process. When the database re-initializes, it is able to detect any corruption, allowing the journal to be used to restore the database to the most recent transaction checkpoint, even across many database instances. Therefore, at the sacrifice of a bit of speed, we can achieve higher levels of durability for the database engine. The latest statistics in our 100k read-and-write test ran as low as 0.33 seconds with the binary hash map, down from 0.86 seconds when using the binary file map.
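The commit sequence can be sketched like so. This is an illustrative write-ahead pattern under assumed names, not the LLD code itself:

#include <cstdio>
#include <fstream>
#include <string>
#include <vector>

struct Write { std::string strKey, strValue; };

// Checkpoint pending writes to a journal on disk before touching the main
// database, so an interrupted commit can be replayed after a restart.
void CommitWithJournal(const std::vector<Write>& vTx, const std::string& strJournal,
                       void (*ApplyToDatabase)(const Write&))
{
    {   // 1. Checkpoint: persist every pending write to the journal first.
        std::ofstream journal(strJournal, std::ios::trunc);
        for(const Write& w : vTx)
            journal << w.strKey << '\n' << w.strValue << '\n';
        journal << "COMMIT\n"; // marker: the journal is complete and replayable
    }   // closed (and flushed) here; a production engine would also fsync

    // 2. Apply the writes to the main database. A crash anywhere in this
    //    loop is recoverable, because the journal can be replayed in full.
    for(const Write& w : vTx)
        ApplyToDatabase(w);

    // 3. Only after a successful commit is the journal discarded.
    std::remove(strJournal.c_str());
}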

Lower Level Protocol

I’m sure many of you will remember that our last test ran as high as 200,000 requests / second. I’m happy to report that the new numbers stand at:


A request is a message sent from one computer to another, generally asking for a piece of data on the remote computer, such as a web page from a web server. Our latest test above shows the peak performance of the Lower Level Protocol at 452,171 requests / second, more than double the performance of the last test we submitted. The above demonstrates the capabilities of the Network layer without Ledger layer validation, and confirms that the network can handle very large workloads. It is important to have efficiency in all parts of a system in order for it to scale effectively. The efficiency of an application comes directly from the level of physical resources required to perform the task at hand.

Ledger

The ledger contains two components to its processing: the transaction objects, and the blocks that act to commit transactions to disk, and therefore to the database. Think of any blockchain as a verification database system, where the data is required to be processed before it is allowed to be written to disk. Along with this pre-processing, every single node in the network must agree on the outcome of the process, arriving at the same state in a synchronized ACID transaction which is carried by a block. In the case of Nexus, we follow a similar model; however, we perform the Consistency pre-processing before allowing for synchronized Isolation and Atomicity, and perform the post-processing verification afterwards as the final block receipt, allowing for Durability.

A Tritium transaction object contains the ledger components for pre- and post-processing, the register pre-states and post-states, and finally the operations payload that is responsible for mutating the state.

Post-Processing

The software stack for Tritium has come a long way in recent months. Now that we have a foundation provided by the Lower Level Database and Lower Level Protocol, it has been fun to plug in some of the features that form the layers above. Below is a more recent stress test that verifies a block at full capacity (approximately 2 MB). This block, as you can see, contained 32,252 transactions and was processed in 647 ms.

This test measured the time required for ‘post-processing’, which is the processing performed after a block is received, as it is added to the chain. The time required for ‘pre-processing’, the processing performed before a block is received, was not included in this benchmark. Let’s dig a little deeper into what all this means, and how these specific elements are prerequisite to Amine.

Pre-Processing

Pre-processing is the processing that is required for an object before it becomes a part of the ledger. This will generally be checking for conflicts within the database system, such as double spends or register pre-state conflicts, followed by the more complex pre-processing of signature verification. It is important to note that our tests have shown signature verification to be the biggest bottleneck in the processing of any transaction or contract in Tritium. Since we use a 512-bit standard for key sizes, which raises our security to around 2²⁵⁶ (versus 2¹²⁸ for Bitcoin, since ECC only retains about half of the key length in usable security due to different types of attacks), we have more signature data to process when a transaction is received.
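As a rough rule of thumb (and the source of the halving mentioned above), generic attacks on elliptic curve keys cost about the square root of the keyspace:

\[
\text{security}(k) \approx \sqrt{2^{k}} = 2^{k/2}, \qquad 2^{512/2} = 2^{256}\ \text{(Nexus)}, \qquad 2^{256/2} = 2^{128}\ \text{(Bitcoin)}.
\]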

Tritium

Preprocessing in Tritium will be performed through the memory pool. Tritium blocks do not include the whole transaction object; they only contain references to the objects that they are committing to disk (think of a block as a sort of ACID transaction). This means that if a block is submitted containing a txid (transaction i.d.) that has never been publicly known by the nodes on the network, this block will not be able to propagate until the receiving nodes are able to run the pre-processing for that particular txid. Consequently, if a miner tried to submit a malicious transaction in a block as an attempt to double spend a transaction already accepted in the memory pool, they would find it very difficult to get it added to the main chain (i.e. verified by validating nodes). This is because none of the nodes would have the pre-processing data required to accept the block, and the conflicting transaction would in most cases fail to be accepted anyway, since it directly conflicts with another transaction that has already passed pre-processing.

Amine

Processing in Amine will be aggregated into two processing layers, namely Trust and Miners. This means that Trust nodes will mainly provide pre-processing to the network, with miners providing the post-processing.

Post-Processing

Post-Processing is the processing required when a block is received, in order to fully commit components of the data, and change the register pre-states into their post-states with verified checksums. The example above was pure post-processing, which showed that our post-processing layers scale quite nicely, with a maximum of around 40–50k tx/s if split into a two-tier (pre/post) processing system. Our two-tier processing system will be the main aspect of the Amine architecture upgrade, along with additional operations and registers, and deeper/more advanced LISP functionality (we will explain how LISP shards will function in a later update).

Obsidian

With Obsidian, the two-tier process will become a three-tier process, which when integrated will have pre-processing (L1 processing channels), post-processing (L2 trust channels), and hardening (L3 distributed mining). It is important to understand how the present Tritium architecture is setting the foundation for all that is to follow. As many of you will know, as with any undertaking, once the foundation is set, it is not easily changed unless one takes apart the entire system. This is why it was so important to give Tritium the time it needed.

Register

Register pre-processing and post-processing is divided into two tiers as well: the pre-state and the post-state. This is important to know, so that you can understand how the registers act to modulate their states. Understanding this will help you discover some of the benefits of pre-states in a chain, and how a node can prune prior pre-state data based on the verification of a transaction in a block object. A register post-state could be considered one individual unit of Atomicity.

Pre-States

Register pre-states contain the current database state of the given register before the operations execute. The pre-state is packaged into a binary format inside the transaction object as the means of verifying that the initial claimed state is the same state that the network currently holds for the register.

The benefits of this are two-fold: one, you are able to roll back the chain without having to iterate back an unspecified number of blocks to find the previous state mutation of the register; and two, you are able to know the state of the register without having to calculate all its previous states. This adds additional benefits, such as being able to run nodes in ‘lighter’ mode, where nodes are only required to verify chain headers (which contain references to all of the transactions in a block) to know that a transaction with a given pre-state was included in a block. This allows for ‘light’ verification of a pre-state, i.e. that the transaction was confirmed with the consensus of the network at a given block height, and therefore is indeed valid.

With the growth of the network and size of the ledger (one aspect of scaling to consider), we can prune the data held by the ledger by removing old pre-states, which lowers the data requirement and creates a more efficient and sustainable network over an extended period of time. By implementing this architecture now, we won’t end up with an over-baked architecture in the future that can’t handle the overwhelming volume of data that has been processed in the past.

When you hear of projects boasting 100k tx/s, or even 1M tx/s, let’s look at what this really entails:

  • A Tritium transaction will be a minimum of 144 bytes, and a maximum of 1168 bytes.
  • Let us take a best-case scenario, with a normal OP::DEBIT / OP::CREDIT being around 24 bytes, giving an example transaction of 166 bytes.
  • Let us now multiply this number by 100,000 transactions which equals 16,600,000 bytes per second, or 16 MB per second. This means your internet connection would need to support at a minimum 16MB per second, or a 128 Mbps connection.
  • Now beyond that, let us look at the damage as it compounds. 16MB per second multiplied by 86,400 seconds (1 day) is 1,382,400 MB, which is 1.3 TB per day. Multiply this by 365 for a one year period and we have 504 TB per year consumed. This is obviously not possible on consumer grade hardware.

The above proof shows that claims of such grandiose scale are most likely rooted in either folly or malarkey. For us, our pre-processing and post-processing systems, LISP data shards, Lower Level Database, and register pre-states will help scaling significantly, but there is no way of knowing the exact scale that can be achieved until it is demonstrated in real-world conditions, over a long period of time. Right now, our results are promising, seeing that we are achieving a reasonable scale in post-processing, and developing architecture that is able to shard the pre-processing to exceed the 4.3k tx/s bottleneck from signature verification.

Post-States

Every register has a pre-state which is used by the operations layer for execution to move the register into its post-state. A post-state is what is recorded in the register database as the new state of the register after the transaction has completed. In order to not weigh down the register script (some of the binary data packed into a transaction), we included what is called a post-state checksum at the end of a register pre-state. Therefore, any validating node will compare their calculated post-state to the post-state checksum that was included with the submitted transaction.

The benefit of this is that a transacting node is required to do the calculations itself, to prove that it has done honest work. Other validating nodes verify this calculation by comparing their new register state checksum to the post-state checksum included in the register.

History

Those who are able to house extra data on their hard drive can enable their node to show the history of registers without much processing required. Since the keychain object that is used for the register database is a binary hash map, you can enable it to operate in APPEND mode, which will append new data to the end of the corresponding database files, enabling a user to reverse iterate from the end of a hashmap collision, which will show the sequence of the register history. This is very useful for registers used in supply chains or other ‘history’ related chains, such as the transfer of ownership of titles and deeds, for example.

Types

There are a few different types of registers, which determine what types of operations can be executed on them. As you know from the Tritium white paper, there are object registers and state registers. Let’s briefly explain what each one is for:

State

A state register is one that holds the state for a component of an external application, with no specification on the data format, which means that specialized operations cannot be applied to these registers, only primitive ones. A short code sketch of the three flavours follows the list below.

  • TYPE::RAW  — A raw register is a register with a given number of bytes that can be written to or appended to at any time. It is the most versatile type of register, with no security parameters applied to it beyond ownership: each WRITE is recorded immutably in the ledger’s history, but because the register is RAW, its contents can be overwritten. A WRITE is only permissible if done from the signature chain that is the current owner.
  • TYPE::APPEND  — An append register is similar to a raw register in that it is created with a given number of bytes, but this type of register can only have an APPEND operation applied to it to change the data state. This means that in the database itself, the original data always remains, and so does the history of all APPEND operations. A WRITE operation on this type of register will fail, even if done by the current owner. Therefore, an APPEND register has security parameters associated with it that make it useful for applications that would like to be able to update a register without losing the data that existed before. This makes every APPEND immutable, while the register as a whole can still be updated.
  • TYPE::READONLY — This type of register is useful for ‘write once’ data. It is only possible to use the ‘OP::REGISTER’ operation for this type, since it can only be written to once. This would be similar to a ‘const’ type in any language, and it has security properties that are useful for certificates of authenticity, titles, deeds, or contracts that the creator/publisher would like never to be modified.
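A compact sketch of the three flavours and their write rules. This is illustrative; the enum and method names are hypothetical, not the Nexus source:

#include <cstdint>
#include <stdexcept>
#include <vector>

enum class RegisterType : uint8_t { RAW, APPEND, READONLY };

struct StateRegister
{
    RegisterType nType;
    uint64_t hashOwner;            // signature chain that owns this register
    std::vector<uint8_t> vchState; // current state data

    // OP::WRITE: only the owner, and only on TYPE::RAW.
    void Write(uint64_t hashCaller, const std::vector<uint8_t>& vchData)
    {
        if(hashCaller != hashOwner)
            throw std::runtime_error("WRITE: caller is not the owner");
        if(nType != RegisterType::RAW)
            throw std::runtime_error("WRITE: only valid on a RAW register");
        vchState = vchData; // overwrite in place
    }

    // OP::APPEND: only the owner, and only on TYPE::APPEND.
    void Append(uint64_t hashCaller, const std::vector<uint8_t>& vchData)
    {
        if(hashCaller != hashOwner)
            throw std::runtime_error("APPEND: caller is not the owner");
        if(nType != RegisterType::APPEND)
            throw std::runtime_error("APPEND: only valid on an APPEND register");
        vchState.insert(vchState.end(), vchData.begin(), vchData.end()); // original retained
    }

    // TYPE::READONLY accepts neither: it is written once by OP::REGISTER.
};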

Objects

Object registers are more specialized, as it is necessary for the operations layer to be able to recognize the data type that they contain. This is useful for specialized operations that require knowledge of the format of the data that the register contains. The following Objects are defined and usable in the current source code.

  • TYPE::ACCOUNT  — This is a specialized register that contains details regarding someone’s account. An account can contain the balance of any type of token, as it is denoted by a token identifier. Token identifier 0 is a reserved identifier and is used for the native NXS token.
  • TYPE::TOKEN  — This is a specialized register that contains the details of a token, and claims that token identifier for use of the specific token. This register contains information regarding the significant figures of a token, and other parameters to define the total supply, and the total supply that has been made available to the public.

Operations

The operations layer now contains a foundational set of processes, which act as the ‘Primitive’ operations. These together allow the creation of records, history, tokens, transfers, and non-fungible tokens. Let us go through each operation one by one, to explain what each one is capable of doing.

Register

This operational code creates a new register with a memory address assigned to it. The memory address must be unique, and will index the data of the register. Think of it as an abstracted memory address, like the one obtained by taking the memory location of a variable (in C/C++, this would be with the symbol ‘&’, which is an abstraction of a machine address), except that it lives on the Nexus Blockchain. This will be further abstracted towards Amine, when addresses will not only be ‘locally accessible’ but also ‘network accessible’. Though replicating the exact same state across the system does provide added levels of redundancy, it evidently limits the potential of the system to scale. Sharding the data workload into ‘network accessible’ groups is therefore necessary, where specialized processing is performed by different groups and types of nodes, whilst retaining the levels of redundancy that replication provides.

Machine-specific addressing is one of the innovations designed to solve the data overhead problem outlined in the section on scaling above. The two most notable bottlenecks that limit scaling are signature verification and the increasing data overhead that compounds very quickly as volume increases. A scalable system is not one that can simply ‘process’ X many transactions per second, but one that can still function after processing X many transactions per second for years on end. Even if one were to use conventional data structures that go as low as O(log n), when the system scales to billions of keys the processing can still become quite large, especially when indexing from disk.

Write

This primitive operation initiates a ‘write’ on a register, which overwrites all the data of the pre-state with the new data of the post-state. It has certain limitations: the register must be of the TYPE::RAW type, and the total number of bytes being written must be the same as the register held prior. This type of operation is generally best suited for applications that are submitting raw data into the ledger, to enable the immutable storage of certain events, such as submitting a proof hash into the public ledger from a hybrid system, or having an application require certain JSON to be submitted into a register.

Append

This primitive operation acts on a register of TYPE::APPEND, and adds data to the end of the register without modifying the original data. Useful examples of this operation would be flagging a title that is claimed by an insurance company, or updating specifics about an item along a supply chain. Since the original data is always retained in the append sequence, updates to a register via OP::APPEND provide a useful audit and history mechanism.

Transfer

This allows the ownership of a register to be transferred from one signature chain to another. A transfer can also be instantiated to another register such as a TYPE::TOKEN if someone would like a token to govern the ownership of a register. This is how joint ownership can be provided between individuals, as the TYPE::TOKEN then represents the ownership. This can also be useful for showing the chain of custody between parties of a supply chain. If one wants to create non-fungible tokens, this would be the method that is used to transfer the ownership of the non-fungible token, with the non-fungible token generally being a TYPE::READONLY register with an identifier specifying parameters regarding an object. This could be a simple digital item with JSON specifications, and the transfer operation would be the proof of ownership of that digital item or non-fungible token.

Debit

This operation is responsible for the commitment of funds from one account to another. It is quite like the ‘authorize’ of a debit card transaction. When this operation is instantiated, the funds do not move to the receiving account until the other user (the receiver) issues their credit. The acceptance of the transaction by the receiver completes the commitment. This operation works only on a TYPE::ACCOUNT object register, and can handle the debiting from any type of token by any identifier.

Credit

This operation is responsible for the final commitment of funds from one account to another. Together the debit and credit produce a ‘two-way signature’, which reduces the chance of funds being lost due to the use of an incorrect address. If the funds are not accepted by the receiver within a specified time window, they become redeemable by the OP::DEBIT issuer. Therefore, funds will never be lost if sent to an invalid address. An additional benefit of this is allowing a user to reject funds sent to their account if there is question of who the funds came from. It also provides the option to generate a whitelist of addresses from which the user will automatically accept transactions. This is important for monetary safety, as if you receive a mysterious deposit in your account, there is no knowing who sent it or why.
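The commitment-then-claim flow might be sketched like this. All names here are hypothetical; the real operations act on Account registers through the operations layer:

#include <cstdint>
#include <map>
#include <stdexcept>
#include <string>

struct PendingDebit { uint64_t hashFrom, hashTo, nAmount, nExpires; };

std::map<std::string, PendingDebit> mapPending; // outstanding debits by txid
std::map<uint64_t, uint64_t> mapBalance;        // account -> balance

// OP::DEBIT commits the funds but does not deliver them.
void Debit(const std::string& txid, uint64_t hashFrom, uint64_t hashTo,
           uint64_t nAmount, uint64_t nNow)
{
    if(mapBalance[hashFrom] < nAmount)
        throw std::runtime_error("insufficient balance");
    mapBalance[hashFrom] -= nAmount;
    mapPending[txid] = {hashFrom, hashTo, nAmount, nNow + 86400}; // e.g. a 24h window
}

// OP::CREDIT completes the two-way commitment; after the window closes,
// only the issuer may redeem the unclaimed funds.
void Credit(const std::string& txid, uint64_t hashClaimant, uint64_t nNow)
{
    const PendingDebit& debit = mapPending.at(txid);
    if(nNow <= debit.nExpires && hashClaimant == debit.hashTo)
        mapBalance[debit.hashTo] += debit.nAmount;   // receiver accepts
    else if(nNow > debit.nExpires && hashClaimant == debit.hashFrom)
        mapBalance[debit.hashFrom] += debit.nAmount; // issuer redeems
    else
        throw std::runtime_error("credit not permitted");
    mapPending.erase(txid);
}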

What are the next operations?

The next two operations are very important, as they unlock the ‘validation scripts’ which act as small computer programs that define the movement of NXS. Validation scripts enable the full potential of the operations layer, allowing functions such as the decentralized exchange of assets to tokens, tokens to tokens, irrevocable trusts, programmable accounts, etc.

Validate

The validate function will execute the corresponding OP::REQUIRE with the necessary parameters. If the validation evaluates to true, then the requirement will be satisfied and the validated transaction will execute.

Require

This will set a boolean expression that will be required to evaluate to true in order for a transaction to be claimable. Such an example would be OP::REQUIRE TIMESTAMP GREATER_THAN 1549220657, meaning that a corresponding transaction would not be able to execute until the timestamp has been reached.
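That example condition could be evaluated along these lines. This is a toy evaluation, not the validation-script interpreter:

#include <cstdint>
#include <ctime>

// OP::REQUIRE TIMESTAMP GREATER_THAN 1549220657, as a boolean expression.
bool Evaluate(uint64_t nThreshold, uint64_t nTimestamp)
{
    return nTimestamp > nThreshold;
}

int main()
{
    uint64_t nNow = static_cast<uint64_t>(std::time(nullptr));
    // The corresponding transaction only becomes claimable once this is true.
    return Evaluate(1549220657, nNow) ? 0 : 1;
}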

Introducing the DEX

The DEX will work as a native extension of the OP::REQUIRE and OP::VALIDATE operations. It can be thought of as this:

  • User A wishes to sell 55 of Token Identifier 77. They want to sell it for Token Identifier 0 (NXS).
  • They choose their price: OP::DEBIT <from-account> <claim-account> 55 OP::REQUIRE TIMESTAMP LESS_THAN 1549220657 AND OP::DEBIT <my-account> 10.
  • In this above script <my-account> will be an account with identifier 0, and <from-account> will be of token identifier 77.
  • User B wishes to buy the 55 of Token ID 77. They send a transaction such as: OP::VALIDATE <txid> OP::DEBIT <from-account> <to-account> 10
  • Since this includes an OP::VALIDATE, it triggers the validation of the corresponding OP::REQUIRE submitting the parameters it is verifying. Since the OP::DEBIT was one of the parameters to the OP::REQUIRE, this will evaluate to true, satisfying the validation script.
  • User A can now submit a transaction: OP::CREDIT <txid> <claim-account> 55
  • User B can now submit a transaction OP::CREDIT <txid> <claim-account> 10

In the above sequence, 4 transactions are executed to facilitate the decentralized exchange between two different types of tokens. This process can also be programmed for the decentralized exchange of an asset to a token, or even an asset to an asset. I will explain more on how this works and how we see the growth of the DEX in the next TAO update.

API

The API, as it stands, contains two types: Accounts and Supply. The implementation details for now are for the purpose of demonstration only, using only a simple combination of operations, such as OP::APPEND, OP::TRANSFER, and OP::REGISTER. Please keep your eyes peeled for additional API calls that will be shown in the API documentation. I will explain how to interact with the API below:

Use a web browser to access the JSON responses.

You can use a web browser to make API requests to your Tritium node. This is achieved by submitting a GET request to the API endpoint. This will always be the IP address of the node, and port 8080 followed by <api>/<method>.

An example would be:

http://localhost:8080/accounts/login?username=user&password=pass&pin=1234

The above request logs you into the API and returns a session identifier, which should be included in all subsequent requests to methods that require authorization. Your PIN is also required for any transaction requiring authorization, so that even if your username and password were compromised, your account could not be accessed without the PIN. This gives properties similar to the 2FA that most login systems utilize today.
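
For those scripting against the node rather than using a browser, the same request can be issued from Python. This is only a sketch: the endpoint and parameters come from the example above, but the response field names ("session", "genesis") are assumptions, so check the API documentation linked below for the exact schema.

import requests

# Log in; this is the same GET request as the browser example above.
resp = requests.get(
    "http://localhost:8080/accounts/login",
    params={"username": "user", "password": "pass", "pin": "1234"},
)
data = resp.json()
print(data)

# Reuse the returned session identifier (field name assumed) on
# subsequent calls that require authorization.
session = data.get("session")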

Create a login page in your website powered by the Tritium daemon

You can embed a custom HTML form into your website to use a Tritium daemon as a secondary login system that gives verification properties to your web service. In the future, a login over the API will also trigger a unique EID that is coupled with the login, making your service immune to IP spoofing. The API handles application/x-www-form-urlencoded, so make sure to include your parameters in your form as follows:

<form method="POST" action="http://localhost:8080/accounts/login">
<input type="text" name="username">
<input type="text" name="password">
<input type="text" name="pin">
<input type="submit">
</form>

The page you are sent to afterwards will include the JSON response data, containing the genesis ID and the session identifier to be used for all subsequent calls to the API that require authorization. This way you can give users secure access to their signature chains through the service node backing your online service. Importantly, this gives users a way to access their sig chain without needing to run a full node, and without giving up custodianship of their funds and account information.

Embed contracts into your web application.

Since the API supports application/x-www-form-urlencoded, you are able to embed any contract functionality into your existing web application, either by forwarding forms through the API with a forwarding URL to pass through, or by making custom forms that use the POST aspect of the API to process webforms. The above HTML example is a basic webform which can be integrated with your existing login system. To extend this, you can make calls to the API via AJAX or more complex forms inside your system. This means that to build with Nexus Advanced Contracts, all you need is to hire a web developer who is able to ‘plug and play’ the correct sequence of API calls into your web service.
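
The same form-encoded POST can also be issued server-side rather than from the browser. A minimal sketch, assuming the login endpoint shown above (Python's requests library sends data= as application/x-www-form-urlencoded by default):

import requests

# POSTs the same fields as the HTML form above, form-encoded.
resp = requests.post(
    "http://localhost:8080/accounts/login",
    data={"username": "user", "password": "pass", "pin": "1234"},
)
print(resp.json())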

Use contracts or tokens in your regular desktop application

The API also supports application/json, allowing requests via any of our provided Software Development Kits (SDKs) so that your native application can take advantage of the API. Currently, we provide a Python SDK for use in any external Python application, which can be found in the repository in the folder named ‘SDK’. We would like to encourage developers to build software development kits in their languages of choice for the API and contribute to the open source development of Nexus.
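
For a native application that prefers JSON over form encoding, the call might look like the sketch below. The endpoint and field names mirror the earlier examples and are assumptions here; the Python SDK in the repository wraps this pattern for you.

import requests

# Same login call, sent as application/json instead of form data.
resp = requests.post(
    "http://localhost:8080/accounts/login",
    json={"username": "user", "password": "pass", "pin": "1234"},
)
print(resp.json())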

Documentation

Please refer to the following API documentation for up-to-date documentation on all APIs and calls that are available:

https://github.com/Nexusoft/LLL-TAO/blob/master/docs/how-to-api.md

As any new call is implemented for -testnet or -private mode, the corresponding documentation will be included. Please give feedback if you find any information difficult to understand, and we will modify the documentation to communicate it more clearly.

Logical / Interface

We are making progress on the App Store, which will be a developer-friendly area to buy, sell and share Nexus apps. Our current design is ‘module’ based. However, this is only the first iteration of the App Store. We will give more details on how the App Store will develop, and how we will provide security to the applications it supports.

Request for new standards

Standards in the API and requests for new calls can be formally submitted and discussed via this mailing list: [email protected]. Requests to lower layers, such as new register types or operations, can be submitted to the same location. Please do give feedback if you find anything you believe could be improved.

Command-line Flags Available

The following flags are available for use with the Tritium Daemon. Some are experimental and are undergoing debugging, while others are hardened and are ready for use.

-fastsync (experimental) — this flag will reduce your required synchronization time by a factor of 2.

-beta  — sync your Tritium node on the mainnet with legacy rules. This will allow you to run a Tritium node on the mainnet, which gives you access to all the nice Tritium features such as sub-second load time, quick synchronization time, and database stability.

-private  — run your node in private mode to access the API functionality and build local contracts. Post-processing is done via a private block, and clears in sub-second intervals.

-legacy — use legacy-specific RPC formatting for nodes that need to retain backwards-compatible formatting.

-indexheight —  adds foreign indexes for all blocks by height as well as by hash. This allows blocks to be indexed by height from disk, but requires extra disk space.

-testnet —  run your node in testnet mode over LISP or the regular underlay. This will synchronize you to the test network, and require mining to produce valid blocks and commit post-processing data from your API calls.

Branching

Our repository has specific semantics for each branch. The following list will briefly describe the purpose of each branch, and what each means for your testing:

  • Personal — any branch that is named after a user, such as viz, jack, scottsimon, paul, or dino. We recommend NOT building from a personal branch, as the code you pull may be incomplete or in development.
  • Merging — this branch is used to merge code between developers. Any code that exists on merging is still considered ‘unstable’, so if you decide to test off of the merging branch, do so with a debugger (debug instructions below). We recommend NOT using this code unless you are a qualified tester or developer.
  • Staging  — this branch is used for pre-releases. This means that code is in Beta, and is ready for wider public testing. Once code reaches staging, we will periodically include pre-release candidates and binaries with revisions and stability fixes. This branch is for public testing before the release of official binaries.
  • Master  — this branch will be the least updated, so if you are looking for the most recent code, any of the aforementioned branches will keep you up to date. Code is only pushed to master when a FULL release is made, accompanied by a release candidate, binaries, and a change log and description. The code on master can only be merged from pre-releases in staging.

Debugging

  • First, you will need to have a debugger handy. If you are on Linux, make sure to have gdb installed. This can be installed via: sudo apt-get install gdb
  • On OSX, the debugger, named lldb, is included with your Xcode command-line tools.
  • Next, make sure to build the source clean by issuing this command: make -f makefile.cli clean
  • Next, compile it with: make -j 8 -f makefile.cli ENABLE_DEBUG=1
  • Once this completes, you will need to start Tritium under your debugger, such as: gdb nexus
  • This will then enter you into a new command-line console, in which you want to type: run -beta -fastsync -gdb
  • With the -gdb flag, the daemon will close when you press the return key, because the debugger generally catches all the signals before the application does.
  • If you ever run across a point where the program crashes, get the backtrace by issuing the following command: bt
  • Take this backtrace and submit it to the #dev channel in Slack for assessment.

If you have already been testing or are looking to start helping test the core, I would like to extend a big thank you for all your help!

Docker

Check out Docker if you want to deploy nodes over LISP. You can find the Docker documentation here:

https://github.com/Nexusoft/LLL-TAO/blob/master/docs/how-to-docker.md

Conclusion

Well, that is about all I have to report for now. I hope that you continue to watch the progress on our repositories, continue to give us feedback, and of course, have fun doing it! Remember, if you’re not having fun, you’re not doing what you love, so on that note, I will leave you to ponder what it is that brings you the greatest joy. In the meantime:

Let grace be our guide, Amen.

Cheers,

Viz.

Nexus Advanced Contracts

Enterprise adoption is instrumental to blockchain technology becoming mainstream, and Nexus Advanced Contracts are the next step in leading this progression. Existing Smart Contracts have experienced issues in relation to ease of use and scalability due to a Turing complete system. Addressing these issues, Nexus has produced what is in essence a ‘Register-based Virtual Machine’, set for release in January 2019 with the Tritium upgrade. Tritium will allow developers to access the technology of Advanced Contracts simply through an API set. Before an explanation of Advanced Contracts is given, some context will be provided as to how conventional Smart Contracts function.

Smart Contracts

Smart Contracts are self-executing. Their design is to enforce the terms and conditions of a contract through programmable logic, reducing the need for third-party intermediaries such as brokers and banks. Smart Contracts are an additional layer of processing above the ledger layer, i.e. what is known as ‘the blockchain’, and are comparable to small computer programs that hold a state of information. The calculations of the contract are carried out by the processing nodes of a blockchain, which change the state of the information. Given that this processing is carried out by distributed consensus, the state of a Smart Contract is immutable.

Bitcoin was the first cryptocurrency with built-in Smart Contract capabilities, which it calls ‘scripts’. Scripts are not Turing complete and contain byte code. Ethereum augmented these capabilities into its ‘Turing Complete Smart Contracts’, which are generic to developers’ needs. Ethereum gives developers more access to contract functionality on a blockchain through a custom programming language called Solidity, which is compiled into bytecode that runs on the Ethereum Virtual Machine (EVM). The EVM is a ‘Stack-based Virtual Machine’ that processes each instruction in turn.

Though very capable, Ethereum has experienced some issues in regard to security, performance, and ease of use, predominantly because of its Turing complete design. Some notable cases include the $75m DAO hack on Ethereum, and the $286m Parity bug. These vulnerabilities existed due to the large complexity of a Turing complete system, and the resulting difficulty of resolving bugs in a protocol written in immutable code. The complexity of operations that support universal computation or Turing complete designs also limits scalability. A universal system has a higher degree of complexity, and therefore cannot compete with technology that is designed for more specialized tasks. An example of this observation would be the comparison of a CPU (Central Processing Unit) with an ASIC (Application Specific Integrated Circuit) in the mining of cryptocurrency. A CPU can’t compete against a SHA256 miner, as its complexity and design are geared to support universal general computation, not specialized computation. A similar conclusion could be drawn when a comparison is made between the system design of Ethereum (universal) and Nexus (specialized).

Nexus Advanced Contracts

Nexus has developed a ‘Register-based Virtual Machine’, a specialized contracting engine with greater capabilities than the EVM. Unlike the EVM, which is defined by only two distinct layers of processing and is dependent on a Turing complete system, the Nexus contract engine is facilitated through the seven individual layers of the Nexus Software Stack, each designated to carry out specialized processes.

The third layer of processing is called the Register Layer. Here, the states of individual pieces of information contained by Advanced Contracts are recorded in architectural components called registers. Registers are used by typical computer processors to provide quick access to frequently used information or values. With respect to Nexus Advanced Contracts, each register is owned by a Signature Chain, so the ownership and write access of a register is validated by the second layer, the Ledger Layer. The fourth layer is the Operation Layer, which defines the rules of the state changes to a register, called ‘operations’. The operations are carried out by validating nodes that change the state of the registers by distributed consensus. The design provides the required functionality of a contract engine, without the excess complexity and complications of a Turing complete system.
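
As an aid to intuition only, the sketch below models the ownership check described above in Python. The Register class, its field names, and the op_write function are hypothetical illustrations, not the LLL-TAO implementation.

from dataclasses import dataclass

@dataclass
class Register:
    owner: str      # genesis ID of the owning Signature Chain
    state: bytes    # the recorded state

def op_write(register, caller, new_state):
    # An 'operation': ownership must validate before the state may change.
    if caller != register.owner:
        raise PermissionError("caller does not own this register")
    register.state = new_state

reg = Register(owner="genesis-a", state=b"deed: 123 Main St")
op_write(reg, "genesis-a", b"deed: transferred to genesis-b")  # succeeds
# op_write(reg, "genesis-b", b"...") would raise PermissionError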

The ownership of a register can be transferred, providing many proof-of-ownership use cases. Examples include titles, deeds, digital certificates and records, agreements, or any other digital means of representing tangible assets or time-stamped events. A register can also be owned and governed by another register, creating a relationship between many users. Relations can be used as proofs on the Operation Layer to provide additional functionality. An example of this would be a register that holds metadata representing the ownership of an item, itself being owned by another ‘token register’. The token ownership signifies partial ownership of the item, which opens further use cases such as royalty payments with split ownership.

Conditions or stipulations can also be coded into Advanced Contracts by validation scripts or Boolean logic. Validation scripts require a transaction to fulfill a certain set of conditions to execute, which allows a user to program in stipulations on the exchange of NXS, tokens or any other digital asset. This allows a user to void transaction orders, place time locks on funds, or exchange any digital asset without a central intermediary.

Advanced Contracts, which will be accessible through an API set, will be able to improve many existing processes, including digital ownership, tokenization of assets and enterprises, digital rights, royalty payments, supply chain management, escrow services, financial applications, legal documentation of digital signatures, and many more.

Standardization

The standards of object registers, operation codes, and API methods will be defined through working group consensus, to ensure a consistent connection between developers and users. Nexus borrows a model similar to that of the Internet Engineering Task Force (IETF), which provides the working groups for all RFC (Request for Comments) standards. This is important to drive a vibrant ecosystem forward. Just as we have seen with the success of the internet, we hope to continue this success in the next era of global connection: blockchain, artificial intelligence, and satellite communication.

Read more:

Nexus API article

Nexus Signature Chains article

Parity Bug

The DAO Hack

Building a sustainable supply chain in uncertain times

It is clear that organisations can only operate effectively with easy access to products and services. Likewise, no organisation can continue to grow if late payments and poor procurement processes remain in place. This is where blockchain technology can play a crucial role in both the modernisation and improvement of the logistics and operations which are vital to the performance of supply chain systems.

Read the full article here: https://sctimes.io/news/article.aspx?tid=7&aid=6061

Developer-Friendly APIs — Nexus Blockchain

With the highly anticipated release of the Nexus Tritium Mainnet scheduled for the end of January 2019, application developers will be able to interact with the functionalities of the Nexus blockchain through an easy-to-use, feature-rich API set. APIs will create user-friendliness for developers, who will be able to build in a wide range of languages, and interoperability for existing private systems to interact with the Nexus blockchain. Nexus has designed its software stack based on the Open Systems Interconnection (OSI) network reference model, with the fifth layer as the API layer.

Nexus software stack

What is an API?

An API is an Application Programming Interface. While a user interacts with a system through a user interface, an API allows developers to interact through a programmatic interface. The way this works is that the API provides a list or set of simple commands that execute a series of operations, which would otherwise require specialist programming knowledge. This allows a developer to request or submit data to a system providing functionality to a higher-level application. For example, Facebook’s Graph API allows access to “Login with Facebook” and other features of their system.

Hybrid Blockchain

The distributed validation provided by a public blockchain or Distributed Ledger Technology (DLT) (on-chain) is very secure in comparison to that of a private blockchain (side-chain) or centralized database (off-chain), because it is validated by many nodes forming a global consensus. However, private blockchains, which are serviced by their own nodes, provide other benefits and are much easier to develop and scale. One such benefit is recording proofs of private, sensitive, or proprietary data that are generally stored in a private database. This gives the private database the ability to edit or delete this data, in order to comply with regulations such as the General Data Protection Regulation (GDPR), while maintaining the positive qualities of immutable proofs from the private blockchain. An optimum balance between a Public Ledger, Private Ledgers, and Private Databases will provide the performance and efficiency necessary for global adoption.

Nexus hybrid blockchain

Nexus is developing the systems to enable private networks to utilize the public ledger, creating what is essentially a hybrid system, through an array of both private and public ‘template’ use case APIs. Public APIs will be provided by Nexus as open source technology, while Private APIs will be developed with businesses as their proprietary technology.

Public API

Through the Nexus API, developers building higher-level applications for consumers and producers of digital data will be able to access the various functionalities of the Nexus blockchain: Advanced Contracts, Cryptographic Identity, and the DLT. The Tritium wallet will provide the interface through which all Public APIs are accessible over HTTP-JSON, offering a set of single commands that execute a series of events down through the Nexus software stack. Rather than requiring a specific Turing-complete language and specialist programming knowledge, this will allow developers to build in a wide range of languages, such as C++, C#, Java, Python, and JavaScript.

Nexus welcomes any interested parties to participate in our working groups to help shape the standardization process for the Nexus Software Stack, as we continue to develop the standardization body for DLT, similar to how the Internet Engineering Task Force (IETF) shapes the internet.

Private API

In addition to accessing the Public APIs, developers will be able to build their own Private APIs, providing the privacy of a permissioned system required to keep proprietary information and logic concealed, while harnessing the security of a public blockchain. This is made possible by state-recording checkpoints between the private and public networks, which ensure that agreements in the private network are also recorded in the public network, as shown in the diagram below.

Given that only the aggregated state of the private ledger is recorded, sensitive or private data is not stored on the public ledger. Therefore, Private APIs can secure proprietary contract logic, such as private supply chains, notaries, consumer verification services, etc., providing private services that the public layers are unable to. Since a Private API functions as its own private network that synchronizes to the public network, one can expect the level of reliability and security of DLT. A private network can be operated under a software services license, or by the commissioner of said API service. The final result is a robust service that provides interoperability with existing private systems.
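
As a back-of-the-envelope illustration of such a checkpoint, the sketch below hashes an aggregated private state so that only the digest would ever be published. The data layout and function names are hypothetical; this is not the exact Nexus mechanism, only the general idea of committing a proof rather than the data.

import hashlib
import json

# A toy private ledger whose contents never leave the private network.
private_ledger = [
    {"agreement": "supply-contract-17", "status": "signed"},
    {"agreement": "supply-contract-18", "status": "pending"},
]

# Aggregate the private state deterministically, then hash it.
state_bytes = json.dumps(private_ledger, sort_keys=True).encode()
checkpoint = hashlib.sha256(state_bytes).hexdigest()

# Only `checkpoint` would be committed to the public ledger; anyone
# holding the private data can recompute it later as an immutable proof.
print(checkpoint)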

Nexus Private API Services for Enterprise range from hosting solutions to full Private API buildout. Private APIs can be custom-built either by Nexus on behalf of a private client, or by any third party, with or without consultation. Private testnets can also be provided during development to avoid loading the public and final private ledgers with redundant data.

Blockchain Accessibility

It is often claimed that the ratio of demand to supply for blockchain developers is 20:1, which has led to the high costs associated with blockchain development and low business adoption. Since most programmers are already comfortable interacting with an API, building on the Nexus API can be as simple as developing a web-app. Through improvements in accessibility, Nexus is set to significantly reduce the barriers to entry for blockchain technology.

Enterprise API enquiries contact: [email protected]

Working Groups contact: [email protected]