Technology Pioneer Who Scaled The Internet Is Now Helping Blockchain Do The Same

Two innovative technologies, LISP and Nexus, have come together to create a one-of-a-kind scalable blockchain that many believe is the next-generation solution the industry has been waiting for. LISP (Locator/ID Separation Protocol) was created by Silicon Valley engineer Dino Farinacci to revolutionize the scaling of the Internet. The Nexus Hybrid Blockchain is being developed by its twenty-eight-year-old founder, Colin Cantrell, to solve the challenges of first-generation blockchain architecture. Together, these two pioneers are advancing blockchain technology at the network layer in a way that has never been done before.

Dino is a software engineer and the largest individual contributor of running code to the Internet. He was appointed the first-ever Cisco Fellow in 1997 and currently holds over 40 Internet- and networking-related patents. For the last 30 years, Dino has been a member of the Internet Engineering Task Force (IETF), which develops the standards for the Internet we use every day. When Dino left Cisco in 2012, he wanted to focus on next-generation use cases for LISP. LISP is currently being tested by tech giants such as Comcast, Bloomberg, NBC and Cisco.

In 2017, he met Nexus founder Colin Cantrell, a self-taught coder building the Nexus blockchain from the ground up. Dino recognized that the blockchain community was neglecting the real value of the network layer and repeating the mistakes made when the Internet was designed. Nexus was open to experimenting with, trialing and deploying a LISP overlay, while Dino was eager to apply his networking experience to blockchain. The collaboration between Dino and Nexus was a perfect fit. Two years later, what started out as a passion project for Dino is now the next-generation scalable blockchain set to release this summer. Nexus Director of Business Development Brian Vena said, “What Dino and Colin have created not only advances blockchain technology, but it will heavily impact our daily lives when it comes to the future of IoT and 5G. It also finally provides businesses a cost-effective way to integrate a scalable blockchain solution with their current systems through easy-to-use plug-and-play APIs and advanced contracts that can be written in any coding language.”

The original Internet architecture was not built to handle the growing number of devices being used around the world, or their ability to roam. This same architecture has now run out of IPv4 addresses (the Internet equivalent of phone numbers), which are required for devices and services to connect to the Internet. To solve this problem, Dino built the LISP overlay architecture to support both IPv4 and IPv6 addresses, which will help the Internet scale. With LISP, separating identity from location changes how you use the Internet: it allows you to roam, to use multiple connections at one time, and to scale the core of the Internet. Scaling the core is crucial so the Internet can grow to support more devices and the newer applications that are coming. “Today, people want performance, scale and accountability, and that’s exactly what LISP and the Nexus Hybrid Blockchain create together,” said Dino.

Nexus is the only blockchain using LISP, allowing it to scale along with future advancements of the Internet and the new devices that connect to the network. The addition of LISP gives Nexus a scaling advantage by selecting the shortest paths between the locations of Nexus nodes, allowing them to be situated anywhere on the Internet, including residential environments, cloud providers and mobile carriers. Using LISP also allows Nexus connections to remain active while a node moves around or temporarily drops off the network, so re-connection and application-state synchronization can be avoided. This gives a Nexus node a level of speed and performance that no other blockchain in the world has.

The integration of Nexus and the LISP overlay also helps achieve scalability through reduced network latency in a truly unique manner. Just like the Internet at large, the 32-bit IPv4 addresses used by most network protocols will be unable to support the future growth of networked devices. Nexus and the LISP overlay will use 128-bit IPv6 EID addresses, which can accommodate far more devices on the network. When asked about the future of LISP and Nexus, Dino believes the partnership will take advantage of more LISP features, such as multi-homing, mobility, better security through the LISP mapping system’s access-control features, crypto-EIDs for anti-spoofing, and multicast miner pools. Dino says, “What the LISP layer provides you is an up-to-date network database, and the Nexus Blockchain provides you with an immutable tracking database; the two can be used together to provide robust and comprehensive data analytics. This is a data lake of information for machine learning models at multiple layers in the software stack that we have never seen before.”

Nexus has spent the last two years meeting with key executive decision makers and gathering market research in the areas of fraud, supply chain, digital rights and identity. This information has led Nexus to adapt their technical architecture and build a hybrid blockchain solution that allows businesses to utilize the benefits of both a public and private blockchain. The Nexus architecture solves the challenges of scalability and integration for a vastly improved user experience. APIs allow advanced contracts to be written in any language, ensuring easy integration, reduced development costs and a more efficient developer experience. With the Nexus mainnet set to release this summer, businesses looking for alternatives to first generation blockchains will now have a viable solution through the combination of Nexus and LISP.

Article by John Saviano, Nexus

For more information on Nexus and LISP, please visit:

www.nexusearth.com

www.lispers.net

Read Dino’s book, “The LISP Network: Evolution to the Next-Generation of Data Networks”: http://www.ciscopress.com/store/lisp-network-evolution-to-the-next-generation-of-data-9780134540320

TAO Update #3

For this edition of the TAO update series, I will explain what has been completed thus far, what is left to do, and what you can do with Tritium after you read this article. So, let’s get started with the usual git pull origin master.

From https://github.com/Nexusoft/LLL-TAO
* branch merging -> FETCH_HEAD
Updating 1c774b5..4358843
Fast-forward
sdk/nexus-sdk-primer.py | 174 ++
sdk/nexus-sdk-test.py | 122 +
sdk/nexus_sdk.py | 223 ++
src/LLC/bignum.cpp | 819 ++++++
src/LLC/hash/SK.h | 100 +-
src/LLC/hash/SK/KeccakDuplex.h | 12 +-
src/LLC/hash/SK/KeccakHash.h | 10 +-
src/LLC/hash/SK/KeccakSponge.h | 10 +-
src/LLC/hash/SK/skein.cpp | 576 +++---
src/LLC/hash/SK/skein.h | 71 +-
src/LLC/hash/SK/skein_block.cpp | 24 +-
src/LLC/hash/SK/skein_iv.h | 54 +-
src/LLC/hash/SK/skein_port.h | 27 +-
src/LLC/hash/Skein3Fish/include/skein.h | 34 +-
src/LLC/hash/Skein3Fish/include/skeinApi.h | 50 +-
src/LLC/hash/Skein3Fish/include/skein_iv.h | 2 +-
src/LLC/hash/Skein3Fish/include/skein_port.h | 30 +-
src/LLC/hash/Skein3Fish/include/threefishApi.h | 48 +-
src/LLC/hash/Skein3Fish/skein_block.c | 24 +-
src/LLC/hash/Skein3Fish/threefish1024Block.c | 2 +-
src/LLC/hash/Skein3Fish/threefish256Block.c | 2 +-
src/LLC/hash/Skein3Fish/threefish512Block.c | 2 +-
src/LLC/hash/Skein3Fish/threefishApi.c | 7 +-
src/LLC/hash/macro.h | 14 +-
src/LLC/include/key.h | 141 +-
src/LLC/include/random.h | 71 +-
src/LLC/key.cpp | 460 +++-
src/LLC/random.cpp | 12 +-
src/LLC/types/bignum.h | 1359 +++------
src/LLC/types/uint1024.h | 134 +-
src/LLD/cache/binary_lru.h | 472 ++++
src/LLD/cache/template_lru.h | 482 ++++
src/LLD/global.cpp | 175 +-
src/LLD/include/address.h | 76 +
src/LLD/include/enum.h | 55 +
src/LLD/include/global.h | 45 +-
src/LLD/include/journal.h | 52 --
src/LLD/include/ledger.h | 474 +++-
src/LLD/include/legacy.h | 156 ++
src/LLD/include/local.h | 103 +-
src/LLD/include/register.h | 211 +-
src/LLD/include/trust.h | 80 +
src/LLD/include/version.h | 6 +-
src/LLD/keychain/hashmap.h | 872 +++++++
src/LLD/keychain/hashtree.h | 392 +++
src/LLD/templates/hashmap.h | 240 ---
src/LLD/templates/key.h | 165 +-
src/LLD/templates/pool.h | 337 ----
src/LLD/templates/sector.h | 1360 +++++----
src/LLD/templates/transaction.h | 120 +-
src/LLP/baseaddress.cpp | 564 ++++
src/LLP/corenode.cpp | 176 ++
src/LLP/ddos.cpp | 143 ++
src/LLP/hosts.cpp | 171 +-
src/LLP/include/baseaddress.h | 355 +++
src/LLP/include/global.h | 29 +
src/LLP/include/hosts.h | 147 +-
src/LLP/include/inv.h | 131 +-
src/LLP/include/legacy.h | 327 ----
src/LLP/include/legacyaddress.h | 92 +
src/LLP/include/manager.h | 269 ++
src/LLP/include/network.h | 263 +-
src/LLP/include/permissions.h | 32 +-
src/LLP/include/port.h | 99 +
src/LLP/include/tritium.h | 310 ----
src/LLP/include/trustaddress.h | 137 +
src/LLP/include/version.h | 12 +-
src/LLP/inv.cpp | 215 +-
src/LLP/legacy.cpp | 608 ++++-
src/LLP/legacyaddress.cpp | 85 +
src/LLP/manager.cpp | 433 ++++
src/LLP/miner.cpp | 318 +++
src/LLP/network.cpp | 545 -----
src/LLP/packets/http.h | 193 ++
src/LLP/packets/legacy.h | 185 +-
src/LLP/packets/packet.h | 101 +-
src/LLP/packets/tritium.h | 130 +-
src/LLP/rpcnode.cpp | 198 ++
src/LLP/socket.cpp | 194 +-
src/LLP/templates/connection.h | 329 ++-
src/LLP/templates/data.h | 342 ++-
src/LLP/templates/ddos.h | 227 +-
src/LLP/templates/events.h | 8 +-
src/LLP/templates/server.h | 742 +++++-
src/LLP/templates/socket.h | 103 +-
src/LLP/templates/types.h | 41 --
src/LLP/time.cpp | 179 ++
src/LLP/tritium.cpp | 168 +-
src/LLP/trustaddress.cpp | 184 ++
src/LLP/types/corenode.h | 90 +
src/LLP/types/http.h | 256 ++
src/LLP/types/legacy.h | 444 ++++
src/LLP/types/miner.h | 170 ++
src/LLP/types/rpcnode.h | 114 +
src/LLP/types/time.h | 98 +
src/LLP/types/tritium.h | 341 +++
src/Legacy/addressbook.cpp | 244 ++
src/Legacy/ambassador.cpp | 87 +
src/Legacy/basickeystore.cpp | 117 +
src/Legacy/create.cpp | 518 ++++
src/Legacy/crypter.cpp | 252 ++
src/Legacy/cryptokeystore.cpp | 380 +++
src/Legacy/db.cpp | 763 ++++++
src/Legacy/include/ambassador.h | 132 +
src/Legacy/include/constants.h | 62 +
src/Legacy/include/create.h | 122 +
src/Legacy/include/evaluate.h | 173 ++
src/Legacy/include/money.h | 103 +
src/Legacy/include/signature.h | 124 +
src/Legacy/keypool.cpp | 305 +++
src/Legacy/keystore.cpp | 42 +
src/Legacy/legacy.cpp | 607 +++++
src/Legacy/locator.cpp | 53 +
src/Legacy/mempool.cpp | 218 ++
src/Legacy/merkle.cpp | 58 +
src/Legacy/minter.cpp | 808 ++++++
src/Legacy/outpoint.cpp | 35 +
src/Legacy/reservekey.cpp | 74 +
src/Legacy/script.cpp | 391 +++
src/Legacy/signature.cpp | 225 ++
src/Legacy/transaction.cpp | 1080 ++++++++
src/Legacy/txin.cpp | 83 +
src/Legacy/txout.cpp | 91 +
src/Legacy/types/inpoint.h | 80 +
src/Legacy/types/legacy.h | 158 ++
src/Legacy/types/locator.h | 155 ++
src/Legacy/types/merkle.h | 147 ++
src/Legacy/types/minter.h | 332 +++
src/Legacy/types/outpoint.h | 146 ++
src/Legacy/types/transaction.h | 470 ++++
src/Legacy/types/txin.h | 176 ++
src/Legacy/types/txout.h | 171 ++
src/Legacy/wallet.cpp | 2165 ++++++++++++++++
src/Legacy/wallet/accountingentry.h | 115 +
src/Legacy/wallet/addressbook.h | 202 ++
src/Legacy/wallet/basickeystore.h | 149 ++
src/Legacy/wallet/crypter.h | 207 ++
src/Legacy/wallet/cryptokeystore.h | 242 ++
src/Legacy/wallet/db.h | 526 ++++
src/Legacy/wallet/keypool.h | 239 ++
src/Legacy/wallet/keypoolentry.h | 82 +
src/Legacy/wallet/keystore.h | 174 ++
src/Legacy/wallet/masterkey.h | 105 +
src/Legacy/wallet/output.h | 93 +
src/Legacy/wallet/reservekey.h | 136 +
src/Legacy/wallet/wallet.h | 1179 +++++++++
src/Legacy/wallet/walletaccount.h | 72 +
src/Legacy/wallet/walletdb.h | 585 +++++
src/Legacy/wallet/walletkey.h | 76 +
src/Legacy/wallet/wallettx.h | 570 +++++
src/Legacy/walletdb.cpp | 830 ++++++
src/Legacy/wallettx.cpp | 600 +++++
src/TAO/API/RPC/account.cpp | 1592 ++++++++++++
src/TAO/API/RPC/daemon.cpp | 97 +
src/TAO/API/RPC/info.cpp | 279 ++
src/TAO/API/RPC/network.cpp | 450 ++++
src/TAO/API/RPC/rpc.cpp | 218 ++
src/TAO/API/RPC/wallet.cpp | 486 ++++
src/TAO/API/accounts/accounts.cpp | 85 +
src/TAO/API/accounts/create.cpp | 133 +
src/TAO/API/accounts/login.cpp | 122 +
src/TAO/API/cmd.cpp | 291 +++
src/TAO/API/include/accounts.h | 163 ++
src/TAO/API/include/cmd.h | 45 +
src/TAO/API/include/ledger.h | 78 +
src/TAO/API/include/register.h | 129 +
src/TAO/API/include/rpc.h | 924 +++++++
src/TAO/API/include/rpcserver.h | 77 --
src/TAO/API/include/supply.h | 129 +
src/TAO/API/ledger/create.cpp | 96 +
src/TAO/API/rpcdump.cpp | 234 ---
src/TAO/API/supply/supply.cpp | 321 +++
src/TAO/API/types/base.h | 122 +
src/TAO/API/types/exception.h | 62 +
src/TAO/API/types/function.h | 92 +
src/TAO/Ledger/block.cpp | 441 ++---
src/TAO/Ledger/chainstate.cpp | 154 ++
src/TAO/Ledger/checkpoints.cpp | 145 +-
src/TAO/Ledger/create.cpp | 341 +++
src/TAO/Ledger/difficulty.cpp | 958 +++----
src/TAO/Ledger/global.cpp | 287 ----
src/TAO/Ledger/include/block.h | 57 --
src/TAO/Ledger/include/chainstate.h | 80 +
src/TAO/Ledger/include/checkpoints.h | 107 +-
src/TAO/Ledger/include/constants.h | 104 +
src/TAO/Ledger/include/create.h | 55 +-
src/TAO/Ledger/include/difficulty.h | 169 +-
src/TAO/Ledger/include/enum.h | 35 +
src/TAO/Ledger/include/global.h | 370 ----
src/TAO/Ledger/include/prime.h | 145 +-
src/TAO/Ledger/include/supply.h | 224 +-
src/TAO/Ledger/include/time.h | 39 +-
src/TAO/Ledger/include/timelocks.h | 108 +
src/TAO/Ledger/include/trust.h | 226 +-
src/TAO/Ledger/mempool.cpp | 179 ++
src/TAO/Ledger/prime.cpp | 237 +-
src/TAO/Ledger/state.cpp | 743 ++++++
src/TAO/Ledger/supply.cpp | 359 +---
src/TAO/Ledger/transaction.cpp | 200 +-
src/TAO/Ledger/tritium.cpp | 655 +++++
src/TAO/Ledger/trust.cpp | 1080 +-------
src/TAO/Ledger/trustkey.cpp | 167 ++
src/TAO/Ledger/types/block.h | 449 ++---
src/TAO/Ledger/types/data.h | 78 --
src/TAO/Ledger/types/locator.h | 109 --
src/TAO/Ledger/types/mempool.h | 244 ++
src/TAO/Ledger/types/sigchain.h | 120 +-
src/TAO/Ledger/types/state.h | 410 ++-
src/TAO/Ledger/types/transaction.h | 166 +-
src/TAO/Ledger/types/tritium.h | 181 ++
src/TAO/Ledger/types/trustkey.h | 236 ++
src/TAO/Operation/append.cpp | 111 +
src/TAO/Operation/authorize.cpp | 41 +
src/TAO/Operation/coinbase.cpp | 68 +
src/TAO/Operation/credit.cpp | 377 +++
src/TAO/Operation/debit.cpp | 119 +
src/TAO/Operation/include/enum.h | 80 +-
src/TAO/Operation/include/execute.h | 541 ++---
src/TAO/Operation/include/operations.h | 190 ++
src/TAO/Operation/include/stream.h | 108 +-
src/TAO/Operation/include/validate.h | 10 +-
src/TAO/Operation/register.cpp | 148 ++
src/TAO/Operation/transfer.cpp | 83 +
src/TAO/Operation/trust.cpp | 31 +
src/TAO/Operation/write.cpp | 113 +
src/TAO/Register/include/enum.h | 87 +-
src/TAO/Register/include/object.h | 119 --
src/TAO/Register/include/rollback.h | 41 +
src/TAO/Register/include/state.h | 296 ++-
src/TAO/Register/include/stream.h | 100 +
src/TAO/Register/include/verify.h | 43 +
src/TAO/Register/objects/account.h | 35 +-
src/TAO/Register/objects/escrow.h | 36 +-
src/TAO/Register/objects/order.h | 49 --
src/TAO/Register/objects/token.h | 62 +-
src/TAO/Register/objects/trust.h | 130 +
src/TAO/Register/rollback.cpp | 251 ++
src/TAO/Register/verify.cpp | 256 ++
src/Util/args.cpp | 229 +-
src/Util/base58.cpp | 394 +---
src/Util/base64.cpp | 164 ++
src/Util/config.cpp | 414 +---
src/Util/debug.cpp | 348 +---
src/Util/filesystem.cpp | 201 +-
src/Util/include/allocators.h | 64 +-
src/Util/include/args.h | 203 +-
src/Util/include/base58.h | 319 ++-
src/Util/include/base64.h | 196 +-
src/Util/include/config.h | 108 +-
src/Util/include/convert.h | 311 ++-
src/Util/include/debug.h | 229 +-
src/Util/include/fifo_map.h | 547 ++++
src/Util/include/filesystem.h | 104 +-
src/Util/include/hex.h | 182 +-
src/Util/include/json.h | 4678 ++++++++++++++++++++++ 
src/Util/include/memory.h | 37 +
src/Util/include/mmaplib.h | 246 +-
src/Util/include/mutex.h | 118 +-
src/Util/include/parse.h | 148 ---
src/Util/include/runtime.h | 274 +-
src/Util/include/signals.h | 79 +-
src/Util/include/sorting.h | 35 +-
src/Util/include/string.h | 295 +++
src/Util/include/strlcpy.h | 30 +-
src/Util/include/urlencode.h | 90 +
src/Util/include/version.h | 33 +-
src/Util/macro/header.h | 12 +-
src/Util/memory.cpp | 33 +
src/Util/signals.cpp | 66 +
src/Util/templates/basestream.h | 243 ++
src/Util/templates/containers.h | 147 +-
src/Util/templates/mruset.h | 204 +-
src/Util/templates/serialize.h | 1363 +++++------
src/Util/version.cpp | 64 +
345 files changed, 58627 insertions(+), 25757 deletions(-)

As you can see, 58,627 lines of code have been added since the last TAO update, which equates to roughly three months of solid coding since the last git pull. This averages out to around 651 new lines of code every day since the end of October. Anyhow, let’s begin by taking a look at the acronym of our framework: TAO.

This word comes from a classical Chinese text, the Tao Te Ching, which has been studied by some of the greatest philosophers of our time. It represents an idea that embodies the principles of balance and order in the greater concepts of the mind.

“The Tao is hidden, and has no name; but it is the Tao which is skillful at imparting (to all things what they need) and making them complete”

Lower Level Library

The Lower Level Library (LLL) is the foundation of the TAO, and comprises three components: Crypto, Database, and Protocol (the network layer).

Lower Level Crypto

There is not much to report here, other than cleaning up some memcpy usage in the Skein and Keccak functions, along with research into some promising candidates for a lattice-based signature scheme. Right now, the NIST post-quantum competition is in the first round of its review process. We will observe how this evolves over the next year to identify which candidates to experiment with.

In the future, we may try out hybrid signature schemes on a test network to gauge the effectiveness of combined lattice and elliptic-curve signatures. The data and computational overhead would be higher, but the security parameters of our public keys would inherit a higher degree of quantum resistance compared to that provided by our current use of Skein and Keccak.

Lower Level Database

The following new components have been added to the Lower Level Database:

  • Binary Hash Map — This is a hashmap with a very low memory footprint and on-disk indexing, which handles bucket collisions with O(n) reverse iteration (linear time), and is designed for write-intensive applications. Write capacity has peaked at around 450k writes/second, with reads peaking at 25k reads/second from disk and 1.4m reads/second when cached.
  • Binary LRU Cache — LRU stands for Least Recently Used, meaning the cache keeps only the elements that have been used recently and discards the oldest ones. This makes for a more efficient cache implementation than FIFO (First In, First Out); a minimal sketch follows this list.
  • Transaction Journal — This is an anti-corruption measure for handling ACID transactions, which recovers the database from invalid states in the case of power failures, program crashes, or random restarts.
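
As a quick illustration of the eviction policy described above, here is a minimal LRU cache sketch in Python. This is for illustration only; the actual Binary LRU Cache in src/LLD/cache is a C++ implementation with very different internals.

from collections import OrderedDict

class LRUCache:
    """Minimal Least Recently Used cache: recently touched keys survive,
    and the oldest entry is evicted once capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)        # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False) # evict the least recently used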

Each of these components is part of the modular framework, which you can see in the src/LLD/cache, src/LLD/keychain, and src/LLD/templates folders. The most exciting addition is the Transaction Journal. Before I give a deeper explanation of it, let us review what is meant by the term ‘ACID’.

  • Atomicity — All transactions are seen as individual units that together must complete as a whole.
  • Consistency — All transactions must bring the database from one consistent state to another.
  • Isolation — Transaction reads and writes under concurrent execution must leave the database in a valid state, as if they were being processed in series.
  • Durability — Once a transaction is committed, it must stay so even in the event of a power failure. This usually means committing the transaction to disk.

Fun, huh? Let me explain this some more. Think of a database transaction as a commitment of many pieces of data that must synchronize together. To understand this better, let us use a real-life example: Kim will only give John an apple if Carry gives Sue a peach. If this were a database transaction, all the prerequisites would need to be committed together, and if any of them failed, the entire transaction would fail.

Let’s combine this with an ACID expression.

  • Atomicity — Kim, John, Carry, and Sue (the individual units) exchanging fruit (the whole).
  • Consistency — Carry gives the peach to Sue, and only then does Kim give the apple to John.
  • Isolation — If Carry and Kim both execute the giving of their peach and apple at close to the same time, the ordering must still respect the consistency sequence, which means the peach must be given to Sue before the apple is given to John.
  • Durability — If everyone agrees to the exchange, but it never fully executes due to an error, such as Kim forgetting the apple, then the apple and the peach may never reach John or Sue. In this case, the commitment existed (in memory), but it never obtained durability, since the physical exchange did not complete. (A small code sketch of this all-or-nothing behavior follows this list.)
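
To make the all-or-nothing property concrete, here is a toy Python sketch of the fruit exchange above. It is not Nexus code, just the Atomicity rule expressed as a function: either every transfer applies, or the original state is returned untouched.

import copy

def atomic_exchange(state, steps):
    """Apply every transfer or none of them (Atomicity), working on an
    isolated copy so a failed step never corrupts the original (Isolation)."""
    working = copy.deepcopy(state)
    for giver, item, receiver in steps:
        if item not in working[giver]:
            return state                   # any failure aborts the whole exchange
        working[giver].remove(item)
        working[receiver].append(item)
    return working                         # commit: the new consistent state

state = {"Kim": ["apple"], "Carry": ["peach"], "John": [], "Sue": []}
# Kim will only give John an apple if Carry gives Sue a peach:
state = atomic_exchange(state, [("Carry", "peach", "Sue"),
                                ("Kim", "apple", "John")])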

I hope the above helps you understand the importance of an ACID transaction, of which one of the most important pieces is the ‘Durability’ component. When implemented with the proper logic, this can result in a database that cannot be corrupted, even under conditions of power failure. Let me explain how this is achieved.

The Transaction Journal

Before the implementation of the Transaction Journal, every sequence of a transaction was executed in memory; the database only recorded the state, accompanied by the pending disk write, once the transaction had committed. Transaction journaling introduces an on-disk checkpointing system that detects whether there was an interruption during the commit process. When the database re-initializes, it is able to detect any corruption, allowing the journal to restore the database to the most recent transaction checkpoint, even across many database instances. Therefore, at the sacrifice of a little speed, we achieve higher levels of durability in the database engine. The latest statistics in our 100k read-and-write test ran as low as 0.33 seconds with the binary hash map, down from 0.86 seconds when using the binary file map.
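
Here is a write-ahead journal in miniature, sketched in Python under assumed file names (db.journal, db.data). The real LLD journal is considerably more sophisticated, but the recovery idea is the same: make the intent durable before touching the data, and replay it on restart if the commit was interrupted.

import json, os

JOURNAL, DATAFILE = "db.journal", "db.data"   # assumed names for this sketch

def commit(db, writes):
    """Journal the pending writes (and fsync) before applying them,
    so a crash mid-commit is recoverable on the next startup."""
    with open(JOURNAL, "w") as j:
        json.dump(writes, j)
        j.flush()
        os.fsync(j.fileno())       # the journal is durable before data changes
    db.update(writes)
    with open(DATAFILE, "w") as d:
        json.dump(db, d)
        d.flush()
        os.fsync(d.fileno())
    os.remove(JOURNAL)             # checkpoint reached: journal no longer needed

def recover(db):
    """A leftover journal on startup means the last commit was interrupted:
    replay it to restore the most recent transaction checkpoint."""
    if os.path.exists(JOURNAL):
        with open(JOURNAL) as j:
            db.update(json.load(j))
        with open(DATAFILE, "w") as d:
            json.dump(db, d)
        os.remove(JOURNAL)
    return db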

Lower Level Protocol

I’m sure many of you remember that our last test ran as high as 200,000 requests/second. I’m happy to report that the new numbers stand at:

[Screenshot: Lower Level Protocol stress test peaking at 452,171 requests/second]

A request is a message sent from one computer to another, generally asking for a piece of data on the remote computer, such as a web page from a web server. Our latest test above shows the peak performance of the Lower Level Protocol at 452,171 requests/second, more than double the performance of the last test we submitted. This demonstrates the capabilities of the network layer without Ledger-layer validation, and confirms that the network can handle very large workloads. It is important to have efficiency in all parts of a system in order for it to scale effectively; the efficiency of an application comes directly from the level of physical resources required to perform the task at hand.

Ledger

The ledger contains two components in its processing: the transaction objects, and the blocks that act to commit transactions to disk, and therefore to the database. Think of any blockchain as a verification database system, where data is required to be processed before it is allowed to be written to disk. Along with this set pre-processing, every single node in the network must agree on the outcome of the process, arriving at the same state in a synchronized ACID transaction that is carried by a block. Nexus follows a similar model; however, we perform the Consistency pre-processing before allowing for synchronized Isolation and Atomicity, and perform the post-processing verification afterwards as the final block receipt, providing Durability.

A Tritium transaction object contains aspects of the ledger for pre- and post-processing, the register pre-states and post-states, and finally the operations payload that is responsible for mutating the state.

Post-Processing

The software stack for Tritium has come a long way in recent months. Now that we have a foundation provided by the Lower Level Database and Lower Level Protocol, it has been fun to plug in some of the features that form the layers above. Below is a more recent stress test that verifies a block at full capacity (approximately 2MB). As you can see, this block contained 32,252 transactions and processed in 647 ms.

[Screenshot: verification of a full ~2MB block containing 32,252 transactions, processed in 647 ms]

This test verified the time required for ‘post-processing’, which is the processing that happens after a block is received and is added to the chain. The time required for ‘pre-processing’, the processing that happens before a block is received, was not included in this benchmark. Let’s dig a little deeper into what all this means, and how these specific elements are prerequisites to Amine.

Pre-Processing

Pre-processing is the processing required for an object before it becomes part of the ledger. This will generally be checking for conflicts within the database system, such as spends or register pre-states, followed by more complex pre-processing such as signature verification. It is important to note that our tests have shown signature verification to be the biggest bottleneck in the processing of any transaction or contract in Tritium. Since we use a 512-bit standard for key sizes, which raises our security level to around 2²⁵⁶ (versus 2¹²⁸ for Bitcoin, since ECC only retains about half of the key length in usable security due to different types of attacks), we have more signature data to process when a transaction is received.
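
The rule of thumb behind those numbers, expressed as a trivial sketch: generic attacks on elliptic-curve keys cost roughly the square root of the keyspace, so a key retains about half its length in usable security.

def ecc_security_bits(key_bits):
    # Generic attacks (e.g. Pollard's rho) cost about the square root of the
    # group size, so usable security is roughly half the key length.
    return key_bits // 2

print(ecc_security_bits(256))   # Bitcoin-style 256-bit keys: ~2^128 security
print(ecc_security_bits(512))   # Tritium's 512-bit standard: ~2^256 security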

Tritium

Pre-processing in Tritium will be performed through the memory pool. Tritium blocks do not include whole transaction objects; they only contain references to the objects they are committing to disk (think of a block as a sort of ACID transaction). This means that if a block is submitted containing a txid (transaction ID) that has never been publicly known by the nodes on the network, the block will not be able to propagate until the receiving nodes are able to run the pre-processing for that particular txid. Consequently, if a miner tried to submit a malicious transaction in a block as an attempt to double-spend a transaction already accepted in the memory pool, they would find it very difficult to get it added to the main chain (i.e. verified by validating nodes). This is because none of the nodes would have the pre-processing data required to accept the block, and the conflicting transaction would in most cases fail to be accepted, given that it directly conflicts with another transaction that has already passed pre-processing.

Amine

Pre-processing in Amine will be aggregated into two processing layers, namely Trust and Miners. This means that Trust nodes will mainly provide pre-processing to the network, while miners provide the post-processing.

Post-Processing

Post-processing is the processing required when a block is received, in order to fully commit components of the data and change the register pre-states into their post-states with verified checksums. The example above was pure post-processing, and showed that our post-processing layers scale quite nicely, with a maximum of around 40-50k tx/s if split into a two-tier (pre/post) processing system. This two-tier processing system will be the main aspect of the Amine architecture upgrade, along with additional operations and registers, and deeper, more advanced LISP functionality (we will explain how LISP shards will function in a later update).

Obsidian

With Obsidian, the two-tier process will become a three-tier process, which when integrated will have pre-processing (L1 processing channels), post-processing (L2 trust channels), and hardening (L3 distributed mining). It is important to understand how the present Tritium architecture is setting the foundation for all that is to follow. As many of you will know, as with any undertaking, once the foundation is set, it is not easily changed unless one takes apart the entire system. This is why it was so important to give Tritium the time it needed.

Register

Register pre-processing and post-processing are divided into two tiers as well: the pre-state and the post-state. This is important to know so that you can understand how registers act to mutate their states. Understanding this will help uncover some of the benefits of keeping pre-states in a chain, and how a node can prune prior pre-state data based on the verification of a transaction in a block object. A register post-state can be considered one individual unit of Atomicity.

Pre-States

Register pre-states contain the current database state of the given register before the operations execute. The pre-state is packaged in a binary format inside the transaction object as the means of verifying that the claimed initial state is the same state of the register that the network currently holds.

The benefits of this are two-fold: first, you are able to roll back the chain without having to iterate back through an unspecified number of blocks to find the previous state mutation of the register; and second, you are able to know the state of the register without having to calculate all of its previous states. This brings additional benefits, such as being able to run nodes in a ‘lighter’ mode, where nodes are only required to verify chain headers (which contain references to all of the transactions in a block) to know that a transaction with a given pre-state was included in a block. This allows for ‘light’ verification of a pre-state, i.e. that the transaction was confirmed with the consensus of the network at a given block height, and is therefore indeed valid.

With the growth of the network and the size of the ledger (one aspect of scaling to consider), we can prune the data held by the ledger by removing old pre-states, which lowers the data requirement and creates a more efficient and sustainable network over an extended period of time. By implementing this architecture now, we avoid ending up with an architecture that, in the future, cannot handle the overwhelming volume of data that has been processed in the past.

When you hear of projects boasting 100k tx/s, or even 1M tx/s, let’s look at what this really entails:

  • On average, a Tritium transaction will be a minimum of 144 bytes and a maximum of 1168 bytes.
  • Let us take a best-case scenario, with a normal OP::DEBIT / OP::CREDIT payload being around 24 bytes, giving an example transaction of 166 bytes.
  • Now multiply this by 100,000 transactions: that equals 16,600,000 bytes per second, or roughly 16 MB per second. This means your internet connection would need to support at a minimum 16 MB per second, i.e. a 128 Mbps connection.
  • Beyond that, let us look at the damage as it compounds. 16 MB per second multiplied by 86,400 seconds (one day) is 1,382,400 MB, which is roughly 1.4 TB per day. Multiply this by 365 for a one-year period and we have 504 TB per year consumed. This is obviously not possible on consumer-grade hardware. (The arithmetic is spelled out in the sketch below.)
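
For anyone who wants to check the arithmetic, here it is spelled out in a few lines of Python, using the same rounded figures as the list above:

TX_BYTES = 166                 # example transaction size from above
TPS = 100_000                  # the claimed throughput under examination

bytes_per_second = TX_BYTES * TPS            # 16,600,000 B/s, ~16.6 MB/s
mbps_needed = bytes_per_second * 8 / 1e6     # ~133 Mbps (the article rounds
                                             # down to 16 MB/s, i.e. 128 Mbps)
mb_per_day = 16 * 86_400                     # 1,382,400 MB, ~1.4 TB per day
tb_per_year = mb_per_day * 365 / 1e6         # ~504 TB per year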

The above proof shows that claims of such grandiose scale are most likely rooted in either folly or malarkey. For us, our pre-processing and post-processing systems, LISP data shards, Lower Level Database, and register pre-states will help scaling significantly, but there is no way of knowing the exact scale that can be achieved until it is demonstrated in real-world conditions over a long period of time. Right now, our results are promising: we are achieving reasonable scale in post-processing, and building architecture that is able to shard the pre-processing to exceed the 4.3k tx/s bottleneck imposed by signature verification.

Post-States

Every register has a pre-state, which is used by the operations layer during execution to move the register into its post-state. A post-state is what is recorded in the register database as the new state of the register after the transaction has completed. In order not to weigh down the register script (some of the binary data packed into a transaction), we include what is called a post-state checksum at the end of a register pre-state. Any validating node will then compare its calculated post-state to the post-state checksum that was included with the submitted transaction.

The benefit of this is that a transacting node is required to do the calculations itself, to prove that it has done honest work. Other validating nodes verify this calculation by comparing their newly computed register-state checksum to the post-state checksum included with the transaction.
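
A sketch of that verification step in Python (the checksum function here is a stand-in; the actual Tritium checksum algorithm is not specified in this post):

import hashlib

def checksum(state_bytes):
    return hashlib.sha256(state_bytes).hexdigest()   # stand-in checksum

def verify_post_state(pre_state, operations, claimed_checksum):
    """Replay the operations on the pre-state, then compare our computed
    post-state checksum to the one the transacting node included."""
    post_state = pre_state
    for op in operations:              # each operation mutates the state
        post_state = op(post_state)
    return checksum(post_state) == claimed_checksum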

History

For those who are able to house extra data on their hard drive, their node can be enabled to show the history of registers without much processing required. Since the keychain object used for the register database is a binary hash map, you can enable it to operate in APPEND mode, which appends new data to the end of the corresponding database files. This enables a user to reverse-iterate from the end of a hashmap collision, recovering the sequence of the register’s history. This is very useful for registers used in supply chains or other ‘history’-related chains, such as the transfer of ownership of titles and deeds.

Types

There are a few different types of registers, which determine what types of operations can be executed on them. As you know from the Tritium white paper, there are object registers and state registers. Let’s briefly explain what each one is for:

State

A state register is one that holds the state of a component of an external application, with no specification on the data format. This means that specialized operations cannot be applied to these registers, only primitive ones.

  • TYPE::RAW — A raw register is a register with a given number of bytes that can be written to or appended to at any time. It is the most versatile type of register, with no security parameters applied to it. Each WRITE is recorded immutably in the history, but since the register is RAW, its state can be overwritten by the owner at any time. A WRITE is only permissible when done from the signature chain that is the current owner.
  • TYPE::APPEND — An append register is similar to a raw register in that it is created with a given number of bytes, but only an APPEND operation can be applied to it to change its data state. This means that in the database itself, the original data always exists, along with the history of all APPEND operations. A WRITE operation on this type of register will fail, even if done by the current owner. An APPEND register therefore has security parameters that make it useful for applications that would like to update a register without losing the data that existed before. Every APPEND is immutable, yet the register remains extensible.
  • TYPE::READONLY — This type of register is useful as a ‘write once’ register. Only the ‘OP::REGISTER’ operation can be used on this type, since it can only be written at creation. It is similar to a ‘const’ type in many languages, and carries security properties useful for certificates of authenticity, titles, deeds, or contracts that the creator/publisher would like never to be modified. (A toy sketch of these write rules follows this list.)
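
Here is a toy Python model of the headline write rule for each type, purely to illustrate the semantics; it is not the actual register or operations code:

class StateRegister:
    """Toy model of TYPE::RAW, TYPE::APPEND and TYPE::READONLY write rules."""

    def __init__(self, rtype, owner, data=b""):
        # A READONLY register receives its data once here, at OP::REGISTER,
        # and is never modified again.
        self.rtype, self.owner, self.data = rtype, owner, data

    def write(self, caller, data):
        if self.rtype != "RAW":
            raise PermissionError("WRITE fails on APPEND and READONLY registers")
        if caller != self.owner:
            raise PermissionError("only the owning signature chain may WRITE")
        if len(data) != len(self.data):
            raise ValueError("a WRITE must keep the register's byte length")
        self.data = data

    def append(self, caller, data):
        if self.rtype != "APPEND":
            raise PermissionError("APPEND is only valid on an APPEND register")
        if caller != self.owner:
            raise PermissionError("only the owning signature chain may APPEND")
        self.data += data               # earlier data is never overwritten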

Objects

Object registers are more specialized, as the operations layer needs to be able to recognize the data type that they contain. This is useful for specialized operations that require knowledge of the format of the data in the register. The following objects are defined and usable in the current source code.

  • TYPE::ACCOUNT — This is a specialized register that contains the details of someone’s account. An account can hold the balance of any type of token, as denoted by a token identifier. Token identifier 0 is reserved and is used for the native NXS token.
  • TYPE::TOKEN — This is a specialized register that contains the details of a token, and claims that token identifier for the specific token’s use. This register contains information regarding the significant figures of a token, along with parameters defining the total supply and the portion of supply that has been made available to the public.

Operations

The operations layer now contains a foundational set of processes, which act as the ‘primitive’ operations. Together, these allow the creation of records, history, tokens, transfers, and non-fungible tokens. Let us go through the operations one by one, to explain what each is capable of doing.

Register

This operation code creates a new register with a memory address assigned to it. The memory address must be unique, and will index the data of the register. Think of it as an abstracted memory address, like the one obtained by taking the memory location of a variable (in C/C++, this would be the ‘&’ symbol, an abstraction of a machine address), except that it lives on the Nexus Blockchain. This will be further abstracted towards Amine, when addresses will not only be ‘locally accessible’ but also ‘network accessible’. Though replicating the exact same state across the system does provide added levels of redundancy, it evidently limits the potential of the system to scale. Sharding the data workload into ‘network accessible’ groups is therefore necessary, where specialized processing is performed by different groups and types of nodes, whilst retaining the levels of redundancy that replication provides.

Machine-specific addressing is one of the innovations designed to solve the data-overhead problem outlined in the section on scaling above. The two most notable bottlenecks that limit scaling are signature verification and the growing data overhead, which compounds very quickly as volume increases. A scalable system is not one that can simply ‘process’ X transactions per second, but one that can still function after processing X transactions per second for years on end. Even if one were to use conventional data structures that go as low as O(log n), when the system scales to billions of keys the processing can still become quite expensive, especially when indexing from disk.

Write

This primitive operation initiates a ‘write’ on a register, which overwrites all the data of the pre-state with the new data of the post-state. It has certain limitations: the register must be of TYPE::RAW, and the total number of bytes being written must be the same as before. This type of operation is generally best suited to applications submitting raw data into the ledger for the immutable storage of certain events, such as submitting a proof hash into the public ledger from a hybrid system, or an application requiring certain JSON to be submitted into a register.

Append

This primitive operation acts on a register of TYPE::APPEND, and adds data to the end of the register without modifying the original data. Useful examples of this operation would be flagging a title that is claimed by an insurance company, or updating specifics about an item along a supply chain. Since the original data is always retained in the append sequence, updates to a register via OP::APPEND provide a useful audit and history mechanism.

Transfer

This allows the ownership of a register to be transferred from one signature chain to another. A transfer can also be instantiated to another register, such as a TYPE::TOKEN, if someone would like a token to govern the ownership of a register. This is how joint ownership can be provided between individuals, as the TYPE::TOKEN then represents the ownership. It can also be useful for showing the chain of custody between parties in a supply chain. If one wants to create non-fungible tokens, this is the method used to transfer their ownership, with the non-fungible token generally being a TYPE::READONLY register with an identifier specifying parameters about an object. This could be a simple digital item with JSON specifications, and the transfer operation would be the proof of ownership of that digital item or non-fungible token.

Debit

This operation is responsible for the commitment of funds from one account to another. It is quite like the ‘authorize’ step of a debit-card transaction. When this operation is instantiated, the funds do not move to the receiving account until the other user (the receiver) issues their credit; the acceptance of the transaction by the receiver completes the commitment. This operation works only on a TYPE::ACCOUNT object register, and can handle debiting any type of token by its identifier.

Credit

This operation is responsible for the final commitment of funds from one account to another. Together, the debit and credit produce a ‘two-way signature’, which reduces the chance of funds being lost through the use of an incorrect address. If the funds are not accepted by the receiver within a specified time-window, they become redeemable by the OP::DEBIT issuer; therefore, funds are never lost if sent to an invalid address. Another benefit is that a user may reject funds sent to their account if there is a question of where the funds came from. It also provides the option to maintain a whitelist of addresses from which the user will automatically accept transactions. This is important for monetary safety: if you receive a mysterious deposit in your account, there is no knowing who sent it or why. (A small sketch of this two-step flow follows.)
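
A toy sketch of that two-step commitment in Python. This is illustrative only; account handling in the real operations layer is register-based, and the structures below are made up for the example.

import time

def op_debit(ledger, sender, receiver, amount, window):
    """Commit funds out of the sender's account; they are held until the
    receiver credits them or the time-window lapses."""
    ledger[sender] -= amount
    return {"from": sender, "to": receiver, "amount": amount,
            "expires": time.time() + window, "claimed": False}

def op_credit(ledger, debit, whitelist=None):
    """The receiver's acceptance completes the two-way commitment."""
    if whitelist is not None and debit["from"] not in whitelist:
        return False                   # receiver declines the unknown sender
    if time.time() > debit["expires"]:
        return False                   # window lapsed: sender reclaims instead
    ledger[debit["to"]] += debit["amount"]
    debit["claimed"] = True
    return True

def op_reclaim(ledger, debit):
    """Unclaimed after the window: the OP::DEBIT issuer redeems the funds."""
    if not debit["claimed"] and time.time() > debit["expires"]:
        ledger[debit["from"]] += debit["amount"]
        debit["claimed"] = True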

What are the next operations?

The next two operations are very important, as they unlock the ‘validation scripts’ which act as small computer programs that define the movement of NXS. Validation scripts enable the full potential of the operations layer, allowing functions such as the decentralized exchange of assets to tokens, tokens to tokens, irrevocable trusts, programmable accounts, etc.

Validate

The validate operation will execute the corresponding OP::REQUIRE with the necessary parameters. If the validation evaluates to true, the requirement is satisfied and the validated transaction will execute.

Require

This sets a boolean expression that must evaluate to true in order for a transaction to be claimable. An example would be OP::REQUIRE TIMESTAMP GREATER_THAN 1549220657, meaning that the corresponding transaction would not be able to execute until that timestamp has been reached.
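
As a toy illustration of how such an expression might be evaluated (the real operations layer encodes these as binary operation codes, not Python):

import time

COMPARATORS = {
    "GREATER_THAN": lambda a, b: a > b,
    "LESS_THAN":    lambda a, b: a < b,
}

def op_require(lhs, comparator, rhs):
    """The transaction only becomes claimable once this evaluates to true."""
    return COMPARATORS[comparator](lhs, rhs)

# OP::REQUIRE TIMESTAMP GREATER_THAN 1549220657
claimable = op_require(int(time.time()), "GREATER_THAN", 1549220657)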

Introducing the DEX

The DEX will work as a native extension of the OP::REQUIRE and OP::VALIDATE operations. It can be thought of like this:

  • User A wishes to sell 55 of Token Identifier 77, and wants to sell it for Token Identifier 0 (NXS).
  • They choose their price: OP::DEBIT <from-account> <claim-account> 55 OP::REQUIRE TIMESTAMP LESS_THAN 1549220657 AND OP::DEBIT <my-account> 10.
  • In the above script, <my-account> will be an account with identifier 0, and <from-account> will be of token identifier 77.
  • User B wishes to buy the 55 of Token ID 77. They send a transaction such as: OP::VALIDATE <txid> OP::DEBIT <from-account> <to-account> 10.
  • Since this includes an OP::VALIDATE, it triggers the validation of the corresponding OP::REQUIRE, submitting the parameters it is verifying. Since the OP::DEBIT was one of the parameters to the OP::REQUIRE, this evaluates to true, satisfying the validation script.
  • User A can now submit a transaction: OP::CREDIT <txid> <claim-account> 55.
  • User B can now submit a transaction: OP::CREDIT <txid> <claim-account> 10.

In the above sequence, four transactions are executed to facilitate the decentralized exchange between two different types of tokens. This process can also be programmed for the decentralized exchange of an asset for a token, or even an asset for an asset. I will explain more about how this works, and how we see the DEX growing, in the next TAO update.

API

The API as it stands contains two sets: Accounts and Supply. The implementation details are, for now, for the purpose of demonstration only, using only a simple combination of operations such as OP::APPEND, OP::TRANSFER, and OP::REGISTER. Please keep your eyes peeled for the additional API calls that will appear in the API documentation. I will explain how to interact with the API below:

Use a web browser to access the JSON responses.

You can use a web browser to make API requests to your Tritium node by submitting a GET request to the API endpoint. The endpoint will always be the IP address of the node and port 8080, followed by <api>/<method>.

An example would be:

http://localhost:8080/accounts/login?username=user&password=pass&pin=1234

The above request will log you into the API and return a session identifier, which should be included in all subsequent requests to methods that require authorization. Your PIN is required for any transaction requiring authorization, ensuring that even if your username and password were compromised, your PIN would still be needed to access your account. This gives properties similar to the 2FA that most login systems utilize today.
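
The same call can of course be made programmatically; for example, with Python and the requests library. The endpoint and parameters are exactly those of the browser example above; the shape of the JSON response is not documented here, so inspect it yourself.

import requests

BASE = "http://localhost:8080"

reply = requests.get(BASE + "/accounts/login",
                     params={"username": "user",
                             "password": "pass",
                             "pin": "1234"}).json()
print(reply)   # the JSON response carries the session identifier to reuse
               # in subsequent calls that require authorization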

Create a login page in your website powered by the Tritium daemon

You can embed a custom HTML form into your website to use a Tritium daemon as a secondary login system that adds verification properties to your web service. In the future, a login over the API will also trigger a unique EID coupled with the login, making your service immune to IP spoofing. The API handles application/x-www-form-urlencoded, so make sure to include the parameters in your form as follows:

<form method="POST" action="http://localhost:8080/accounts/login">
<input type="text" name="username">
<input type="text" name="password">
<input type="text" name="pin">
<input type="submit">
</form>

The page you are sent to afterwards will include the JSON response data, containing the genesis ID and the session identifier to be used for all subsequent calls to the API that require authorization. This way you can give a user secure access to their signature chain through the service node of your online service. Importantly, this gives users a way to access their sig chain without needing to run a full node, and without giving up custodianship of their funds and account information.

Embed contracts into your web application.

Since the API supports application/x-www-form-urlencoded, you are able to embed contract functionality into your existing web application, either by forwarding forms through the API with a forwarding URL to pass through, or by making custom forms that use the POST side of the API to process webforms. The HTML example above is a basic webform that can be integrated with your existing login system. To extend this, you can make calls to the API via AJAX or more complex forms inside your system. This means that to build with Nexus Advanced Contracts, all you need is to hire a web developer who is able to ‘plug and play’ the correct sequence of API calls into your web service.

Use contracts or tokens in your regular desktop application

The API also supports application/json, so requests can be made through any of our provided Software Development Kits (SDKs), allowing your native application to take advantage of the API. Currently, we provide a Python SDK for use in any external Python application, which can be found in the repository in the folder named ‘sdk’. We encourage developers to build SDKs for the API in their languages of choice and to contribute to the open-source development of Nexus.
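
As a purely hypothetical sketch of what using the Python SDK might look like (the names below are illustrative placeholders, not the actual interface of sdk/nexus_sdk.py; see sdk/nexus-sdk-primer.py in the repository for real usage):

# Hypothetical usage only: the entry point and method names here are
# placeholders, not the actual interface of sdk/nexus_sdk.py.
from nexus_sdk import sdk_init             # assumed entry point

api = sdk_init("user", "pass", "1234")     # credentials as in the API examples
print(api.accounts_login())                # assumed wrapper for accounts/login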

Documentation

Please refer to the following for up-to-date documentation on all the APIs and calls that are available:

https://github.com/Nexusoft/LLL-TAO/blob/master/docs/how-to-api.md

As new calls are implemented for -testnet or -private mode, the corresponding documentation will be included. Please give feedback if you find any information difficult to understand, and we will modify the documentation to communicate it more clearly.

Logical / Interface

We are making progress on the App Store, which will be a developer-friendly area to buy, sell and share Nexus apps. Our current design is ‘module’-based; however, this is only the first iteration of the App Store. We will give more details on how the App Store will develop, and how we will provide security for the applications it supports.

Request for new standards

Standards for the API and requests for new calls can be formally submitted and discussed on the mailing list here: [email protected]. Requests for the lower layers, such as new register types or operations, can be submitted to the same location. Please do give feedback if you find anything you believe could be improved.

Command-line Flags Available

The following flags are available for use with the Tritium Daemon. Some are experimental and are undergoing debugging, while others are hardened and are ready for use.

-fastsync (experimental) — this flag will reduce your required synchronization time by a factor of 2.

-beta — sync your Tritium node on the mainnet with legacy rules. This allows you to run a Tritium node on the mainnet, giving you access to all the nice Tritium features such as sub-second load time, quick synchronization, and database stability.

-private — run your node in private mode to access the API functionality and build local contracts. Post-processing is done via a private block, and clears in sub-second intervals.

-legacy — use legacy specific RPC formatting for nodes that need to retain backwards compatible formatting

-indexheight — adds foreign indexes for all blocks by height as well as by hash. This allows the indexing of blocks by height from disk, but requires extra disk space.

-testnet — run your node in testnet mode, over LISP or the regular underlay. This will synchronize you to the test network, and requires mining to produce valid blocks and commit post-processing data from your API calls.

Branching

Our repository has specific semantics for each branch. The following list briefly describes the purpose of each branch, and what each one means for your testing:

  • Personal — any branch that is named after a user such as viz, jack, scottsimon, paul, or dino. We recommend NOT building from a personal branch, as the code you pull will be incomplete or in development.
  • Merging — this branch is used to merge code between developers. Any code that exists on merging is still considered ‘unstable’, so if you decide to test off of the merging branch, do so with a debugger (debug instructions below). We recommend NOT using this code unless you are a qualified tester or developer.
  • Staging  — this branch is used for pre-releases. This means that code is in Beta, and is ready for wider public testing. Once code reaches staging, we will periodically include pre-release candidates and binaries with revisions and stability fixes. This branch is for public testing before the release of official binaries.
  • Master  — this branch will be the least updated, so if you are looking for the most recent code, any of the aforementioned branches will keep you up to date. Code is only pushed to master when a FULL release is made, accompanied by a release candidate, binaries, and a change log and description. The code on master can only be merged from pre-releases in staging.

Debugging

  • First, you will need to have a debugger handy. If you are on Linux, make sure you have gdb installed. It can be installed via: sudo apt-get install gdb
  • For OSX, the debugger is included with the Xcode command-line tools and is named lldb
  • Next, make sure to build the source clean by issuing this command: make -f makefile.cli clean
  • Then compile it with: make -j 8 -f makefile.cli ENABLE_DEBUG=1
  • Once this completes, start Tritium under your debugger, such as: gdb nexus
  • This will drop you into a new command-line console, in which you want to type: run -beta -fastsync -gdb
  • With the -gdb flag, the daemon will close if you press the return key, since the debugger generally catches all the signals before the application.
  • If you ever hit a point where the program crashes, get the backtrace by issuing the following command: bt
  • Take this backtrace and submit it to the #dev channel in Slack for assessment.

If you have already been testing or are looking to start helping test the core, I would like to extend a big thank you for all your help!

Docker

Check out Docker if you want to deploy nodes over LISP. You can find the Docker documentation here:

https://github.com/Nexusoft/LLL-TAO/blob/master/docs/how-to-docker.md

Conclusion

Well, that is about all I have to report for now. I hope that you continue to watch the progress on our repositories, continue to give us feedback, and of course, have fun doing it! Remember, if you’re not having fun, you’re not doing what you love; so on that note, I will leave you to ponder what it is that brings you the greatest joy. In the meantime:

Let grace be our guide, Amine.

Cheers,

Viz.

Nexus Contracts

Enterprise adoption is instrumental to blockchain technology becoming mainstream, and Nexus Contracts are the next step in leading this progression. Existing Smart Contracts have experienced issues with ease of use and scalability due to their Turing-complete design. Addressing these issues, Nexus has produced what is in essence a ‘Register-based Contract Engine’, set for release with the Tritium upgrade. Tritium will allow developers to access the technology of Nexus Contracts simply through an API set. Before an explanation of Nexus Contracts is given, some context will be provided as to how conventional Smart Contracts function.

Smart Contracts

Smart Contracts are self-executing. They are designed to enforce the terms and conditions of a contract through programmable logic, reducing the need for third-party intermediaries such as brokers and banks. Smart Contracts are an additional layer of processing above the ledger layer, i.e. what is known as ‘the blockchain’, and are comparable to small computer programs that hold a state of information. The calculations of the contract are carried out by the processing nodes of a blockchain, which change the state of that information. Given that the processing is carried out by distributed consensus, the state of a Smart Contract is immutable.

Bitcoin was the first cryptocurrency with built-in Smart Contract capabilities, which it calls ‘scripts’. Scripts are not Turing complete and consist of byte code. Ethereum augmented these capabilities into its ‘Turing Complete Smart Contracts’, which are generic to developers’ needs. Ethereum gives developers more access to contract functionality on a blockchain through a custom programming language called Solidity, which is compiled into bytecode that runs on the Ethereum Virtual Machine (EVM). The EVM is a ‘Stack-based Virtual Machine’ that processes each instruction in turn.

Though very capable, Ethereum has experienced issues in regard to security, performance, and ease of use, predominantly because of its Turing-complete design. Notable cases include the $75m DAO hack on Ethereum and the $286m Parity bug. These vulnerabilities existed due to the large complexity of a Turing-complete system, and the resulting difficulty of resolving bugs in a protocol written in immutable code. The complexity of operations that support universal computation, or Turing-complete designs, also limits scalability. A universal system has a higher degree of complexity, and therefore cannot compete with technology designed for more specialized tasks. An example of this observation is the comparison between a CPU (Central Processing Unit) and an ASIC (Application-Specific Integrated Circuit) in the mining of cryptocurrency. A CPU can’t compete against a SHA256 ASIC miner, as its complexity and design are geared to support universal general computation, not specialized computation. A similar conclusion can be drawn when comparing the system design of Ethereum (universal) and Nexus (specialized).

Nexus Contracts

Nexus has developed a ‘Register-based Contract Engine’, with greater capabilities than the EVM. Unlike the EVM, which is defined by only two distinct layers of processing and is dependent on a Turing complete system, the Nexus contract engine is facilitated through the seven individual layers of the Nexus Software Stack, each designated to carry out specialized processes.

The third layer of processing is called the Register Layer. Here, the states of individual pieces of information contained by Nexus Contracts are recorded in architectural components called registers. Registers are components of typical computer processors that provide fast access to frequently used information or values. With respect to Nexus Contracts, each register is owned by a Signature Chain, so the ownership and write access of a register is validated by the second layer, the Ledger Layer. The fourth layer is the Operation Layer, which defines the rules of the state changes to a register, called ‘operations’. The operations are carried out by validating nodes that change the state of the registers by distributed consensus. The design provides the required functionality of a contract engine without the added complexity and complications of a Turing complete system.
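
To make the register model more concrete, the following minimal Python sketch models a register owned by a Signature Chain, with a write operation applied only after an ownership check. The names and structures here are hypothetical illustrations, not code from the Nexus stack.

# Hypothetical sketch of the register model: a register holds a state
# and an owner, and a write operation succeeds only if the caller owns
# the register. Not taken from the Nexus codebase.
from dataclasses import dataclass

@dataclass
class Register:
    owner: str     # identifier of the owning Signature Chain
    state: bytes   # the piece of information the register holds

def op_write(register, caller, new_state):
    # Ledger-layer check: only the owning Signature Chain may write.
    if caller != register.owner:
        raise PermissionError("caller does not own this register")
    register.state = new_state   # Operation-layer state change
    return register

reg = Register(owner="sigchain-alice", state=b"v1")
op_write(reg, "sigchain-alice", b"v2")    # succeeds
# op_write(reg, "sigchain-bob", b"x")     # would raise PermissionError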

The ownership of a register can be transferred, enabling many proof-of-ownership use cases. Examples include titles, deeds, digital certificates and records, agreements, or any other digital means of representing tangible assets or time-stamped events. A register can also be owned and governed by another register, creating a relationship between many users. Relations can be used as proofs on the Operation Layer to provide additional functionality. An example would be a register that holds metadata representing the ownership of an item, itself owned by another ‘token register’. Token ownership then signifies partial ownership of the item, which opens up further use cases such as royalty payments with split ownership.
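
As a rough sketch of this split-ownership idea, suppose (hypothetically) that a token register records each holder’s balance; a royalty payment could then be divided in proportion to those balances:

# Hypothetical split-ownership payout: holders' token balances in a
# token register determine their share of a royalty payment.
def split_royalty(payment, balances):
    total = sum(balances.values())
    return {holder: payment * amount / total
            for holder, amount in balances.items()}

# A 1,000-token register split 60/40 between two holders.
print(split_royalty(500.0, {"sigchain-alice": 600, "sigchain-bob": 400}))
# {'sigchain-alice': 300.0, 'sigchain-bob': 200.0}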

Conditions or stipulations can also be coded into Nexus Contracts by validation scripts or Boolean logic. Validation scripts require a transaction to fulfill a certain set of conditions to execute, which allows a user to program in stipulations on the exchange of NXS, tokens or any other digital asset. This allows a user to void transaction orders, place time locks on funds, or exchange any digital asset without a central intermediary.
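
A minimal sketch of the validation-script idea, assuming a simple model in which a transaction executes only when every Boolean condition holds; the structure is illustrative, not Nexus’s actual condition grammar.

# Illustrative validation script: a transaction is valid only if all
# of its conditions evaluate to True, e.g. a time lock on funds.
import time

def validate(transaction, conditions):
    return all(condition(transaction) for condition in conditions)

def time_lock(unlock_at):
    # Condition: funds stay locked until the given UNIX timestamp.
    return lambda tx: tx["timestamp"] >= unlock_at

tx = {"amount": 10, "timestamp": time.time()}
print(validate(tx, [time_lock(unlock_at=0)]))                        # True
print(validate(tx, [time_lock(unlock_at=tx["timestamp"] + 3600)]))   # False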

Nexus Contracts, which will be accessible through an API set, will be able to improve many existing processes, including digital ownership, tokenization of assets and enterprises, digital rights, royalty payments, supply chain management, escrow services, financial applications, legal documentation of digital signatures, and many more.

Standardization

The standards of object registers, operation codes, and API methods will be defined through working group consensus, to ensure a consistent connection between developers and users. Nexus borrows a model similar to that of the Internet Engineering Task Force (IETF), which provides the working groups for all RFC (Request for Comments) standards. This is important to drive a vibrant ecosystem forward. Just as we have seen with the success of the Internet, we hope to continue this success in the next era of global connection: blockchain, artificial intelligence, and satellite communication.

Read more:

Nexus API

Parity Bug

The DAO Hack

Building a sustainable supply chain in uncertain times

Building a sustainable supply chain in uncertain times

It is clear that organisations can only operate effectively with easy access to products and services. Likewise, no organisation can continue to grow if late payments and poor procurement processes remain in place. This is where blockchain technology can play a crucial role in both the modernisation and improvement of the logistics and operations that are vital to the performance of supply chain systems.

Read the full article here https://sctimes.io/news/article.aspx?tid=7&aid=6061.

Developer-Friendly APIs — Nexus Blockchain

Developer-Friendly APIs — Nexus Blockchain

With the release of the Tritium Mainnet, application developers will be able to interact with the functionality of the Nexus blockchain through an easy to use, feature-rich API set. The APIs will provide user-friendliness for developers, who will be able to build in a wide range of languages, and interoperability that allows existing private systems to interact with the Nexus blockchain. Nexus has designed its software stack based on the Open Systems Interconnection (OSI) network reference model, with the fifth layer as the API layer.

Nexus software stack

What is an API?

An API is an Application Programming Interface. While a user interacts with a system through a user interface, an API allows developers to interact through a programmatic interface. The API provides a set of simple commands that execute a series of operations which would otherwise require specialist programming knowledge. This allows a developer to request data from, or submit data to, a system, providing functionality to a higher-level application. For example, Facebook’s Graph API allows access to “Login with Facebook” and other features of their system.

Hybrid Blockchain

The distributed validation method provided by a public blockchain or Distributed Ledger Technology (DLT) (on-chain) is very secure in comparison to a private blockchain (side-chain) or centralized database (off-chain), because it is validated by many nodes forming a global consensus. However, private blockchains, which are serviced by their own nodes, provide other benefits and are much easier to develop and scale. One such benefit is the ability to record proofs of private, sensitive, or proprietary data that are generally stored in a private database. This gives the private database the ability to edit or delete this data, in order to comply with regulations such as the General Data Protection Regulation (GDPR), while maintaining the positive qualities of immutable proofs from the private blockchain, as sketched below. An optimum balance between a Public Ledger, Private Ledgers, and Private Databases will provide the performance and efficiency necessary for global adoption.

Nexus hybrid blockchain
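
A minimal sketch of the proof idea, assuming SHA-256 as the proof function: only the hash of a private record is published to the public ledger, so the private database can later edit or delete the record itself while the published proof remains.

# Only the hash of a private record goes on-chain; the record can be
# edited or deleted off-chain (e.g. for GDPR) without touching the proof.
import hashlib

def proof_of(record: bytes) -> str:
    return hashlib.sha256(record).hexdigest()

record = b'{"customer": "alice", "order": 42}'
on_chain_proof = proof_of(record)    # published to the public ledger

# Anyone holding the record can verify it against the public proof ...
assert proof_of(record) == on_chain_proof
# ... and the private database remains free to delete the record itself.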

Nexus is developing the systems to enable private networks to utilize the public ledger, creating what is essentially a hybrid system, through an array of both private and public ‘template’ use case APIs. Public APIs will be provided by Nexus as open source technology, while Private APIs will be developed with businesses as their proprietary technology.

Public API

Through the Nexus API, developers building higher-level applications for consumers and producers of digital data will be able to access the various functionalities of the Nexus blockchain: Advanced Contracts, Cryptographic Identity, and the DLT. The Tritium wallet will provide the interface through which all Public APIs are accessible via HTTP-JSON, providing a set of single commands that execute a series of events down through the Nexus software stack, rather than relying on a specific Turing-complete language that requires specialist programming knowledge. This will allow developers to build in a wide range of languages, such as C++, C#, Java, Python, and JavaScript.
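
As a rough sketch of what such an HTTP-JSON call could look like, here is a Python example; the host, port, endpoint, and parameters are placeholders for illustration, so consult the Nexus API documentation for the actual commands.

# Hypothetical HTTP-JSON call to a local Nexus node; the endpoint and
# parameters are illustrative placeholders, not documented commands.
import requests

response = requests.post(
    "http://localhost:8080/tokens/create/token",   # hypothetical endpoint
    json={"pin": "1234", "name": "mytoken", "supply": 1000000},
)
print(response.json())   # the node replies in JSON, whatever language the caller uses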

Nexus welcomes any interested parties to participate in our working groups to help shape the standardization process for the Nexus Software Stack, as we continue to develop the standardization body for DLT, similar to how the Internet Engineering Task Force (IETF) shapes the internet.

Private API

In addition to accessing the Public APIs, developers will be able to build their own Private APIs, providing the privacy of a permissioned system required to keep proprietary information and logic concealed, while harnessing the security of a public blockchain. This is possible through the use of state recording checkpoints between the private and public networks to ensure that agreements in the private network are also recorded in the public network, shown by the diagram below.

Given that only the aggregated state of the private ledger is recorded, sensitive or private data is not stored on the public ledger. Therefore, Private APIs can secure proprietary contract logic, such as private supply chains, notaries, consumer verification services, etc., providing private services that the public layers are unable to. Since a Private API functions as its own private network that synchronizes to the public network, one can expect the level of reliability and security of DLT. A private network can be operated under a software services license, or by the commissioner of said API service. The final result is a robust service that provides interoperability with existing private systems.
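
A minimal sketch of such a state recording checkpoint, assuming (for illustration) that the private network aggregates its transactions into a Merkle-style root and publishes only that single value to the public ledger:

# The private network hashes its transactions into one Merkle-style
# root; only this aggregated state is recorded on the public ledger.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

private_txs = [b"tx-1", b"tx-2", b"tx-3"]
checkpoint = merkle_root(private_txs)    # the only value sent to the public chain
print(checkpoint)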

Nexus Private API Services for Enterprise range from hosting solutions to full Private API buildouts. Private APIs can be custom-built either by Nexus on behalf of a private client, or by any third party, with or without consultation. Private testnets can also be provided during development to avoid loading the public and final private ledgers with redundant data.

Blockchain Accessibility

It is often claimed that the ratio of demand to supply for blockchain developers is 20:1, which has led to the high costs associated with blockchain development and to low business adoption. Since most programmers are already comfortable interacting with an API, building on the Nexus API can be as simple as developing a web-app. Through these improvements in accessibility, Nexus is set to significantly reduce the barriers to entry for blockchain technology.

Enterprise API enquiries contact: [email protected]

Working Groups contact: [email protected]