We are delighted to welcome Gabriel René as our newest Advisor. Gabriel serves as Executive Director of the VERSES Foundation, an organization at the intersection of Augmented and Virtual Reality, IoT, Blockchain, and Artificial Intelligence. He is a technologist, entrepreneur, researcher, and media producer with a 25-year career in the high-tech, telecom and entertainment industries. With a focus on emerging technologies and their applications across the industrial and enterprise mobile and spatial computing markets, Gabriel adds high value to the real-world applications developed by Nexus.
Gabriel has helped multiple start-ups and founders navigate their way to success as an advisor and board member. He has worked with and advised some of the largest organizations in the world, spanning media conglomerates, telecoms, mobile manufacturers and governments, including Verizon, Sony, Intel, Coca-Cola, Microsoft, Qualcomm, Apple, Samsung, Universal, AT&T, Boost Mobile, the Obama campaign, Condé Nast, the Cannes Film Festival and more. The addition of Gabriel René to the Nexus Advisory Committee further expands the potential of the Nexus Blockchain within enterprise adoption.
As a valued Advisory Committee member, Gabriel will leverage his expertise to advise Nexus in strategic marketing, business development and philanthropic endeavors. Gabriel will support Nexus in designing and architecting their multidimensional blockchain for a 3D world. He will also add special focus on interoperability with IoT, AI and Spatial Computing technologies. “Nexus’ blockchain technology has the potential to enable scalable blockchain solutions. I look forward to helping Nexus stay at the top of innovative new technologies improving trust, transparency and security.”
Gabriel’s expertise in the Spatial Web will allow Nexus to explore new possibilities of using blockchain and LISP technology to advance Web 3.0. VERSES plans to utilize blockchain technology to allow for a trusted data layer for the provenance of information related to activities, objects and interactions that occur spatially. Nexus Founder and architect Colin Cantrell said, “When I first met Gabriel we had an incredible synergy and it quickly became apparent that our visions were aligned. I am excited to be working with him to see our visions become a reality”.
Two innovative technologies, LISP and Nexus, have come together to create a one-of-a-kind scalable blockchain that many believe is the next-generation solution the industry has been waiting for. LISP (Locator/ID Separation Protocol) was created by Silicon Valley's Dino Farinacci to revolutionize the scaling of the Internet. The Nexus Hybrid Blockchain is being developed by twenty-eight-year-old Founder Colin Cantrell to solve the challenges of first-generation blockchain architecture. Together, these two pioneers are advancing blockchain technology on the network layer in a way that has never been done before.
Dino is a software engineer and the largest individual contributor to running code on the Internet. He was the first ever Cisco Fellow appointed in 1997 and currently holds over 40 Internet and networking related patents. For the last 30 years, Dino has been a member of the Internet Engineering Task Force (IETF), which develops standards for the Internet we use every day. When Dino left Cisco in 2012, he wanted to pursue and focus on next generation use-cases for this new LISP technology. LISP is currently being tested by tech giants such as Comcast, Bloomberg, NBC and Cisco.
In 2017, he met Nexus Founder Colin Cantrell, a self-taught coder creating the Nexus blockchain from the ground up. Dino recognized that the blockchain community was neglecting the real value of the network layer and making the same mistakes that were made when designing the Internet. Nexus was open to experimenting with, trialling and deploying a LISP overlay, while Dino was eager to apply his networking experience to blockchain. The collaboration between Dino and Nexus was a perfect fit. Two years later, what started out as a passion project for Dino is now the next-generation scalable blockchain set to release this summer. Nexus Director of Business Development, Brian Vena, said, “What Dino and Colin have created not only advances blockchain technology, but it will heavily impact our daily lives when it comes to the future of IoT and 5G. It also finally provides businesses a cost-effective way to integrate a scalable blockchain solution with their current systems through easy-to-use plug-and-play APIs and advanced contracts that can be written in any coding language.”
The original Internet architecture was not built to handle the growing number of devices being used around the world or their ability to roam. This same architecture has now run out of IPv4 addresses (the Internet equivalent of phone numbers), which are required for devices and services to connect to the Internet. In order to solve this problem, Dino built the LISP overlay architecture to support both IPv4 and IPv6 addresses, which will help make the Internet scale. With LISP, separating identity and location changes how you use the Internet: it allows you to roam, use multiple connections at one time and scale the core of the Internet. Scaling the core of the Internet is crucial so it can grow to support more devices and the newer applications that are coming. “Today, people want performance, scale and accountability, and that’s exactly what LISP and the Nexus Hybrid Blockchain create together,” said Dino.
Nexus is the only blockchain using LISP, allowing it to scale along with the future advancements of the Internet and the new devices that connect to the network. The addition of LISP gives Nexus a scaling advantage by selecting the shortest paths between the locations of Nexus nodes, allowing them to be located anywhere on the Internet, including residential environments, cloud providers and mobile carriers. Using LISP also allows Nexus connections to remain active while a node moves around or temporarily goes off the network, so re-connection and application state synchronization can be avoided. This gives a Nexus node a level of speed and performance that no other blockchain in the world has.
The integration of Nexus and the LISP overlay also helps achieve scalability through reduced network latency in a truly unique manner. Just like the Internet, the 32-bit IPv4 address used by most network protocols will be unable to support the future growth of networked devices. Nexus and the LISP overlay will use 128-bit IPv6 EID addresses that can accommodate far more devices on the network. When asked about the future of LISP and Nexus, Dino believes the partnership will take advantage of more LISP features such as multi-homing, mobility, better security through the LISP mapping system’s access control features, crypto-EIDs for anti-spoofing and multicast miner pools. Dino says, “What the LISP layer provides you is an up to date network database and the Nexus Blockchain provides you with an immutable tracking database, the two can be used to provide robust and comprehensive data analytics. This is a data lake of information for machine learning models at multiple layers in the software stack that we have never seen before.”
Nexus has spent the last two years meeting with key executive decision makers and gathering market research in the areas of fraud, supply chain, digital rights and identity. This information has led Nexus to adapt their technical architecture and build a hybrid blockchain solution that allows businesses to utilize the benefits of both a public and private blockchain. The Nexus architecture solves the challenges of scalability and integration for a vastly improved user experience. APIs allow advanced contracts to be written in any language, ensuring easy integration, reduced development costs and a more efficient developer experience. With the Nexus mainnet set to release this summer, businesses looking for alternatives to first generation blockchains will now have a viable solution through the combination of Nexus and LISP.
Article by John Saviano, Nexus
For more information on Nexus and LISP, please visit:
In this edition of the TAO update series, I will explain what has been
completed thus far, what is left to do, and what you can do with Tritium
after you read this article. So, let’s get started with the usual git pull origin master.
As you can see, there have been an additional 58,627 lines of code since the last TAO update, which equates to roughly 3 months of solid coding since the last git pull. This averages out to around 651 lines of new code every day since the end of October. Anyhow, let’s begin by first taking a look at the acronym of our framework: TAO.
This word comes from a classical Chinese text, the Tao Te Ching,
which has been studied by some of the greatest philosophers of our
time. It represents an idea that contains the principles of balance and
order in the greater concepts of the mind.
“Tao is hidden, and has no name; but it is the Tao which is skillful at
imparting (to all things what they need) and making them complete.”
Lower Level Library
The Lower Level Library (LLL) is the foundation of the TAO, and comprises three components: Crypto, Database, and Protocol (the network layer).
Lower Level Crypto
There is not much to report here, other than the cleanup of some memcpy calls in the
Skein and Keccak functions, along with research into some promising
candidates for a lattice-based signature scheme. Right now the NIST
competition is in the first round of the review process. We will observe
how it evolves over the next year to identify which candidates to pursue. In
the future, we may try out hybrid signature schemes on a test network
to see the effectiveness of combined lattice and elliptic-curve
signatures. The data and computational overhead would be higher, but the
security parameters of our public keys would inherit a higher degree of
quantum resistance compared to that provided by our current use of
Skein and Keccak.
Lower Level Database
The following new components have been added to the Lower Level Database:
Binary Hash Map
— This is a hashmap with a very low memory footprint and on-disk indexing,
which handles bucket collisions with O(n) reverse iteration (linear
time), and is designed for write-intensive applications. Write
capacity has peaked at around 450k writes / second, with reads peaking
at 25k reads / second from disk, and 1.4m reads / second when cached.
Binary LRU Cache
— LRU stands for Least Recently Used: the cache keeps the elements that
have been used most recently, and discards those that have gone longest
without use. This makes for a more efficient cache implementation than
FIFO (First In, First Out), which evicts purely by insertion order.
Transaction Journal — This is an anti-corruption measure for handling
ACID transactions that recovers the database from invalid states in the
case of power failures, program crashes, or random restarts.
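Of these components, the LRU policy is the easiest to illustrate. Below is a minimal sketch (illustrative only; the class and method names are invented, and this is not the actual Binary LRU Cache): a linked list keeps entries in recency order while a hash map gives O(1) lookups into it, so eviction always removes the least recently *used* element rather than the oldest insertion.

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Minimal LRU cache sketch. The front of `order` is the most recently
// used entry; the map points each key at its node in the list.
class LRUCache {
public:
    explicit LRUCache(std::size_t capacity) : capacity(capacity) {}

    void Put(const std::string& key, const std::string& value) {
        auto it = index.find(key);
        if (it != index.end()) {
            it->second->second = value;
            Touch(it->second);
            return;
        }
        order.push_front({key, value});
        index[key] = order.begin();
        if (index.size() > capacity) {           // evict least recently used
            index.erase(order.back().first);
            order.pop_back();
        }
    }

    bool Get(const std::string& key, std::string& out) {
        auto it = index.find(key);
        if (it == index.end()) return false;
        Touch(it->second);                        // a read counts as a use
        out = it->second->second;
        return true;
    }

private:
    using Entry = std::pair<std::string, std::string>;
    // Move an entry to the front of the recency list in O(1).
    void Touch(std::list<Entry>::iterator it) { order.splice(order.begin(), order, it); }

    std::size_t capacity;
    std::list<Entry> order;
    std::unordered_map<std::string, std::list<Entry>::iterator> index;
};
```

Note how a FIFO cache would have evicted the first key inserted regardless of whether it was just read; LRU keeps it alive as long as it stays in use.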
Each one of these components is a part of the modular framework, which you
can see if you go to the src/LLD/cache, src/LLD/keychain, and
src/LLD/templates folders. The most exciting piece is the addition of
the Transaction Journal. Before I give a deeper explanation of this, let
us review what is meant by the term ‘ACID’.
Atomicity — All transactions are seen as individual units that together must complete as a whole.
Consistency — All transactions must bring the database from one consistent state to another.
Isolation — Transaction
reads and writes under concurrent execution must leave the database in a
valid state, as if they had been processed in series.
Durability — Once
a transaction is committed, it must stay so even in the event of a
power failure. This usually means committing the transaction to disk.
Confused? Let me explain this some more. Think of a database transaction as a
commitment of many pieces of data that must synchronize together as a
single unit of information. To understand this better, let us use a
real-life example: Kim will only give John an apple if Carry
gives Sue a peach. If this were a database transaction, what this would
mean is that all the prerequisites would need to be committed together,
and if any of them were to fail, the entire transaction would fail.
Let’s combine this with an ACID expression.
Atomicity — Kim, John, Carry, and Sue (individual units) exchanging fruit (the whole).
Consistency — Carry gives the peach to Sue, then Kim gives the apple to John.
Isolation — If Carry and Kim both execute the giving of their peach and apple
at close to the same time, the ordering must still be correct in the consistency
sequence, which means the peach must be given to Sue before the apple is
given to John.
Durability — If Carry and Kim agree to the exchange, but never fully execute it by
exchanging the peach and the apple due to an error, such as the apple
being forgotten by Kim, then the apple and the peach may never reach
John or Sue. In this case, the commitment existed (in memory), but it
never obtained durability, since the physical exchange did not complete.
I hope the above helps you understand the importance of an ACID
transaction, of which one of the most important pieces is the
‘Durability’ component. When implemented with the proper logic, this can
result in a database that cannot be corrupted, even under conditions of
power failure. Let me explain how this is achieved.
The Transaction Journal
Before the implementation of the Transaction Journal, every sequence of a
transaction was executed in memory; the database only recorded the
state, accompanied by the pending disk write, once the transaction had
committed. Transaction journaling introduces an on-disk
checkpointing system that detects whether there was an interruption during
the transaction commit process. When the database re-initializes, it is
able to detect any corruption, allowing the journal to be used to
restore the database to the most recent transaction checkpoint, even
across many database instances. Therefore, at the sacrifice of a little
speed, we achieve higher levels of durability for the database
engine. The latest statistics in our 100k read-and-write test ran as low
as 0.33 seconds with the binary hash map, down from 0.86 seconds when using the binary file map.
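The recovery logic behind journaling can be sketched with a toy model (all names here are invented; the real Transaction Journal persists to disk with checkpoints, whereas this simulation uses in-memory structures purely to show why an interrupted commit is recoverable):

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative write-ahead journal. Every pending write is recorded in
// the journal before touching the "disk". If a crash interrupts the
// commit, recovery replays the journal in full, so the database always
// lands on a complete checkpoint, never a half-written state.
struct JournaledDB {
    std::map<std::string, std::string> disk;                  // committed state
    std::vector<std::pair<std::string, std::string>> journal; // pending writes

    void Stage(const std::string& k, const std::string& v) {
        journal.push_back({k, v});
    }

    // Apply the journal to disk; `crashAfter` simulates a power failure
    // after that many writes have reached the disk.
    void Commit(std::size_t crashAfter = SIZE_MAX) {
        std::size_t done = 0;
        for (const auto& kv : journal) {
            if (done++ == crashAfter) return;   // power failure mid-commit
            disk[kv.first] = kv.second;
        }
        journal.clear();                        // checkpoint reached
    }

    // On restart: a non-empty journal means the last commit was
    // interrupted, so replay it to restore a consistent state.
    void Recover() {
        for (const auto& kv : journal) disk[kv.first] = kv.second;
        journal.clear();
    }
};
```

Replaying a write twice is harmless here because the writes are idempotent; that is the property that lets recovery simply re-run the whole journal.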
Lower Level Protocol
As I’m sure many of you remember, our last test ran as high as 200,000 requests / second. I’m happy to report that the new numbers stand at:
A request is a message sent from one computer to another, generally
asking for a piece of data on the remote computer, such as
a web page from a web server. Our latest test above shows the peak
performance of the Lower Level Protocol at 452,171 requests / second, more than
double the performance of the last test we submitted.
The above demonstrates the capabilities of the Network layer without Ledger
layer validation, confirming that the network can handle very large
workloads. It is important to have efficiency in all parts of a system
in order for it to scale effectively. The efficiency of an application
comes directly from the level of physical resources required to perform
the task at hand.
The ledger contains two components to its processing: the transaction
objects, and the blocks that act to commit transactions to disk, and
therefore to the database. Think of any blockchain as a verification
database system, where the data is required to be processed before it is
allowed to be written to disk. On top of this pre-processing,
every single node in the network must agree on the outcome of the
process, arriving at the same state in a synchronized ACID transaction that is carried by a block. Nexus follows a similar model; however, we perform the Consistency pre-processing before allowing for synchronized Isolation and Atomicity, and perform the post-processing verification afterwards as the final block receipt, allowing for Durability.
A tritium transaction object contains aspects of the ledger for pre- and
post-processing, the register pre-states and post-states, and finally
the operations payload that is responsible for mutating the register states.
The software stack for Tritium has come a long way in recent months. Now that we have a foundation provided by the Lower Level Database and Lower Level Protocol, it has been fun to plug in some of the features that form the layers above. Below is a more recent stress test that verifies a block that is at full capacity (approximately 2MB). This block as you can see contained 32,252 transactions, and processed in 647 ms.
This test verified the time required for ‘post-processing’, which is the
processing performed after a block is received, as it is added to the
chain. The time required for ‘pre-processing’, the processing
performed before a block is received, was not included in this benchmark.
Let’s dig a little deeper into what all this means, and how
these specific elements are prerequisites for Amine.
Pre-processing is the processing required for an object before it becomes a
part of the ledger. This generally means checking for conflicts within
the database system, such as spends or register pre-states, followed by
more complex pre-processing such as signature verification. It is
important to note that our tests have shown signature verification to be
the biggest bottleneck in the processing of any transaction or
contract in Tritium. Since we use a 512-bit standard for key sizes, which
raises our security to around 256 bits, i.e. 2²⁵⁶ operations (versus 2¹²⁸ for Bitcoin, since ECC
only retains about half of the key length in usable security due to
different types of attacks), we have more signature data to
process when a transaction is received.
Pre-processing in Tritium will be performed through the memory pool. Tritium
blocks do not include the whole transaction object; they only contain
references to the objects that they are committing to disk (think of a block
as a sort of ACID
transaction). This means that if a block is submitted containing a
txid (transaction ID) that has never been publicly known by the nodes
on the network, the block will not be able to propagate until the
receiving nodes have run the pre-processing for that particular
txid. Consequently, if a miner tried to submit a malicious transaction
in a block as an attempt to double spend a transaction already accepted
in the memory pool, they would find it very difficult to get it
added to the main chain (i.e. verified by validating nodes). This is
because none of the nodes would have the pre-processing data required to
accept the block, and the conflicting transaction would in most
cases fail to be accepted, since it directly conflicts
with another transaction that has already passed pre-processing.
Preprocessing in Amine
will be aggregated into two processing layers, namely Trust and
Miners. This means that Trust nodes will mainly provide
pre-processing to the network, while miners provide the post-processing.
Post-processing is the processing required when a block is received, in order to fully
commit components of the data and change the register pre-states into
their post-states with verified checksums. The example above was pure
post-processing, and showed that our post-processing layers scale quite
nicely, with a maximum of around 40–50k tx/s if split into a two-tier
(pre/post) processing system. This two-tier processing system will be the
main aspect of the Amine architecture upgrade, along with additional operations and registers, and deeper, more advanced LISP functionality (we will explain how LISP shards will function in a later update).
With Obsidian, the two-tier process will become a three-tier process, which when integrated will have pre-processing (L1 processing channels), post-processing (L2 trust channels), and hardening (L3 distributed mining).
It is important to understand how the present Tritium architecture is
setting the foundation for all that is to follow. As many of you will
know, as with any undertaking, once the foundation is set, it is not
easily changed unless one takes apart the entire system. This is why it
was so important to give Tritium the time it needed.
Like pre-processing and post-processing, register processing is divided into two tiers as well:
the pre-state and the post-state. This is important to know in order to
understand how the registers act to modulate their states.
Understanding this will help uncover some of the benefits of pre-states
in a chain, and how a node can prune prior pre-state data based on the
verification of a transaction in a block object. A register post-state
could be considered one individual unit of Atomicity.
Register pre-states contain the current database state of the given register
before the operations execute. The pre-state is packaged in a binary format
inside the transaction object as the means of verifying that the initially
claimed state is the same state that the network currently holds for the register.
The benefits of this are two-fold: one, you are able to roll back the
chain without having to iterate back an unspecified number of blocks to
find the previous state mutation of the register; and two, you are
able to know the state of the register without having to calculate all
of its previous states. This adds further benefits, such as being able
to run nodes in a ‘lighter’ mode, where nodes are only required to verify
chain headers (which contain references to all of the transactions in a
block) to know that a transaction with a given pre-state was included
in a block. This allows for ‘light’ verification of a pre-state, i.e.
that the transaction was confirmed with the consensus of the network at a
given block height, and therefore is indeed valid.
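As a rough sketch of why carrying the pre-state makes rollback cheap (the structures below are invented for illustration and are not the Nexus implementation), note that reverting a transaction only needs the pre-state recorded inside that transaction, with no walk back through earlier blocks:

```cpp
#include <map>
#include <string>

// A transaction carries both the claimed state before the operation
// (pre-state) and the resulting state after it (post-state).
struct RegisterTx {
    std::string address;
    std::string preState;
    std::string postState;
};

struct RegisterDB {
    std::map<std::string, std::string> registers;

    // A transaction is only valid if its claimed pre-state matches what
    // the database currently holds for that register.
    bool Apply(const RegisterTx& tx) {
        if (registers[tx.address] != tx.preState) return false;
        registers[tx.address] = tx.postState;
        return true;
    }

    // Rollback is O(1): restore the pre-state recorded in the transaction,
    // instead of recomputing it from the register's whole history.
    void Rollback(const RegisterTx& tx) {
        registers[tx.address] = tx.preState;
    }
};
```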
With the growth of the network and the size of the ledger (one aspect of scaling
to consider), we can prune the data held by the ledger by removing old
pre-states, which lowers the data requirement and creates a more
efficient and sustainable network over an extended period of time. By
implementing this architecture now, we won’t end up with an over-baked
architecture in the future that can’t handle the overwhelming volume of
data that has been processed in the past.
When you hear of projects boasting 100k tx/s, or even 1M tx/s, let’s look at what this really entails:
On average, a tritium transaction will be a minimum of 144 bytes, and a maximum of 1168 bytes.
Let us take a best-case scenario: with a normal OP::DEBIT / OP::CREDIT payload being around 24 bytes, an example transaction comes to about 166 bytes.
Let us now multiply this number by 100,000 transactions which equals 16,600,000 bytes per second, or 16 MB per second. This means your internet connection would need to support at a minimum 16MB per second, or a 128 Mbps connection.
Now beyond that, let us look at the damage as it compounds. 16MB per second multiplied by 86,400 seconds (1 day) is 1,382,400 MB, which is 1.3 TB per day. Multiply this by 365 for a one year period and we have 504 TB per year consumed. This is obviously not possible on consumer grade hardware.
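For the curious, the arithmetic above can be checked mechanically:

```cpp
#include <cstdint>

// Back-of-the-envelope cost of a claimed 100k tx/s at 166 bytes per
// transaction, compounded over a day and a year (decimal units, matching
// the figures in the text).
constexpr std::uint64_t txBytes     = 166;
constexpr std::uint64_t txPerSecond = 100000;
constexpr std::uint64_t bytesPerSec = txBytes * txPerSecond;   // 16,600,000 B/s
constexpr std::uint64_t mbPerSec    = bytesPerSec / 1000000;   // ~16 MB/s (128 Mbps)
constexpr std::uint64_t mbPerDay    = mbPerSec * 86400;        // 1,382,400 MB/day
constexpr std::uint64_t tbPerYear   = mbPerDay * 365 / 1000000; // ~504 TB/year
```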
The above proof shows that claims of such grandiose scale are most
likely rooted in either folly or malarkey. For us, our pre-processing
and post-processing systems, LISP data shards, Lower Level Database, and
register pre-states will help scaling significantly, but there is no
way of knowing the exact scale that can be achieved until it is
demonstrated in real-world conditions over a long period of time. Right
now, our results are promising, seeing that we are achieving
reasonable scale in post-processing, and developing an architecture that is
able to shard the pre-processing to exceed the 4.3k tx/s bottleneck from signature verification.
Every register has a pre-state, which is used by the operations layer during
execution to move the register into its post-state. A post-state is what
is recorded in the register database as the new state of the register
after the transaction has completed. In order not to weigh down the
register script (some of the binary data packed into a transaction), we
include what is called a post-state checksum at the end of a register
pre-state. Any validating node will therefore compare its calculated
post-state to the post-state checksum that was included with the
transaction. The benefit of this is that a transacting node is required to do the
calculations themselves, to prove that they have done honest work. Other
validating nodes verify this calculation by comparing their new
register state checksum to the post-state checksum included in the transaction.
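The verification flow can be sketched as follows, using std::hash as a stand-in for the real checksum function (this sketch does not attempt to reproduce the actual checksum algorithm or transaction format):

```cpp
#include <cstdint>
#include <functional>
#include <string>

// The transacting node executes the operation itself and publishes a
// checksum of the resulting post-state; validators re-run the operation
// and compare their own checksum against the published one.
std::uint64_t StateChecksum(const std::string& state) {
    return std::hash<std::string>{}(state);   // stand-in checksum
}

// A validator accepts only if its independently computed post-state
// hashes to the checksum carried in the transaction.
bool ValidatePostState(const std::string& computedPostState,
                       std::uint64_t publishedChecksum) {
    return StateChecksum(computedPostState) == publishedChecksum;
}
```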
For those who are able to house extra data on their hard drive, their node
can be enabled to show the history of registers without much
processing required. Since the keychain object used for the
register database is a binary hash map, you can enable it to operate in
APPEND mode, which appends new data to the end of the corresponding
database files, enabling a user to reverse iterate from the end of a
hashmap collision chain, revealing the sequence of the register history.
This is very useful for registers used in supply chains or other
‘history’-related chains, such as the transfer of ownership of titles
and deeds.
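A toy model of the APPEND-mode behaviour described above (invented names, not the actual LLD keychain): appends accumulate in a bucket's collision chain, a read reverse-iterates so the most recent state wins in O(n) of the chain length, and a forward walk over the same chain yields the register's history in order.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <utility>
#include <vector>

class AppendKeychain {
public:
    explicit AppendKeychain(std::size_t nBuckets) : buckets(nBuckets) {}

    // APPEND mode: never overwrite, just add the new state to the chain.
    void Append(const std::string& key, const std::string& state) {
        buckets[Bucket(key)].push_back({key, state});
    }

    // Latest state: reverse-iterate the collision chain, O(n) in its length.
    bool Latest(const std::string& key, std::string& out) const {
        const auto& chain = buckets[Bucket(key)];
        for (auto it = chain.rbegin(); it != chain.rend(); ++it)
            if (it->first == key) { out = it->second; return true; }
        return false;
    }

    // Full history of a key, oldest first — e.g. a title's chain of owners.
    std::vector<std::string> History(const std::string& key) const {
        std::vector<std::string> states;
        for (const auto& entry : buckets[Bucket(key)])
            if (entry.first == key) states.push_back(entry.second);
        return states;
    }

private:
    std::size_t Bucket(const std::string& key) const {
        return std::hash<std::string>{}(key) % buckets.size();
    }
    std::vector<std::vector<std::pair<std::string, std::string>>> buckets;
};
```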
There are a few different types of registers, which determine what types of
operations can be executed on them. As you know from the Tritium white
paper, there are object registers and state registers. Let’s briefly
explain what each one is for:
A state register is one that holds the state for a component of an
external application, with no specification on the data format, which
means that specialized operations cannot be applied to these registers.
TYPE::RAW — A raw register is a register with a given number of bytes that can be
written to at any time. It is the most versatile type of
register, with no security parameters applied to it. Each WRITE is itself immutable once recorded, but because the register is RAW, its state can be overwritten by the owner: a WRITE is only permissible if done from the signature chain that is the current owner.
TYPE::APPEND — An append register is similar to a raw register in that it is created
with a given number of bytes, but this type of register can only have an
APPEND operation applied to change its data state. This means that in the
database itself, the original data always exists, along with
the history of all APPEND operations. A WRITE operation on this type of register will fail, even if done by the current owner. An APPEND
register therefore has security parameters associated with it that make it useful
for applications that would like to be able to update a register
without losing the data that existed before. This makes every APPEND immutable, while still allowing the register to be extended.
TYPE::READONLY — This type of register is useful for ‘write once’ use cases. It is only possible to use the ‘OP::REGISTER’
operation for this type, since it can only be written to once. This
is similar to a ‘const’ type in a programming language, and carries
security properties that are useful for certificates of authenticity,
titles, deeds, or contracts that the creator/publisher would like never
to be modified.
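The permission rules described for the three state register types can be summarized in a small sketch (a simplification under the assumption that only the current owner may mutate a register; the enum and function below are invented for illustration, not the actual operation dispatch):

```cpp
// RAW allows owner writes, APPEND allows only owner appends, and
// READONLY allows nothing after the initial OP::REGISTER creation.
enum class RegisterType { RAW, APPEND, READONLY };
enum class Op { WRITE, APPEND, REGISTER };

bool Permitted(RegisterType type, Op op, bool isOwner) {
    switch (op) {
        case Op::REGISTER: return true;  // creation, allowed once for any type
        case Op::WRITE:    return type == RegisterType::RAW && isOwner;
        case Op::APPEND:   return type == RegisterType::APPEND && isOwner;
    }
    return false;
}
```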
Object registers are more specialized, as the operations
layer needs to be able to recognize the data type that they contain. This is
useful for specialized operations that require knowledge of the format
of the data that the register contains. The following objects are
defined and usable in the current source code.
TYPE::ACCOUNT — This is a specialized register that contains the details of
someone’s account. An account can contain the balance of any type of
token, as denoted by a token identifier. Token identifier 0 is a
reserved identifier used for the native NXS token.
TYPE::TOKEN — This is a specialized register that contains the details of a token,
and claims that token identifier for use by the specific token. This
register holds information about the significant figures of the
token, and other parameters defining the total supply and the portion of the
supply that has been made available to the public.
The operations layer now contains a foundational set of processes, which
act as the ‘primitive’ operations. These together allow the creation of
records, history, tokens, transfers, and non-fungible tokens. Let us go
through each operation one by one, to explain what each one is capable of.
This operational code creates a new register with a memory address assigned
to it. The memory address must be unique, and will index the data of the
register. Think of it as an abstracted memory address, like the one obtained by
taking the location of a variable (in C/C++, this is the ‘&’ operator, itself an
abstraction of a machine address), except that it lives on the Nexus Blockchain. This will be
further abstracted towards Amine, when addresses will be not only ‘locally accessible’ but ‘network accessible’.
Though replicating the exact same state across the system does provide
added levels of redundancy, it evidently limits the potential of the
system to scale. Sharding the data workload into ‘network accessible’ groups
is therefore necessary, where specialized processing is performed by
different groups and types of nodes, whilst retaining the levels of
redundancy that replication provides.
This specific addressing is one of the innovations designed to solve
the data overhead problem outlined in the above section regarding
scaling. The two most notable bottlenecks that limit scaling are
signature verification and the increasing data overhead that
compounds very quickly as volume increases. A scalable system is not one
that can simply ‘process’ X many transactions per second, but one that
can still function after processing X many transactions per second for
years on end. Even with conventional data structures that
go as low as O(log n), when the system scales to billions of keys the
processing can still become quite large, especially when indexing from disk.
This primitive operation initiates a ‘write’ on a register, which overwrites all the data of the pre-state with the new data of the post-state. It has certain limitations: the register must be of TYPE::RAW,
and the total number of bytes being written must be the same as what the
register held before. This type of operation is generally best suited for
applications that submit raw data into the ledger to enable the
immutable storage of certain events, such as submitting a proof hash
into the public ledger from a hybrid system, or an application
requiring certain JSON to be submitted into a register.
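A minimal sketch of the two constraints just described (a hypothetical helper, not the actual validation code): the target must be a TYPE::RAW register, and the new data must match the existing length exactly, since a write overwrites in place and never resizes.

```cpp
#include <string>

// Returns true only if this 'write' would be accepted: the register is
// of the raw type and the payload size equals the current state size.
bool CheckWrite(bool isRawRegister,
                const std::string& oldState,
                const std::string& newState) {
    return isRawRegister && newState.size() == oldState.size();
}
```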
This primitive operation acts on a register of TYPE::APPEND,
and adds data to the end of the register without modifying the
original data. Useful examples of this operation include flagging a
title that is claimed by an insurance company, or updating specifics
about an item along a supply chain. Since the original data is always
retained in the append sequence, updates to a register via OP::APPEND
provide a useful audit and history mechanism.
This operation allows the ownership of a register to be transferred from one signature
chain to another. A transfer can also be made to another
register, such as a TYPE::TOKEN,
if someone would like a token to govern the ownership of a register.
This is how joint ownership can be provided between individuals, as the TYPE::TOKEN
then represents the ownership. It can also be useful for showing the
chain of custody between parties in a supply chain. If one wants to
create non-fungible tokens, this is the method used to
transfer their ownership, with the non-fungible
token generally being a TYPE::READONLY
register with an identifier specifying parameters of an object.
This could be a simple digital item with JSON specifications, and the
transfer operation would serve as the proof of ownership of that digital item
or non-fungible token.
OP::DEBIT
This operation is responsible for the commitment of funds from one account
to another. It is quite like the ‘authorize’ step of a debit card
transaction. When this operation is instantiated, the funds do not move
to the receiving account until the other user (the receiver) issues
their credit. The acceptance of the transaction by the receiver
completes the commitment. This operation works only on a TYPE::ACCOUNT object register, and can handle debiting from any type of token by any identifier.
OP::CREDIT
This operation is responsible for the final commitment of funds from one
account to another. Together the debit and credit produce a ‘two-way
signature’, which reduces the chance of funds being lost due to the use
of an incorrect address. If the funds are not accepted by the receiver
within a specified time-window, they become redeemable by the OP::DEBIT
issuer; therefore, funds will never be lost if sent to an invalid
address. An additional benefit is that a user can reject
funds sent to their account if there is any question of where the funds came
from. It also provides the option to generate a whitelist of addresses
from which the user will automatically accept transactions. This is
important for monetary safety: if you receive a mysterious deposit
in your account, there is otherwise no way of knowing who sent it or why.
What are the next operations?
The next two operations are very important, as they unlock the ‘validation
scripts’ which act as small computer programs that define the movement
of NXS. Validation scripts enable the full potential of the operations
layer, allowing functions such as the decentralized exchange of assets
to tokens, tokens to tokens, irrevocable trusts, programmable accounts, and more.
OP::REQUIRE sets a boolean expression that must evaluate to true
in order for a transaction to be claimable. OP::VALIDATE then executes the corresponding OP::REQUIRE
with the necessary parameters; if the validation evaluates to true,
the requirement is satisfied and the transaction can be claimed. An example would be OP::REQUIRE TIMESTAMP GREATER_THAN 1549220657, meaning that the corresponding transaction would not be able to execute until that timestamp has been reached.
Introducing the DEX
The DEX will work as a native extension of the OP::REQUIRE and OP::VALIDATE operations. It can be thought of as this:
User A wishes to sell 55 of Token Identifier 77. They want to sell it for Token Identifier 0 (NXS).
They choose their price: OP::DEBIT <from-account> <claim-account> 55 OP::REQUIRE TIMESTAMP LESS_THAN 1549220657 AND OP::DEBIT <my-account> 10.
In this above script <my-account> will be an account with identifier 0, and <from-account> will be of token identifier 77.
User B wishes to buy the 55 of Token ID 77. They send a transaction such as: OP::VALIDATE <txid> OP::DEBIT <from-account> <to-account> 10
Since this includes an OP::VALIDATE, it triggers the validation of the corresponding OP::REQUIRE submitting the parameters it is verifying. Since the OP::DEBIT was one of the parameters to the OP::REQUIRE, this will evaluate to true, satisfying the validation script.
User A can now submit a transaction: OP::CREDIT <txid> <claim-account> 55
User B can now submit a transaction OP::CREDIT <txid> <claim-account> 10
In the above sequence, four transactions are executed to facilitate the
decentralized exchange between two different types of tokens. This
process can also be programmed for the decentralized exchange of an
asset to a token, or even an asset to an asset. I will explain more on
how this works, and how we see the growth of the DEX, in the next TAO update.
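The four-step exchange can be sketched as a toy simulation. The dict-based ledger, the escrow map, and the function names below are illustrative only; the real operations execute through network consensus, not local function calls:

```python
# Toy model: (owner, token identifier) -> balance
accounts = {
    ("A", 77): 55,  # User A holds 55 of token 77
    ("A", 0):  0,   # User A's NXS (identifier 0) account
    ("B", 0):  10,  # User B holds 10 NXS
    ("B", 77): 0,
}
escrow = {}  # txid -> (token, amount) committed but not yet credited

def op_debit(owner, token, amount, txid):
    """Commit funds: move them out of the account until a credit claims them."""
    assert accounts[(owner, token)] >= amount
    accounts[(owner, token)] -= amount
    escrow[txid] = (token, amount)

def op_credit(owner, txid):
    """Claim committed funds, completing the two-way signature."""
    token, amount = escrow.pop(txid)
    accounts[(owner, token)] += amount

# 1. User A debits 55 of token 77, conditioned (via OP::REQUIRE)
#    on User B debiting 10 NXS in return.
op_debit("A", 77, 55, txid="tx-a")
# 2. User B validates the condition and debits 10 NXS.
op_debit("B", 0, 10, txid="tx-b")
# 3. User A credits the 10 NXS committed by B.
op_credit("A", "tx-b")
# 4. User B credits the 55 of token 77 committed by A.
op_credit("B", "tx-a")
```

After the fourth transaction, A holds 10 NXS, B holds 55 of token 77, and the escrow is empty.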
The API as it stands contains two types, Accounts and Supply. The
implementation details for now are therefore for the purpose of
demonstration only, using only a simple combination of operations such
as OP::APPEND, OP::TRANSFER, and OP::REGISTER.
Please keep your eyes peeled for additional API calls that will be
shown in the API documentation. I will explain how to interact with the API below.
Use a web browser to access the JSON responses.
You can use a web browser to make API requests to your Tritium node. This
is achieved by submitting a GET request to the API endpoint, which will
always be the IP address of the node and port 8080, followed by the API method and its parameters.
An example would be:
The above request will log you into the API and return a session
identifier, which should be included in all subsequent
requests to the API for methods that require authorization. Your PIN is
required for any transaction requiring authorization, to ensure that even
in a case where your username and password were compromised, your PIN
would still be required in order to access your account. This gives
similar properties to the 2FA that most login systems utilize today.
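A minimal sketch of forming such a GET request in Python, assuming a node listening locally on port 8080; the accounts/login method path and the parameter names are assumptions for illustration, so consult the API documentation for the exact calls:

```python
from urllib.parse import urlencode

# Assumed local node endpoint: IP address plus port 8080.
NODE = "http://127.0.0.1:8080"

def api_url(method, **params):
    """Build <node>/<method>?<urlencoded parameters> for a browser or GET request."""
    return "{}/{}?{}".format(NODE, method, urlencode(params))

# e.g. logging in, which returns a session identifier as JSON;
# that identifier is then passed to all authorized methods.
login = api_url("accounts/login", username="alice", password="secret", pin="1234")
```

Opening the resulting URL in a browser (or via `urllib.request.urlopen`) would return the JSON response from the node.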
Create a login page in your website powered by the Tritium daemon
You can embed a custom HTML form into your website to use a Tritium daemon
as a secondary login system that adds verification properties to your
web service. In the future, a login over the API will also trigger a
unique EID coupled with the login, making your service immune
to IP spoofing. The API handles application/x-www-form-urlencoded, so make sure to include your parameters in your form as follows:
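The form referenced here resembled the following reconstruction; the endpoint path and field names (username, password, pin) are assumptions based on the login call described above, so consult the API documentation for the exact names:

```html
<!-- Illustrative only: posts login credentials to a local Tritium node.
     The endpoint path and field names are assumptions. -->
<form action="http://127.0.0.1:8080/accounts/login" method="POST">
  <input type="text"     name="username" placeholder="Username" />
  <input type="password" name="password" placeholder="Password" />
  <input type="password" name="pin"      placeholder="PIN" />
  <input type="submit"   value="Login" />
</form>
```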
The page you are sent to afterwards will include the JSON response data,
containing the genesis ID and the session identifier to be used for
all subsequent calls to the API that require authorization. This way you
can give a user secure access to their signature chain through the
service node of your online service. Importantly, this gives users a way
to access their sig chain without needing to run a full node, and
without giving up custodianship of their funds and account information.
Embed contracts into your web application.
Since the API supports application/x-www-form-urlencoded,
you are able to embed any contract functionality into your existing web
application, either by forwarding forms through the API with a
pass-through forwarding URL, or by making custom forms that use the POST aspect of the API to process webforms. The above HTML example is a basic webform that can be integrated with your existing login system. To extend this, you can make calls to the API via AJAX
or more complex forms inside your system. This means that to build with
Nexus Advanced Contracts, all you need is a web developer who
is able to ‘plug and play’ the correct sequence of API calls into your existing application.
Use contracts or tokens in your regular desktop application
The API also supports application/json, so requests can be made via any of our provided Software Development Kits (SDKs), allowing your native application to take advantage of the API. Currently, we provide a Python SDK for use in any external Python application, which can be found in the repository in the folder named ‘SDK’.
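As a sketch of what such a JSON request might look like from Python, the supply/createitem method name and its parameters below are assumptions for illustration; consult the API documentation for the real calls:

```python
import json
from urllib.request import Request

# Assumed local node endpoint.
NODE = "http://127.0.0.1:8080"

def json_request(method, params):
    """Prepare a POST request whose body is a JSON-encoded parameter set."""
    body = json.dumps(params).encode("utf-8")
    return Request(NODE + "/" + method, data=body,
                   headers={"Content-Type": "application/json"})

# Hypothetical call: create a supply-chain item using a session identifier
# obtained from a prior login.
req = json_request("supply/createitem", {"session": "<session-id>", "data": "..."})
# urllib.request.urlopen(req) would then submit it to a running node.
```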
We would like to encourage developers to build software development
kits in their languages of choice for the API and contribute to the open
source development of Nexus.
Please refer to the following API documentation for up-to-date documentation on all APIs and calls that are available:
As any new call is implemented for -testnet or -private
mode, the corresponding documentation will be included. Please give
feedback if you find any information difficult to understand, and we
will modify the documentation to communicate it more clearly.
Logical / Interface
We are making progress on the App Store, which will be a developer-friendly
area to buy, sell and share Nexus apps. Our current design is
‘module’ based; however, this is only the first iteration of the App
Store. We will give more details on how the App Store will develop, and
how we will provide security to the applications supported by the App Store.
Request for new standards
Standards in the API and requests for new calls can be formally submitted and discussed on this mailing list here: [email protected].
Requests to lower layers such as new register types or operations can
be submitted to the same location. Please do give feedback if you find
anything you believe could be improved.
Command-line Flags Available
The following flags are available for use with the Tritium Daemon. Some are
experimental and undergoing debugging, while others are hardened
and ready for use.
-fastsync (experimental) — this flag will reduce your required synchronization time by a factor of 2.
-beta — sync your Tritium node on the mainnet with legacy rules. This will
allow you to run a Tritium node on the mainnet, which gives you access
to all the nice Tritium features such as sub-second load time, quick
synchronization time, and database stability.
-private — run your node in private mode to access the API functionality and
build local contracts. Post-processing is done via a private block, and
clears in sub-second intervals.
-legacy — use legacy-specific RPC formatting for nodes that need to retain backwards-compatible formatting.
— add foreign indexes for all blocks by height as well as by hash. This allows
the indexing of blocks by height from disk, but requires extra disk space.
-testnet — run your node in testnet mode over LISP or the regular underlay. This
will synchronize you to the test network, and requires mining to produce
valid blocks and commit post-processing data from your API calls.
The repository has specific semantics for each branch. The following list
briefly describes the purpose of each branch, and what each means for
what you are testing:
Personal — any branch that is named after a user such as viz, jack, scottsimon, paul, or dino. We recommend NOT building from a personal branch, as the code you pull will be incomplete or in development.
Merging — this branch is used to merge code between developers. Any code that exists on merging is still considered ‘unstable’, so if you decide to test off of the merging branch, do so with a debugger (debug instructions below). We recommend NOT using this code unless you are a qualified tester or developer.
Staging — this branch is used for pre-releases. This means that code is in Beta,
and is ready for wider public testing. Once code reaches staging, we
will periodically include pre-release candidates and binaries with
revisions and stability fixes. This branch is for public testing before
the release of official binaries.
Master — this branch will be the least updated, so if you are looking for the
most recent code, any of the aforementioned branches will keep you up to
date. Code is only pushed to master when a FULL
release is made, accompanied by a release candidate, binaries, and a
change log and description. The code on master can only be merged from
pre-releases in staging.
First, you will need to have a debugger handy. If you are on Linux, make sure to have gdb installed. This can be installed via: sudo apt-get install gdb
For OSX, the debugger is included with your Xcode command-line tools and is named lldb
Next, make sure to build the source clean by issuing this command: make -f makefile.cli clean
Next compile it with: make -j 8 -f makefile.cli ENABLE_DEBUG=1
Once this completes, you will need to start Tritium up with your debugger such as: gdb nexus
This will then enter you into a new command-line console, in which you want to type: run -beta -fastsync -gdb
With the -gdb flag, the daemon will close if you press the return key, due
to the debugger generally catching all the signals before the daemon receives them.
If you ever run across a point where the program crashes, get the backtrace by issuing the following command: bt
Take this backtrace and submit it to the #dev channel in slack for assessment.
If you have already been testing, or are looking to start helping test the
core, I would like to extend a big thank you for all your help!
Check out docker if you want to deploy nodes over LISP. You can find docker documentation here:
That is about all I have to report as of now. I hope that you continue
to watch the progress on our repositories, continue to give us feedback,
and of course, have fun doing it! Remember, if you’re not having fun,
you’re not doing what you love, so on that note, I will leave you to
ponder on what it is that brings you the greatest joy. In the meantime:
Welcome to the next update of the TAO update series, to continue the journey through the development of the TAO Framework. This particular article is centered around Tritium, our version 3.0 software client.
Are we ready?
Alright, here we go. The following includes a list of all the most recent code changes since the last “git pull origin master”.
Wow, that’s a lot of changes. It’s interesting to look back on it and see how much has been accomplished. As shown here, there was a total of 5,885 new lines of code, with 3,980 lines replaced. This generally means that there was some older code replaced with new, better code, along with almost 2,000 lines of new and fresh code.
Now we get to explore what it is that was changed on the more granular detail level, to present to you what will be included in the Tritium update.
Since the last update, there have been some improvements to the Network, or the LLP in our instance. So what is it that makes the network important?
The network is responsible for all end-to-end communication.
If the network was unable to propagate messages, the core peer to peer network would be unable to function. The crypto (LLC) in our case is an overlay for certain network messages to deal with cryptographic objects such as our transactions or blocks. Let us look at the newest results of the LLP in action (you would have also seen this in my Tritium presentation at the 2018 Nexus Conference, linked here: https://www.youtube.com/watch?v=P2pdz4zO38k).
Here it’s good to see requests top out at 197,744 per second, as the LLL is the foundation for the TAO, hence the repository name LLL-TAO (shhh, you’ll see the code soon enough if you can find it).
The next aspect of the Network, formally demonstrated by Dino Farinacci, is the LISP architecture, and how it fundamentally works together with the Ledger to provide a safer data layer on the internet. Right now there are a lot of the same problems with identity: the internet used to be seen as a place full of fake accounts, trolls, and misinformation. Lately though, we have discovered that the internet can be full of amazing things and incredible possibilities for promoting the ideas of freedom and prosperity.
Now, one of the reasons the network has been plagued with these negative aspects is that there is no trust layer in the actual system. We have no way to identify someone other than their IP address, which is easy to forge or fake. This creates problems, because one cannot reliably know who they are talking to (this doesn’t mean they need to know personal information, but rather that there is consistency across identities). By tying LISP and Nexus together over the ledger, we create the ability to establish a cryptographic identity for the user. This happens over the network with the static EID in LISP, and over the ledger with a signature chain for a user. This becomes very important for reducing fraud and identity theft, which is one of the focal points for Nexus.
The ledger consists of the series of events that establish the ownership of any register in the stack. This makes the ledger operate very quickly, since it doesn’t have an incredible overhead in processing requirements besides general cryptographic functions. The biggest bottleneck of the Ledger falls under the LLC (Lower Level Cryptography), as this is where the cryptographic verification happens. The newest results of a Ledger scaling test (transactions only, no blocks in this test) show over 4.3k tx/s capable of being processed by a single node.
This generates a good picture of what a full LLL stack running real transaction data over it would look like. This shows that the LLD is easily keeping up with the demand of writing over ~1,500 KB/s (Bitcoin has a maximum of ~1.7 KB/s). The reason for the slowness in Bitcoin is its block size limit of 1 MB every 10 minutes. In this case, we wanted to demonstrate the efficiency of the LLL stack in its capability to handle large loads of data. This particular example shows the maximum processing capabilities of one node, which essentially would be the limits of one L1 state processing channel. The signature aggregation being passed from L1 to L2 through L1 verification nodes in a 3DC would enable greater levels of scalability without sacrificing the security of a global set of consensus validators.
The register layer is operating well, with a simple data structure that allows easy indexing and locating of the signature chain transaction history and identification of register ownership. Since the signature chain GenesisID occupies a 256-bit number space, it is easy to transfer the permissions of a register to be owned by a signature chain, or simply by another register. This now sets the foundation of the Nexus Digital Identity System.
Let’s recap what a register is for better context here:
A register is a data object that changes state through global consensus, or a logical layer application.
Why are registers important?
Computers are state machines by nature. They contain a value that correlates to something in the outside world and change this value based on a sequence of logic. I know this sounds complicated, but in reality, it is quite simple.
If I have 5 apples, I record this number in a register I own to prove I have these 5 apples. When other people ask me how many apples I have, I can give them my register address, and they can see, ahh, viz. has 5 apples. Now if I sell an apple, let us say through an order I put on the network saying: I’ll give you this 1 apple if you give me 0.1 NXS, then we are able to have state changes recorded and verified by the network, correlated to financial transactions.
This means that if someone were to make this transfer into my contract order, it could execute the state change of my total apples from value 5 to value 4, while I send off the apple to the lovely customer. This is a crude example, I know, but the intention is not to show secure program logic — rather, logic that shows the use of a register on Nexus.
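The apple example can be sketched as a toy register: a state value tied to an owner, where only the owner may write a new state. The class and names here are illustrative assumptions, not the actual Nexus data structures:

```python
class Register:
    """Toy model of a raw state register owned by a signature chain."""

    def __init__(self, owner, state):
        self.owner = owner
        self.state = state

    def write(self, caller, new_state):
        """Only the register's owner may write a new state."""
        if caller != self.owner:
            raise PermissionError("caller does not own this register")
        self.state = new_state

# viz. records 5 apples; selling one is a state change from 5 to 4.
apples = Register(owner="viz", state=5)
apples.write("viz", apples.state - 1)
```

On the real network, the equivalent state change is verified through consensus rather than a local method call.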
Now that we have gotten this out of the way, let us look into what Object registers have been defined as of this update:
Each of these 4 objects are objects that can have the state changed through the ledger consensus mechanism. This is important for Tokens, Escrow, Orders, and Accounts as I shouldn’t be able to modulate the balance of my account without approval from the network. The downside to having the ledger do all the state changing of the registers through the operation codes is the resource requirement of all nodes to process this state change and ensure that it does not create a conflict with another state. It becomes very important to find the balance between logical and ledger state changes, as the network doesn’t always need to know everything that the logical layer is doing, and the logical layer shouldn’t be doing everything that the ledger is doing. This distinction is important for understanding how Nexus Advanced Contracts will scale to levels of requirement for mass adoption.
Raw State registers, on the other hand, are defined through specifications on the Logical layer. They operate very quickly because nodes only need to write the data, address, and owner of the register. Only the owner of this register will be capable of writing a new state to it, if it is not defined as read-only. This allows the Logical / Interface layers (The Application Space) to record important state data for their system, such as hashes to IPFS files, private database transactions, or even authorization objects that modulate the state of their database based on user actions.
The following operations are implemented fully, with functionality that executes through the vchLedgerData data member of the Tritium Transaction class:
1. OP_WRITE: Write a new state to a register address given as a parameter to the operation code.
2. OP_REGISTER: Create a new register on the network. It must contain a unique register address, claiming it for this signature chain.
3. OP_TRANSFER: Transfer ownership of a register to another signature chain, or to another register address. This is an important function for establishing the ownership of data by signature chain, and the supply chain that moves it from one owner to the next.
4. OP_DEBIT: Debit a token from a given Account Object Register. This is the commitment of funds operation that gives the recipient the ability to claim with a corresponding credit. The balance of the debit that can be claimed is determined by the percentage ownership displayed through the temporal proof.
5. OP_CREDIT: Claim a balance by referencing the debit transaction that was used to commit funds to the given receive account. If a credit is claiming funds from a debit on an account of which they are a joint owner through a register chain, a temporal proof is required to demonstrate their ownership.
The method that processes the operation codes is called execute:
This method is responsible for the changing of the states on the ledger level, as operations are instructions to the processing nodes to modulate the state of a register through global consensus. The operations and register layers are being designed to be processed on higher locking levels of the 3DC (namely L2), to ensure that transaction processing is broken up across multiple node layers which adds to our ability to scale the advanced contract processing.
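Conceptually, execute behaves like a dispatcher from operation codes to state-changing handlers. The following Python sketch is illustrative only, with assumed handler names and signatures, and is not the actual C++ implementation:

```python
# Toy register store: address -> {"owner": ..., "state": ...}
registers = {}

def op_register(address, owner, state):
    """OP_REGISTER: create a new register at a unique address."""
    assert address not in registers, "address must be unique"
    registers[address] = {"owner": owner, "state": state}

def op_write(address, owner, state):
    """OP_WRITE: the owner writes a new state to an existing register."""
    assert registers[address]["owner"] == owner
    registers[address]["state"] = state

def op_transfer(address, owner, new_owner):
    """OP_TRANSFER: move ownership to another signature chain or register."""
    assert registers[address]["owner"] == owner
    registers[address]["owner"] = new_owner

OPS = {"OP_REGISTER": op_register, "OP_WRITE": op_write, "OP_TRANSFER": op_transfer}

def execute(op, *args):
    """Dispatch an operation code to its state-changing handler."""
    OPS[op](*args)

execute("OP_REGISTER", "0xabc", "alice", b"hello")
execute("OP_WRITE", "0xabc", "alice", b"world")
execute("OP_TRANSFER", "0xabc", "alice", "bob")
```

In the real system these state changes are validated through global consensus before they are applied.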
If an operation code is followed by a validation script, the validation’s boolean logic must evaluate to true for the operation code to be claimable as a proof. What this means is that a certain condition needs to be true for that operation to be claimed. An example would be “Do not call me past 9:00 PM”: if one were to try to make a phone call after that time, the call would not go through.
A simple example of this would be relating to debits and credits, where one could put stipulations on the OP_DEBIT requiring the time to be of a certain point in the future. What this would result in is the OP_CREDIT satisfying this script by being submitted past the timestamp that was required. This allows one to program logic beyond the basic operation logic to create greater functionality and customization.
The Application Programming Interface will use what is termed JSON (JavaScript Object Notation) to submit commands to the nodes that create these state changes in registers, resulting in program logic that gives us the ability to use advanced contracts.
The API will have two components, public and private.
This is important, as the public API will always be developed with public funds through the multiple Nexus Embassies, and provide the required functionality for public use. This will include most of the use cases and programmable logic.
The reason for a private API component is the integration of businesses that require some of their application logic to be more specialized in terms of API functionality, due to the proprietary nature of their developments. This also becomes a “software-as-a-service” integration opportunity for an Embassy to generate additional revenue streams, with the profits recycled back into the core development process.
Logical and Interface
The new Nexus Tritium Wallet is now in public beta. The launch has brought an incredible amount of feedback and bug reporting to improve the interface and logical layers. As many of you already know, we provide one common interface for such functionality, and any other distributed application developers will be able to develop their own.
The logical layer is where most of the processing gets done. It is an extended application space through the OSI.
It is important to understand this, because the idea of a blockchain carries many connotations that extend beyond just the ledger. It can coordinate many systems, have private networks that operate and state record off the ledger, access control schemes, state recording based on user actions, and the list can go on. We like to leave this area open for any new type of developers to extend their application space from the conventional OSI design.
What’s in Our Future?
As you can see from this blog post, most of the hard development is now complete for Tritium. This means that we are in the stage of weaving together code over the network, establishing local databases to handle your sigchain and register indexes, and adding lower level RPC commands to interact with the Ledger, with the higher level API being the interface in the command set.
What does this mean?
Tritium will be released by the end of January, 2019. Yes — I said it — a timeline. As we have noticed over the last year, the removal of roadmaps and timelines did not do what was intended, it only created further uncertainty and rumors in the project. As we are moving into Chapter 3 of our history books with distributed Embassies, newer architecture, and distributed governance models, we felt it was appropriate to augment this with commitments from the development team to set and meet deadlines.
The 2018 Nexus Conference in September was filled with new and exciting technology updates, education about the future of blockchain, and inspirational speakers that reminded everyone of the beautiful and exceptional opportunities our world has to offer. First and foremost, Nexus would like to thank all of our partners, sponsors, speakers, exhibitors, volunteers, community, and attendees, who made this one of the most enjoyable blockchain events of the year.
The Tritium Software Stack that Nexus has been developing was demonstrated by Colin Cantrell, the Founder & Architect of Nexus. In tandem, the developers have been working on the integration of LISP (Locator/ID Separation Protocol) to increase the speed and security of message propagation on the network level, presented by Dino Farinacci and Victor Moreno.
The team has also been focusing on the reorganization and restructuring of the US Embassy, to realign with the decentralized social and financial principles originally outlined in the Genesis White Paper. With these exciting changes, the team and community will be moving forward and will continue to promote the ascent of Nexus as a leader in the blockchain and cryptocurrency industry.
2018 has been filled with conferences of all kinds. There are conferences that focus on hackathons, ICO projects, economic forums, and business development, all interested in educating the world about blockchain technology, new governance models and economic systems. None though are quite like The Nexus Conference. An intimate event of 350 people, the main focus of the conference was to bring people together who truly wanted to get into the trenches and build these new systems.
The event began with the Blockchain 101 course that introduced new enthusiasts to how exactly this distributed peer-to-peer truth machine works. Later during the conference, a speaker asked the crowd, “By a show of hands, for how many of you is this your first blockchain conference ever?” and half the audience raised their hand. Bringing new people into this emerging industry is an important part of the overall growth model, as the goal of freedom through financial and informational accessibility was at the center of Bitcoin, the original blockchain protocol.
Conversations at The Phoenician centered around how our society has evolved thanks to technology, especially since the information age started, with the invention of the internet. Nicholas Thompson of WIRED took the crowd on a journey through the history of technology and how it has impacted every stage of our lives. Jay Samit demonstrated how everything we do, from planning our calendar to ordering our food, is now virtual. Scott Hines shared his insight on how our digital identities will play a big role in machine learning. Jim Cantrell took the stage to remind everyone that the point of all this hard work is to give monetary control back to the people.
The last day of the conference introduced the attendees to one of Nexus’ invaluable advisors: Steve Beauregard, CRO of Bloq, who has had a profound impact on the evolving cryptosphere. James Glasscock focused on innovating curation through the distributed gig and token economy. After diving deeper into the regulation of the SEC, participants were invited to the first ever Nexus working groups.
The very first Nexus working groups produced very fruitful discussions, ranging from the technical exploration of the ledger, contracts, and network, to discussions relating to distributed social organization and how Nexus could create scalable self organizing structures. The DAC working group lasted for over 3 hours, which showed signs of intense interest in the discussion. This was important, as the fundamental aspects of decentralized technology rely on our ability to organise ourselves in new ways. The proposed structures and understanding of the limitations of our current options helped spur the discussion. We will set up online groups for all of the working groups and will hold regular Zoom meetings to combine knowledge in developing new standards. If anyone would like to join a working group (DAC, Network, Ledger, Contracts, API/Functionality, Architecture), please contact [email protected]
We hope that everyone walked away from the conference with a better understanding of blockchain, cryptocurrencies, and the Nexus technology and are inspired to get involved in projects that advance the goals of making our society better for everyone in the world. We look forward to continuing the working groups so developers, business leaders, and average users can play an active role in what we’re building.
At the 2018 Nexus Conference, Colin Cantrell, Nexus Founder and Software Architect, gave a brilliant presentation explaining the latest developments of the Nexus Tritium protocol. Tritium is the first release of the Nexus Three Dimensional Blockchain (3DC), and will be followed by Amine and Obsidian. He explained the necessity of each layer of the Tritium software stack (Network, Ledger, Register, Operations, Logical, API and Interface), their different functions, and gave a deeper explanation of the Lower Level Library (LLL). Colin displayed some impressive benchmark tests comparing the Nexus Lower Level Database (LLD) read / write speeds to Ripple Nu DB, Google Level DB and Oracle Berkeley DB (spoiler: Nexus tested faster than all). He also demonstrated a live, functioning demo of Tritium with the entire stack running, and introduced many business use cases for advanced contracts.
Read the full recap and watch the presentation here.
The team conducted a key allocation meeting via Zoom on October 5th as part of the ongoing reorganization of the current Nexus Embassy, US. Staying true to the initiative of getting back to our roots, Colin Cantrell stated that the purpose of the meeting was to discuss global strategies, with community input on how the Embassies can be distributed.
It was proposed that in order to decentralize the organization, two new Embassies will be established – one in the United Kingdom, founded by Alex El-Nemer, and one in Australia, founded by Paul Screen, Mike Casey, and Nic Henry. The chairs of the meeting asked all 48 Zoom attendees whether there were any objections to the proposal, of which there were none.
US Embassy Restructuring
As part of the organizational restructure, certain positions have been eliminated or scaled down. The positions of CEO, formerly held by Ajay Ahuja, VP of Marketing, formerly held by Scott Bischoff, HR Manager, formerly held by Omnia Elawawzy, and Marketing Manager, formerly held by Wendy Katz are no longer paid positions at the US Embassy. The rest of the team want to thank them for all they have done for Nexus and wish them the best in their future endeavours. John Saviano, Nelson Sparks, and Colin Forbes (former CFO) have moved to contractor positions and will continue to help the project as needed. Alex El-Nemer will continue his activities within Global Business Development, but will couple this with leading the newly formed UK embassy.
The new Embassies will place additional emphasis on community involvement in educating people about the technology. Business development will be the primary driver of marketing, as new use cases produce real stories that show the world how blockchain will help many different people. The secondary form of marketing will be blogs and updates written by the team, articles written by the community, educational videos and graphics, interviews, presentations and speaking opportunities at conferences, local meetups, and exhibits. In-house social media initiatives will continue to evolve, and the team is meeting with PR candidates to distribute communications to the mainstream media, secure placements for published articles, support the production of events, and pitch for interview and speaking opportunities.
UK and Australia Embassies
Alex and Paul both spoke about the plans for their respective Embassies. Alex’s goal for the UK Embassy is to continue to develop relationships with companies who will increase the adoption of Nexus by becoming part of the Nexus ecosystem through blockchain use. Additionally, the UK Embassy will hire developers tasked with building out the APIs necessary to integrate business applications with the Nexus blockchain, increasing enterprise involvement in the blockchain industry. The UK Embassy will continue to drive business development globally, but with a concentrated European focus.
The Australian Embassy will initially focus on starting an additional development team, with Paul taking on the role of Ambassador to head communication efforts between the developers in all three countries. “The primary reason for starting all of these Embassies is a step to decentralize the project,” he said. This creates redundancy and risk mitigation as the team builds out the TAO framework and onboards new users. The plan is to hire 3-5 developers over the next 3-9 months, depending on NXS coin price, who will work on all layers of the Nexus software stack. The future plan is to scale out business development in Australia, New Zealand, and Southeast Asia.
The current proposal for key allocation includes distributing three keys to the UK Embassy, three keys to the Australian Embassy, and seven keys to the new US Embassy. Once the new model is established, there is a second stage proposal to allocate two more keys (one to the UK and one to Australia) depending on the future price point. The third stage proposal is for the three Embassies to each choose another candidate for an allocation of a key. The Australian Embassy will focus on Southeast Asia, the UK will assess some potential candidates in Europe, and the US Embassy will search for a candidate in the South American region. Each Embassy will be sovereign, independent, and accountable for managing its own finances. The Embassies will also collaborate on technological and business development, and share outsourced support services, such as PR, if mutually agreed through consensus.
Distributed Autonomous Community
One of the most exciting events at The Nexus Conference was the working group centered around creating a Distributed Autonomous Community (DAC), so everyone who is involved in Nexus is able to have their voice heard. Facilitated by April Bunje and Jules Alexandra, the community discussed possible solutions to the problems often faced by organizations when attempting to scale. For three years, Colin has spoken about the necessity of creating a representative structure and a cryptographic voting mechanism, to facilitate the allocation of funds that originate from the self-funding mechanism of the Nexus blockchain. The DAC organizational design is in development, with the plan to have it fully built out by the completion of Obsidian. The current structure encompasses different voting groups. The design is in an early stage of development, and we welcome as much collaboration on its research and design as possible.
Over the last year, Nexus has taken a unique approach to market research for the development of advanced contracts and APIs. The team has focused on business development, building partnerships that will help Nexus achieve its goals. Alex El-Nemer spoke in the key allocation meeting at the beginning of October about reaching out to companies to help them solve their most challenging problems. “When we have a meeting, we don’t go in pushing a blanket product, we just ask them, how can we help?” said Alex, explaining how the Nexus team approaches new partners.
Most companies are in the early days of understanding how blockchain technology can help them, and it’s important to educate them on what blockchains can and cannot offer. The team also helps businesses discern what they need from a public blockchain versus what can be accomplished with a public-private hybrid system. The next step is building a pilot: showing them how the code works and how it can be integrated into their systems through an API.
Many companies are already aware of the potential use cases of blockchain technology for their business model. Some have been researching and developing for a year or more, but are finding the limitations posed by other protocols, like the scaling issues on the Ethereum Smart Contracts Platform, and are now looking for a new solution. “Those that do know what it’s all about and are looking for people who can do it better than everyone else,” said Paul Screen.
Use cases built with Nexus advanced contracts will focus primarily on:
Automated peer-to-peer global transactions, including the transfer of asset ownership
Immediate, unbiased contract execution, replacing the need for third-party escrow services
Consumer account verification to reduce fraud relating to social engineering attacks
Supply chain transparency and management, to reduce counterfeiting of goods and to prove the adherence to consumer and industry standards
Tokenization of digital data to create copyrights or licenses while automating royalty payments
The use of access control schemes to log and control access to private database systems
Improvements in auditing through financial transparency
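To make the escrow use case above concrete, here is a minimal, hypothetical sketch of an escrow agreement whose release logic executes automatically once its conditions are met, with no third-party custodian. This is not Nexus code or the Nexus contracts API; all class and method names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class EscrowContract:
    """Hypothetical escrow: funds release automatically when the
    buyer confirms delivery, with no third-party custodian.
    (Illustrative model only, not the Nexus contracts API.)"""
    buyer: str
    seller: str
    amount: float
    funded: bool = False
    delivered: bool = False
    released: bool = False

    def fund(self) -> None:
        """Buyer locks funds into the contract."""
        self.funded = True

    def confirm_delivery(self) -> None:
        """On delivery confirmation, the contract itself releases
        the funds immediately and impartially."""
        if not self.funded:
            raise ValueError("contract not funded")
        self.delivered = True
        self.released = True

contract = EscrowContract(buyer="alice", seller="bob", amount=10.0)
contract.fund()
contract.confirm_delivery()
print(contract.released)  # True
```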
Nexus has also been collaborating with TechUK, a UK technology trade association. Alex and TechUK wrote a blockchain use case report, which has been distributed to over 900 TechUK members, including companies such as Facebook and Amazon. This report is raising awareness of Nexus’ innovations, and we are actively collaborating with similar bodies in other countries.
One of the industries that the team is very passionate about is music, as Colin, Alex, Brian Vena and many others who support Nexus are musicians and music lovers. One of their greatest wishes is to empower artists, by creating technology that ensures artistic talent is rewarded and supported financially. This is one of the goals of the new partnership with SoundVault, a global music licensing and royalty tracking platform.
The feedback gathered from meetings with corporations and small businesses has helped Colin and the development team shape the architecture of Tritium, taking into account the needs of different industries, thus enabling the creation of truly useful technology. The team will continue building these relationships and connecting with more companies to establish real-world use cases for the Nexus blockchain.
Nexus Partners with Spacechain
At the Nexus Conference, Nexus announced its revolutionary partnership with SpaceChain, the decentralized space agency that plans to build an open-source, blockchain-based satellite network. The partnership centers around collaboration to deploy a decentralized internet in space. SpaceChain and Nexus have discussed hosting each other’s nodes on their respective future satellites, and there are many more exciting developments to be explored. Additionally, Jeff Garzik, one of the founders of SpaceChain, is joining the Nexus team as an advisor.
Since the Trust update (2.5.1) on the 13th of September, which replaced the expiration of Trust Keys with trust scores, the number of Trust Keys has doubled to over 400 active keys. This shows that more people are actively staking, which increases network security and takes NXS off the market.
Read the new Proof of Stake with Tritium Trust White Paper here.
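The core idea of the update, that a key's trust decays while a node is offline rather than expiring outright, can be sketched as a toy scoring function. The accrual and decay rates below are invented for illustration and are not the actual Tritium Trust parameters; see the white paper for the real algorithm.

```python
def update_trust(score: int, elapsed: int, online: bool,
                 decay_rate: int = 3) -> int:
    """Toy trust-score model (hypothetical parameters, not the actual
    Tritium Trust algorithm): the score accrues one point per unit of
    time while the node is staking, and decays gradually, rather than
    expiring outright, while the node is offline."""
    if online:
        return score + elapsed
    return max(0, score - decay_rate * elapsed)

score = update_trust(0, 100, online=True)      # accrues to 100
score = update_trust(score, 10, online=False)  # decays to 70
print(score)
```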
October 31st - November 2nd: Dionna Bailey and Anastasiya Maslova are heading to World Crypto Con in Las Vegas.
November 3rd - 9th: Colin Cantrell and Dino Farinacci will be attending the IETF meeting in Bangkok, Thailand.
November 16th - 17th: Colin Cantrell will be at the CryptoFinance conference in Oslo.