Sparkster Development Update. WE 17th September 2018.


Sometimes, as you’re building something, you find more efficient ways of doing things than what you had originally planned. This week, we will be discussing some of the significant changes we have made to our architecture in order to take advantage of opportunities we discovered during development. We will detail these changes to the reader, along with the reasons behind these design decisions.


In the last design we outlined to the reader, we mentioned that a compute node would not wait for consensus before responding to the client with the result of the computation. In that approach, verification of the result would occur at some point in the future, but the user would have their result immediately. The idea here was that, if the result produced by the compute node were later shown to be falsified, the consequences would only affect the compute node in question and the current state of the data. Let us elaborate on this point briefly.

Blockchains that store data typically use “data trees” (or tries), data structures that are, for lack of a better term, snapshots of the data at a particular point in time. In the case of blockchains, the data trees represent the data in the blockchain at the close of a block. These trees are called Merkle Trees, and they have an interesting property: they are derived entirely from the data they contain. This means that if we were to change the data in the tree, we would get an entirely new data tree, with the old one left intact.

Therefore, if we wanted to look at the data after a block closed, all we would need to do is find the “address” or hash of the tree that existed at the time of the block’s closure. A “hash” is a special value that depends on some data, so if the data changes, the hash also changes. We can use the hash of the top or “root” of the Merkle Tree to gain access to the entire tree, because this hash depends on the hashes of its branches, which in turn depend on the hashes of their branches, and so on. Consequently, every hash in the tree ultimately depends on the data it covers: the nodes at the bottom of the tree (the “leaf” nodes) take their hashes directly from the data stored there, and the nodes farther up the tree derive their hashes from the nodes below them.
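To make this concrete, here is a minimal Python sketch of a Merkle root computation (purely illustrative, not our actual implementation): leaf hashes are taken directly from the data, parent hashes are derived from their children, and changing a single value yields a different root hash.

```python
# Minimal sketch of a Merkle root (illustrative only). Leaf hashes come
# directly from the data; every parent hash is derived from its children's
# hashes, so the root hash changes whenever any piece of data changes.
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaves: list[bytes]) -> str:
    level = [h(leaf) for leaf in leaves]            # leaf hashes from the data itself
    while len(level) > 1:
        if len(level) % 2 == 1:                     # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [h((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

docs = [b"Doc.A=1", b"Doc.B=2", b"Doc.C=3", b"Doc.D=4"]
old_root = merkle_root(docs)

docs[1] = b"Doc.B=99"                               # change one field...
new_root = merkle_root(docs)
print(old_root != new_root)                         # ...and the root hash changes: True
```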

Since we can easily store the root hash of the data tree inside a block, if we wanted to undo a transaction because we discovered that it was falsified, all we would have to do is recover the previous root hash of the data tree and make that the current state of the data.
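As a rough illustration of that rollback idea (the names here are hypothetical, not our production code), the sketch below keeps one snapshot of the tree per root hash; undoing a falsified change is simply a matter of pointing the current state back at an earlier root.

```python
# Illustrative sketch: each block records the root hash of the data tree at
# its close, so "undoing" a falsified transaction amounts to pointing the
# current state back at an earlier root.
states = {}            # root hash -> snapshot of the data tree at that point
block_roots = []       # root hash recorded at the close of each block

def close_block(snapshot: dict) -> None:
    root = hash(frozenset(snapshot.items()))   # stand-in for a real Merkle root
    states[root] = dict(snapshot)
    block_roots.append(root)

close_block({"Doc.B": 2})
close_block({"Doc.B": 99})                     # later shown to be falsified

current = states[block_roots[-2]]              # roll back to the previous root
print(current)                                 # {'Doc.B': 2}
```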

This worked well for independent transactions, but when we started to explore dependent transactions, we foresaw some challenges. Suppose there exists a transaction T1 that modifies a field B on a document called Doc. Next, suppose we have a transaction T2 that also modifies B on Doc, but T2 depends on the value written by T1. Eventually, we discover that T1 was invalid and roll back the state of the tree. This would undo the changes that T2 made to B as well, even though T2 was a valid transaction.

Now suppose we executed T1, T2, T3, …, Tn and discovered that T1 was invalid. Transactions T2 to Tn would also be undone. To overcome this problem, we’ve evolved our architecture.
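The sketch below (illustrative only) shows the problem in miniature: reverting to the snapshot taken before T1 also discards T2’s perfectly valid update.

```python
# Each transaction builds on the previous state. Rolling back to the
# snapshot that preceded T1 silently throws away every later transaction too.
snapshots = [{"Doc.B": 0}]                 # state at the close of the last block

def apply_tx(update: dict) -> None:
    new_state = dict(snapshots[-1])
    new_state.update(update)
    snapshots.append(new_state)

apply_tx({"Doc.B": 10})    # T1 (later found to be invalid)
apply_tx({"Doc.B": 20})    # T2 (valid, but built on T1's value)

rolled_back = snapshots[0]                 # revert to the state before T1
print(rolled_back)                         # {'Doc.B': 0} -- T2's valid work is gone as well
```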

Instead of returning a result immediately, we now allow time for 20 verifier nodes to reach consensus on the validity of the transaction. However, unlike our previous architecture, each verifier node will tell all 21 master nodes its “vote” on a transaction. So, instead of each master node selecting a verifier, waiting for that verifier to execute the code and respond back to it, and then informing the 20 other master nodes of the verifier’s response, we will supply each verifier with the list of all 21 master nodes. When a verifier is done, it will call out to all 21 master nodes directly. The benefit of this approach is that the verifications happen in parallel, drastically reducing latency. The tradeoff, of course, is increased network traffic.
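The sketch below (a hypothetical structure, not the real protocol code) shows this fan-out: every verifier re-executes the code itself and reports its vote directly to each of the 21 master nodes, so no master node has to relay another node’s result.

```python
# Sketch of the vote fan-out: each verifier broadcasts its vote on the
# transaction directly to every master node.
NUM_VERIFIERS = 20
NUM_MASTERS = 21

votes_seen = {m: [] for m in range(NUM_MASTERS)}    # each master node's local tally

def execute_and_check(transaction: dict) -> bool:
    return True                                     # stand-in: re-run the code and compare results

def verifier_run(verifier_id: int, transaction: dict, masters: list[int]) -> None:
    valid = execute_and_check(transaction)          # the verifier executes the code itself
    for m in masters:                               # ...and reports its vote to all 21 masters
        votes_seen[m].append((verifier_id, valid))

tx = {"doc": "Doc", "field": "B", "value": 42}
for v in range(NUM_VERIFIERS):
    verifier_run(v, tx, list(range(NUM_MASTERS)))

# Every master node now holds all 20 votes without any relaying:
print(all(len(v) == NUM_VERIFIERS for v in votes_seen.values()))   # True
```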

Once the master nodes see that the verifiers have established the validity of a transaction, any state (data) changes that need to be made will be made by the compute node, and the final output will be returned to the client.

Finally, the transactions for that block will be committed to the blockchain, and we will also insert the state of the data tree into the block at this time.
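As a rough sketch of this step (the field names are hypothetical), a closed block carries both the verified transactions and the root hash of the data tree at the time of closure, so the state at any block can always be reconstructed.

```python
# Hypothetical sketch of closing a block: the verified transactions and the
# data tree's root hash at block close are committed together.
import hashlib, json

def close_block(prev_block_hash: str, verified_txs: list, data_tree_root: str) -> dict:
    block = {
        "prev": prev_block_hash,
        "transactions": verified_txs,    # only transactions the verifiers agreed on
        "state_root": data_tree_root,    # state of the data tree at block close
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

block = close_block("prev-hash", [{"doc": "Doc", "field": "B", "value": 42}], "root-hash")
print(block["state_root"])
```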

By taking this approach, we eliminate the need for rollbacks entirely and no longer need to concern ourselves with dependency graphs or with maintaining a pointer to the data tree as it existed before a change.

We are also using a distributed hash table (DHT) to store important parts of the data tree for fast lookup times. Hash tables have the benefit of O(1) lookup times on average, so if we wanted to retrieve the part of the tree where a particular document is stored, we could store a reference to that subtree in a hash table, saving us from having to perform a breadth-first search on the tree to find the data we are seeking. The word “distributed” here means that the hash table is stored in parts, on different systems. When we want to retrieve a piece of data from the hash table, we ask the network which system holds the key’s hash. The system that contains it will respond and give us the data we want. Our DHT will be stored on the storage nodes. This design decision has improved our search times significantly.
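The sketch below (illustrative, with hypothetical node names) captures the core idea: the hash of a key determines which storage node owns it, so locating a document’s subtree is an average-case O(1) table lookup rather than a search over the whole tree.

```python
# Toy DHT: the key's hash decides which node is responsible for it, so a
# lookup goes straight to the owning node's local hash table.
import hashlib

NODES = ["storage-node-0", "storage-node-1", "storage-node-2"]   # hypothetical node ids
tables = {node: {} for node in NODES}                            # each node's local hash table

def responsible_node(key: str) -> str:
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]                            # consistent key -> node mapping

def put(key: str, subtree_ref: str) -> None:
    tables[responsible_node(key)][key] = subtree_ref

def get(key: str) -> str:
    return tables[responsible_node(key)][key]                    # ask only the node that owns the key

put("Doc", "subtree-ref-for-Doc")      # store a reference to Doc's subtree
print(get("Doc"))                      # retrieve it without walking the tree
```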
