
RE: EOS - The next Dan Larimer Thing

in #newslink7 years ago (edited)

You should be able to find the clip in one of the threads you can dig up in the 'replies' section of my blog, @l0k1.

I had never heard of ark.io before... it sounds like they are borrowing heavily from what @faddat was working on (IPFS-based content distribution). I'll read it now before I post this reply...

If you want to read what I was going to turn Dawn into, click the link on my profile: https://gitlab.com/dawn-network/nexus. It's a bit all over the place (I have also published various drafts on GitBook). I call it Proof of Service; it's not a security protocol but a consensus protocol for monetising delivered content for creators, hosts, and of course curation (consumers of the media).

OK, it was a bit hard to find on the page (it should be in the menu at the top, imo), but here it is: https://ark.io/whitepaper. I am reading it now...

Actually, I only had to scan its contents page to get the general gist. A key difference in my design is that it uses merkle trees over a cluster of different databases, each constrained to the scope of its data type. So there is an account registry and a cluster membership advertising protocol (I was going to build it on ZeroTier, btw). After I read Eric Freeman's master's thesis on forming a consensus (you can google that phrase to find it), I concluded that in this system there would never be a fork, because of the nearly instant confirmation by ALL validator nodes at the same time. I was also going to use interval tree clocks for quickly determining the canonical sequence of transactions, and I have plans to use a graph database system that runs on GPUs to accelerate the provision of database service (i.e., running a validator or replicator). And it goes beyond monetising simple text, or even media files: it enables even more, different distributed databases to implement applications whose provision of service can be directly monetised, using algorithms that prove delivery but also modulate the reward by deterministic social network distance, to reduce payouts to people who have close financial connections with each other.
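As a rough sketch of that reward-modulation idea (the function name, the minimum-distance threshold, and the linear penalty formula are all my own illustrative assumptions, not the actual design):

```python
def modulated_reward(base_reward: float, social_distance: int,
                     min_distance: int = 3) -> float:
    """Scale a content-delivery reward by social network distance.

    Hypothetical rule: parties closer than `min_distance` hops in the
    financial/social graph get a proportionally reduced payout, which
    discourages self-dealing between closely connected accounts.
    """
    if social_distance >= min_distance:
        return base_reward
    # Linear penalty: distance 0 (paying yourself) earns nothing,
    # distance 1 earns a third, distance 2 earns two thirds, etc.
    return base_reward * (social_distance / min_distance)
```

So a 9-token reward between accounts only one hop apart would pay out 3 tokens, while accounts at distance 3 or more receive the full amount.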

The concept is not fully crystallised yet, though it's mostly complete. And yes, the token ledger is not a core component, and neither is the monetisation system; these are basically applications. I also worked on some ideas for an alternative to the standard double-entry ledger format: a coin model, where it is necessary to trade a big token for smaller ones of the same total value, and where, using a similar network distance algorithm, a small amount of new tokens is issued to nodes that provide change (like how banks charge above face value for those rolls of coins). This design was aimed at reducing the need for a total history of the ledger in order to have high confidence that a spend is valid. And further... I forgot this, I'll make a new paragraph...
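A minimal sketch of the coin/change idea, assuming greedy change-making and a hypothetical 1% mint rate for the change provider (the denominations, the rate, and all names are illustrative, not part of the actual design):

```python
def make_change(amount: int, denominations=(100, 50, 20, 10, 5, 1),
                mint_rate: float = 0.01):
    """Break a large token `amount` into smaller coins greedily, and
    compute the small freshly-issued reward for the node providing
    the change (analogous to banks charging above face value for
    rolls of coins)."""
    coins = []
    remaining = amount
    for d in sorted(denominations, reverse=True):
        while remaining >= d:
            coins.append(d)
            remaining -= d
    # Newly issued tokens compensating the change provider.
    reward = round(amount * mint_rate, 2)
    return coins, reward

coins, reward = make_change(87)
# coins == [50, 20, 10, 5, 1, 1], reward == 0.87
```

A spender then only needs to verify the provenance of the few coins they receive, not the whole ledger history.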

The replication protocol, I forgot to mention that: there is no blockchain, but instead a simple transaction log, whose ordering the network builds a consensus about using interval tree clocks. As well as being able to stream this log to replicas, the shared memory system would break the database into small, bittorrent-like pieces identified by content hashes, to allow rapid update of the shared memory.
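The bittorrent-like piece exchange could be sketched like this (the tiny chunk size and the function names are illustrative assumptions; a real system would use kilobyte- or megabyte-sized pieces):

```python
import hashlib

CHUNK = 4  # tiny piece size for the example only

def chunk_hashes(data: bytes, size: int = CHUNK):
    """Split a database snapshot into fixed-size pieces, each
    identified by its content hash, as bittorrent does."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

def pieces_to_fetch(local: bytes, remote: bytes):
    """Return indices of pieces that differ: the only pieces a
    replica actually needs to download to catch up."""
    a, b = chunk_hashes(local), chunk_hashes(remote)
    changed = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    new_tail = list(range(len(a), len(b)))  # wholly new pieces
    return changed + new_tail
```

So if only one piece of the shared memory changed, only that piece is transferred, which is what makes the rapid update possible.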

I'm not as enthusiastic about all this now as I was when I was working intensively with @faddat, and to be honest, I would be more than happy for someone to steal it and run with it, although it would be good manners to give me credit, or maybe a validator slot or something.

By the way, about all this stuff on short clearance times: in theory, with my 'fully connected graph' validation chatter protocol used by the validators, the clearance time could be under 1s for internationally distributed validator nodes, and when operated on a LAN or within a smaller geographical area, potentially as low as 100ms. That would also, coincidentally, make it possible to have a distributed database that tracks changes in a virtual world environment (a World of Warcraft sort of thing).
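The arithmetic behind those figures is straightforward: with a fully connected validator graph, each message cycle costs roughly one network traversal. The per-cycle latencies below are rough assumptions for intercontinental versus LAN links, not measured numbers:

```python
def clearance_time_ms(message_cycles: int, per_cycle_latency_ms: float) -> float:
    """Total clearance time is simply the number of protocol message
    cycles multiplied by the network latency of one cycle, since every
    validator talks to every other validator directly."""
    return message_cycles * per_cycle_latency_ms

# 4 cycles (broadcast, crosscheck, certify, distribute/confirm):
intercontinental = clearance_time_ms(4, 250)  # 1000 ms, i.e. about 1 s
lan = clearance_time_ms(4, 25)                # 100 ms
```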

I also started to work on methods of sharding these databases and allowing a fast provisional clearance with a subsequent update digest. Potentially, posts could be in the database and propagating to replicas very quickly, and only in the case of an attempted double spend (posted data tied to the same object that performs different operations) would finality wait about 10-60 seconds, until the fractal distribution of the distributed validator sets all hold the same state, globally and locally. 99% of the time this happens within a second; it takes 4 message cycles: broadcast, crosscheck, certify, distribute/confirm.
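The double-spend condition described above (multiple operations tied to the same object) can be sketched as a simple conflict check over the provisional log; the log structure and names here are my own illustrative assumptions:

```python
from collections import defaultdict

def find_conflicts(tx_log):
    """Group provisional transactions by the object they touch. Any
    object that received more than one distinct operation is a
    potential double spend and must wait for the slower global
    certification round (broadcast -> crosscheck -> certify ->
    distribute/confirm) instead of fast provisional clearance."""
    by_object = defaultdict(set)
    for obj_id, operation in tx_log:
        by_object[obj_id].add(operation)
    return sorted(o for o, ops in by_object.items() if len(ops) > 1)

log = [("post:1", "create"), ("post:2", "create"), ("post:1", "delete")]
# "post:1" received two different operations, so it is deferred
```

Everything else in the log clears provisionally on the fast path; only the conflicting objects pay the 10-60 second cost.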
