@transisto and @furion : what about sharding the block data chronologically and load balancing from there? Of course I have no insight into steemdata's exact loads, but aren't the latest blocks queried the most? If so, a relatively small chunk of data consumes most of the CPU / RAM but only a fraction of the SSD, so sharding the "new blocks" onto a separate node (MongoDB supports sharding out of the box) would split the SSD problem from the CPU/RAM/bandwidth problem.
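To make it concrete, here is a minimal sketch of what zone sharding the Blocks collection could look like, assuming pymongo, a mongos router at mongos.example.com, a shard named shard-hot, and an illustrative block_num cutoff (all hypothetical names, not SteemData's actual setup):

```python
from pymongo import MongoClient
from bson.max_key import MaxKey

# Connect to the mongos query router of the (hypothetical) sharded cluster.
client = MongoClient("mongodb://mongos.example.com:27017")
admin = client.admin

# Enable sharding for the database and shard the Blocks collection on block number,
# so chunks correspond to contiguous chronological ranges.
# (An index on block_num must already exist if the collection holds data.)
admin.command("enableSharding", "SteemData")
admin.command("shardCollection", "SteemData.Blocks", key={"block_num": 1})

# Zone sharding (MongoDB 3.4+): pin the "new blocks" range to a dedicated hot shard
# with plenty of CPU/RAM, while older blocks stay on cheaper, storage-heavy shards.
admin.command("addShardToZone", "shard-hot", zone="recent")
admin.command(
    "updateZoneKeyRange",
    "SteemData.Blocks",
    min={"block_num": 20000000},  # illustrative cutoff for "recent" blocks
    max={"block_num": MaxKey()},
    zone="recent",
)
```

That way the "recent" chunk can live on a node sized for CPU/RAM, while the archive shards only need to be cheap on storage.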
Just an idea! ;-)
@scipio
EDIT: small self-upvote for visibility, 100% upvoted "contribution comment #4"
Scaling MongoDB is currently not a problem. The database would need to grow by another 1000% before sharding becomes relevant.
I think this is very good. Don't forget to upvote.
Good work. Follow for follow please <3
Hi, greetings from Venezuela. Follow me and I'll follow you :$
czechglobalhosts
Contribution comment #7
Give me a vote and follow, and I'll also give you a vote and follow.
Thank you!
Contribution comment #8
If you like books, I'd request that you give this one a read:
https://steemit.com/book/@saifuk/dan-brown-origin