
RE: SteemWorld ~ Weekly Support ~ #13

in #steemworld · 6 years ago

Well, maybe I used to work with more complex data and data-flows, but it seems to me that sooner or later you will run into database limitations.

For example, the limit of one simultaneous writer is quite a severe catch. A high-load app (imagine you need to scale 10x with the growth of Steemit users) also demands separating data into hot and cold sets: indexes and recent partitions on SSD, older data on HDD, and on top of that you can decompose most tables into hot and cold rows. So you really need to be able to write new sets into the DB while managing and processing other chunks simultaneously.
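To make the hot/cold idea concrete, here is a minimal sketch of such a split using nothing but SQLite itself: a small "hot" database on SSD with a "cold" archive attached from HDD. The file paths, the `posts` table and its `created` column are all made up for illustration; this is not SteemWorld's schema.

```python
import sqlite3

# Hypothetical paths: hot data on SSD, cold archive on HDD.
HOT_DB = "/ssd/steem_hot.db"
COLD_DB = "/hdd/steem_cold.db"

# Assumes both files already contain a 'posts' table with the same schema.
conn = sqlite3.connect(HOT_DB)
conn.execute("PRAGMA journal_mode=WAL")  # readers keep working while the single writer writes
conn.execute("ATTACH DATABASE ? AS cold", (COLD_DB,))

# Move rows older than a cutoff from the hot set to the cold set.
# Caveat: in WAL mode a transaction spanning attached databases is atomic
# per database, not across both, so a real migration must tolerate a crash
# between the INSERT and the DELETE (e.g. by making the INSERT idempotent).
cutoff = "2018-01-01"
with conn:
    conn.execute("INSERT INTO cold.posts SELECT * FROM main.posts WHERE created < ?", (cutoff,))
    conn.execute("DELETE FROM main.posts WHERE created < ?", (cutoff,))

# Queries can still see both sets in one statement when needed.
for row in conn.execute("SELECT * FROM main.posts UNION ALL SELECT * FROM cold.posts"):
    print(row)
```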

And let's not forget about replication/failover... It would be sad to lose your data and end up with three days of downtime re-processing all the blockchain data in a single thread.
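SQLite has no built-in replication, but the failover worry can at least be softened with its online backup API, which snapshots a live database without blocking readers. In Python (3.7+) that is exposed as `Connection.backup`; the paths below are invented:

```python
import sqlite3

# Take a consistent snapshot of the live DB while it is still being written to.
src = sqlite3.connect("/ssd/steem_hot.db")
dst = sqlite3.connect("/backup/steem_hot.standby.db")
with dst:
    src.backup(dst, pages=1024)  # copy in chunks, sleeping briefly between them
dst.close()
src.close()
```

Shipping such snapshots to a second machine is of course still a far cry from real replication, but it beats three days of re-syncing from the blockchain.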

I know a full-scale DB engine is an investment of resources, but a calculated one with strong ROI prospects.

Just my 2 satoshis.
Cheers!


Well, maybe I used to work with more complex data ...

Yes, that's exactly the point. I worked for 8 years for large companies in the logistics sector. We mainly used enterprise servers by Oracle, Sybase and Informix, which of course operate on a higher level than SQLite: they are highly configurable and bring huge performance and stability benefits for simultaneous write operations. For such cases I would not use SQLite, and I don't think it would even be possible.

But as long as I can run SteemWorld with my current setup, I will stick to SQLite. There is only one process that needs to write to the databases, and I don't think that will change in the near future. Re-indexing the data can be done in a separate backup path without any server downtime. Maybe at some point in the future I will tell you that I needed to upgrade to a more complex engine, who knows :) For now it's working great.
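For the curious, the "re-index in a separate backup path" idea can be sketched roughly as follows. This is only an illustration of the concept, not SteemWorld's actual code: all paths are invented, and it assumes the serving process reopens its connection after the swap.

```python
import os
import sqlite3

LIVE_DB = "/data/steemworld.db"         # file the server reads from
REBUILD_DB = "/data/steemworld.new.db"  # scratch copy for re-indexing

# 1. Snapshot the live DB without stopping the writer (online backup API).
src = sqlite3.connect(LIVE_DB)
dst = sqlite3.connect(REBUILD_DB)
with dst:
    src.backup(dst)
src.close()

# 2. Rebuild the indexes on the copy; the live DB keeps serving reads.
dst.execute("REINDEX")
dst.execute("VACUUM")  # optional: compact the rebuilt file
dst.close()

# 3. Atomically swap the rebuilt file into place (same filesystem).
#    Caveats: writes made after the snapshot must be replayed (or the
#    single writer paused briefly), and if the live DB runs in WAL mode
#    its -wal/-shm sidecar files must be checkpointed before the swap.
os.replace(REBUILD_DB, LIVE_DB)
```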
