iguana status - 20x reduction in RAM required for BTC node! now 4GB RAM systems should be enough

in #iguana · 8 years ago

As you know, a full bitcoin node requires a monster machine, as the blockchain is almost 75GB. I was battling against startup time to get to realtime sync for bitcoin, and on the non-SSD server it was taking forever.

Well, not forever, but a big fraction of the time it took to sync all the other blocks, which is clearly silly. Then again, I did heavily optimize the parallel sync, so any unoptimized step will seem rather slow compared to that. I tried the usual tricks of lump allocation of memory and prefetching the memory mapped files, and while they made incremental improvements, I just couldn't stand waiting so long for it to sync.
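For reference, the prefetching trick amounts to something like this; a minimal sketch of the general technique, not iguana's actual code:

```c
// Minimal sketch of prefetching a memory-mapped file region: ask the kernel
// to start reading the pages in the background so later accesses don't stall
// on the slow (non-SSD) disk.  Illustrative only, not iguana's actual code.
#include <stddef.h>
#include <sys/mman.h>

void prefetch_mapping(void *addr, size_t len)
{
    // MADV_WILLNEED is only a hint; on a spinning disk it mainly lets the
    // read-ahead overlap with other work instead of eliminating it.
    madvise(addr, len, MADV_WILLNEED);
}
```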

So I had an idea.

Each coin's blockchain is split into bundles of blocks, and I set the bundlesize to the 500 or 2000 that comes back from the headers message, as that is the "natural" size for a bundle. A larger bundlesize would be a bit unnatural, since you probably won't get any more than 2000 items in the reply, but nothing prevents setting an arbitrarily small one. Unless the software has some assumption somewhere about it being 500 or 2000, I should be able to just change one line of code and make bitcoin use a smaller bundlesize.
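To make that concrete, the change is of this shape; the struct and field names below are made up for illustration and are not iguana's actual identifiers:

```c
// Illustrative only: the "one line of code" kind of change described above,
// assuming a per-coin parameters struct with a bundlesize field (names are
// hypothetical, not iguana's actual identifiers).
#include <stdint.h>

struct chain_params
{
    char symbol[16];
    int32_t bundlesize;   // blocks grouped per bundle for the parallel sync
};

void set_btc_bundlesize(struct chain_params *chain)
{
    // was: chain->bundlesize = 2000;   // the "natural" size from the headers reply
    chain->bundlesize = 100;            // 20x smaller bundles -> ~20x less RAM in flight
}
```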

I set it to 100, which is 20x smaller, and it went from requiring a 32GB RAM system with SSD to doing the parallel sync now using a bit more than 2GB of RAM. Even the tmp file usage is peaking at 15GB instead of 30GB like before. With much less stress on the system limits, things should operate much more linearly, and I am seeing it sync a lot more smoothly on the slower VPS. It usually took a bit more than 2hrs for the sync; now it is 90%+ done after 1.5 hours. It seems to not be using more than 200mbps of bandwidth, so the <2hr time would be possible if you have a 200mbps connection, 4hrs at 100mbps, and 6 to 8hrs at 20mbps. I know the last one doesn't add up, but it is due to iguana's ability to totally saturate a connection: on a slow line it will sustain a bit over the rated capacity for the entire duration of the sync, while at faster speeds the usage goes up and down.
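As a back-of-envelope check on those figures, the raw transfer time for the ~75GB blockchain at each line rate works out as follows (plain arithmetic, ignoring protocol overhead and processing):

```c
// Raw transfer time for a ~75GB blockchain at a given line rate; ignores
// protocol overhead and CPU time, so these are lower bounds on sync time.
#include <stdio.h>

int main(void)
{
    double blockchain_gb = 75.0;
    double rates_mbps[] = { 200.0, 100.0, 20.0 };
    for (int i = 0; i < 3; i++) {
        double seconds = blockchain_gb * 8.0 * 1000.0 / rates_mbps[i];
        printf("%5.0f mbps -> %.1f hours minimum\n", rates_mbps[i], seconds / 3600.0);
    }
    // prints roughly 0.8h, 1.7h and 8.3h: the <2hr and 4hr figures leave
    // headroom for processing, while 6 to 8hrs at 20mbps only works if the
    // line stays at (or a bit over) its rated capacity the whole time,
    // which is the saturation effect described above.
    return 0;
}
```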

I estimate the time to achieve RT mode will be 20x faster, so on average you will be fully in sync before the first new bitcoin block comes in, which seems adequate.

This changes the expected configuration from basilisk BTC to iguana BTC and will allow for better performance overall. HDD space seems to be a lot more plentiful than RAM, and I think most systems can handle 2GB of RAM used for bitcoin. This is a full bitcoin node with a block explorer level dataset, and it allows full functionality without reliance on any other nodes.

In some ways this is one of the most significant pieces of iguana news in a while.

It took a bunch of restarts after crashes to get it to stumble to the finish line, but all in all it is a very good result to sync using much less RAM. Since there is no iguana database to corrupt, you can almost always just restart after any crash, at any time. iguana tries to identify what needs to be done and just resumes; this part is a bit ad hoc and it doesn't always figure out the most efficient thing to do, but by deleting the right intermediate files iguana can almost always regenerate what is needed. I guess the worst case is to do a full sync from scratch and burn 2 hours.
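The resume logic amounts to something like the sketch below; the file layout and helper names are hypothetical, just to illustrate why a crash can't leave any database state half-written:

```c
// Hypothetical sketch of resuming after a crash: scan for the bundle files
// that already exist on disk and only re-request the missing or truncated
// ones.  The file naming and request_bundle() helper are illustrative, not
// iguana's actual code paths.
#include <stdio.h>
#include <sys/stat.h>

static void request_bundle(int bundle_index)   // stand-in for a real download request
{
    printf("re-requesting bundle %d\n", bundle_index);
}

void resume_sync(const char *coin, int num_bundles)
{
    char fname[512];
    struct stat st;
    for (int i = 0; i < num_bundles; i++) {
        snprintf(fname, sizeof(fname), "tmp/%s/bundle.%d", coin, i);
        if (stat(fname, &st) != 0 || st.st_size == 0)  // missing or empty file
            request_bundle(i);                          // redo just this bundle
        // an existing file is reused as-is; the data is immutable, so there
        // is no database state that a crash could leave half-written
    }
}

int main(void)
{
    resume_sync("BTC", 4268);   // 100-block bundles at the current height
    return 0;
}
```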

Table creation without SSD takes a while, almost 2 hours with the increased number of bundles, and RAM usage peaked at 10GB, so I will need to do some optimizations to fit it into 4GB. However, this was using 8 cores, so with only 2 cores it would still fit in 4GB. Being able to run BTC along with a dozen other coins is worth the slower table generation time, especially since most connections will be taking longer than 1hr for the downloading part anyway.
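The 2-core estimate is just this scaling assumption (back-of-envelope, assuming peak RAM during table generation scales with the number of workers):

```c
// Back-of-envelope for the "2 cores would still fit in 4GB" remark, assuming
// the observed 10GB peak during table generation scales with the worker count.
#include <stdio.h>

int main(void)
{
    double peak_gb_8cores = 10.0;
    double per_core = peak_gb_8cores / 8.0;                 // ~1.25GB per worker
    printf("2 cores -> ~%.1fGB peak\n", 2.0 * per_core);    // ~2.5GB, under 4GB
    return 0;
}
```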

Quite an unexpected development today! I was going to debug basilisk, but ended up reducing BTC requirements to be able to run on 4GB systems. Of course more RAM will give better performance and an SSD is highly recommended, but it is always good to have the lowest possible hardware requirements.

James

The big question is whether the two bundlesizes created the same ledger. I know you were wondering that, so here is the data:

2000 block bundles:
BTC.RT426740 u.213+c.213 b.213 v.213 (0+740/740 1st.213).s0 to 213 N[214] h.426740 r.426740 c.426000 s.426740 d.0 E.213 maxB.16 peers.118/64 Q.(0 0) (L.426740 213:740) M.426739 00000000000000000207f8109a1c8b5860950377f5d1e835da95c190012e34d6 ledger.cab1009145174be1 supply 15834239.79530316 bQ.3 8:33:09 stuck.0 max.0

100 block bundles:
BTC.RT426740 u.4267+c.4267 b.4267 v.4267 (0+40/40 1st.4267).s0 to 4267 N[4268] h.426740 r.426740 c.426700 s.426740 d.0 E.4267 maxB.16 peers.117/64 Q.(0 0) (L.426740 4267:40) M.426739 00000000000000000207f8109a1c8b5860950377f5d1e835da95c190012e34d6 ledger.7f53a8f59cd4974f supply 15834239.79530316 bQ.2 0:41:47 stuck.0 max.0

notice the supply for both: 15834239.79530316 vs 15834239.79530316

A matching supply doesn't mean 100% that all the ledger entries are identical, but it is a very, very good chance, especially since all the bundles were verified in isolation and all the balances were cross-checked against listunspent output.
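One more quick consistency check, besides the supply: the bundle counts in the two status lines. My reading of the fields (not a documented format) is that b.N is just the chain height divided by the bundlesize, with N[...] including the partial bundle at the tip:

```c
// The b.213 vs b.4267 fields in the status lines above appear to be simply
// the chain height divided by the bundlesize used for each run.
#include <stdio.h>

int main(void)
{
    int height = 426740;                                 // RT426740 in both status lines
    printf("2000-block bundles: %d\n", height / 2000);   // -> 213
    printf(" 100-block bundles: %d\n", height / 100);    // -> 4267
    return 0;
}
```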


I was looking at the bitcoin development subforum yesterday; one guy had done some profiling to find the slow parts of bitcoin core... as other parts were accelerated, sha256 and crc32c were found to consume a lot of cpu... apparently the bitcoin core devs are contemplating the use of some sse4 and avx+ variants for both. Might be worth taking a look if you are using the C versions.

https://bitcointalk.org/index.php?topic=1593610.0
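For reference, the sse4 crc32c variant being discussed comes down to a few lines using the hardware crc32 instruction; a minimal sketch, assuming an x86-64 CPU with SSE4.2 and compiling with -msse4.2 (this is the generic technique, not code from either project):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <nmmintrin.h>   // SSE4.2 _mm_crc32_* intrinsics

// CRC32C (Castagnoli polynomial) using the SSE4.2 crc32 instruction.
uint32_t crc32c_sse42(const void *buf, size_t len)
{
    const uint8_t *p = (const uint8_t *)buf;
    uint64_t crc = 0xFFFFFFFF;                 // standard CRC32C initial value
    while (len >= 8) {                         // 8 bytes per instruction
        uint64_t chunk;
        memcpy(&chunk, p, 8);
        crc = _mm_crc32_u64(crc, chunk);
        p += 8;
        len -= 8;
    }
    while (len--)                              // handle the tail a byte at a time
        crc = _mm_crc32_u8((uint32_t)crc, *p++);
    return (uint32_t)crc ^ 0xFFFFFFFFu;        // final xor
}
```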

Thanks for the link. I already use a somewhat optimized SHA256. A lot of bitcoind used openssl hashing, which just seems wrong, but when they eventually get around to making an optimized lib, like secp256k1, it is great stuff.

The biggest performance killer I found was the DB, and to my mind, if the blockchain is immutable data, then why is it in a DB?
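A minimal sketch of the alternative, mapping a finished read-only file instead of going through a DB; this shows the general technique only, not iguana's actual file format or code:

```c
// Minimal sketch of the "immutable data doesn't need a DB" idea: map a
// finished, read-only data file into memory and index into it directly.
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

void *map_readonly(const char *fname, size_t *lenp)
{
    struct stat st;
    int fd = open(fname, O_RDONLY);
    if (fd < 0)
        return NULL;
    if (fstat(fd, &st) != 0) {
        close(fd);
        return NULL;
    }
    // Read-only mapping: the kernel pages the data in on demand and can
    // evict it under memory pressure, so "lookups" become plain pointer
    // arithmetic with no database layer in between.
    void *ptr = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                         // the mapping stays valid after close
    if (ptr == MAP_FAILED)
        return NULL;
    *lenp = (size_t)st.st_size;
    return ptr;
}
```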

I think even secp256k1 could be accelerated with some newer instructions (mulx / adcx) if one goes deeper to the asm level, but those instructions are not very widespread in terms of use yet. The sha256 and crc look like they can be sped up quite a bit. As you progress with iguana and do similar profiling work, you'll have to eliminate the bottlenecks one by one, and at that point you may hit issues like that.

As for the DB, I'm not sure exactly how the bitcoin core software works. What I do know is that it's extremely slow with a mechanical disk. I think I've read gmaxwell say that the db is only for the wallet and not for the blockchain. I haven't analyzed it myself (I'm an extremely rusty coder with C aversion, so...) and that's why I can't say anything with certainty.

As for the immutability, I guess one would say that in theory the blockchain is "eventually consistent"... so there's always room for some divergence...

I run a full node on a core2duo with 2gig of ram. No idea why anyone would need more than 4gig for a full node.

Did you configure with txindex=1?
iguana keeps not only txindex=1 level information but a complete block explorer level dataset, which is probably overkill for a home user, yet it does it anyway.
Once the utxo dataset exceeds the RAM, you will need to access the HDD more and more.

A full node that tracks just your wallet's tx vs a full node that tracks tx and balances for all addresses: this changes the memory required by a large factor.

Thanks for the latest update. Keep optimizing!

nice one man

Interesting!
