You are viewing a single comment's thread from:

RE: MySBDS - Steem Blockchain Data Service in MySQL

in #utopian-io · 7 years ago

Thanks for helping me with the basics of docker. "docker ps" showed that steem_mysql wasn't running, so I ran "docker start steem_mysql" and saw that it was running, but I still couldn't connect. I then ran "docker logs steem_mysql" and noticed a bunch of recent messages about doing a recovery. The recovery seemed to finish in about three minutes, and after that everything seems to work as expected :)
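
For reference, the commands I ran were roughly:

docker ps                   # steem_mysql wasn't in the list of running containers
docker start steem_mysql    # started it back up
docker logs steem_mysql     # showed the recovery messages before it finished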


Awesome, glad you got it going!

This all underscores how many 'grey areas' I left ambiguous in here, so I'm putting together a more thorough tutorial that'll go into docker and other details a bit more.

Sorry to keep troubling you. I'm noticing that steem_mysql keeps stopping and gives the following type of error: "Aborted connection 5 to db: 'steem' user: 'root' host: '172.17.0.3' (Got an error reading communication packets)". I keep restarting steem_mysql (I've restarted at least 10 times and probably closer to 20), and it was stuck on a lastblock of 20080434 for a long time; then I checked again and the lastblock had jumped to 20192434. Is having to restart steem_mysql this many times normal?

No worries, it's good to document these issues in the blockchain so others can learn too!

Interestingly, this is one issue I have not yet run into. For me, steem_mysql has been running flawlessly; it's the steem_sbds container that has stopped and had issues so far.

My general thought is that this may be a system-level issue. How much memory do you have? I've tested this on a 4GB instance and can say it definitely does NOT work with less, at least not reliably.
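
If you want to double-check on your end, these standard commands (nothing specific to this setup) show overall memory and per-container usage:

free -h                    # total and available system memory
docker stats --no-stream   # one-shot snapshot of per-container CPU/memory usage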

Also, you mentioned installing other tools so these may also have an impact.

Something to check is to dump all the logs to local files so you can parse and review more easily. Docker makes it a little weird, so use this technique:

docker logs steem_mysql > mysql.log 2>&1
docker logs steem_sbds > sbds.log 2>&1
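
From there you can grep/tail the files for anything interesting, for example:

grep -iE 'error|aborted|crash' mysql.log    # pull out error-looking lines
tail -n 100 sbds.log                        # check the most recent sbds activity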

I didn't install any other tools in this particular setup (I destroyed the server with the other tools), and I think I adhered very closely to the instructions you provided. I'm using the 16GB DigitalOcean server and 500GB volume that were recommended in the instructions, and I used the high memory script.

Alright, with that deploy, the database restore does hammer mysql pretty hard, so that's probably the main issue here.

Again, check the docker logs to see if anything obvious pops out there. Beyond that, watch mysql to see what it's doing by running this command:

mysqladmin -h 172.17.0.2 -u root -p processlist

While the restore is running, you should see 'INSERT INTO...' in the process list for many hours.

If mysql dies, you shouldn't need to start all the way over. Assuming you still have all the .gz files in the dump/ folder, you can run this:

# update these to match your deploy
mysql_password="mystrongcomplexpassword"
volume_name="volume-tor1-01"
# look up the mysql container's IP on the docker bridge network
mysql_ip=`docker inspect --format "{{ .NetworkSettings.IPAddress }}" steem_mysql`
# re-import each compressed dump file into the steem database
for i in /mnt/$volume_name/dump/*.gz ; do gunzip < $i | mysql -h $mysql_ip -p$mysql_password steem ; done

Obviously, update the password and volume name to match your setup. This is just the part of the script that runs the import.

I haven't been too suspicious about the database restores because I actually did the restore twice (once on the server I destroyed and once on the current server). Both times it took about the same amount of time (maybe around 8 hours) and both resulted in about the same amount of space being used on /mnt. So before investigating the database restore, I wanted to try something based on what you said about having no issues with steem_mysql stopping and having tested things on 4GB: I'm in the process of trying the mysql settings from your low memory script. More specifically, I removed both the steem_mysql and steem_sbds docker containers and then ran lines 38 and 59 in your low memory script to get them both going again. I also found I had to rename a directory with a really long name in the "volumes" directory so the system would continue using the existing mysql data instead of populating mysql from scratch. So far things seem to be working, as steem_mysql hasn't stopped in over an hour.
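
For reference, the teardown I did before re-running those lines was roughly this (using the container names from your scripts):

docker stop steem_mysql steem_sbds    # stop both containers
docker rm steem_mysql steem_sbds      # remove them so the script can recreate them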

I'm glad it's coming together! It really is a lot of learning little pieces of the puzzle like this to understand everything. I'm still getting that tutorial together to walk through all of the steps like this in a little more detail.

One tip: renaming the volume could cause issues with docker (though it sounds like it worked here), so here's a trick in case it comes up again.

Inside the volume folder you'll see another folder named _data containing all of the databases. Make sure you docker stop steem_mysql first so nothing is running, and then just move your _data folder with all the existing data to overwrite the container's _data folder. You can't just replace the steem/ folder, as this is InnoDB and needs the ib... files as well.

Then, just docker start steem_mysql and it'll see the database.
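
In rough commands it looks something like this — the long hash directory names are just placeholders for whatever docker generated under your volumes directory, so adjust the paths to your setup:

docker stop steem_mysql
# OLD_HASH holds the existing data, NEW_HASH is the fresh volume the container is using
old=/var/lib/docker/volumes/OLD_HASH
new=/var/lib/docker/volumes/NEW_HASH
mv $new/_data $new/_data.empty     # keep the fresh _data around just in case
mv $old/_data $new/_data           # drop the existing databases into place
docker start steem_mysql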

I was wondering about the possibility of tarring and zipping up the _data directory to populate sbds instead of using mysqldumps. For example, instead of just offering latest.tar for download on your website, would it also work to offer latest_data.tar as a download? There is probably a reason why mysqldumps are what people typically do, but I'm wondering if this is a special case where the _data directory might be a faster solution: the database restore took around 8 hours on a 6 virtual core machine with 16GB, and downloading and overwriting the _data directory could be much faster.
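
Just to illustrate what I mean (completely untested, and the paths are only placeholders):

# on the machine producing the snapshot, stop mysql first so the files are consistent
docker stop steem_mysql
tar -czf latest_data.tar.gz -C /path/to/docker/volumes/VOLUME_HASH _data
docker start steem_mysql
# on a new machine, unpack it over the (stopped) steem_mysql volume and start it up
docker stop steem_mysql
tar -xzf latest_data.tar.gz -C /path/to/docker/volumes/VOLUME_HASH
docker start steem_mysql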

It still seems to be working, and the lastblock seems to stay very close to the current head_block_number.

Thanks for the tip about the _data folder. I was definitely not confident about what I did, but it is good to know the proper way to do things going forward.

Note: What I did fortunately had the same effect as overwriting the _data folder in the running container. There was /old64characterdirectory/_data/ and /new64characterdirectory/_data/, and I saw that the old one was about 360G but only the new one was being written to. So I stopped both steem_mysql and steem_sbds, removed the new directory, and renamed the old one to the new one's name, which was a roundabout way of overwriting the _data folder in the running container. My approach could have been problematic, though, if there had been anything other than the _data folder in the running container.
