Botto Development Achievements
Since the inception of BotSteem, our primary objective has been to contribute meaningfully to the community and the blockchain ecosystem as a whole. Initially, each of our bots streamed blockchain data independently to acquire the information it needed. While exploring the steemdb repository, we decided to leverage its existing block-stream service instead. This decision accelerated our development progress and led to two pull requests being merged into the repository by @ety001.
https://github.com/steemit/steemdb/pull/14
https://github.com/steemit/steemdb/pull/13
Details of Pull Requests
PR 13: Steem Blockchain Sync Script
Overview
This Python script synchronizes a MongoDB database with the Steem blockchain by fetching and processing blocks. It fetches historical blocks in batches of 50, then switches to fetching a single block every 3 seconds once it reaches the head block, with a retry mechanism for RPC failures. In our testing, this approach reduced synchronization time by up to 60%.
Features
- Batch Processing: Accelerates synchronization by fetching historical blocks in batches.
- Real-Time Updates: Switches to single block fetching at the head block.
- Robust Error Handling: Ensures continuous operation with retry logic for RPC failures.
- Comprehensive Data Processing: Updates MongoDB collections by processing various operations within blocks.
Key Improvements
Old Code:
- Sequential block processing.
- Configuration via environment variables.
- Inefficient historical block synchronization.
New Code:
- Batch Processing: Fetches 50 blocks at a time, significantly enhancing sync speed.
- Retry Logic: Implements robust error handling for RPC failures.
- Configuration Management: Utilizes a config.json file for streamlined configuration.
- Performance Enhancement: Achieves up to 60% faster sync times due to batch processing and improved error handling.
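The batching-plus-retry pattern described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the function names (`get_block_range`, `sync`) and the retry/backoff parameters are assumptions chosen for the example.

```python
import time

def get_block_range(rpc_call, start, count):
    """Fetch `count` blocks starting at `start`, retrying on RPC failure."""
    for attempt in range(5):  # up to 5 attempts per batch
        try:
            return [rpc_call(n) for n in range(start, start + count)]
        except ConnectionError:
            time.sleep(2 ** attempt)  # exponential backoff before retrying
    raise RuntimeError(f"RPC failed for blocks {start}..{start + count - 1}")

def sync(rpc_call, head_block, last_block, batch_size=50):
    """Batch-fetch historical blocks, then fall back to single blocks at the head."""
    # Historical phase: full batches while we are far behind the head.
    while last_block + batch_size <= head_block:
        for block in get_block_range(rpc_call, last_block + 1, batch_size):
            yield block
        last_block += batch_size
    # Near the head: one block at a time (the real script also sleeps ~3 s here).
    while last_block < head_block:
        last_block += 1
        yield rpc_call(last_block)
```

The historical loop is what drives the speedup: one round trip per 50 blocks instead of per block.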
Execution
Using Docker:
- Build the Docker image: `docker build -t steemdb_sync .`
- Run the Docker container: `docker run -d --name steem-sync-container steemdb_sync`
Configuration
Modify the `config.json` file with the appropriate settings before running the Docker container. Example `config.json`:
```json
{
  "mongodb_url": "mongodb://10.10.100.30:27017/",
  "steemd_url": "http://10.10.100.12:8080",
  "last_block_env": 78090042,
  "batch_size": 50
}
```
- `mongodb_url`: The connection string to your MongoDB instance.
- `steemd_url`: The URL of the Steem node you are connecting to.
- `last_block_env`: The block number to start synchronization from.
- `batch_size`: Number of blocks to fetch in one batch (default: 50).
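Loading and validating such a file might look like this. This is a sketch under the assumption that `batch_size` is the only optional key; the helper name `load_config` is illustrative, not taken from the PR.

```python
import json

DEFAULTS = {"batch_size": 50}  # applied when the key is absent from the file

def load_config(path="config.json"):
    """Load config.json, apply defaults, and fail fast on missing keys."""
    with open(path) as f:
        cfg = {**DEFAULTS, **json.load(f)}
    for key in ("mongodb_url", "steemd_url", "last_block_env"):
        if key not in cfg:
            raise KeyError(f"missing required config key: {key}")
    return cfg
```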
PR 14: Steem Account Data Update Script
Overview
This update enhances the script responsible for updating and maintaining Steem account data and related properties in MongoDB. The changes include moving from environment variables to a configuration file, enhancing error handling, implementing batch processing for Steem API requests, and more structured logging.
Features
- Configuration Management: Migrated from environment variables to a `config.json` file for better management.
- Enhanced Error Handling: Uses the `tenacity` library for retries, detailed logging, and specific error messages.
- Batch Request Handling: Implements JSON-RPC batch requests to optimize Steem API interactions.
- Efficient Data Processing: Processes account data in batches and uses `bulk_write` for MongoDB operations.
- Structured Logging: Replaces print statements with the `logging` module for structured, leveled logging.
- Graceful Scheduler Shutdown: Ensures a graceful shutdown of APScheduler to prevent data corruption.
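A JSON-RPC batch request bundles many calls into one HTTP round trip. A minimal sketch of building such payloads for account lookups is shown below; the chunk size and the helper name `build_batch_request` are assumptions for illustration, and the exact method the PR calls may differ.

```python
def build_batch_request(accounts, chunk=100):
    """Build JSON-RPC 2.0 batch payloads, one call per chunk of account names."""
    payloads = []
    for i in range(0, len(accounts), chunk):
        payloads.append({
            "jsonrpc": "2.0",
            "id": i // chunk,  # lets responses be matched back to requests
            "method": "condenser_api.get_accounts",
            "params": [accounts[i:i + chunk]],
        })
    return payloads
```

Sending the whole list as one POST body (a JSON array of these objects) is what reduces the number of API calls.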
Key Improvements
Old Script:
- Environment variables for configuration.
- Basic error handling with minimal logging.
- No explicit handling of batch requests.
- Individual data processing and insertion into MongoDB.
- Basic print statements for logging.
- APScheduler without graceful shutdown.
Updated Script:
- Configuration Management: `config.json` file for streamlined configuration.
- Enhanced Error Handling: Detailed logging and retries using `tenacity`.
- Batch Request Handling: JSON-RPC batch requests to reduce API calls.
- Efficient Data Processing: Batch processing and `bulk_write` for MongoDB.
- Structured Logging: `logging` module for better tracking and debugging.
- Graceful Scheduler Shutdown: Ensures tasks complete properly, preventing data corruption.
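The `bulk_write` idea above amounts to collecting one upsert per account and submitting them in a single database call. The sketch below shows the shape of those operations; each dict mirrors pymongo's `UpdateOne(filter, update, upsert=True)`, but plain dicts are used so the example runs without a MongoDB driver, and `build_upserts` is an illustrative name, not the PR's code.

```python
def build_upserts(accounts):
    """Group account documents into upsert operations for one bulk_write call."""
    ops = []
    for doc in accounts:
        ops.append({
            "filter": {"name": doc["name"]},  # match the account by name
            "update": {"$set": doc},          # overwrite its fields
            "upsert": True,                   # insert if it does not exist yet
        })
    return ops
```

With real pymongo, these would be `UpdateOne` objects passed to `collection.bulk_write(ops)`.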
Execution
Using Docker:
- Build the Docker image: `docker build -t steem_account_sync .`
- Run the Docker container: `docker run -d --name steem-account-sync-container steem_account_sync`
Configuration
Create a `config.json` file in the root directory with the following structure:
```json
{
  "STEEMD_URLS": ["https://api.steemit.com"],
  "MONGODB": "your_mongodb_connection_string"
}
```
- `STEEMD_URLS`: The URL(s) of the Steem node(s) you are connecting to.
- `MONGODB`: The connection string to your MongoDB instance.
Further Enhancements and Leadership
Following these accomplishments, we have significantly enhanced our bots' capabilities. We have also added a search bar on both the curation trail and fanbases pages. While some changes might be less visible, our commitment to providing exceptional services to Steem users remains unwavering. All these advancements have been made possible under the leadership of @xpilar.
cc
@steemcurator01 @upex @ety001 @steemcurator02 @steemchiller @bangla.witness @upvu.witness @pennsif.witness @starlord28 @moecki @faisalamin @visionaer3003 @dhaka.witness @stmpak.wit @jrcornel.wit
Thank you for the update. I have been planning to create a local database for various purposes for some time. When the time comes, I will take a close look at your updated script.
While there are DAOs for rewarding developers through proposals, I also think developers like @steem.botto should be rewarded for the pull requests they send, using some mechanism, maybe upvotes from steemcurator accounts.
Thank you for your kind words
Yes, it will be easy to have a local copy of the blockchain as a DB instead of having to sync it repeatedly. From that you can also filter only the transactions of the accounts you need, and you can build an API on top of Mongo, like the SDS that @steemchiller is offering. We consumed it a lot in my previous versions, but now we use the DB itself as the data source, which is easier. Once the chain syncs, we have many options for consuming the data.
The advantage of having its own DB is that numerous external queries can be avoided.
But Steemchiller has prepared the data so well and has such short response times that I'm not sure if it could be better.
However, I think it's just fair to use an own infrastructure for complex and numerous high-frequency requests.
Consuming SDS data is very straightforward and easy, but it is not open source, I believe. I hope it becomes open source soon.
This post has been featured in the latest edition of Steem News...
Thank you for mention and News
Your post is manually rewarded by the
World of Xpilar Community Curation Trail
BottoSTEEM OPERATED AND MAINTAINED BY XPILAR TEAM
BottoSteem
Robust Automations on STEEM Blockchain using the Power of AI
https://steemit.com/~witnesses vote xpilar.witness
"Become successful with @wox-helpfund!"
If you want to know more click on the link
https://steemit.com/@wox-helpfund
Hi @steem.botto, @xpilar and team
Can anyone explain to me why our account is being used to upvote your posts without permission?
We've noticed that several unauthorized upvotes have taken place since the BotSteem launch a few days ago, and we're very concerned about it.
Hope to hear from you
Hi, I will check, and I will pause things if anything like that has happened. We do have a glitch in the userbase copy. We also ask that you log in at https://botsteem.com and verify the trails and fanbases you have. I have turned off your curation trail globally for now. Sorry again.