How to set up IPFS - the basics and some semi-advanced ideas
I started filebot in order to improve the standing of IPFS and Steem. I plan on releasing the source code, though right now there are no comments, some parts lack elegance in my eyes, and not nearly all planned functions are available in any form. On top of that I have lately been dealing with some health problems, so the following will be more universal suggestions.
Dlive lately had some problems with IPFS. They did not rent their own server(s) to use as IPFS node(s), but rented space on IPFS hosting providers, which had some problems and limitations, as I have learned by now.
Setting up your own IPFS node is not too hard, and if you have a lot of files you want to keep online, it is a good and cheap way to ensure that. Cheap because your node and other nodes will share some space and traffic as necessary, as opposed to traditional web protocols such as HTTP or FTP, where your server alone has to deal with high traffic whenever it happens.
How do you set up an IPFS node?
First, you need to install IPFS. I will deal with Linux servers, as this is the most sensible choice of OS for this purpose (and for most servers, if we are honest).
What you need to know about that, you'll find here.
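If you do not want to build from source, you can grab a prebuilt go-ipfs archive; a minimal sketch of the usual steps (the version in the URL is only an example, check the downloads page for the current release):
wget https://dist.ipfs.io/go-ipfs/v0.4.18/go-ipfs_v0.4.18_linux-amd64.tar.gz
tar xvfz go-ipfs_v0.4.18_linux-amd64.tar.gz
cd go-ipfs
sudo ./install.sh   # installs the ipfs binary into your PATH (e.g. /usr/local/bin)
ipfs version        # confirm the installation worked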
By default, it will use ~/.ipfs as the path for the repo. The hard drive (or RAID; for a serious web service such as Dlive, RAID 6 may be a good choice) that you want to use to store files should be mounted at this location. If the default does not work out for one reason or another, you can change the repo path variable with
export IPFS_PATH=/new/path/to/repo
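Keep in mind that an export like this only lasts for the current shell session. If you rely on a non-default path, you may want to make it permanent, for example by appending it to your shell profile (the exact profile file depends on your shell and distribution):
echo 'export IPFS_PATH=/new/path/to/repo' >> ~/.profile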
Now, you will want to initialise the repo with
ipfs init
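On a public server it can also be worth initialising with the server profile, which among other things disables local-network discovery; whether you want that depends on your hosting environment:
ipfs init --profile server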
Now you can run the daemon with
ipfs daemon
You don't always want to start it manually and keep that command prompt open, though. So you will want to set it up as a service.
The service configuration may look something like this:
[Unit]
Description=IPFS daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/ipfs daemon
Restart=on-failure

[Install]
WantedBy=default.target
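Assuming you saved that unit as /etc/systemd/system/ipfs.service (the name is your choice) on a systemd-based distribution, you enable and start it like any other service:
sudo systemctl daemon-reload
sudo systemctl enable ipfs
sudo systemctl start ipfs
systemctl status ipfs   # check that the daemon actually came up
If the daemon should run as a specific non-root user, add a User= line (and, for a non-default repo path, an Environment=IPFS_PATH=/new/path/to/repo line) to the [Service] section, since the repo belongs to that user.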
While IPFS is running,
ipfs add test.jpg
will calculate the hash and add the file, returning
added QmVqhDBqwGkrvvFZegtMaZ67CBhZ6q7ZbmmfTr6qVNwipA test.jpg
From this moment, your file is available via ipfs (provided you are online).
You can test this by visiting https://ipfs.io/ipfs/QmVqhDBqwGkrvvFZegtMaZ67CBhZ6q7ZbmmfTr6qVNwipA
replacing QmVqhDBqwGkrvvFZegtMaZ67CBhZ6q7ZbmmfTr6qVNwipA with whatever hash was returned for your file.
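You can also verify it directly on your own node without the public gateway, either by reading the file back or via the local gateway (8080 is its default port unless you changed it in the config):
ipfs cat QmVqhDBqwGkrvvFZegtMaZ67CBhZ6q7ZbmmfTr6qVNwipA > check.jpg
curl http://127.0.0.1:8080/ipfs/QmVqhDBqwGkrvvFZegtMaZ67CBhZ6q7ZbmmfTr6qVNwipA > check2.jpg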
Alternatively, if, let's say, someone else did just that and you want to keep providing that file, you can do so via
ipfs pin add QmVqhDBqwGkrvvFZegtMaZ67CBhZ6q7ZbmmfTr6qVNwipA
which tells ipfs to keep storing the file with that hash.
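To see which hashes your node is currently pinning (and will therefore keep even when garbage collection runs), and to drop a pin again later:
ipfs pin ls --type=recursive
ipfs pin rm QmVqhDBqwGkrvvFZegtMaZ67CBhZ6q7ZbmmfTr6qVNwipA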
You can always change the config. To do that, cd to ~/.ipfs (or wherever you have your repo) and edit it, for example via
nano config
nano can of course be swapped for any other text editor.
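Instead of editing the file by hand, you can also read and change single values with the built-in config command; most changes only take effect after restarting the daemon. For example (using a key that comes up again below):
ipfs config show                         # print the whole current config
ipfs config Datastore.StorageMax         # read a single value
ipfs config Datastore.StorageMax 100GB   # change it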
What is advisable will depend on your system.
On a small personal home server such as the Pi, you may want to improve performance with settings such as
"DisableBandwidthMetrics": true,
and possibly reducing LowWater and HighWater under Swarm.ConnMgr. On an actual server you could instead increase them for faster delivery, as long as your CPU and, more importantly, your RAM are up to the task.
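For example, to lower the connection manager limits on a Pi (the numbers here are only a rough guess, tune them to your hardware; --json is needed because the values are not strings):
ipfs config --json Swarm.ConnMgr.LowWater 100
ipfs config --json Swarm.ConnMgr.HighWater 300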
You may want to increase
"StorageMax": "8GB",
which is the default. I set it to 100GB; a big server could do a bit more. This is a soft limit on the size of your repo, most of which will typically be foreign blocks that you temporarily cache and reprovide to others, which is the basis of IPFS. Garbage collection only removes blocks you have not pinned or added yourself, so those stay safe either way.
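You can check how full your repo currently is, and trigger a manual cleanup of unpinned blocks, with:
ipfs repo stat
ipfs repo gc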
The most likely mistakes to avoid are the following (a quick check for both is shown below):
- Performing commands in the wrong directory
- Performing commands as a different user than the one running the daemon
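A quick way to rule both out is to check, in the shell you are working in, which user you are and which repo path is in effect:
whoami            # should be the same user that runs the daemon
echo $IPFS_PATH   # empty means the default ~/.ipfs is used
ipfs id           # errors out if no initialised repo is reachable from here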
It is comparatively simple to call shell commands from PHP and vice versa. If you add a file and need the hash, you can redirect the output to a file or variable instead of the screen, which makes it relatively simple to extract the hash of a file added via ipfs add filename, save it in a database, and embed the content behind that hash on the appropriate pages.
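As a minimal sketch of that idea in shell (the file name and the way the hash is stored are placeholders; from PHP you would wrap something similar in shell_exec() or proc_open()):
#!/bin/sh
FILE="test.jpg"
# -Q prints only the final hash; on older versions you can parse the normal output instead
HASH=$(ipfs add -Q "$FILE")
# store the hash wherever your site needs it, here simply appended to a list
echo "$FILE $HASH" >> added_hashes.txt
echo "https://ipfs.io/ipfs/$HASH"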