Minecraft Modding: Integration Testing!

So although I haven't focused much on the mod itself, I have done quite a bit on the infrastructure. In particular, I've added automated integration testing of the mod via Bitbucket Pipelines and a few auxiliary projects.

To note, the process basically involves these things:

- a Mineflayer bot that connects to a locally running Forge server
- Mocha test cases that drive the bot and assert on what it sees
- an npm script that waits for the server port before running the tests
- a shell script that starts the server and runs the tests inside a Bitbucket Pipelines step
- a couple of Gradle tweaks so the server runs headless and npm can be invoked

Why Integration Testing?

Because for mods, most of the details are in the running game, not in unit tests. If I could act as a player in the game itself in an automated way, much like Selenium for web projects, then I could make sure most of the test cases pass without having to run them manually every single time. This won't catch things like graphical glitches, since it all runs headless, but unit testing wouldn't catch such issues either. Basically, unit tests ensure I wrote the code as I intended to write it, while integration testing ensures I wrote the code as I intended it to work.

Some Code

Here, let's dive into the code. Mineflayer, as mentioned above, is used in the project, but what does it do? Mineflayer is a JavaScript API for connecting to Minecraft servers and controlling a player on that server as a bot to perform various actions. The bot only works with vanilla Minecraft by default, but I was able to get around this using minecraft-protocol-forge, since Mineflayer uses minecraft-protocol under the hood. In my Mocha test setup, it looks like this:

    bot = mineflayer.createBot({
        host: "localhost", // connect to the local server
    });
    forgeHandshake(bot._client, { // handshake with the Forge server, 'verifying' our mods, which Mineflayer doesn't have or need
        forgeMods: [
            { modid: 'minecraft', version: '1.12.2' },
            { modid: 'mcp', version: '9.42' },
            { modid: 'FML', version: '8.0.99.99' },
            { modid: 'forge', version: '14.23.5.2768' },
            { modid: 'settlements_core', version: '0.0.1' } // this is my mod; the others I got from running the client and checking the versions
        ]
    });
    bot.on('spawn', function() {
        done(); // this is the Mocha 'done' function for async setup
    });

For now I simply have two mock test cases, each wrapped in its own describe block. Each one sends a chat message after logging in and then verifies that the bot received its own message. A very simple test, but effective, and it proves the point:

    describe('chat', function() {
        it('should find own chat message', function(done) {
            const MESSAGE = 'a unique chat message';
            bot.on('chat', function(username, message) {
                if (username === bot.username) {
                    message.should.be.a('string');
                    message.should.equal(MESSAGE);
                    done();
                }
            });
            bot.chat(MESSAGE);
        });
    });

Lastly, we should disconnect from the server between test cases:

    afterEach(function() {
        bot.end();
    });

For our npm test command to actually run the bot, we combine npm with some utilities:

"test": "wait-on tcp:25565 && mocha test-int --exit --reporter mocha-junit-reporter --reporter-options mochaFile=./test-reports/junit.xml"

This waits for a server to be listening locally, then runs Mocha with --exit, which makes Mocha quit when the tests are done instead of waiting for the bot to fully disconnect. The JUnit output file is for Bitbucket Pipelines, whose configuration is shown below:

options:
  max-time: 10
pipelines:
  pull-requests:
    '**':
      - step:
          name: Full Int Test
          image: atlassianlabs/docker-node-jdk-chrome-firefox:latest
          caches:
            - gradle
            - node
          script:
            - chmod 755 ./int-test.sh
            - ./int-test.sh
          artifacts:
            - settlements-core/server.log
  branches:
    master:
      - step:
          name: Create Javadoc
          image: openjdk:8
          caches:
            - gradle
          script:
            - cd ./settlements-core
            - bash ./gradlew javadoc
          artifacts:
            - settlements-core/build/docs/javadoc/**

This will run an executable script called 'int-test.sh' on each pull request (it will also generate Javadocs on each merge to master, but that is a different discussion!). The script simply starts up a server and then runs our integration tests, using the gradle-node plugin:

#!/usr/bin/env bash

cd ./settlements-core
mkdir -p ./run                          # the server's working directory
cp -r ./test-int/int-world/** ./run     # seed the pre-built test world
bash ./gradlew npmInstall               # install the JS test dependencies
bash ./gradlew runServer > server.log & # start the Forge server in the background
bash ./gradlew runIntTest               # runs the npm 'test' script (wait-on + mocha)
kill $!                                 # $! is the PID of the backgrounded server

We also need to change our Forge build.gradle to support this plugin: most importantly, making the server run in headless mode, and adding our npm task for testing:

task runIntTest(type: NpmTask) {
    args = ['run', 'test']
}

minecraft {
    serverRunArgs += 'nogui'
}
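Note that the NpmTask type only resolves if the gradle-node plugin is applied in build.gradle. A minimal sketch, assuming the com.moowork.node plugin (the plugin id and version here are illustrative, not taken from the project):

```groovy
plugins {
    id 'com.moowork.node' version '1.2.0' // provides NpmTask and the npmInstall task
}
```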

And that should do it. Of course, the test cases here are simple, but they will be expanded as I add functionality. You can find the full source here: https://bitbucket.org/minecraftsettlementsmods/settlements/src/master/
