Writing Software for an Among Us League

November 13, 2020 in JavaScript

Lately, I’ve been playing a lot of Among Us with my friends. It’s a refreshing change of pace from the competitive FPS titles that we usually play. However, we’ve managed to inject a ‘healthy’ dose of competitiveness into our games in the form of a league that I built some software for.

We recently crossed the 100-game mark, and players have been making feature requests and even submitting PRs! It has been open-sourced at healeycodes/among-us-friends.

The league website is a Node/Express application that calculates a player’s Elo rating — a relative measure of one player’s skill against another — as well as other performance statistics like crew/imposter win rate. It ranks players and graphs their recent performance on the home page.

The home page - a list of players, their win/loss rate, and Elo charts

When a player goes to their auto-generated player page, they can see a history of their games and who played on each side, the amount of Elo they won or lost each game, and an Elo history graph for the whole season.

A player's page - a longer Elo history chart, game history, Elo change amounts

I manually track each game we play in a Google Sheets spreadsheet. This data is brought into the application via the Google Sheets API v4. To set this up, I grabbed the spreadsheet's ID (which can be copied straight from the URL bar) and created a restricted API key via the Google Cloud Console.
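Reading the rows boils down to a single GET request against the Sheets API. Here's a minimal sketch of that call, assuming node-fetch and environment variables for the ID and key; the games!A:K range name is hypothetical:

const fetch = require("node-fetch");

const SHEET_ID = process.env.SHEET_ID;
const API_KEY = process.env.API_KEY;

async function fetchRows() {
  const url =
    `https://sheets.googleapis.com/v4/spreadsheets/` +
    `${SHEET_ID}/values/games!A:K?key=${API_KEY}`;
  const response = await fetch(url);
  const json = await response.json();
  // `values` is an array of rows, each row an array of cell strings
  return json.values;
}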

The data arrives as a series of rows, with each row representing one game. An Among Us game can be split into three parts of data (a sketch of parsing one row follows the list).

  • The crew (7 or 8 players)
  • The imposters (2 players)
  • The winner (crew or imposters)
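A row might be turned into a game object like this; the column layout here is an assumption rather than the real sheet's:

function parseGame(row) {
  // Hypothetical layout: eight crew cells (the eighth may be empty),
  // two imposter cells, then the winning side
  const crew = row.slice(0, 8).filter(name => name !== "");
  const imposters = row.slice(8, 10);
  const winner = row[10]; // "crew" or "imposters"
  return { crew, imposters, winner };
}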

Since a spreadsheet cannot be efficiently queried, page requests are unlikely to exceed 0.15/s, and the amount of data will grow linearly, we bring in all the rows on every request (Sheets API usage is free, with quotas well above this level of traffic).

There is a performance cost to recalculating the statistics on every request, but this can be solved in the future with caching. Each additional game adds a constant amount of work so, loosely, the cost of generating league statistics grows at the same rate as the number of games: a time complexity of O(n).
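That future cache could be as simple as this sketch; nothing like it exists in the project yet, and buildStats and the TTL are assumptions:

let cache = { stats: null, fetchedAt: 0 };
const TTL = 60 * 1000; // one minute, an arbitrary choice

async function getStats() {
  // Serve the cached statistics while they're fresh
  if (cache.stats && Date.now() - cache.fetchedAt < TTL) {
    return cache.stats;
  }
  cache = { stats: await buildStats(), fetchedAt: Date.now() };
  return cache.stats;
}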

A player object is built for each person in the league, and the following information is calculated (a sketch of the object's shape follows the list).

  • Crew wins/losses
  • Imposter wins/losses
  • Current Elo
  • Elo history
  • Their game history
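Roughly, each player object looks like this; the field names and values are illustrative assumptions:

const player = {
  name: "example-player",
  crewWins: 40,
  crewLosses: 38,
  imposterWins: 12,
  imposterLosses: 8,
  elo: 1230, // current rating
  eloHistory: [1200, 1216, 1230], // one entry per game
  games: [] // parsed game objects
};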

Elo ratings

For Elo calculations, I use the npm package elo-rating. In the Elo rating methodology, the K-factor is the maximum possible adjustment for a game. I based the league's system on the International Chess Federation's rules: until a player completes 30 games, their rating is more volatile; after 30 games, their K-factor drops from 40 to 20.

Elo was designed for games with two players. I was initially unsure how to solve the problem of Among Us having different team sizes. Other people have created rating systems for Among Us that separate a measure of a player’s skill into crew/imposter. However, I made a product decision to reduce a player’s performance down to a single number. As a result, a player is compared against the average Elo of the side they are playing against.
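Concretely, that comparison might look like this sketch, which averages the opposing side's ratings before handing them to the EloChange helper shown further down:

// Reduce the opposing team to a single opponent rating
function averageElo(elos) {
  return elos.reduce((sum, elo) => sum + elo, 0) / elos.length;
}

// e.g. a crew member facing imposters rated 1250 and 1350
const opponentElo = averageElo([1250, 1350]); // 1300
// eloChange(playerElo, opponentElo) then gives the win/loss amounts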

We alter our game settings (e.g. crew vision, kill cooldown) over time to encourage a 50/50 win rate of crew and imposter teams. Even if the win rate is skewed, the effects on a player’s general rating will even out in the long run because players will have a similar number of games on each side (20% imposter, 80% crew).

Elo systems in video games are not tuned to be mathematically perfect but to reward and encourage players — a mixture of fun and correctness. Read more about this topic in Analyzing Starcraft 2’s Ranking System by Sirlin.

const EloRating = require("elo-rating");

function EloChange(games) {
  // Based on https://ratings.fide.com/calculator_rtd.phtml
  // K = 40 while a player has 30 or fewer games
  // K = 20 afterwards
  const K = games > 30 ? 20 : 40;
  // From playerA's perspective, how much can be won or lost
  return function (playerA, playerB) {
    return [
      EloRating.calculate(playerA, playerB, true, K).playerRating - playerA,
      EloRating.calculate(playerA, playerB, false, K).playerRating - playerA
    ];
  };
}

// The tests for this look like:
// For new players
let eloChange = EloChange(0);
expect(eloChange(1200, 1400)[0]).toStrictEqual(30);
expect(eloChange(1200, 1400)[1]).toStrictEqual(-9);
// For everyone else
eloChange = EloChange(31);
expect(eloChange(1400, 1200)[0]).toStrictEqual(4);
expect(eloChange(1400, 1200)[1]).toStrictEqual(-15);

The backend of the application is tested using Jest, with SuperTest for mocking requests to the server. The statistic functions are unit tested. All the tests are run on commit/PR using GitHub Actions, with the default Node YAML file running the following commands across the three latest versions of Node.

    - run: npm ci
    - run: npm run build --if-present
    - run: npm test

Charts and pages

For the line graphs that chart player ratings, I use Chart.js. There is a shared graph function for the home page and the player page. It takes an array of Elo history and a sample argument which allows the home page to show the last 30 games and the player pages to show all of the season’s games.
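A minimal sketch of that shared helper, with assumed names (elos for the rating history, a canvas element to draw into):

function eloChart(canvas, elos, sample) {
  // Plot only the most recent `sample` games
  const recent = elos.slice(-sample);
  return new Chart(canvas, {
    type: "line",
    data: {
      labels: recent.map((_, i) => i + 1),
      datasets: [{ label: "Elo", data: recent }]
    }
  });
}

The home page calls it with a sample of 30, while the player pages pass the full length of the season.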

The home page generates a list of players and their graphs after hitting an endpoint of the application that sends back a blob of statistics. In the future, depending on the size of the player base, this will need to allow querying for specific players. For now, the page load cost is imperceptible (i.e. sub-100ms).

fetch("/stats")
  .then(response => response.json())
  .then(json => {
    // Remove loading text
    statsList.firstElementChild.remove();
    // ..
  });

All pages use a similar pattern: serve an HTML page containing a loading message, remove it when the data is fetched, and build the rest of the page. There is no framework and the JavaScript is unprocessed.

Deployment

I manually deploy and maintain the application via Glitch. I import the latest changes from the default GitHub branch and any changes to the package.json file are reconciled, which means ~20 seconds of downtime per deploy. The Glitch IDE allows for hotfixing in production (or even development), viewing application logs, and opening a terminal window that connects to an SSH session.

Glitch IDE, log tab, and terminal tab

I started this project by coding directly in Glitch's browser IDE, which can export to a GitHub repository's glitch branch, where PRs can be created. After a while, I took development offline as users started visiting the website more often.

My current workflow is: develop locally, run the tests locally, create a PR, check the CI tests, merge to main, go to Glitch, and import from GitHub.

I also added a /raw-stats route so that other developers and I can work with read-only production data when working locally.
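The route is just a thin JSON endpoint; something like this sketch, reusing the hypothetical fetchRows helper from earlier:

// Expose the raw rows for local development
app.get("/raw-stats", async (req, res) => {
  res.json(await fetchRows());
});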

Feedback

As with most online games that have rating systems, there have been comments about the fairness of ours. As of writing, the key unsolved problem is that the player's own team is not taken into account (only the opposing team's rating is used). If you are paired with new players, or with very experienced players, your probability of winning changes in a way that the Elo calculations don't capture.

A player on a strong team should be rewarded less for a win, and vice versa. The required change is to 'walk' the player's Elo towards their team's average before determining their rating change after a game.
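A sketch of that proposed adjustment; the weight is an assumption that would need tuning against real games:

// Move a player's effective rating part-way towards their team's
// average before calculating their Elo change
function effectiveElo(playerElo, teamElos, weight = 0.5) {
  const teamAverage =
    teamElos.reduce((sum, elo) => sum + elo, 0) / teamElos.length;
  return playerElo + (teamAverage - playerElo) * weight;
}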


Comments or questions? I enjoy talking with readers over email.
