Raise A Fox: A browser-based game

Raise it, Feed it, Clean it but don't let it die :(

This is a browser-based game made using JavaScript. I made it last year during a Frontend Masters workshop by Brian Holt.

This game is inspired by "Tamagotchi", a handheld digital pet released around 1996-1997 in Japan and the USA that never made its way to India (my country). Maybe that's why I find this game really cute and exciting :)

It's a digital version of Tamagotchi. With just three buttons we feed our pet fox, clean it, and change the weather and the time of day. If it isn't fed on time, the pet dies. The game is developed using Node.js and later converted into a Docker image so that it's accessible to everyone using just one command.
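If you just want to play, and assuming the image has already been pushed to Docker Hub under the example tag used later in this post (akanksha0307/fox-game:v1; substitute whichever tag was actually published), that one command would look like:

docker run --init --publish 1234:1234 akanksha0307/fox-game:v1

Then open localhost:1234 in your browser.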

Let me be clear here: I'm not going to describe how to build the game, since that is already done by Brian Holt in The Project Fox Game. You can follow his course notes, which are well detailed, and he also incorporates milestones in them. Clicking any milestone leads you to the Git repository of project files completed up to that point, so you can check and compare if you get stuck anywhere. Pretty cool, right?

I will only be describing the containerization part here: how we can

  • Containerize any web application with the help of Docker.
  • Publish Docker images of our application and run them on any machine with just one command!

Let's start! The first step is to build the game (or any web app of your choice), or you can clone this repo.

Now that you have the project, remove the existing Dockerfile from it. We'll be making a new one.
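If you went the cloning route, the setup is roughly this (the repository URL is a placeholder; use the link to the repo above):

git clone <repo-url> fox-game
cd fox-game
rm Dockerfile    # remove the existing Dockerfile; we'll write our own below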

Create a new file Dockerfile and copy-paste the below content.


FROM node:12-stretch

USER node

RUN mkdir /home/node/src

WORKDIR /home/node/src

COPY --chown=node:node package-lock.json package.json ./

RUN npm ci

COPY --chown=node:node . .

EXPOSE 1234

CMD ["npm","run","dev" ]

The first word on each line (FROM, CMD, etc.) is called an instruction. Instructions don't technically have to be in all caps, but it's a convention that makes the file easier to read. Each of these instructions incrementally changes the container from the state it was in previously, adding what we call a layer.
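Once you've built the image (the build command comes later in this post), you can see these layers for yourself; the image name here is just the example tag used below:

docker image history akanksha0307/fox-game:v1
# each row corresponds to an instruction in the Dockerfile and shows the size of the layer it added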

  • The first instruction, FROM node:12-stretch, means we start from the official Node.js 12 image (based on Debian Stretch) instead of building a container from scratch.

  • The USER instruction lets us switch from being the root user to a different user, one called "node", which the node:12-stretch image has already created for us. We could make our own user using shell commands (more or less, you'd run RUN useradd -ms /bin/bash lolcat to add a lolcat user), but let's just use the one the node image gives us.

  • We have to copy files from our local file system into the container. We'll use a new instruction, COPY. The first argument(s) are the source paths on your machine and the last one is the destination inside the container. Here we first copy package.json and package-lock.json, and later copy the rest of the project with COPY . . once the dependencies are installed.

  • We'll save ourselves a lot of permission wrangling if we put the files in the home directory. But we have to add a flag to the COPY instruction to make sure the node user owns those files. We do that with --chown=node:node, where the first node is the user and the second node is the user group.

  • WORKDIR works as if you had cd'd into that directory, so all subsequent paths are relative to it. And if the directory doesn't exist, it will be created for you.

  • We added a RUN instruction to run a command inside of the container. npm ci is very similar to npm install, with a few key differences: it follows package-lock.json exactly (whereas npm install will ignore it and update it if newer patch versions of your dependencies are available), and it automatically deletes node_modules if it exists. npm ci is made for situations like this.

  • The EXPOSE instruction's intended use is to expose ports from within the container to the host machine. However, if you don't also pass -p 1234:1234 when running the container, the port still isn't published, so in reality this instruction doesn't do much. There are two caveats to that. The first is that it can serve as documentation: "I know this Node.js service listens on port 1234, and now anyone who reads this Dockerfile knows that too." I would challenge that the Dockerfile is not the best place for that documentation. The second caveat is that instead of -p 1234:1234 you can pass -P. This takes all of the ports you exposed using EXPOSE and maps them to random ports on the host. You can see which ports it chose by running docker ps; it'll say something like 0.0.0.0:32769->1234/tcp, so in this case it chose 32769. Again, I'd prefer to be deliberate about which ports are being mapped. (There's a quick example of -P right after this list.)

  • The CMD instruction is what you want Docker to do when someone runs the container. There will only ever be one of these in effect in a Dockerfile; if you have multiple, only the last one is used.
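As a quick illustration of the -P behaviour mentioned above (the image tag is the example one built later in this post, and the host port shown is made up; yours will differ):

docker run --init -P akanksha0307/fox-game:v1
# in another terminal, while the container is running:
docker ps
# the PORTS column will show something like 0.0.0.0:49153->1234/tcp,
# i.e. container port 1234 was mapped to a random free port on the host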

Then open up the terminal in the root of your project folder and run:

docker build -t <your-app-name>:<version> .

eg. docker build -t akanksha0307/fox-game:v1 .
# -t tags the image, and '.' tells Docker to use the current directory as the build context (which is also where it looks for the Dockerfile).
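Once the build finishes, you can verify that the image exists (the name here follows the example tag above):

docker image ls akanksha0307/fox-game
# lists the freshly built image with its tag (v1), image ID, and size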

Then run your image:

docker run --init --publish 1234:1234 <your-app-name>:<version>

eg. docker run --init --publish 1234:1234 akanksha0307/fox-game:v1

The publish flag forwards a port out of a container to the host computer. In this case, we're forwarding port 1234 (which is what the Node.js server inside the container is listening on) to port 1234 on the host machine: the first 1234 is the port on the host machine and the second 1234 is the port being used inside the container. As a generic example, if you did docker run --publish 8000:3000 my-node-app, you'd open localhost:8000 to see a server that is running on port 3000 inside the container.
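Applied to this project, a sketch of the same idea (host port 8000 is an arbitrary choice here):

docker run --init --publish 8000:1234 akanksha0307/fox-game:v1
# the game is now reachable at localhost:8000 on the host,
# while the dev server still listens on port 1234 inside the container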

That's it! You can go to localhost:1234 in your browser and play the game. You can also push this image to Docker Hub and share it with your friends. You can find the steps for that here.
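In short, the push is roughly this (a sketch assuming you already have a Docker Hub account and the image is tagged with your Docker Hub username, as in the example above):

docker login
docker push akanksha0307/fox-game:v1

Anyone with Docker installed can then run the game with the single docker run command shown earlier.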