…and finish implementing the minimax algorithm
In this article, we will finish implementing the minimax algorithm for playing the 2048 game, and then we will use this implementation to automatically play a web version of the game, which can be found on this GitHub page.
Here is the previous article about this subject, in which I showed how to represent the game state of 2048. In case you missed it, check it out:
For that, we will first create the GameDriver class, which will act as a middleman between our minimax implementation and the game found on this webpage. The GameDriver class is responsible for interacting with the game. We need it to handle 2 operations: getting data about the current state of the game, and making one of the moves: up, down, left, right. These 2 operations are implemented in the following methods:
- .getGrid() — this will get the game state data and return it as a Grid object.
- .move() — this will take a move direction code as a parameter and emulate a keypress of the appropriate arrow key.
The move direction codes that we chose in our implementation are:
- 0 = Up
- 1 = Down
- 2 = Left
- 3 = Right
For the implementation of this GameDriver class, we will use Selenium, a library well suited for this kind of task: interacting with a web browser. In case you don’t know about it, check out this article:
Now, we start by importing a few things. From the Selenium library, we need webdriver and Keys, which will be used for creating a driver instance for your desired web browser and for using the arrow keys, respectively. We also import the maximum size of an int type as MAX_INT and the time package; we will see in a moment what we need them for.
```python
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from sys import maxsize as MAX_INT
import time
```
Next, we create the instantiation method for the GameDriver class. We store the URL of the game page, create an instance of the Chrome driver, and open the game URL. Then we store a reference to the body element of the page, to be able to send the arrow key commands later. We also store a dictionary that maps the move direction codes to the corresponding arrow keys.
The .getGrid() method extracts data about the game state and returns a Grid object. We store the data as a matrix and pass it to Grid’s constructor when returning. First, the matrix is initialized with 0’s, and then we update it as we find the tiles on the page.
After inspecting the game page a little in Chrome Developer Tools (Ctrl+Shift+I), I concluded that:
- The tiles can be identified by the “tile” class name.
- The position (row and column number) of each tile on the grid can be extracted from a class name of the form “tile-position-col-row”, which lies in each tile’s class attribute.
- The tile number is the maximum value that can be extracted from a class name of the form “tile-num” which lies in each tile’s class attribute.
Below is the code that implements the above ideas:
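As a rough illustration of these ideas (a sketch, not the article’s exact code), the parsing step can be shown without a browser. The helper below, parse_tiles (a hypothetical name), takes the tiles’ class attribute strings in the format described above and builds the 4x4 matrix, keeping the maximum “tile-num” value per cell:

```python
import re

def parse_tiles(class_attrs, size=4):
    """Build a size x size matrix of tile values from class attributes.

    class_attrs: one string per tile element on the page, e.g.
    "tile tile-2 tile-position-3-1" (value 2, column 3, row 1).
    """
    matrix = [[0] * size for _ in range(size)]
    for attr in class_attrs:
        pos = re.search(r"tile-position-(\d+)-(\d+)", attr)
        if pos is None:
            continue
        col, row = int(pos.group(1)) - 1, int(pos.group(2)) - 1
        # Merging tiles briefly carry several "tile-<num>" classes,
        # so we keep the maximum value found for the cell.
        values = [int(v) for v in re.findall(r"tile-(\d+)", attr)]
        if values:
            matrix[row][col] = max(matrix[row][col], max(values))
    return matrix

print(parse_tiles(["tile tile-2 tile-position-1-1",
                   "tile tile-16 tile-position-4-2"]))
# -> [[2, 0, 0, 0], [0, 0, 0, 16], [0, 0, 0, 0], [0, 0, 0, 0]]
```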
The .move() method below sends the appropriate arrow key signal to the body element so that a move is made in the direction indicated by the parameter it takes. Then, we use time.sleep(0.1) to pause for 0.1 seconds after the move signal is sent, so that the page has time to update itself.
```python
def move(self, moveCode):
    self.body.send_keys(self.moves[moveCode])
    time.sleep(0.1)
```
Below is the full code of the GameDriver class:
Now it’s time to implement the minimax algorithm, which consists of 3 functions: maximize(), minimize(), and getBestMove(). If you’re not familiar with the minimax algorithm, you can check this article to make sure you understand what’s happening here.
The maximize() function takes as parameters: state, which is a Grid object; a and b, which are the alpha and beta from α-β pruning; and d, which is the maximum allowed depth. This function returns a tuple of the form (maxChild, maxUtility), where maxChild is the child of the current state object (in the minimax game tree) that maximizes the utility, and maxUtility is the utility value of the maxChild game state.
The maxUtility variable will hold the maximum utility of a node encountered so far. At the beginning of the function, we don’t know any utility value yet, so we initialize the maximum so far to a number smaller than anything a utility value can be; I chose -1.
Then, we check whether the current state is a terminal node or whether we have reached the maximum depth. If so, we return None as the maxChild together with the current state’s utility; otherwise, we continue by iterating over all the children of the current state. In each iteration, we make a copy of the current game state and apply one of the available moves; the child variable in the for loop is a move direction code that is used to make this move.
Then we let Min make its move through the minimize() function, and get back from it the utility of the current iteration’s child state. This is the utility that we would get if we chose to make the move that leads to the current child in the loop. If this utility is greater than our previous maxUtility, we update maxChild and maxUtility accordingly. After this, we do 2 more checks according to the α-β pruning algorithm, so that we skip paths in the game tree that we know ahead of time will not give us the best move.
The minimize() function is similar to maximize(), but now we’re in the shoes of the Min player and we try to choose the move that minimizes the utility.
The getBestMove() function calls maximize() and returns the code of the move that we have to make to maximize our score/utility.
Below is the code of our minimax implementation:
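To make the structure concrete, here is a condensed, self-contained sketch of the three functions (not the article’s exact code: instead of cloning the Grid and applying moves inline, it assumes a state object exposing isTerminal(), utility(), maxChildren() — move-code/state pairs for Max — and minChildren() — states for Min; the Node toy class at the bottom is purely for demonstration):

```python
from sys import maxsize as MAX_INT

def maximize(state, a, b, d):
    # Max's turn: pick the child (move) with the highest guaranteed utility.
    if d == 0 or state.isTerminal():
        return None, state.utility()
    maxChild, maxUtility = None, -1  # -1 is below any possible utility
    for move, child in state.maxChildren():
        _, utility = minimize(child, a, b, d - 1)
        if utility > maxUtility:
            maxChild, maxUtility = (move, child), utility
        if maxUtility >= b:  # beta cut-off: Min would never let us get here
            break
        a = max(a, maxUtility)
    return maxChild, maxUtility

def minimize(state, a, b, d):
    # Min's turn: pick the child with the lowest utility.
    if d == 0 or state.isTerminal():
        return None, state.utility()
    minChild, minUtility = None, MAX_INT
    for child in state.minChildren():
        _, utility = maximize(child, a, b, d - 1)
        if utility < minUtility:
            minChild, minUtility = child, utility
        if minUtility <= a:  # alpha cut-off: Max would never pick this branch
            break
        b = min(b, minUtility)
    return minChild, minUtility

def getBestMove(state, depth=4):
    best, _ = maximize(state, -1, MAX_INT, depth)
    return best[0] if best else None

# A tiny toy tree to exercise the functions: move 1 guarantees utility 4
# (the minimum over its leaves), while move 0 only guarantees 3.
class Node:
    def __init__(self, utility=0, max_children=(), min_children=()):
        self._u, self._max, self._min = utility, max_children, min_children
    def isTerminal(self):
        return not self._max and not self._min
    def utility(self):
        return self._u
    def maxChildren(self):
        return self._max
    def minChildren(self):
        return self._min

root = Node(max_children=[(0, Node(min_children=[Node(3), Node(5)])),
                          (1, Node(min_children=[Node(4), Node(9)]))])
print(getBestMove(root))  # -> 1
```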
Now, it’s time to create the game-playing loop, in which we repeat the following 3 things until the game is over: get the game data, use minimax to establish the best move, and actually make this move.
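A minimal sketch of such a loop, with the dependencies passed in as parameters (GameDriver and getBestMove are the names used in this article; playGame itself and the convention that bestMove returns None at game over are assumptions of this sketch):

```python
def playGame(driver, bestMove):
    """Repeat until the game is over: read the grid, choose a move, make it.

    driver is expected to provide .getGrid() and .move(moveCode), like the
    GameDriver class above; bestMove maps a grid to a move direction code,
    or None when no move is possible anymore.
    """
    while True:
        grid = driver.getGrid()
        moveCode = bestMove(grid)
        if moveCode is None:  # no legal move left: the game is over
            break
        driver.move(moveCode)

# With the real objects from this article, the loop would be started as:
#   playGame(GameDriver(), getBestMove)
```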
When we run this game playing loop, we should have on our screens a 2048 game that plays itself as in the GIF at the top of this page.
If you want to learn more about Artificial Intelligence, then here is a great book that can help you:
You can find the full code of this project on Github.
I hope you found this information interesting and thanks for reading!
This article is also posted on Medium here. You can have a look!