A game, in the abstract, is a set of game bits (like individual actors in the game world) and rules (what those actors can do and what the world does in response). The player uses an interface of some kind to interact with the game. This interface could be any collection of physical or virtual objects: a board and special pieces, cards, a ball, or a computer program and the graphical assets associated with it.
Consider tic-tac-toe. We recognize this game because it consists of a three-by-three grid that is filled with Xs and Os by different players in turn, one per grid slot, until one player has three of their marks in a row. But we can create other games that translate directly into tic-tac-toe. Here’s one: we take the numbers from one to nine (inclusive) and take turns picking numbers, each number available only once, until one of us holds a set of three numbers that sum to 15. The same mechanics underlie these two games—the only difference is the interface.
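One way to see the equivalence (a sketch, not something from the original argument) is through a 3×3 magic square, in which every row, column, and diagonal sums to 15. Placing a mark on a tic-tac-toe cell is the same move as picking that cell’s number, so winning a line is exactly holding three numbers that sum to 15:

```python
from itertools import combinations

# A 3x3 magic square: every row, column, and diagonal sums to 15.
MAGIC = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

def has_won(picks):
    """True if any three of the player's numbers sum to 15 --
    equivalently, the corresponding magic-square cells form a line."""
    return any(sum(trio) == 15 for trio in combinations(picks, 3))

# Picking {2, 5, 8} wins the number game (2 + 5 + 8 == 15), and those
# numbers occupy the main diagonal of the square: a tic-tac-toe win.
print(has_won({2, 5, 8}))  # True
print(has_won({1, 2, 4}))  # False
```

The mapping runs both ways: any tic-tac-toe strategy becomes a number-picking strategy by reading moves off the square.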
More complex games cannot be directly translated into other games as easily. They must first be broken down into constituents simple enough to be translated.
All this translation shows us that a game’s interface can be separated from its mechanics, but how do we actually describe the mechanics of games? Translating one set of abstractions into another, as we did with tic-tac-toe, doesn’t seem to get us anywhere in this endeavor. Mathematics is, in fact, a language in which we can concisely and completely define game mechanics. Mechanics can be torn from the game’s interface and described as an abstract mathematical problem that the player is attempting to solve. I will spare you the exact mathematics here, but any game can be reduced to some composition of mathematical problems.
Math has had several more millennia of exposition than game design, so it can provide our young field with some useful language. Problems in mathematics (more than just “1+x=2”-style problems—problems like sorting a list of numbers without knowing the exact contents of the list) have varying degrees of complexity. The complexity of a problem is basically its “difficulty” to solve, usually expressed as how the time required to solve the problem grows with respect to the “size” of the problem (the size being, for example, the count of numbers in the list we’re trying to sort). Non-trivial games can be reduced to math problems that are NP-complete or harder.
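To make “how time grows with problem size” concrete, here is a small sketch (my illustration, not from the original text) comparing the step counts of a polynomial-time algorithm against an exponential-time one as the input grows:

```python
# Rough step counts: a polynomial-time algorithm (n^2 steps) versus an
# exponential-time one (2^n steps) as the problem size n grows.
for n in (10, 20, 40, 80):
    print(f"n={n:3d}  n^2={n**2:6d}  2^n={2**n}")
```

By n = 80, the polynomial algorithm needs a few thousand steps while the exponential one needs more steps than there are atoms in the observable universe; this gap is why complexity class, not raw speed, decides tractability.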
(Math nerds please excuse me. This isn’t going to be super-precise because I’m trying to make it understandable without taking 1,000 words.)
NP stands for “nondeterministic polynomial time”, a polysyllabic train-wreck that alludes to how long the problem would take on a hypothetical machine that could guess the right answer at every step. An NP-complete problem (the hardest kind of problem within NP) is believed to be intractable to solve—as far as anyone knows, a computer could take millions of years to solve even modest instances. What a computer can do in a reasonable amount of time is verify that a proposed solution is correct. “Reasonable” in complexity theory means a polynomial relationship between problem size and time. It might take a computer 8 million years to find the solution to a certain NP-complete problem, but only 8 seconds to check that a given solution is correct.
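That solve-versus-verify asymmetry can be sketched with subset sum, a classic NP-complete problem (my choice of example; the original doesn’t name one). Brute-force solving tries exponentially many subsets, while verifying a claimed answer is just adding a few numbers:

```python
from itertools import combinations

def solve(nums, target):
    """Brute force: try every non-empty subset -- up to 2^len(nums) candidates."""
    for r in range(1, len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset
    return None

def verify(subset, nums, target):
    """Checking a claimed solution takes only linear time."""
    return all(x in nums for x in subset) and sum(subset) == target

nums = [3, 34, 4, 12, 5, 2]
answer = solve(nums, 9)            # slows down exponentially as nums grows
print(answer, verify(answer, nums, 9))
```

Doubling the length of `nums` roughly doubles the work `verify` does, but it squares the number of subsets `solve` may have to examine.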
Great games ask you to solve problems that are NP-complete or harder. It has actually been proven that Tetris (at least the offline version, in which the full piece sequence is known in advance) is NP-complete. You’re doing some very heavy lifting when you play these games—your brain works hard to get as close as possible to solving the problem. The startling part is that this is fun!
Raph Koster talked about complexity theory and games in his AGDC presentation this year. I think it’s a profound truth about games that more game designers should understand and utilize.