
Monday, October 31, 2011

Gaming Math: Problem 18 Answer and Problem 19

The answer to Problem 18, "Rating Trading", has been posted.

Problem 19: Rating Trading, Part 2

Consider the following generalization of the Math Trade Problem, which I came up with while thinking about how to use a similar "math trade" process to trade Magic cards. The difficulty with trading Magic cards using math trades is that Magic cards vary extremely widely in value, so the restriction of one-for-one trades is likely to be prohibitive. Therefore, suppose that we have the following Generalized Math Trade Problem:

Consider a group of N players, each of whom has one or more cards to trade. Each player has his own "subjective value" for each card - i.e. one player might value a given card at $12, while another might value it at only $8. The goal is to redistribute the cards among the players so as to maximize the sum, over all players, of the value each player places on the cards he ends up with, subject to the constraint that each player ends up with cards he values at least as much as the cards he started out with.

The "decision problem variation" of the Generalized Math Trade Problem is as follows: given an instance of the Generalized Math Trade Problem and a target value, determine if there is a solution such that the sum of the amount each player values the cards he ends up with is at least the target value (subject to all the other constraints of course).

Problem 19: Show that the decision problem variation of the Generalized Math Trade Problem is NP-complete.

Hints (ROT13 to read; you can read them one at a time)

1. Gur erqhpgvba vf gb gur fhofrg-fhz ceboyrz.
2. Gur fbyhgvba vaibyirf bayl gjb cnegvpvcnagf.
3. Bar bs gur cnegvpvcnagf unf bayl bar vgrz.

And the solution.

Saturday, October 29, 2011

Hints for Gaming Math Problem 18

ROT13 to read:

18A. Qba'g sbetrg gung pbfgf pna or artngvir.

18B. Fcyvg rnpu abqr vagb gjb.

18C. Guvf zvtug vaibyir nqqvat fbzr nqqvgvbany abqrf.

Gaming Math - Problem 18

Problem 18: Rating Trading

Consider the following problem, which we will refer to as the Math Trade Problem:

Consider a set of N players, each of whom has one game that he wants to trade. Assume that each player has a different game. Each player also has a list of games that he wants. (You may assume that nobody lists a game he already has as a game he "wants.") The goal is to determine how to distribute the games among the players so that the number of trades (i.e. the number of people who end up with a game they want) is maximized, subject to the constraints that everyone ends up with exactly one game, and each person ends up with either a game he wants or the game he already had.

The minimum-cost network flow problem with node capacities is as follows:

Consider a graph G with a set of vertices V (also known as "nodes") and (directed) edges E between these vertices. Each edge has a maximum capacity, which states how many units of flow can be sent along that edge, and a "cost", which gives the cost of sending each unit of flow along that edge. The goal is to find a way of sending flow such that the amount of flow going into each node is the same as the amount of flow going out, and the total cost of all the flow sent is minimized. Additionally, nodes may have "node capacities" which give the maximum amount of flow that can flow through that node. (In some versions of the network flow problem, there can be "source" nodes or "sink" nodes for which the amount of flow going in need not be the same as the amount going out, but these will not be necessary for our purposes.)
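
As a concrete illustration, here is a tiny instance of the source/sink variant mentioned above, solved with the networkx library. This is only a sketch - it assumes networkx is installed, and the graph, capacities, and costs are made up; Problems 18A and 18B ask you to construct instances like this yourself.

import networkx as nx

G = nx.DiGraph()
G.add_node("s", demand=-2)                      # negative demand = supplies 2 units
G.add_node("t", demand=2)                       # positive demand = absorbs 2 units
G.add_edge("s", "a", capacity=2, weight=0)
G.add_edge("a", "t", capacity=1, weight=1)      # the cheap route only fits 1 unit
G.add_edge("a", "b", capacity=2, weight=2)
G.add_edge("b", "t", capacity=2, weight=0)

flow = nx.min_cost_flow(G)                      # dict of dicts: flow[u][v] = units on edge (u, v)
print(flow)
print(nx.cost_of_flow(G, flow))                 # 3: one unit via a->t, one via a->b->t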

---

Problem 18A. Given an instance of the Math Trade Problem, show how to transform it into an instance of the Minimum-Cost Network Flow Problem with Node Capacities, such that a solution to the network flow problem will give a solution to the given Math Trade Problem.

Problem 18B. Given an instance of the Minimum-Cost Network Flow Problem with Node Capacities, show how to transform it into an instance of the Minimum-Cost Network Flow problem without node capacities. (There are several well-known algorithms that can be used to solve this problem.)

Problem 18C. Consider the following generalization of the problem. Suppose that each player owns multiple games, and we want to maximize the number of trades (i.e. the total number of games people receive that they want) subject to the constraint that each player ends up with the same number of games as he or she started out with, and each of those games is either one he already had or one that he wants. However, we also add the complication that several players may offer the same game for trade, and nobody wants to end up with two or more copies of the same game. Show how to extend the transformation from Problem 18A to handle this generalized problem.

I will put some hints up later tonight and I will put the solution up tomorrow.

Friday, March 18, 2011

Gaming Math - Problem 17

Recently, University of Illinois computer science professor Sheldon Jacobson has developed a mathematical model to predict the distribution of seeds in the Final Four of the NCAA basketball tournament. (The NCAA basketball tournament consists of 64 teams divided into 4 regions with 16 teams each. It is a single-elimination tournament, and within each region the teams are "seeded" from 1 to 16, with the teams in order from strongest to weakest. The team that wins in each region goes to the "Final Four".) There is a web site allowing you to explore the results of the model. A key finding, reported in media coverage about this web site, is that the most likely combination of seeds is not (1,1,1,1), but rather (1,1,2,3).

Problem 17: Seeds of Victory

According to the web site, the probability that all four 1-seeded teams will go to the Final Four is 1 in 31.82, while the probability that two 1-seeded teams, one 2-seeded team, and one 3-seeded team will go to the Final Four is 1 in 14.05.

In the article, Jacobson made the following statement:

"But I can tell you that if you want to go purely with the odds, choose a Final Four with seeds 1, 1, 2, 3.”

For concreteness, suppose that you are asked to pick, for each region, which team is going to the Final Four, and you are only considered "successful" if you correctly pick all four regions. Assuming that the probabilities given by Jacobson's model are accurate, is it true that picking a (1,1,2,3) split is more likely to be successful than a (1,1,1,1) split?

The answer is here.

Thursday, February 24, 2011

Gaming Math - Problem 16

Problem 16: "Fall Out Pick Up"

The video game "Fallout 3", and its sequel, "Fallout: New Vegas", are set in a post-nuclear-war world where most civilization has been destroyed and the few people remaining alive must scavenge for supplies to stay alive. Each piece of loot that you can pick up has a weight and value. There is a limit to how much weight you can carry, and when you get to a town you can sell your loot for bottle caps (the game's currency) based on its value. The problem of maximizing the total value of items you can pick up while staying within your weight capacity is known as the "knapsack problem" and is a well-known NP-complete problem.
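
For contrast with the online version described below, here is a minimal sketch of the standard dynamic-programming solution to the ordinary (offline) knapsack problem, assuming integer weights; the item weights, values, and capacity are made up.

def knapsack(items, capacity):
    """items: list of (weight, value) pairs; returns the best total value."""
    best = [0] * (capacity + 1)                       # best[w] = best value within weight w
    for weight, value in items:
        for w in range(capacity, weight - 1, -1):     # downward, so each item is used at most once
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

# Hypothetical loot as (weight, value in caps) pairs, with a carrying capacity of 50.
print(knapsack([(10, 60), (20, 100), (30, 120)], 50))   # -> 220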

In the game, however, there is an additional complication. You only observe each item one at a time, and because of the dangers and rival scavengers it is impossible to go back and pick up an item that you dropped or left behind. So you have to make a decision about whether to pick up each item, as well as which items to drop in order to make room, as you see each item.

(a) Is there an algorithm that gives the optimal result in all situations? (There is no limit on the running time of the algorithm.)

(b) The "competitive ratio" of an algorithm is the worst-case ratio of the value of the solution found by the algorithm to the optimal solution. For instance, if an algorithm always gives you a set of items with value at least equal to half the value of the optimal solution, then its competitive ratio is 0.5. Does there exist an algorithm for this problem with competitive ratio greater than 0? If so, give one; if not, prove it is impossible.

(Hint: Imagine there was an "adversary" controlling the capacity and sequence of items, and it gets to observe which items the algorithm selects before deciding which new items to present.)

(c) Suppose that the capacity is C, and the highest weight of any one item is M. Give an algorithm for this problem with competitive ratio 1-(M/C).

The solution is here.

Saturday, February 19, 2011

The qual is finally done!

I had my qual exams this Thursday and Friday. According to the qual web site, normally the qual has 8 questions on it, and a passing score requires "nearly perfect" answers to about 3 of the questions and partial answers to 3 more. This time there were 10 questions on the qual, and I was able to get complete answers to 5 of them and partial answers to 4 more of them. So I think I passed, though I won't know for sure until the results come in, probably next week.

One thing that this does mean for the blog is that you will get to see more of my "Gaming Math" problems. The main reason I haven't been able to put up more of those problems recently is not that I haven't played any games with interesting math or computer science problems in them, but that most of the time I can't actually solve the problems I come up with. Preparing for the qual, however, has helped me get better at solving those kinds of problems, so you should see more of them.

Here is a problem:

Gaming Math - Problem 15: "For Science!"

In the board game "Battlestations," players control crew members aboard a spacecraft and are sent on dangerous missions. In one of the missions, the players are trapped in a region of space containing N wormholes, and the only way to get out is to go through the wormholes in a specified sequence. The only way to find out that sequence is to use the ship's Science Bay, which allows a player to ask any yes-or-no question about the sequence.

1. In the scenario described in the game, N=4. What is the minimum number of questions required to guarantee figuring out the correct sequence?
2. For an arbitrary N, what is the minimum number of questions required? Find an algorithm that achieves this minimum number, and prove its optimality.

(Hint: Don't forget that you can ask any yes-or-no question about the sequence.)

The answer is here.

Wednesday, February 24, 2010

Gaming Math - Problem 14

Okay, as promised, here's the one about eigenvectors:

Problem 14: Community Events
---

The "World of Warcraft" tabletop roleplaying game is based in part on the computer game of the same name, but also includes other features, including rules for in-game towns and communities. In the game, communities have a "community behavior map" consisting of a list of attributes, including "wealth", "greed", "happiness", "disaster", and so on, each with a numerical value. Different events can affect different attributes. Also, there are "links" going from one attribute to another, and each link has a different "intensity," which can be positive or negative. If a link goes from A to B with intensity I, then whenever an event affects attribute A, attribute B also changes by an amount equal to the amount attribute A changed, times the intensity of the link. For example maybe "wealth" and "happiness" have a link with an intensity of 0.5. Then if a group of adventurers brings back lots of treasure, increasing the "wealth" by 10, the "happiness" will increase by 10 x 0.5 = 5. Of course the intensity can be negative - an increase in the "disaster" attribute will probably reduce other attributes.

The rules specifically state that impact is only felt one link away from a factor. For example, suppose attribute A is linked to B and attribute B is linked to C. If an event affects A, then B will be affected by the link, but that change does not then continue on to affect C through its link. Suppose, however, that this rule were removed: you calculate the changes caused by the initial event, then the changes caused by the links, then the changes caused by the links from those links, and so on.
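
Here is a minimal sketch of that repeated-propagation process, with made-up attributes and intensities. It simply runs a fixed number of rounds; whether the totals settle down is exactly what the problem below asks about.

attrs = ["wealth", "happiness", "disaster"]
# intensity[a][b] = intensity of the link from a to b (0 means no link); made-up numbers
intensity = {
    "wealth":    {"wealth": 0, "happiness": 0.5, "disaster": 0},
    "happiness": {"wealth": 0, "happiness": 0,   "disaster": 0},
    "disaster":  {"wealth": -0.3, "happiness": -0.4, "disaster": 0},
}

def propagate(initial_change, rounds=20):
    total = dict(initial_change)                # running total for each attribute
    delta = dict(initial_change)                # the changes produced in the last round
    for _ in range(rounds):
        new_delta = {a: 0.0 for a in attrs}
        for a in attrs:
            for b in attrs:
                new_delta[b] += delta[a] * intensity[a][b]
        for a in attrs:
            total[a] += new_delta[a]
        delta = new_delta
    return total

# A group of adventurers brings back treasure: wealth +10.
print(propagate({"wealth": 10.0, "happiness": 0.0, "disaster": 0.0}))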

What conditions on the links and their intensities are necessary to ensure that this process always converges?

The answer is here.

Monday, February 22, 2010

Gaming Math - Problem 13

I previously wrote on this blog that I am taking two courses this semester: one on scientific computing and one on planning algorithms. Coincidentally, I have recently played a couple of games that offer nearly perfect examples to illustrate concepts we are learning about in these courses. Here is the one about scientific computing; the other one will come up soon.

Problem 13: Characteristic Characters

In the HERO System, player characters are defined in part by a list of numerical "characteristics," such as strength, intelligence, dexterity, defensive capability, etc. When creating characters, players spend "character points" (CP) to purchase characteristics. Each characteristic costs a different number of CP per point of the characteristic. Most characteristics only cost 1 CP per point, but some cost more. For example Dexterity costs 2 CP per point because it affects important combat factors such as initiative, and Speed costs a whopping 10 CP per point because every point of speed increases the number of "phases" you get to perform actions each turn. For example if I were to buy 15 points of Dexterity and 5 points of Speed, that would cost (15*2) + (5*10) = 70 total CP. Note that you ARE permitted to buy a negative amount of a Characteristic.

The rules for purchasing Characteristics changed significantly between the 5th edition and 6th edition of the game. In 5th edition, Characteristics were divided into two categories: "base characteristics" and "figured characteristics." Base characteristics were purchased as described above. However, "figured characteristics" each had a base value that you got for free, that was a linear function of the values of the base characteristics you purchased. For example, let's say "maximum hit points" had a base value of (3 * Constitution + 2 * Strength). Then if you bought 15 Constitution and 10 Strength, you would get (3*15 + 2*10) = 65 hit points "for free", and you could then buy more for a given cost in CP per point (or buy a negative amount, effectively "selling back" the hit points you got for free.)
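
To make the 5th-edition cost structure concrete, here is a minimal sketch that computes the total CP cost for the hypothetical numbers used above (the per-point costs and the hit-point formula are illustrative, not the actual HERO System values).

base_cost_per_point = {"STR": 1, "CON": 1, "DEX": 2, "SPD": 10}   # made-up cost table
hp_cost_per_point = 1            # hypothetical cost to buy hit points beyond the free base

def free_hit_points(bought):
    # The "figured" base value: a linear function of the purchased base characteristics.
    return 3 * bought["CON"] + 2 * bought["STR"]

def total_cp(bought, extra_hp):
    base = sum(base_cost_per_point[c] * bought[c] for c in bought)
    return base + hp_cost_per_point * extra_hp    # extra_hp may be negative ("selling back")

bought = {"STR": 10, "CON": 15, "DEX": 15, "SPD": 5}
print(free_hit_points(bought))         # 65 free hit points, as in the example above
print(total_cp(bought, extra_hp=-10))  # cost after selling back 10 of the free hit points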

In 6th edition, figured characteristics were eliminated; they are instead treated the same way as base characteristics - you don't get any "free points", you just have to buy them up as normal. Some of the CP-per-point costs of the various characteristics were altered to compensate.

Now here is the question:

Prove that, regardless of what the costs and "figured characteristic functions" were in the old system, it was possible to modify the system - changing costs but eliminating figured characteristics - such that every possible combination of characteristics costs exactly the same under the new system as under the old one.

The solution is here.

Tuesday, December 8, 2009

Gaming Math - Problem 12

Problem 12: Energy Crisis

In the collectible card game "Magic", players get "land" cards which they can play to generate "mana," which represents magical energy that can be used to play other cards representing magical spells. Each land card can be "tapped" for one mana per turn, and different cards cost different amounts of mana to play. If you get lots of cards but not enough mana to play any of them, that is called being "mana screwed." Unhinged, a "joke" Magic card set, contains a card called "Mana Screw", which allows its user to (as many times as he wants) spend one mana to flip a coin; if he wins the coin flip he gets two mana back (for a net gain of one).

Problem 12.1:

Suppose that a player has X mana, and he has a spell that he wants to play that costs Y mana (where Y is greater than X). He uses Mana Screw's power repeatedly until he either (a) runs out of mana, or (b) gets enough mana to cast his spell. What is the probability that he will get enough mana to cast the spell?
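
A quick Monte Carlo sketch of this process (a numerical estimate only, not the closed-form answer the problem asks for):

import random

def mana_screw_succeeds(x, y, p_win=0.5):
    """Start with x mana; spend 1 to flip, gain 2 on a win; stop at 0 or at y mana."""
    mana = x
    while 0 < mana < y:
        mana -= 1
        if random.random() < p_win:
            mana += 2
    return mana >= y

trials = 100_000
print(sum(mana_screw_succeeds(3, 10) for _ in range(trials)) / trials)
# For Problem 12.2 below, Krark's Thumb corresponds to p_win = 0.75.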

Problem 12.2:

There exists another Magic card called Krark's Thumb (this is not a joke card) that allows the user to, whenever he is called upon to flip a coin, flip two coins and ignore one. Thus if a player used Krark's Thumb he would have a 3/4 chance of winning each coin flip, rather than 1/2. Suppose that such a player had one mana, and wanted to play Gleemax, another "joke" card that costs 1,000,000 mana. He again continually used Mana Screw's power until he either ran out of mana or got enough to play Gleemax. What is the probability of success? (You can take the limit as Gleemax's cost approaches infinity.)

(Hint for both problems: Let f(x) be the probability of success starting from x mana, and find a recurrence relation for f.)

The solution is here.

Monday, November 30, 2009

Gaming Math - Problem 11

At the board game night on Sundays at the local game store, there is a guy who has been coming in to playtest a new card game he invented. The game is based on computer fighting games like Mortal Kombat. The way it works is as follows: Each player chooses a character, and each character has a deck of "attack cards" and "defense cards." The players take turns attacking and defending (so on the first turn player 1 attacks and player 2 defends, on the second turn player 2 attacks and player 1 defends, etc.) The sequence of play in each turn goes as follows:

(1) The attacker draws attack cards until he has 5 attack cards in his hand; the defender draws defense cards until he has 3 defense cards in his hand.

(2) The attacker plays attack cards. Each attack card has three attributes: a "range" (high, medium, or low), a numerical "attack value" indicating how much damage the attack does, and a "level" (1, 2, or 3). The attacker can either play any one attack card as a basic attack, or he can play a combo. To play a combo, the first card in the combo has to be level 1, the second level 2, and the third (if there is one) level 3 (you can play either a 2- or 3-attack combo).

(3) The defender tries to block the attack. Each defense card is either a "High Block", "Medium Block", or "Low Block." In order to block an attack he has to play a block card that matches the range of the attack. If there is a combo, the attacks must be blocked in order. For example, if it's a 3-hit combo and he blocks the first and second attacks, the third attack still does its damage. If he doesn't block the first attack because he doesn't have a matching block card, he takes damage from all the attacks, and he can't block the second or third attacks even if he has a matching block card.

(4) The attacker can discard attack cards he doesn't want, and the defender can discard defense cards he doesn't want.

(In the actual game there are more complicated elements, like "counters" that let you turn your enemy's attack back on himself, "special moves" that let you power up your regular attack cards with special powers, and certain characters have special rules, etc. But for the purpose of this problem we are just using the elements above.)

----

Problem 11: You've Been Blocked

Consider the game above. Assume that the defense deck has an equal quantity of high, medium, and low block cards, and ignore finite deck size effects (i.e. assume that each time you draw a card from the defense deck, you have a 1/3 chance of getting each of the types of cards, regardless of what cards have already been drawn).
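
Under these assumptions, here is a minimal simulation sketch that estimates the chance a fresh three-card hand can block a given combo (the problems below ask for the exact values, which this only approximates):

import random
from collections import Counter

RANGES = ["high", "medium", "low"]

def can_block_all(hand, combo):
    """True if the hand has a matching block card for every attack in the combo."""
    cards = Counter(hand)
    for attack_range in combo:
        if cards[attack_range] == 0:
            return False
        cards[attack_range] -= 1
    return True

def estimate(combo, trials=100_000):
    blocked = 0
    for _ in range(trials):
        hand = [random.choice(RANGES) for _ in range(3)]   # each draw is 1/3-1/3-1/3
        blocked += can_block_all(hand, combo)
    return blocked / trials

print(estimate(["high", "medium", "low"]))   # a 3-hit combo with three different ranges
print(estimate(["medium"] * 3))              # a 3-hit combo of all medium attacks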

Problem 11a. If the defender has just drawn a new hand of three cards, what is the probability that he will be able to successfully block a 3-hit combo where each of the attacks has a different range? How about if all the attacks have the same range (e.g. 3 medium attacks)?

Problem 11b. Suppose the defender draws a new hand of three cards. What is the probability that if I attack with a single attack (e.g. a medium attack) he will be able to block it?

Problem 11c. Suppose I've just attacked with a single medium attack and he blocks it (thus using up the medium block card). If I attack him with another medium attack on his next defending turn (don't forget he gets to draw a new card to replace the block card he used up), what's the probability he will be able to block it? (Assume that the defender will always block an attack if he has a block card available.)


The solutions are here.


Sunday, October 11, 2009

RPG Math - Problem Index

Now that I have finished my 10th RPG Math problem I am going to post an index of all the problems along with what category they are in.

Problem 1: Ineligible Receiver (of bullets) Downfield (Geometry)
Problem 2: Too Many Men on the Playing Field (Exponential Growth)
Problem 3: Take Cover! (Geometry)
Problem 4: Divvying Up The Loot (Algebra)
Bonus Problem: Divvying Up The Loot, Modern-Day Edition (Algorithms)
Problem 5: Mathematically Challenged (Optimization)
Problem 6: Taking Inventory (Algorithms)
Problem 7: By Our Powers Combined (Combinatorics)
Problem 8: Focus Fire! (Optimization)
Problem 9: The Killing Fields (Motion Planning)
Problem 10: All Decked Out (Expected Value)

RPG Math - Problem 10

Problem 10: All Decked Out

The card game "Dominion" is based on building up your deck of cards during gameplay. There are three types of cards: "Action" cards, "Treasure" cards, and "Victory" cards. Treasure cards have a number of coins on them, while Victory cards have a number of victory points on them.

A player's turn goes as follows. First, he draws five cards from his deck. Then, he plays up to one action card. Action cards can have one or more of the following effects:

"+X Cards" - Immediately draw X more cards.
"+X Actions" - You may play up to X more actions on this turn. (This allows you to create long chains of actions if you play +action cards and then use your new actions to play more +action cards.)
+1 Buy" - Increases the number of cards you can buy in the Buy Phase (See below) by 1.
"+X Coins" - Add X coins to the number of coins you have available in the Buy Phase (see below.)

After you finish playing actions you go into the Buy Phase. You add up the number of coins on all the treasure cards in your hand, plus any +Coins from action cards you played. That is how many coins you have available to buy another card to put into your deck, and better cards cost more coins.

The way you win the game is by having the most victory points worth of Victory Cards in your deck at the end. But of course, Victory Cards are useless when you draw them, so they dilute your deck - and a key strategic element is deciding when to start buying victory cards.

(***)

Here is the problem. Consider a simplified version of the game where you have an unlimited number of actions per round (you can keep playing action cards until you run out) and action cards only have the effects "+X cards" and "+X coins". Give a formula that can be used to calculate, given the composition of your deck, the expected number of coins you will have available to spend at the end of each turn.

You may make the following simplifying approximations:

1. All draws are independent - i.e. if 1/3 of your deck consists of treasure cards with a value of 2 coins, then each time you draw a card, you have a 1/3 chance of getting a treasure card with a value of 2 coins.

2. There is no chance of running out of cards to draw. (In the actual game, once you run out of cards to draw you can reshuffle your deck. It is possible to draw your entire deck in one turn, but that is rare.)
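
Under these two approximations, here is a minimal Monte Carlo sketch that estimates the expected coins per turn for one made-up deck composition (the problem asks for a general formula, which this only checks numerically):

import random

# Hypothetical deck composition: (probability, kind, value), where a "treasure"
# value is its coins and an "action" value is the pair (+cards, +coins).
deck = [
    (0.5, "treasure", 1),
    (0.2, "treasure", 2),
    (0.1, "action", (2, 0)),     # "+2 Cards"
    (0.1, "action", (0, 2)),     # "+2 Coins"
    (0.1, "victory", 0),
]

def draw():
    r, acc = random.random(), 0.0
    for p, kind, value in deck:
        acc += p
        if r < acc:
            return kind, value
    return deck[-1][1], deck[-1][2]

def coins_in_one_turn():
    to_draw, coins = 5, 0
    while to_draw > 0:
        to_draw -= 1
        kind, value = draw()
        if kind == "treasure":
            coins += value
        elif kind == "action":               # unlimited actions: play every action drawn
            plus_cards, plus_coins = value
            coins += plus_coins
            to_draw += plus_cards
    return coins

trials = 100_000
print(sum(coins_in_one_turn() for _ in range(trials)) / trials)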

The answer is here.

Wednesday, October 7, 2009

RPG Math - Problem 9

Recently, I have been talking to some professors in order to get advice on what classes I should take during the next few semesters so I can fill out my Program of Study Form, where I will create a plan for what courses I will take during my time at UIUC. One professor I talked to is Steven LaValle, a robotics researcher who specializes in motion-planning and navigation algorithms for robots. Some of the problems discussed in his book include things like, given a map of an environment and limited sensing ability, what's the best way for the robot to move in order to get enough data to find out where it is in that environment?

Recently, when I was visiting the Amtgard group in Peoria, the first part of the quest involved a motion strategy problem very similar to these. The scenario was as follows:

- The battlefield was approximately a rectangular field. At one corner was the "destination."
- At the beginning of the game, all the 'questers' (about 6-8 of them) were blindfolded, disoriented, and placed at an unknown location on the battlefield.
- The goal of the questers was to get to the destination. If a player ran into the edge of the battlefield, he would be directed back in by a reeve, no penalty suffered. However, there were also "kill zones" in the battlefield that contained enemies. If a player moved into (or too close to) one of the "kill zones", the enemies would kill him. (The player would then come back to life a few minutes later where he was, and would have to move away from the kill zone before continuing.)
- Although the questers were blindfolded, they could still hear what was going on, which meant they could hear if someone was getting killed and would then know approximately what direction that kill zone was in.

Although I had actually read some of LaValle's book, I didn't remember enough of the part about motion planning under incomplete information, so that didn't really help me much - I got hit a total of three times (although I only died once - it took two hits to kill me, and I got healed when I got to the destination). But that did give me an idea for another RPG Math problem. This is a much simplified version of what happened in the game, but it should still be interesting and give me a chance to talk about some more algorithm techniques. With that out of the way, we now present...

Problem 9: The Killing Fields

Consider a square battlefield X units on a side. The battlefield contains a total of N "kill zones", the locations of which are unknown, and each of which is a circle of radius at most 1. (Kill zones may extend off the side edges of the battlefield, but may not cross or overlap the top or bottom.) The goal of the "attacker" is to go from the bottom edge of the battlefield to the top edge. The attacker can start at any point on the bottom edge and move along whatever path he wants to get to the top. If the attacker hits a "kill zone", he is dead and has to go back to the beginning and choose a new path, but he knows exactly where he was when he was hit. The goal of the attacker is to minimize the number of deaths before he is able to get to the top.

9a. Prove that no matter what method the attacker uses to choose his paths, there is an arrangement of "kill zones" such that he will take at least N deaths. (Hint: Imagine that you were the defender and you knew the attacker's strategy. How would you place the kill zones?)

9b. Suppose that N < (X/4). Show that the attacker has a strategy such that he will never take more than N deaths.

9c. Suppose that N > (X/2), and that the attacker uses the strategy that minimizes the worst-case number of deaths he will suffer. What is that worst-case number of deaths, in terms of N and X?

The solution is here.

Wednesday, September 30, 2009

RPG Math - Problem 8 (Now With More Combat Action!)

This problem isn't based on a role-playing game but on a tabletop wargame; it is still too good to pass up.

The game this problem is based on is Warmachine. Warmachine is a wargame in the same vein as Warhammer Fantasy or Warhammer 40k, where you have miniatures that you move around on a board by measuring distances rather than on a square or hex grid. However, Warmachine has several features that make me like it more than those other games:

1. There are fewer models you have to put together to form a decent army, and each model has fewer pieces.

2. There is more variety in units - each different type of unit has unique special powers. This makes the game much more interesting and more strategic because you have to look at what the opponent has brought to the table (you are allowed to look at the "stat cards" that have the opponents' units' stats and powers on them) and figure out how best to counter their powers with your own. Also there is more strategy in army selection as you try to put together an army that has powers that complement each other. For example, one very useful combo is the "Bile Thrall", which applies a "corrosion" effect to targets it hits with its Bile Cannon, combined with the "Cankerworm," a large worm-shaped "warjack" (i.e. battle robot) which has an Ablation ability that allows it to ignore half the target's armor when it hits an enemy unit that has corrosion on it. In one game I successfully used this combo to kill, in a single turn, an enemy warjack that was so heavily armored it would have been very difficult to kill any other way. Unfortunately I then made a tactical blunder: I left my "warcaster" (the army's leader) exposed in a place where the enemy could use one of his last remaining units to charge and kill him.

Anyway, this is an RPG Math post, so let's get right to the puzzle:

Problem 8: "Focus Fire!"

In Warmachine, there are some units (like squads of infantry) that are composed of lots of identical models. Some of these units have the "Combined Melee Attack" or "Combined Ranged Attack" special ability. This special ability allows the unit to, rather than having each model in the unit attack separately and make separate attack and damage rolls, combine several models' attacks into one attack with bonuses to the attack and damage rolls.

When using combined attacks, the player can choose how to group the models into attacks. For example, if a unit had 8 models, the player could choose to make 8 separate attacks, to make one attack that combined all 8 models, to make two attacks using 3 models each and one attack using the remaining 2 models, etc.

Assume that the unit has N models, and the player's goal is to maximize the total expected amount of damage done to the target. Let F(x) be the expected damage done from one attack with x models. Suppose you are given the values of F(x) for each value of x between 1 and N inclusive. Find a polynomial-time algorithm* to compute the best possible grouping.
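
For small N you can simply enumerate every possible grouping and score it; here is a minimal brute-force sketch along those lines (the F values are made up). It takes exponential time in N, which is exactly why the problem asks for a polynomial-time algorithm instead.

def partitions(n, max_part=None):
    """Yield every partition of n as a non-increasing tuple of group sizes."""
    max_part = max_part or n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def best_grouping(N, F):
    """F: mapping from group size to the expected damage of one combined attack."""
    return max(partitions(N), key=lambda groups: sum(F[g] for g in groups))

# Hypothetical expected-damage table for a unit of 5 models.
F = {1: 2.0, 2: 4.5, 3: 6.5, 4: 8.0, 5: 9.0}
print(best_grouping(5, F))      # the best grouping for these made-up numbers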

Hint: Let G_i(j) be the best possible expected damage using j models in groups of no more than i each. Think about how to calculate the values of G_i if you already know the values of G_(i-1).

The solution is here.

*This means that the running time of the algorithm is bounded by A * n^B for some constants A and B, where n is the size of the input.

Wednesday, September 23, 2009

RPG Math - Problem 7

I already gave you one problem about algorithms, so it's time to give you a problem related to what I am studying in the other class I am taking this semester - combinatorics.

Problem 7: By Our Powers Combined

In the Xbox 360 game "Marvel: Ultimate Alliance 2," the player controls a team of superheroes as they battle evil. One of the key weapons in this fight is "Fusion" attacks, which are special attacks that combine two heroes' superpowers. Each pair of heroes has a Fusion attack, and when the player chooses to use a Fusion attack he chooses which two heroes to do the attack with. (For example, a team of 4 would have 6 different possible pairings of heroes and thus 6 different Fusion attacks.) There are three types of Fusion attacks: "Targeted," which are attacks that do a lot of damage to a single target (useful for fighting bosses), "Clearing", which attack all enemies in a wide area (useful for clearing out large groups of enemies), and "Guided", which allows the player to steer the attack after it fires to sweep up as many enemies as possible in the attack.

Since every pair of heroes has a Fusion attack, this situation can be represented as a complete graph with the vertices representing heroes, the edges representing pairs of heroes, and each edge labeled with what kind of Fusion attack that pair has.

There are 24 heroes in the game, and the player can bring 4 of them into battle at a time. Show that it is not possible to assign Fusion types to pairs of heroes in such a way that no matter what team of 4 the player chooses, he will always have at least one of each kind of Fusion attack available.
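
A minimal sketch of a checker may help clarify what is being claimed: given any assignment of Fusion types to pairs, it searches for a team of 4 that is missing some type. The problem asks you to prove that this search always finds such a team, no matter what assignment you feed it.

from itertools import combinations
import random

HEROES = range(24)
TYPES = ["Targeted", "Clearing", "Guided"]

def find_team_missing_a_type(fusion):
    """fusion: dict mapping frozenset({a, b}) -> Fusion type for that pair of heroes."""
    for team in combinations(HEROES, 4):
        present = {fusion[frozenset(pair)] for pair in combinations(team, 2)}
        if len(present) < 3:
            return team                     # this team lacks at least one Fusion type
    return None

# Try it on a random assignment.
fusion = {frozenset(pair): random.choice(TYPES) for pair in combinations(HEROES, 2)}
print(find_team_missing_a_type(fusion))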

Major Hint: If there were only 6 heroes in the game, teams only consisted of 3 heroes, and there were only two types of fusions, then the problem would be the special case of Ramsey's theorem described in the first "Example" in the link, and there would be no way to do it. You may use this result in solving the problem, and the technique used to solve the problem is very similar to the technique used in proving that result.

The solution is here.

Sunday, September 13, 2009

RPG Math - Problem 6

We haven't had one of these in a while. It's time for another round of...

Problem 6: Taking Inventory

This problem is related to algorithms. For this problem, we will need to introduce some new terminology. This new terminology may appear in other algorithm-related problems in the future.

A decision problem is a problem which asks, given a certain input, whether that input satisfies a certain condition. For example, the Boolean satisfiability problem asks, given a Boolean formula (such as "A and B or (not (C and A))"), whether or not there exists an assignment of "true" and "false" values to the variables (A, B, and C in this case) that will make the result of the formula "true". The independent set problem asks, given a graph G and an integer L, whether it is possible to choose L vertices such that no two of the vertices chosen have an edge between them.

An instance of a decision problem means a particular input for the decision problem. For example an "instance" of the Boolean satisfiability problem means a particular Boolean formula. An instance of a decision problem of course has one answer, "true" or "false".

To reduce a decision problem P to another decision problem Q means to show a simple* way to, given an instance of problem P, find an instance of problem Q that has the same answer. For example, consider the following two problems:

P: Given a graph G and an integer N, is there a way to color the vertices of G with N colors so that no edge of G has endpoints that are the same color?

Q: Given a partition M of the plane into regions, and an integer N, is there a way to color the regions of M with N colors in such a way that no two adjacent regions have the same color?

In this example, there is a way to reduce Q to P - given an instance of Q, create a graph with one vertex for each region and edges between vertices corresponding to adjacent regions, and feed that graph (together with the same N) to P. There is not, however, a corresponding way to reduce P to Q. (For instance, if N >= 4, then the answer to Q is always yes, but the same is not true of P.)
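
As a concrete (and entirely made-up) illustration of that reduction from Q to P:

# adjacency: region -> set of neighbouring regions (a made-up example map)
adjacency = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}

def map_to_graph(adjacency):
    vertices = set(adjacency)
    edges = {frozenset((r, s)) for r, neighbours in adjacency.items() for s in neighbours}
    return vertices, edges

# The instance (vertices, edges, N) of P has the same yes/no answer as the
# instance (adjacency, N) of Q that it was built from.
print(map_to_graph(adjacency))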

A problem P is equivalent to a problem Q if P can be reduced to Q and Q can be reduced to P.

---

Now with all the terminology taken care of, it's time to get to the problem.

Many computer games use a "grid inventory" system. This means that items that the character is carrying are arranged in a grid, with larger items taking up more space. For instance, a small item such as a gem might only take up one square of the grid, while a large item such as a suit of armor might take up, say, a 3x4 area of the grid. All items your character is carrying must fit in the grid, so you cannot pick up an item if you can't fit it in the grid. This image gives an example of what an inventory grid looks like. (If you can't see the image, hover over the line "NWN_inventory.jpg image by..." and click the "Zoom in" item on the menu bar that appears.)

The Inventory Grid Problem (IGP) is as follows: Given an NxM inventory grid, and a set of objects with associated sizes (e.g. you have an 8x10 grid, and you have one 4x6 object, three 3x3 objects, five 2x1 objects, etc.) is it possible to fit all the objects in the grid? (Note: Objects cannot be rotated.)

Just today I ran into a real-life situation that is very similar to the inventory grid situation. I have just bought miniatures for a miniatures combat game called Warmachine. In order to safely transport the miniatures without them being damaged, they sell large blocks of foam that are perforated in a grid pattern (see this image for an example), and you can remove squares of foam in order to insert your miniatures. This way each miniature is stored separately and surrounded by a cushion of foam. This image is an example of foam being used to store miniatures.

Suppose that you have a whole bunch of miniatures, and a block of foam of a given size, and you want to see if it is possible to fit all the miniatures into that block. This problem is similar to the Inventory Grid Problem above, with the exception that objects cannot be adjacent to each other - there must be a "cushion" or "buffer" of at least one square between each object. Thus, we have the...

Buffered Inventory Grid Problem (BIGP): Given an NxM inventory grid, and a set of objects with associated sizes (e.g. you have an 8x10 grid, and you have one 4x6 object, three 3x3 objects, five 2x1 objects, etc.) is it possible to fit all the objects in the grid such that no two objects are adjacent? (Again, objects cannot be rotated. While this isn't very realistic in terms of the miniatures problem, it makes the problems below much simpler.)
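
Here is a minimal brute-force sketch of both problems: it tries to place each object, in order, at every position, backtracking when it gets stuck (with buffer=True it also demands an empty one-square ring around each object, treating diagonal neighbors as adjacent). It takes exponential time in the worst case, which is consistent with the NP-completeness claim in Problem 6.2 below.

def fits(grid, r, c, h, w, buffer):
    rows, cols = len(grid), len(grid[0])
    if r + h > rows or c + w > cols:
        return False
    pad = 1 if buffer else 0
    for i in range(max(0, r - pad), min(rows, r + h + pad)):
        for j in range(max(0, c - pad), min(cols, c + w + pad)):
            if grid[i][j]:                      # some other object is already here
                return False
    return True

def place_all(grid, objects, buffer=False):
    if not objects:
        return True
    h, w = objects[0]
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if fits(grid, r, c, h, w, buffer):
                for i in range(r, r + h):
                    for j in range(c, c + w):
                        grid[i][j] = True       # tentatively place the object
                if place_all(grid, objects[1:], buffer):
                    return True
                for i in range(r, r + h):
                    for j in range(c, c + w):
                        grid[i][j] = False      # backtrack
    return False

grid = [[False] * 5 for _ in range(4)]                    # a 4x5 grid
print(place_all(grid, [(2, 3), (2, 2), (1, 4)]))          # IGP
grid = [[False] * 5 for _ in range(4)]
print(place_all(grid, [(2, 3), (2, 2)], buffer=True))     # BIGP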

-----

There are two problems here:

6.1. Show that the Inventory Grid Problem and the Buffered Inventory Grid Problem are equivalent.

(Hint. Reducing BIGP to IGP is fairly straightforward. Reducing IGP to BIGP is a little trickier.)

6.2. Show that the bin-packing problem** can be reduced to the Inventory Grid Problem. (This implies that IGP is NP-hard; since a proposed packing can be checked quickly, it is in fact NP-complete, meaning that there is (almost certainly) no algorithm that solves it in time polynomial in the length of the input. I might go into further detail on this topic in future questions.)

Solutions are here.

* "Simple" means "can be done in a time polynomial in the length of the input."

** The problem as described in the link is the optimization variant of the problem - find the minimum number of bins necessary. The decision problem variant of the problem is: given a number N, can you do it with N or fewer bins (thus the answer is "yes" or "no" as described above). Since we're dealing with decision problems here this is the one you want to use.

Monday, January 19, 2009

RPG Math - Problem 5

Problem 5: Mathematically Challenged

Dungeons and Dragons' 4th edition has a game mechanic known as "skill challenges," in which players work together to defeat a challenge that requires using certain in-game skills. Each player's character has a "skill modifier" that represents how good they are at the skill in question, and the challenge has a "difficulty class" (DC) that represents how hard the challenge is. (In the game, there may be more than one skill that can be used for a particular challenge, but this is irrelevant to our analysis because each character can simply use whichever skill he is best at.)

Players go in a circle (order decided on by the players), each one making a skill check. A skill check is made by rolling a 20-sided die and adding it to the skill modifier. If the result is equal to or greater than the DC, the check is successful. For example, if the skill modifier is 7 and the DC is 15, the player needs an 8 or more on the die to earn a success. You continue around the table until 8 successes or 4 failures have been achieved. If the players get to 8 successes first, they win the skill challenge; otherwise, they lose the skill challenge.

A player can also spend his turn to "aid another." This means making a skill check against DC 10. This check does not count as a success or failure. Instead, if it fails, nothing happens and you move on to the next person, while if it succeeds, the next player gets a +2 bonus to his next skill check.
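
Here is a minimal Monte Carlo sketch of this mechanic that you could use to compare strategies numerically. It is an estimate, not an exact analysis, and it reads the aid rule as giving +2 to whatever check comes next.

import random

def run_challenge(mods, dc, aiders=frozenset(), aid_dc=10, trials=100_000):
    """Estimate the win probability when the players listed in `aiders` always aid."""
    wins = 0
    for _ in range(trials):
        successes = failures = 0
        bonus = 0                                # +2 from a successful aid; lasts one check
        turn = 0
        while successes < 8 and failures < 4:
            player = turn % len(mods)
            check = random.randint(1, 20) + mods[player] + bonus
            bonus = 0
            if player in aiders:
                if check >= aid_dc:
                    bonus = 2                    # aid succeeded: the next check gets +2
            elif check >= dc:
                successes += 1
            else:
                failures += 1
            turn += 1
        wins += successes >= 8
    return wins / trials

mods = [8, 6, 4, 2]                              # the party described below
print(run_challenge(mods, dc=15))                # nobody ever aids
print(run_challenge(mods, dc=15, aiders={3}))    # the weakest player always aids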

Consider a skill challenge with a DC of 15 and 4 players with skill modifiers of 8, 6, 4, and 2 respectively.

(a) What is the optimal way to use "aid another" to maximize the chance of winning?

(b) Suppose that we want to make "aid another" less powerful by increasing the difficulty of the aid another check from 10 to a higher number. In this particular challenge, how high would we have to raise the difficulty to make never using aid another the optimal strategy?

(Author's Note: This problem has some minor differences from how skill challenges actually work in D+D, but these differences do not change the overall conclusion.)

Answer is here.

Sunday, December 21, 2008

Holiday Math - Bonus Problem!

Also in the spirit of the holiday season, I present a new math problem relating to Christmas gift-giving. This doesn't have anything to do with gaming, but it's still too good to pass up.

Bonus Problem: Divvying Up the Loot, Modern-Day Edition

The "white elephant gift exchange" protocol, featured in an episode of NBC's "The Office", is a procedure used for distributing Christmas gifts at large gatherings. In the protocol, each of the N participants brings a gift, for a total of N gifts, which are all placed into a central pile. The participants are numbered from 1 to N. The protocol consists of N rounds.

In the I'th round of the protocol, the following steps take place:

1. Player I selects a gift and places it in front of him. He can select a gift either from the central pile or any of the gifts in front of other players.
2. If he selects a gift from the central pile, the round ends. If he selects a gift from another player, then that player (who just had his gift taken) now has the option of choosing a gift, again either from the central pile or from another player. If he takes a gift from another player, then that player also has the option of choosing a gift as described. This process continues until someone chooses a gift from the central pile, at which point the round ends.
3. Once a gift has been selected, it cannot be selected again until the beginning of the next round.

After all N rounds have been completed, each player gets to keep the gift that is in front of them.
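
Here is a minimal simulation sketch of the protocol, assuming (as the question below does) that each participant has a strict preference order and always takes the most-preferred gift that is currently selectable; the preference lists are randomly generated for illustration.

import random

def white_elephant(prefs):
    """prefs[i] = list of gift indices, most preferred first, for participant i."""
    n = len(prefs)
    gift_of = {}                                   # participant -> gift in front of him
    for i in range(n):                             # round i: participant i chooses first
        frozen = set()                             # gifts already selected this round
        chooser = i
        while True:
            held = set(gift_of.values())
            pile = [g for g in range(n) if g not in held]
            stealable = [g for g in held if g not in frozen]
            choice = min(pile + stealable, key=prefs[chooser].index)   # most preferred option
            frozen.add(choice)
            victim = next((p for p, g in gift_of.items() if g == choice), None)
            if victim is not None:
                del gift_of[victim]
            gift_of[chooser] = choice
            if victim is None:                     # taken from the central pile: round ends
                break
            chooser = victim                       # whoever just lost a gift chooses next
    return gift_of

prefs = [random.sample(range(4), 4) for _ in range(4)]   # 4 participants, random preferences
print(prefs)
print(white_elephant(prefs))                             # participant -> gift he keeps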

Is it possible that after this protocol is completed, there will be two people (A and B) who each prefer the gift that the other one ended up with to the one he ended up with himself? (Assume that each participant has a strict preference ordering over the available gifts, and that each time a participant makes a selection, he chooses the gift he most prefers out of those that are available for selection.)

Saturday, December 20, 2008

RPG Math - Problem 4

It's time for more RPG Math again! This time, in the spirit of the holiday season, this puzzle is based upon that elusive goal that motivates almost every D+D adventurer - and underpins much of the activity in our real-world economy during the holiday season. That's right, I'm speaking of nothing other than treasure, money, and loot!

Problem 4: Divvying Up The Loot

The Dungeons and Dragons 3.5 edition rulebook contains a section on suggested methods for dividing up treasure. In this section, the book uses an example of a party of four adventurers (A, B, C, and D) who have stumbled across a treasure hoard consisting of 5,000 GP (gold pieces) and a magical shield. It suggests (paraphrased):

If two or more players want the shield, they should bid [on how much it is worth to them]. The highest bidder gets the shield, and the winning bid is added to the treasure stash when calculating each player's share of the loot. Then everyone gets an equal share, and the player who gets the shield has the amount of his bid deducted from his share of the gold. For example, if A won the bidding with 800 GP, then the total treasure size is 5800 GP, so each person gets 1450. That means A would get 650 GP and the shield, while B, C, and D would get 1450 GP each.

The maximum that a player can bid is his share of the treasure. For example, in this case, A could bid at most 1250 GP (one-fourth of the total of 5000 GP.) In this case, A would end up with the shield while the others would split up the 5000 GP worth of gold.
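
For reference, here is a minimal sketch of the arithmetic in the book's suggested method, reproducing the example above (whether the method itself makes sense is exactly what the problem asks):

def split_loot(gold, winning_bid, winner, players):
    total = gold + winning_bid                 # the item counts at the winning bid
    share = total / len(players)
    payout = {p: share for p in players}
    payout[winner] = share - winning_bid       # the winner "pays" his bid out of his share
    return payout

print(split_loot(5000, 800, "A", ["A", "B", "C", "D"]))
# -> A gets 650 GP (plus the shield); B, C, and D get 1450 GP each.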

What is wrong with this section?

Answer is here.

Wednesday, December 17, 2008

RPG Math - Problem 3

Problem 3: Take Cover!

For this problem, we will say a line segment A crosses a line segment B if the following statements hold (see the code sketch after the definition):

1. A and B are not parallel.
2. A intersects B.
3. The point at which A intersects B is not one of the endpoints of A.
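
Here is a minimal sketch of that test in code, using exact rational arithmetic so there are no floating-point surprises; segments are given as pairs of (x, y) points.

from fractions import Fraction

def crosses(a, b):
    """Does segment a cross segment b according to conditions 1-3 above?"""
    (p1, p2), (q1, q2) = a, b
    d1 = (Fraction(p2[0] - p1[0]), Fraction(p2[1] - p1[1]))    # direction of a
    d2 = (Fraction(q2[0] - q1[0]), Fraction(q2[1] - q1[1]))    # direction of b
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if denom == 0:
        return False                   # condition 1: parallel (or degenerate) segments never cross
    # Solve p1 + t*d1 == q1 + s*d2 for the parameters t (along a) and s (along b).
    rx, ry = q1[0] - p1[0], q1[1] - p1[1]
    t = (rx * d2[1] - ry * d2[0]) / denom
    s = (rx * d1[1] - ry * d1[0]) / denom
    if not (0 <= t <= 1 and 0 <= s <= 1):
        return False                   # condition 2: the segments do not intersect
    return 0 < t < 1                   # condition 3: the intersection is not an endpoint of a

print(crosses(((0, 0), (2, 2)), ((0, 2), (2, 0))))   # True: they cross at (1, 1)
print(crosses(((0, 0), (2, 2)), ((2, 0), (2, 4))))   # False: they only meet at an endpoint of a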

---

In Dungeons & Dragons 3.5 edition, battle takes place on a square grid in which each character occupies one square. Some of the boundaries between squares are considered "walls" and cannot be seen through. A defender D is considered to be in the line of sight of attacker A if there exists a point in the attacker's square (including boundaries) and a point in the defender's square (including boundaries) such that the line between those two points does not cross any wall. A defender D is considered to have cover from attacker A if for every corner of A's square, there exists a corner of D's square such that the line between those two points crosses a wall.

Now the questions:

3.1. Show that there exists a battlefield configuration such that A has line of sight to D and D does not have cover from A, but every line between an interior point of A's square and an interior point of D's square crosses a wall.

3.2. Suppose we remove condition (3) in the definition of crossing. (In fact, the D+D rule book is not clear on whether condition (3) should be in there.) Now show that there exists a battlefield configuration such that D has cover from A, but no line between an interior point of A's square and an interior point of D's square crosses a wall.

---

Now suppose we specify that instead of so-called "paper-thin" walls that just go along boundaries of squares, walls take up entire squares or contiguous groups of squares. Now we say a line segment A crosses a wall B if either of the following conditions hold:

1. The line segment A contains an interior point of B.
2. The line segment A crosses a boundary of B.

3.3. Solve problem 3.1 with these new conditions.
3.4. Solve problem 3.2 with these new conditions.

---

Solution to this problem is here.