(Posted by Alan on the Delphi forum, 10/30/2001):
Here's a note from Bill Fisher, Kingpin of Quicksilver Software:
Actually, designing the AI has been more time-consuming than it has been difficult. We'll know for sure when we start our testing. The key has been to plan the game from the beginning so that each AI module has a well-defined, carefully limited set of tasks to perform, and to make sure that each of those tasks can be broken down into a simple set of need-based decisions.
I have two rules for writing game AI:
1. Don't do anything that looks stupid.
2. Try to do things that look smart.
In most games, the challenge is getting past rule #1, and that's often quite difficult. For example, in Conquest of the New World, we gave the AI a detailed list of buildings to construct in each city, along with the preferred order in which to build them. But we also had to tell it about a number of special cases, such as when there isn't enough land and it needs to tear down something that's no longer productive to make room for something that's now needed. These sequences were based on months of careful human play and analysis.
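To make that concrete, here's a rough Python sketch of that kind of prioritized build list with a special-case override. Every name in it, such as City, BUILD_ORDER, and is_productive, is invented for illustration; it's not the actual Conquest code.

from dataclasses import dataclass, field

# Illustrative only: City, BUILD_ORDER, and is_productive are invented for
# this sketch, not taken from the actual Conquest of the New World code.

BUILD_ORDER = ["farm", "mill", "warehouse", "fort", "cathedral"]

@dataclass
class City:
    buildings: list = field(default_factory=list)
    free_land: int = 0
    def is_productive(self, building: str) -> bool:
        return building != "fort"  # placeholder rule, just for the example

def choose_construction(city: City):
    for wanted in BUILD_ORDER:
        if wanted in city.buildings:
            continue                      # already built, check the next item
        if city.free_land > 0:
            return ("build", wanted)      # normal case: just follow the list
        # Special case: no land left, so tear down something that's no longer
        # productive to make room for the thing we need now.
        for old in city.buildings:
            if not city.is_productive(old):
                return ("tear_down", old)
        return None                       # nothing sensible to do this turn
    return None                           # list exhausted

city = City(buildings=["farm", "mill", "fort"], free_land=0)
print(choose_construction(city))          # ('tear_down', 'fort')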
In Castles II, we addressed this challenge by creating a design where simply making "reasonable" choices from a well-defined sequence of play would result in very effective AI behavior. In formal mathematical terms, I like to say that "local optimization equals global optimization" -- that is, doing what looks smart from a very narrow view of the world tends to be the right choice. We're trying for the same approach in MOO3. Not all game designs are like this. In chess, in fact, it's often the exact opposite. Capturing your opponent's Queen with that pawn, while it looks good right now, will often lead to a game-losing chain of events one or two turns down the road.
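Here's what that greedy, narrow-view decision looks like as a rough Python sketch; the actions and scoring terms are made up for the example and aren't actual game values.

# "Local optimization equals global optimization": score each legal choice
# with a narrow, immediate evaluation and simply take the best one.

def pick_action(candidates, evaluate):
    """Greedy choice: whatever looks best from the current, narrow view."""
    return max(candidates, key=evaluate)

actions = [
    {"name": "build farm",     "food": 3, "cost": 2},
    {"name": "build barracks", "food": 0, "cost": 3},
    {"name": "do nothing",     "food": 0, "cost": 0},
]
best = pick_action(actions, lambda a: a["food"] - a["cost"])
print(best["name"])   # "build farm", the locally best choice this turn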
For MOO3, here's what we've done:
1. Modularize: Break down the game into small, simple sub-units with clearly defined rules and decision processes.
Needless to say, simulating an entire galaxy is a daunting challenge. We took a "divide and conquer" approach, picking a clear and easily understood metaphor to help us keep it straight in our own minds. From the very start, the galaxy was divided into various departments, just like a government would be. Each was given a set of tasks, which were assigned to various leaders within the department.
I often worry about getting too abstract and theoretical in AI design. It's tempting to build a monument to your own ingenuity, which nobody else can understand. The advantage of this departmental model is that it's quite easy to see how the various aspects of a galactic empire can be organized. It's somewhat abstract, but virtually everything has a clear equivalent in current human governmental structures, so the designers didn't get lost in abstruse mathematical abstractions. Everything boiled down to a relatively simple analogy with a familiar system.
Once the departments and sub-departments were laid out, the designers then worked through each system, one at a time, to make sure that its responsibilities were clear. Usually, this ended up looking a lot like a board game design. Alan, Tom and I have played lots of board games together over the years, and we like to think in the clean, elegant style that's required by such games. Early in the design process, I spent a lot of time talking with Alan about the departments, and was happy to see that each one essentially had a "combat results table" with a set of modifiers. This gave them plenty of variety, yet was simple enough that I could see how to write a very effective AI with minimal risk.
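As a rough illustration of what one of those "combat results table with modifiers" decisions might look like in code, here's a Python sketch; the table, modifiers, and outcomes below are all invented, not the real MOO3 rules.

import random

RESULTS_TABLE = {                 # modified roll -> outcome
    range(0, 4):  "attacker repulsed",
    range(4, 8):  "exchange",
    range(8, 13): "defender destroyed",
}

def resolve(base_roll: int, modifiers: dict) -> str:
    roll = base_roll + sum(modifiers.values())
    roll = max(0, min(roll, 12))          # clamp to the table's range
    for bracket, outcome in RESULTS_TABLE.items():
        if roll in bracket:
            return outcome
    return "no effect"                    # unreachable after clamping

# A 2d6 roll plus whatever situational modifiers a department cares about.
outcome = resolve(random.randint(1, 6) + random.randint(1, 6),
                  {"terrain": -1, "tech_edge": +2})
print(outcome)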
2. Stabilize: Design each subsystem so it's inherently stable.
I wrote my first simulation game in 1975 (on punch cards). The first lesson I learned was that it's very easy to design a system that spirals off into disaster. For example, if a lack of food causes farmers to die immediately, reducing production, then any slight food shortage will quickly kill off the entire population. That's a simple system that tends toward instability. I like to think of this metaphorically as balancing a marble on a bowl that's been turned upside-down. The marble will quickly roll slightly to one side, picking up speed and rolling even faster as it falls toward the edge.
What we want to do is design systems that act like a marble inside a bowl that's right-side up. In a system like that, no matter where you put the marble, it will tend to roll back toward the center. This is what I mean by "inherently stable." That's how we've designed the individual systems in MOO3. They can swing a great deal from side to side, but they tend not to spiral out of control. There are checks and balances that always pull the system back from the precipice. That's not to say they're dull; the swings back and forth can be quite interesting. What it means is that they aren't fragile. Fragile systems in games are not fun.
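In code, the difference between the two bowls might look something like this rough Python sketch; the formulas and numbers are invented to show the shape of the idea, not actual MOO3 values.

# Fragile version: a food shortage kills farmers outright, so production drops
# and the shortage feeds on itself. Stable version: a shortage only slows
# growth, so the population drifts back toward what the food supply can carry.

def fragile_step(population, food_per_farmer=1.0, food_needed=1.2):
    food = population * food_per_farmer
    deficit = max(0.0, population * food_needed - food)
    return max(0.0, population - deficit)      # starving farmers die at once

def stable_step(population, carrying_capacity=100.0, rate=0.2):
    # Logistic-style growth: the further from capacity, the stronger the pull
    # back toward it, like the marble rolling to the bottom of the bowl.
    return population + rate * population * (1 - population / carrying_capacity)

p_fragile, p_stable = 100.0, 120.0
for _ in range(10):
    p_fragile = fragile_step(p_fragile)
    p_stable = stable_step(p_stable)
print(round(p_fragile, 1), round(p_stable, 1))
# The fragile population spirals toward zero; the stable one settles near 100.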
3. Rationalize: Create "need-based" systems.
Instead of giving the AI (or the player) a long list of "stuff to build," give them a long list of specific needs with specific solutions. For example, instead of just allowing players or AIs to group ships in any way they want, define task forces around specific needs, such as a "planetary bombardment task force." At a higher level, give the AI a list of strategies to pursue. When the strategy says "attack a planet," it's then a simple matter to realize that this will require a bombardment task force, so the AI knows to build one.
This approach, of course, requires a lot of careful pre-planning and understanding of what each leader in each department will want or need to do. But, once that's done, the hard part of the AI design is also done. The AI can simply choose a solution from a logically constructed list, influenced by various other factors that are similarly defined throughout the design. And the logic for making such a choice, because it's all based on a "need" orientation, is usually quite simple.
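To show the shape of such a need-based chain, here's a rough Python sketch; the strategies, task force types, and ship counts are all invented for illustration, not actual MOO3 data.

STRATEGY_NEEDS = {                       # strategy -> the solutions it needs
    "attack_planet": ["bombardment_task_force"],
    "defend_system": ["picket_task_force"],
    "expand":        ["colonization_task_force"],
}

TASK_FORCE_TEMPLATES = {                 # solution -> ships that satisfy it
    "bombardment_task_force":  {"battleship": 2, "bomber": 4, "escort": 3},
    "picket_task_force":       {"frigate": 4},
    "colonization_task_force": {"colony_ship": 1, "escort": 2},
}

def orders_for_strategy(strategy: str, available_ships: dict) -> dict:
    """Return how many of each ship class still need to be built."""
    to_build = {}
    for need in STRATEGY_NEEDS.get(strategy, []):
        for ship, count in TASK_FORCE_TEMPLATES[need].items():
            shortfall = count - available_ships.get(ship, 0)
            if shortfall > 0:
                to_build[ship] = to_build.get(ship, 0) + shortfall
    return to_build

print(orders_for_strategy("attack_planet", {"battleship": 1, "escort": 3}))
# {'battleship': 1, 'bomber': 4}: the AI knows exactly what it still needs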
That's a long explanation for a simple question, but I hope it shows how we're addressing this very large challenge. In the end, by breaking down a given decision into simple, clear parts that behave based on well-defined needs, we have essentially designed everything needed for the AI except the code and a few high-level strategies. By facing the challenges of the AI design from the start, we've minimized our risks and, as a secondary effect, created a system that's very easy for players to understand and enjoy.