I had an interesting idea for the AI. Obviously I don't know about the programming side, but I am intrigued by the concept so I thought I'd put it out for comments.
Humans learn and adapt. They develop new strategies and figure out how to use old ones more effectively. This means that any static AI we build will seem more and more incompetent over time as the humans get better.
I read in the old AI threads that the tactical and operational AIs were coming along fairly well, but that there were no good plans for the grand strategy AI. It seems that we will have to use heuristics for that part, which means the strategy could easily go obsolete.
In the past, many attempts have been made to build an AI that reacts to the human's strategy. They usually don't work, because the human mind is far better at innovation and grand strategy.
Could we make an AI that simply copied the player's strategy if it saw that the strategy was more effective than anything in its toolbox? That way, a brilliant and super-effective new plan discovered by the player would also be used by the AI, so the AI wouldn't be helpless against that plan in the future. This copycat AI would basically let the human do the thinking for it.
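I don't know how you'd really code this, but a toy sketch of the "copy it if it's better" rule might look like the following. Every name here is invented, just to make the idea concrete:

```python
def maybe_adopt(toolbox, strategy_name, observed_win_rate):
    # toolbox: dict mapping strategy name -> the win rate we've
    # measured for it so far (all hypothetical names).
    # If the player's strategy is outperforming everything we
    # already know, copy it into our own toolbox.
    if observed_win_rate > max(toolbox.values(), default=0.0):
        toolbox[strategy_name] = observed_win_rate
```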
If the player then developed a strategy that beat the one the AI had adopted, the AI could simply copy that one too. Eventually, the AI could build up a memory that told it which strategy worked best against which others.
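That memory could be as simple as a tally of wins and games for each pair of strategies. A rough sketch, again with made-up names:

```python
from collections import defaultdict

# (our_strategy, their_strategy) -> [wins, games_played]
matchups = defaultdict(lambda: [0, 0])

def record_game(ours, theirs, we_won):
    # Update the tally after each finished game.
    stats = matchups[(ours, theirs)]
    stats[1] += 1
    if we_won:
        stats[0] += 1

def win_rate(ours, theirs):
    # How well has this strategy done against that one so far?
    wins, games = matchups[(ours, theirs)]
    return wins / games if games else 0.0
```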
Now, this by itself is pretty boring. My big idea was this: have the AIs share memory files with all of the other AIs whenever the computers are linked to form a multiplayer game. There should be plenty of idle time while the players are planning their moves.
This way, the AI learns the best strategies used by anyone connected to the game. It also has the files from anyone who has played a game with any of those people, and anyone who has played with those people, et cetera. If there is enough file sharing, the AI would have access to all of the best strategies used by the best human players on the web.
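If the memory really is just a tally table, the sharing step might be nothing fancier than pooling the counts from the two machines. Another sketch with invented names:

```python
def merge_memories(mine, theirs):
    # Both arguments are matchup tables as sketched above:
    # (our_strategy, their_strategy) -> [wins, games_played].
    # Pooling the raw counts means strategies learned on another
    # machine carry their track records with them.
    for matchup, (wins, games) in theirs.items():
        stats = mine.setdefault(matchup, [0, 0])
        stats[0] += wins
        stats[1] += games
    return mine
```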
And if we do a good job, the AI will know exactly when to use the strategies. It would simply memorize the situations in which the humans used each strategy. It would build a decision grid, so if it sensed that the human was using a certain strategy, it would search its ever-growing library of strategies and pick the one best able to counter the human.
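The "decision grid" could then be a straight lookup in that table: given what the opponent seems to be doing, pick the library entry with the best track record against it. A sketch, with hypothetical names:

```python
def best_counter(matchups, our_strategies, opponent_strategy):
    # Scan our library for the strategy with the highest recorded
    # win rate against what the opponent appears to be doing.
    def rate(ours):
        wins, games = matchups.get((ours, opponent_strategy), (0, 0))
        return wins / games if games else 0.0
    return max(our_strategies, key=rate)
```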
It seems to me that most of this could be done using simple if-then-else commands. The hardest part would be figuring out which strategy the human was using, but that should just be a matter of recording what the player is doing and matching it against the attributes of strategies the AI has memorized. And if the human's actions do not match any prerecorded strategy, the AI knows that the human is doing something new enough to be recorded as a separate strategy.
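The matching might work something like this, assuming each stored strategy carries a "signature" set of characteristic actions (my invention, just to make the matching concrete). If nothing matches well enough, we treat it as new:

```python
def classify(observed_actions, signatures, threshold=0.8):
    # observed_actions: set of things the player has been seen doing.
    # signatures: dict of strategy name -> set of characteristic actions.
    best_name, best_score = None, 0.0
    for name, signature in signatures.items():
        if not signature:
            continue
        overlap = len(observed_actions & signature) / len(signature)
        if overlap > best_score:
            best_name, best_score = name, overlap
    if best_score >= threshold:
        return best_name
    return None  # no match: record this as a new strategy
```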
If this system works, it will produce an AI that is always able to match or even exceed the strategic abilities of human players. And the best part is that we shouldn't have to code any strategies. The AI should learn the basics from our playtesting.
Then there is the issue of the AI in scenarios. In most games, the AI in heavily modified scenarios is worse than the general game AI because it has no rules for dealing with the new situations. But this system would let the AI learn how to deal with any scenario, no matter how weird. It would simply watch how the humans solved the problems. The scenario designer would get a few people to play every side in the scenario, and by the time they were done playtesting, the AI would be as competent as the humans.
It seems to me that effort spent making the "copycat and communicate" system work will give better results than effort spent teaching the AI a particular strategy. Teaching something how to learn beats teaching it any one thing.
Is there a chance that this will work, or is it just a silly dream?