One of the things that makes SMAC such a great game is the fact that there’s simply no way you can do everything at once. Every decision made has at least one, and oftentimes several, opportunity costs. While I have played the game a ton of times and pursued every strategy discussed on the boards (yes, much as I hate to admit it, even the Rover Rush), I have not yet sat down with another player and compared detailed notes about the inner workings of an empire, and I think it could prove quite enlightening, especially in light of the recent “attack” thread.
What I propose then, is the creation of a “test map” for the purposes of an experiment. We generate or have someone build a test map, make it available for download, and then have interested people play a game out, taking careful notes about each turn’s activities. If we set such a test map up, it will be of vital importance to map out specifically what information we’re looking for. Here’s a list to get us started, but by all means, feel free to add more!
1) A basic description of each player’s style
2) Population and number of bases
3) Total net mineral counts, empire wide
4) Total support costs, empire wide
5) SE settings run and cash/lab/psych allocations
6) Per turn income
7) Per turn lab outputs
8) Tech Advances gained
9) The results of popped pods
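If it would help keep submissions consistent, here’s a rough sketch of what a per-turn record covering the list above might look like, flattened to one line so results are easy to paste into a thread. All the field names and the sample numbers here are just my own invention for illustration, not anything pulled from the game files:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical per-turn record mirroring the numbered list above.
@dataclass
class TurnRecord:
    turn: int                # game year / turn number
    population: int          # total empire population
    bases: int               # number of bases
    net_minerals: int        # total net mineral count, empire wide
    support_costs: int       # total support costs, empire wide
    se_settings: str         # e.g. "Demo/Green"
    allocation: str          # cash/lab/psych split, e.g. "50/50/0"
    income: int              # per-turn energy income
    lab_output: int          # per-turn lab output
    techs_gained: List[str] = field(default_factory=list)
    pods_popped: List[str] = field(default_factory=list)

def to_csv_row(r: TurnRecord) -> str:
    """Flatten a record into one comma-separated line for easy sharing."""
    return ",".join([
        str(r.turn), str(r.population), str(r.bases),
        str(r.net_minerals), str(r.support_costs),
        r.se_settings, r.allocation,
        str(r.income), str(r.lab_output),
        ";".join(r.techs_gained), ";".join(r.pods_popped),
    ])

# Example entry with made-up numbers:
row = TurnRecord(turn=2120, population=14, bases=5, net_minerals=22,
                 support_costs=4, se_settings="Demo/Green",
                 allocation="50/50/0", income=18, lab_output=21,
                 techs_gained=["Industrial Base"], pods_popped=["minerals"])
print(to_csv_row(row))
```

One line per turn like this would make it trivial to drop everyone’s results into a spreadsheet and compare them side by side.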
In order to remove as much randomness from the test as possible, I think we should use no pod scattering and no random events, and use directed research. We can use any map size, but obviously the larger the map, the more pronounced the differences will be, which might be the way to go.
The goal here is not to see who can win the game faster or more spectacularly, or to see who can transcend first, but to illustrate various game approaches, comparing them with real game data. As results begin coming in, we’ll also get the opportunity to get an “under the hood view” of how different strategies work, and how similar or pronounced the differences really are. It’s one thing to say that “a Momentum player will have a slower rate of research.” It’s quite another to be able to look at results that have been turned in and say, “Well, in comparing these two games we can see that the Momentum player was generating 82 research points per turn, while the Hybrid guy was cranking out 117.”
I dunno….I think it’s a pretty cool idea. If anybody else is interested, reply here. If anyone knows of a decent map we can use for testing, lemme know, although I don’t see a problem with just using a randomly generated one.
As far as results go, I think we should keep records every turn in order to capture the most detailed data possible, though some people might find that a bit overwhelming.
Thoughts?
-=Vel=-