3d & Spherical maps discussion


  • #61
    Apologies for not posting anything in a while. Anyway, I agree that there are two discussions going on here, one about the technical feasibility of different options, and another about what's really better in a gameplay sense. I'll try to address the former first.

    The tradeoff between speed and memory that's been discussed is not equivalent to a decision between tiles and areas. The possible speed problem I see with Vet's (sorry, I just can't use the second person form after all this time! It would be weird...) compression scheme is that it really attempts to minimize the memory, and this means that the computer has to uncompress the data to a usable form when processing it, and then compress the results back into a very small space. This can be alleviated by making the internal representation of the data "partially compressed" so that the conversion between compressed and uncompressed data is either very fast, or so that parts of the uncompressed data can be used directly without conversion. This would happen at the expense of memory, but then again, "there is no such thing as a free lunch". For example, one of the biggest obstacles that Vet just shrugs off is the data about adjacent regions. I just can't see any reasonably fast way of extracting that information from the data structure he proposed. Hence, it would have to be stored separately, adding several bytes to the memory consumption estimate for an area.

    I made a few calculations of my own, by the way. The purpose of these is to illustrate approximately where tiles might be the more appropriate way of storing information, and when the opposite is true. I've thought about four possible levels of compression and named them min, med, max and uncompressed. The names refer to memory consumption, so that min is the smallest possible and uncompressed the largest (but perhaps fastest). The first two basically describe regions that are based on hexagonal tiles (just like Vet's example used rectangular tiles), and the other two are for a truly free-form map. But more about that later.

    The basis for these compression schemes is that each area consists of

    Properties + Adjacent area references + Origin + Border

    Properties (denoted by capital P from now on) are the properties of the area in a layer (I am not defining layers here... I will give examples of different values for P). The references to adjacent areas are 2-byte numbers (based on an assumption that there are no more than 65536 areas in a layer), and their count is not constant. For the purposes of the examples I've assumed that each area has six adjacent areas (that's the case with a hexagonal grid at least), so A always equals 2*6=12 bytes. Origin on the other hand always consists of (x,y)-coordinates in the map, and presumably each coordinate can be represented with 2 bytes (giving a resolution of 65536*65536, enough for our purposes). So O is always 4 bytes. So, the number of bytes required to store an area is

    Size = P+12+4+B = 16+P+B
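To make the layout concrete, here is a rough Python sketch of one area record. The field names are mine, purely illustrative, not anything we've agreed on:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Area:
    # P bytes of layer data (e.g. 1 byte of elevation)
    properties: bytes
    # six 2-byte area ids -> the 12 bytes assumed above
    neighbors: List[int] = field(default_factory=list)
    # two 2-byte coordinates -> 4 bytes
    origin: Tuple[int, int] = (0, 0)
    # B bytes of compressed border
    border: bytes = b""

    def size_in_bytes(self) -> int:
        # Size = P + 12 + 4 + B = 16 + P + B
        return len(self.properties) + 2 * len(self.neighbors) + 4 + len(self.border)

a = Area(properties=b"\x05", neighbors=[1, 2, 3, 4, 5, 6], border=b"\xaa\xbb")
print(a.size_in_bytes())  # 1 + 12 + 4 + 2 = 19, i.e. 16 + P + B with P=1, B=2
```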

    The length of the border depends on the shape and size of an area. To make my estimates, I can bracket the number from above and below by assuming the worst and best case scenarios: in the worst case, every area is a very thin strip only one hex wide (when hexes are used); in the best case they'd all be hexagonal (in case of a hex map) or close to circular (in a free-form vector map). If N is the number of tiles that are used, then

    Length of border = 4*N+2 (in hexagonal map, worst case)

    or

    L = 6*sqrt((4*N-1)/3) (hex map, best case)

    With free-form vectors the comparison is a little bit more difficult, so I just passed it for now.

    Min:

    This compression scheme is pretty much the same as Vet proposed, except in hexes: each edge of a hexagon is represented by two bits (four values). If you look at a hexagonal grid, you notice that every intersection of edges offers only 3 ways to go... the fourth value is used to denote the end of the border (thus, no need to check whether it has returned to the origin). There are 8 bits in a byte, so the space reserved by the border will be

    B = L/4
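A quick Python sketch of that bit-packing, assuming an arbitrary mapping of the three directions to the values 0-2 and using 3 as the end marker (the mapping itself is my own choice, nothing canonical):

```python
END = 3  # fourth two-bit value: end of border

def pack_border(steps):
    """Pack a list of direction codes (0..2) into bytes, four per byte."""
    codes = list(steps) + [END]
    out = bytearray()
    for i in range(0, len(codes), 4):
        b = 0
        for j, c in enumerate(codes[i:i + 4]):
            b |= (c & 0b11) << (2 * j)
        out.append(b)
    return bytes(out)

def unpack_border(data):
    """Read two-bit codes until the END marker is hit."""
    steps = []
    for b in data:
        for j in range(4):
            c = (b >> (2 * j)) & 0b11
            if c == END:
                return steps
            steps.append(c)
    return steps

border = [0, 1, 2, 0, 1, 2]        # six direction codes, e.g. one small hex loop
packed = pack_border(border)
print(len(packed))                  # 2 bytes: B is roughly L/4
print(unpack_border(packed) == border)  # True
```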

    It follows that:

    S = 16 + P + L/4

    In worst case this means that

    S = 16 + P + (4*N+2)/4

    To determine the minimum size an area must have to be as memory-efficient as tiles, we just have to compare this value to the space it would take to store N copies of the property individually:

    (in my notation "=<" means "smaller or equal to" and ">=" is "greater or equal to". "=>" is just an implication.)

    S =< N*P
    =>
    16 + P + (4*N+2)/4 =< N*P (multiply by 2 to get rid of the fractional 1/2)
    =>
    33 + 2P + 2N =< 2NP (skipping the *-sign, I think it's redundant)
    =>
    2NP - 2N = N(2P - 2) >= 2P + 33 (divide by 2P - 2, it must be that P > 1)
    =>
    N >= (2P + 33)/(2P - 2)
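Since I had the formula anyway, here is a tiny Python sketch that tabulates the threshold (None meaning the denominator is non-positive, so areas are never guaranteed to win):

```python
import math

def min_area_size_worst_case(P):
    """Smallest N with N >= (2P + 33)/(2P - 2), per the derivation above."""
    if P <= 1:
        return None  # denominator <= 0: no area size guarantees a win over tiles
    return math.ceil((2 * P + 33) / (2 * P - 2))

for P in range(1, 6):
    print(P, min_area_size_worst_case(P))
# 1 None / 2 19 / 3 10 / 4 7 / 5 6
```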

    Now, let's consider the different values of P:

    P=1: For example, elevation can be represented adequately with one byte. I'm sure there are a lot of examples of properties that take one byte or less, but the point is that the above calculation shows there is no size of an area at which areas would be guaranteed to take less space than tiles. Remember that this is the worst case!

    P=2: N >= (2*2+33)/(2*2-2) = 37/2 = 19 (rounding up). So the "diversity", that is the average size of an area/region has to be at least 19 to give undoubted benefit to any 2-byte property being stored in an area instead of tiles. Far from Vet's estimate above, but one has to consider that this is the absolute maximum of tiles, being the "worst case". As for practical examples, I think that population density is something that requires two bytes to be realistic.

    P=3: N >= (2*3+33)/(2*3-2) = 39/4 = 10. This is getting better, but properties that conceivably take 3 bytes are hard to imagine, unless we count combinations like "age, occupation, religion". And combinations are prone to be very diverse.

    P=4: N >= (2*4+33)/(2*4-2) = 41/6 = 7.

    P=5: N >= (2*5+33)/(2*5-2) = 43/8 = 6.

    And so on. Now, as for the best case scenario where every area is approximately a hexagon, the comparison when areas take less space than tiles is a little bit more complex:

    S =< NP
    =>
    16 + P + (6sqrt((4N-1)/3))/4 =< NP (multiply to get rid of fractions)
    =>
    32 + 2P + 3sqrt((4N-1)/3) =< 2NP (toss the square root to one side)
    =>
    3sqrt((4N-1)/3) =< 2NP - 2P - 32 (raise to second power)
    =>
    9*(4N-1)/3 =< 4(P^2)(N^2) - (8(P^2) + 128P)N + (4(P^2) + 128P + 1024)
    =>
    12N - 3 =< 4(P^2)(N^2) - (8(P^2) + 128P)N + (4(P^2) + 128P + 1024)
    =>
    0 =< 4(P^2)(N^2) - (8(P^2) + 128P + 12)N + (4(P^2) + 128P + 1027)

    This looks rather tricky due to the parameter P... therefore I decided to just substitute different values of P and then solve the second degree equation by guessing.

    P=1: 0 <= 4(N^2) - (8 + 128 + 12)N + (4 + 128 + 1027) = 4(N^2) - 148N + 1159

    One can see that if N = 25 the equation is false, but if N = 26 it is true. So the minimum size of an area representing a one byte property is 26 tiles, even in the best possible case. In reality such situation would never exist, therefore this result can be interpreted so that if the diversity of a 1-byte property is less than 26, then it will never be beneficial no matter what the area shapes are.

    P=2: 0 =< 16(N^2) - (32 + 256 + 12)N + (16 + 256 + 1027) = 16(N^2) - 300N + 1299

    It seems that the equation is true if N >= 12. Combined with the earlier result that in the worst case the diversity of a 2-byte property has to be at least 19, we can conclude that if the areas representing this property are smaller than 12 tiles, then tiles are more efficient, and if the areas are larger than 19 then areas are more efficient way to store the data. Between 12 and 19 the results depend on the shapes of the map and as such cannot be estimated very well. Again, think of population density as a convenient example.

    P=3: 0 =< 36(N^2) - (72 + 384 + 12)N + (36 + 384 + 1027) = 36(N^2) - 468N + 1447

    The equation is true if N >= 8. Hence the diversity of a 3-byte property has to be at least 8 tiles per area to make the areas even in theory a competitor to tiles. As comparison point to the worst case scenario, if the size is larger than 10 then areas will always be the better choice.

    P=4: 0 =< 64(N^2) - (128 + 512 + 12)N + (64 + 512 + 1027) = 64(N^2) - 652N + 1603

    This is true if N >= 7. Interestingly, this is the same number that was in the worst case scenario; when the memory needed by the property itself goes up, it matters less what the shapes of regions are. Rounding errors may also have something to do with it.

    P=5: 0 =< 100(N^2) - (200 + 640 + 12)N + (100 + 640 + 1027) = 100(N^2) - 852N + 1767

    Here N >= 5. Due to rounding errors the number is again smaller than the one on the worst case (6), but that is not very relevant. At least this gives some lower boundaries to the number of tiles that need to belong to a region to make it feasible.
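Rather than solving each quadratic by guessing, the thresholds can be found mechanically. A small Python sketch that scans downward from a safely large N to find where the best-case inequality stops holding (this reproduces the numbers above):

```python
def min_area_size_best_case(P, n_max=1000):
    """Smallest N in the upper region where
    0 =< 4*P^2*N^2 - (8*P^2 + 128*P + 12)*N + (4*P^2 + 128*P + 1027)."""
    a = 4 * P * P
    b = 8 * P * P + 128 * P + 12
    c = 4 * P * P + 128 * P + 1027
    f = lambda n: a * n * n - b * n + c
    n = n_max
    # walk down while the inequality still holds one step lower;
    # this lands on the threshold above the larger root
    while n > 1 and f(n - 1) >= 0:
        n -= 1
    return n

print([(P, min_area_size_best_case(P)) for P in range(1, 6)])
# [(1, 26), (2, 12), (3, 8), (4, 7), (5, 5)]
```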

    Med:

    I can imagine that squeezing data into single bits like earlier may not be the fastest way for algorithms to grasp the boundary information: of the three directions at each intersection they'd have to somehow calculate the next move, and because the directions aren't the same at every intersection (just picture a hexagonal grid to visualize this), it means extra calculation. It would be more convenient if instead of an enumerated direction we could directly get an (x,y)-offset. The individual coordinates in this offset would be either 0, 1 or -1... so we can use two bits per coordinate and four bits per whole edge. In other words, the border would take twice as much space as it did in the "min" compression. Also, there is room for some extra information, since the areas strictly speaking wouldn't have to stick with hexes anymore: there could be eight possible directions to go, like in civ-style maps.
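A sketch of one possible nibble encoding for those offsets (the exact bit values are my own choice, nothing canonical):

```python
# Each border step is an (x, y) offset with components in {-1, 0, 1}:
# two bits per component, four bits per step.
ENC = {-1: 0b10, 0: 0b00, 1: 0b01}   # one hypothetical two-bit mapping
DEC = {v: k for k, v in ENC.items()}

def encode_step(dx, dy):
    return ENC[dx] | (ENC[dy] << 2)  # a four-bit nibble

def decode_step(nibble):
    return DEC[nibble & 0b11], DEC[(nibble >> 2) & 0b11]

steps = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]
packed = [encode_step(dx, dy) for dx, dy in steps]
print(all(decode_step(n) == s for n, s in zip(packed, steps)))  # True
```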

    I'll do my estimates in a similar manner as before, starting with worst case scenario. All that changes is the denominator from 4 to 2 (now we can only put 2 edges in one byte, compared to 4 before).

    S =< NP
    =>
    16 + P + (4N+2)/2 = 17 + P + 2N =< NP
    =>
    NP - 2N = N(P - 2) >= P + 17 (divide by P - 2, P > 2)
    =>
    N >= (P + 17)/(P - 2)

    P=1, P=2: Both cases are in the grey area, and there is no guarantee that using areas will pay off with this compression scheme. I'd say that if we go for the layered view, most layers would fall into this category: only composite properties need more bytes.

    P=3: N >= (3 + 17)/(3 - 2) = 20. Twice the size which we got with "min" compression.

    P=4: N >= (4 + 17)/(4 - 2) = 21/2 = 11. The equivalent value with more efficient compression was 7... so, it appears that the more bytes the property requires, the less difference there is between different compression schemes (as long as the data is stored in areas. It becomes increasingly cumbersome to store the same information in tiles). Makes perfect sense.

    P=5: N >= (5 + 17)/(5 - 2) = 22/3 = 8.

    Now for the best case scenario.

    S =< NP
    =>
    16 + P + (6sqrt((4N-1)/3))/2 =< NP
    =>
    16 + P + 3sqrt((4N-1)/3) =< NP (toss the square root to one side)
    =>
    3sqrt((4N-1)/3) =< NP - P - 16 (raise to second power)
    =>
    9*(4N-1)/3 =< (P^2)(N^2) - (2(P^2) + 32P)N + ((P^2) + 32P + 256)
    =>
    12N - 3 =< (P^2)(N^2) - (2(P^2) + 32P)N + ((P^2) + 32P + 256)
    =>
    0 =< (P^2)(N^2) - (2(P^2) + 32P + 12)N + ((P^2) + 32P + 259)

    And the solutions for different values of P:

    P=1: 0 =< (N^2) - (2 + 32 + 12)N + (1 + 32 + 259) = (N^2) - 46N + 292

    The equation is true when N >= 39. I'd say that it is a rather large value for e.g. elevation... but that's just my gut feeling.

    P=2: 0 =< 4(N^2) - 84N + 327, N >= 16.

    P=3: 0 =< 9(N^2) - 126N + 364, N >= 10.

    P=4: 0 =< 16(N^2) - 172N + 403, N >= 8.

    P=5: 0 =< 25(N^2) - 222N + 444, N >= 6.

    Just as a reminder, these are the sizes of regions that indicate the minimum diversity with which areas might be better than tiles for storing data; under these figures tiles will always be better than areas, from memory consumption point of view.

    Max:

    The previous compression is already quite handy, but it works well only with hexes (or rectangular tiles). If we want to use totally free-form polygons, it requires a little bit more memory. One neat way to compress such polygons, however, would be to use (x,y)-offsets just like I did with hexes before, but so that each offset can be more than just -1 or 1 in each direction. If we use 4 bits per coordinate, we can have offsets varying from (-7, -7) to (7, 7), and each edge would be a line somewhere in this 15x15 grid. If we decide to use a free-form map, I think this compression could be quite feasible. On the other hand, if we use tiles then it might be a little bit faster to use full bytes instead of groups of 2 or 4 bits, because computers (and programming languages) work with bytes.
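A sketch of packing such offsets, using a sign-magnitude nibble per coordinate (again, just one possible encoding of the -7..7 range):

```python
def encode_offset(dx, dy):
    """Pack an (dx, dy) offset, each in -7..7, into one byte."""
    def nib(v):
        assert -7 <= v <= 7
        return (8 if v < 0 else 0) | abs(v)  # sign bit + 3 magnitude bits
    return nib(dx) | (nib(dy) << 4)

def decode_offset(byte):
    def val(n):
        return -(n & 7) if n & 8 else (n & 7)
    return val(byte & 0x0F), val(byte >> 4)

edges = [(7, -3), (-7, 7), (0, -1), (5, 0)]
packed = bytes(encode_offset(dx, dy) for dx, dy in edges)
print(len(packed))                              # 4: one byte per free-form edge
print([decode_offset(b) for b in packed] == edges)  # True
```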

    I'll estimate this compression the same way I did before, to give some comparison points. The free-form nature of this compression however makes it a little bit silly, because now the comparison to hexes is becoming less meaningful, and some edges are longer than others.

    S =< NP
    =>
    16 + P + (4N+2) = 18 + P + 4N =< NP
    =>
    NP - 4N = N(P - 4) >= P + 18 (divide by P - 4, P > 4)
    =>
    N >= (P + 18)/(P - 4)

    P=1, 2, 3 or 4: The compression is never clearly worthwhile. But then again, the shapes that free-form areas allow will most likely be more efficient than the worst case scenario in the hexagonal alternative.

    P=5: N >= (5 + 18)/(5 - 4) = 23. The size cannot clearly be thought of as "tiles", but perhaps as something equivalent to the size of a tile...

    And the best case scenario, which gives more sensible values (because what was best case for hexes may be average for free-form polygons):

    S =< NP
    =>
    16 + P + 6sqrt((4N-1)/3) =< NP (toss the square root to one side)
    =>
    6sqrt((4N-1)/3) =< NP - P - 16 (raise to second power)
    =>
    36*(4N-1)/3 =< (P^2)(N^2) - (2(P^2) + 32P)N + ((P^2) + 32P + 256)
    =>
    48N - 12 =< (P^2)(N^2) - (2(P^2) + 32P)N + ((P^2) + 32P + 256)
    =>
    0 =< (P^2)(N^2) - (2(P^2) + 32P + 48)N + ((P^2) + 32P + 268)

    And now the interesting part (one can always dream...):

    P=1: 0 =< (N^2) - (2 + 32 + 48)N + (1 + 32 + 268) = (N^2) - 82N + 301

    The equation actually holds already from N >= 79 onward, so call the threshold roughly 80. This could be the average size of a region, so the equivalent of a 1 million tile map would be about 12500 areas for each 1-byte property. Not too bad.

    P=2: 0 =< 4(N^2) - 120N + 336, N >= 27. Even better!

    P=3: 0 =< 9(N^2) - 162N + 373, N >= 16.

    P=4: 0 =< 16(N^2) - 208N + 412, N >= 11.

    P=5: 0 =< 25(N^2) - 258N + 453, N >= 9.

    (Yes, I realize that these figures are getting more and more irrelevant to most readers. I'm just trying to come up with some hard numbers that would justify design decisions, and maybe to amuse Vet in the process.)

    No compression:

    If we're using fully free-form vectors, the most obvious storage structure is to just store each vertex of the border as an absolute (X,Y)-coordinate. That would take 4 bytes per vertex (or per edge, since there are as many of them). The usefulness of being so spendthrift is that the transformation to visible polygons on the screen would be quite easy. Other than that, I just cannot see the point, because storing one origin and then a list of offsets is good enough for most algorithms, I think. I was originally going to write a similar estimate for this compression level as I did for the ones before, but now I am getting a little bit exhausted with writing and frankly cannot see the point in throwing more strange numbers at you.
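For illustration, the origin-plus-offsets to absolute-vertices conversion mentioned here is only a few lines (names are mine, just a sketch):

```python
def to_absolute(origin, offsets):
    """Expand an origin plus (dx, dy) offsets into the absolute vertex list
    that uncompressed storage would keep directly."""
    x, y = origin
    vertices = [(x, y)]
    for dx, dy in offsets:
        x, y = x + dx, y + dy
        vertices.append((x, y))
    return vertices

# a small square border: four offsets return us to the origin
poly = to_absolute((100, 200), [(5, 0), (0, 5), (-5, 0), (0, -5)])
print(poly)  # [(100, 200), (105, 200), (105, 205), (100, 205), (100, 200)]
```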

    * * *

    There are also other sorts of possibilities, like storing borders separately and then referencing them from areas. I haven't given much thought to that suggestion, but if Vet can come up with some more concrete ideas then I'm of course eager to read them. I also didn't take into account here the possibility of having tiles reference regions, without storing any geographical data in region properties... that approach would require that there are very few layers of complex data. It's a little bit difficult to estimate such systems. Well, it is difficult to estimate any system because the models are so premature... we're just poking in the dark here. Anyway, I believe that the system we decide to use shouldn't be too complicated. We seriously don't have any excess of competent programmers, or very motivated ones, so I believe that the safest alternative that all the programmers (me and Vet?) can agree upon is the one that should be chosen, even if it isn't the theoretically most efficient one.

    I was going to write something about the gameplay issues, but I notice that I don't have the time right now. Thanks to anyone who bothered to read through this drivel. Let me emphasize that this was far too technical, and it doesn't yet address my opinions on whether tiles or areas should actually be used... it's more like a guideline for those issues. I'll be back.



    • #62
      Alright, apparently I wasn't coming back after all.

      To continue the trend of useless posting, I wasted my time on making a rough image of what the British Isles would look like in a hex map. Each tile in the attached gif is more or less 50 km across (on the north-south axis), but they are somewhat stretched horizontally because I didn't bother making them real hexes; instead I just drew hexagonal outlines on a square grid. I apologize for the horrible colors used, I simply have zero artistic vision.

      Is this anywhere near what the rest of you had in mind when thinking about tiles? I might make a similar vector map (based on my cryptic calculations above) to illustrate the differences, but since I have a bad habit of leaving things halfway done, I decided to put this here for you to comment on anyway.
      Attached Files



      • #63
        I could be happy playing on this map, though the rules of play would have some influence!

        I am unfortunately too stupid to understand your previous post. I really have tried! It's a pity all programmers have become extinct by now! Especially the continued absence of our Programme Leader has been a devastating blow to all progress!

        Did you, Leland, or someone else, ever play a war game? These are usually played on a map composed of hexes. Why do we need a more complicated structure? Especially given this mass extinction, programming demands should be kept as simple as possible. Or am I underestimating your abilities, Leland?

        By the way, 'CORONEL' -the Diplomacy game M@ni@c and I are playing in- has come to life again! Hurray!



        • #64
          Don't feel too dumbfounded if my post at the top of this page is utterly nonsensical... it was closer to an unedited back-of-the-envelope calculation to refute VetLegion's claims than an attempt to make a point of my own. Nothing to do with actual gameplay values, or pretty much anything of importance. It was *very* late at night when I wrote that, you know...

          Anyway, I cannot say that I am familiar with war games (or games in general... I even suck at Civ), and hence my opinions on gameplay issues are perhaps rather useless. But regarding the map, I think that it would be revolutionary to have something which contains provinces/regions like Diplomacy, Europa Universalis, several board games etc., but instead of being static the areas/regions/provinces/whatever would be dynamic and change as the game progresses. One way to do it is on top of hexes; another is to have vector-based areas. The purpose of the above picture is to illustrate that the "resolution" of hexes is not so bad compared to real vector-based areas, especially if you think about how small the British Isles actually are.

          Also, I am not giving much thought to compromising the complexity... my interest in this project has never been to get something finished, it's more like casual daydreaming of a "perfect" game. Or something to that effect. It's not the destination that counts, it's the journey.



          • #65
            Leland, sorry for not posting for a long time. I have been really busy with the small publishing business we have here. What free time I had, I spent playing civ3, and I managed to finish 2 full games so far, one on regent and one on monarch. The game is awfully slow for a TBS, and has outrageous hardware demands. Some good concepts and lots of problems. Anyway, I deleted it.

            I'll reply to your equations when I've studied them a bit better; it is a long post with lots of numbers in it.



            • #66
              Hint: Forget the equations. I wouldn't wish crosschecking their correctness upon even my worst enemy. It's the results and the surrounding text that make the points about memory consumption tradeoffs and such.



              • #67
                OK, I think I got what you are saying. I understood what you mean, and we were basically doing the same calculation.

                The difference is in the estimate of the minimum area requirements. For you:

                Properties + Adjacent area references + Origin + Border

                and

                Size = P+12+4+B = 16+P+B

                while I used

                Properties + Origin + Border

                and

                Size = P + 2 + B

                I also assumed one byte each for x, y and the property, though it would actually be better to use two... I was referring to the civ2 map type, and whether using areas on a civ2 map would be beneficial or not, since it is the only map it was possible to attempt a test on. I haven't done that test, since it took too much time to count the tiles.

                As a result, your minimum area is 20 bytes, while mine was 4 bytes. If I modify mine to use two bytes, it is then 20 vs. 7, since I haven't used any references to adjacent areas. Since the tile size jumps from one byte to two, it gets better: it is now 7/2=3.5 instead of 4/1=4 as before (without the adjacency stuff).

                Now, I think that storing borders only once instead of twice would considerably cut memory requirements. I'll concentrate on that, to see if I can make it work.

                To conclude: yes, the feasibility of using areas depends most on the memory requirements of the smallest possible area. You showed that if the smallest possible area is big (in memory), then the diversity we are going to use would have to be smaller than what is possible to achieve with tiles of the same resolution. I know it is not clear.

                I'll try to come up with something more complete. And let me know if I missed the point; I did see you mentioning that it is never better to use areas.



                • #68
                  Originally posted by VetLegion
                  As a result, your minimum area is 20 bytes, while mine was 4 bytes. If I modify mine to use two bytes, it is then 20 vs. 7, since I haven't used any references to adjacent areas. Since the tile size jumps from one byte to two, it gets better: it is now 7/2=3.5 instead of 4/1=4 as before (without the adjacency stuff).
                  Hmm, I do not understand what you mean. 4 bytes for origin, one byte for properties, but where do you get the idea that every area larger than 7 tiles would take less space than the same information stored in tiles (I hope this is what you were trying to say)? Properties take 1 byte, origin takes 4, and the border takes four bits... it's easy to see that the smallest possible size for an area which takes less space than the same information in tiles is 12, and that assumes the shape is optimal; if the shape of the area is the worst possible (just a line of tiles) then the size is never optimal, because each new tile adds two new borders. If you meant that it's enough to have two bits (a quarter of a byte) to store each piece of border, then the "best case" minimum would be 8 (and not seven... okay, maybe you made just a slight miscalculation), and the worst case minimum is twelve. Draw a few tiles on a piece of paper and you can see it for yourself. Besides, you are comparing hexes to squares.

                  Anyway, to clarify some of the points I tried to make, and to add some new food for thought in the mix:
                  • Your system is not better just because it saves memory by leaving out adjacent area references. You have to have the adjacency info, because I know of no easy way to extract it from the mere borders. The simplest algorithm I can figure out involves translating every tile border of every tile to absolute coordinates and then comparing it to every point in every other area. With ten thousand or so areas that becomes a practical impossibility. But if you can come up with a considerably faster algorithm, I'm all ears.
                  • I like hexes.
                  • The point I made about areas "never" being better than tiles was not that universal, I meant that if the shape of the area is difficult then it doesn't matter how big it is, it will always take more space than the same data attached to tiles. In general, this happens when each new tile added to the area adds more border information than property information. The worst possible shape is quite intuitively a thin row of tiles, whereas the best is an approximation of a circle.
                  • Can you think of individual properties (layers) which need several bytes? There are a lot of properties needing one byte or less (like elevation, vegetation, etc.), a few that need two bytes (population density) but it is hard to come up with any more detailed individual properties/layers. On the other hand if you lump properties together in one layer, you increase the diversity, which might just make tiles more favourable again.
                  • Besides, a lot of conceptually independent properties have less than one byte of data. I haven't made any calculations about that, but it makes tiles more attractive to storing these properties.
                  • A few more bytes are needed because each area might have several borders. Just think about a temperature zone that spans around the globe, or a lake which has an island in it.
                  • The map is the most important data structure of the game. It's the central point of nearly all the action, and it's going to be accessed a lot. Hence, the map algorithms have to be considerably fast, which is quite likely achievable in expense of memory consumption.
                  • Storing just borders might solve some problems, but also be more complicated to implement. I have not given much thought to it, but perhaps it could work. I'm looking forward to seeing what you can come up with.
                  • The whole discussion has diverted a little bit because we are no longer comparing tiles versus free-form areas, we are comparing tiles versus tile-areas. So the results of our calculations only apply in the context of tiles... and I am not saying that we shouldn't have areas that consist of tiles, quite the contrary I think there should be both tiles and areas consisting of tiles. I am arguing against using areas exclusively because I think there are a lot of properties that are better stored in tiles.
                  • Speaking of free-form areas, my objection to those is that the whole system becomes complex, in my opinion. A tile always has a fixed number of neighbouring tiles, and tiles can easily be modelled as cellular automata. That spells fast algorithms, and I think this compensates for the large number of tiles in terms of computing power requirements. Areas would all be of different shapes and I think they would have to be handled quite differently. For instance, if you have two tribes of people next to each other, it is relatively easy to model the conflict and diffusion between the two with cellular automata, just calculating the properties of tiles that lie on the border of the two populations. But on the other hand, how do you model a conflict between two polygons? What are the conditions for vertices moving around? When are new vertices created and when are they destroyed? How long is one edge of the polygon? These are not easy questions and I think we could run into a lot of implementation problems if we do everything as polygon-based areas. It's safer to just start working with tiles.
                  • Take a look at the gif I attached above. It demonstrates what UK would look like if represented as hexes, and I think it does not look too bad. When you look at it, are you thinking of individual tiles, or the areas (different colors)? In contrast, in a civ map you are practically forced to think about tiles because the resolution is so poor. Not to mention that in civ maps the British Isles have always been exaggerated (I think) to accommodate their historical importance. In GGS there wouldn't be need for such trickery.
                  • To the player, tiles would be more like a grid on the background. No units would be moved tile-by-tile, and even positioning the army (or anything else for that matter) would not be crucial in the sense that the ranges of armies consist themselves of quite a big area. That is my humble vision.
                  • Perhaps tiles are a little bit old-fashioned, and they may be a bad design choice in the very long run. But then again, using areas exclusively may be a problem immediately. And with the design principles we've been trying to follow, the importance of tiles would be diminished, so it is not impossible to get rid of tiles later on, because the higher order models would not refer to individual tiles very often.
                  • Need I provide more arguments in favour of the tiles-regions-provinces trichotomy? I have convinced myself at least!
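On second thought, the adjacency extraction need not be an all-pairs comparison: if every border edge is translated once into an absolute, direction-independent key, a single hash-table pass finds every shared edge in time proportional to the total border length. A rough Python sketch (the data layout here is hypothetical, not anything we've settled on):

```python
from collections import defaultdict

def adjacency(areas):
    """areas: {area_id: list of ((x1, y1), (x2, y2)) border segments}.
    Returns {area_id: set of neighbouring area_ids}."""
    seen = {}                      # edge key -> first area that touched it
    neighbors = defaultdict(set)
    for area_id, edges in areas.items():
        for a, b in edges:
            key = tuple(sorted((a, b)))  # direction-independent edge key
            if key in seen and seen[key] != area_id:
                neighbors[area_id].add(seen[key])
                neighbors[seen[key]].add(area_id)
            else:
                seen[key] = area_id
    return neighbors

areas = {
    1: [((0, 0), (1, 0)), ((1, 0), (1, 1))],
    2: [((1, 0), (1, 1)), ((1, 1), (2, 1))],   # shares an edge with area 1
    3: [((5, 5), (6, 5))],                     # isolated
}
n = adjacency(areas)
print(sorted(n[1]), sorted(n[2]))  # [2] [1]
```

One dictionary pass instead of comparing every area against every other, so ten thousand areas stop being a practical impossibility.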



                  • #69
                    I've been thinking about that graphic that was posted a while ago. Perhaps tiles, if used, should be 10-20 km across, instead of 50. I don't think it would be necessary to make a 'whole-world' map.
                    Maybe it should be possible to start off on one map and 'expand' onto adjacent ones. Each turn the player might have to do some management on each map he has units on.

                    Another option would be to have a separate resolution for each map, thus allowing detailed scenarios as well as 'rocks-to-rockets' games.



                    • #70
                      Interesting concepts, you always seem to have novel ideas to share!
                      • With 50km diameter there'll be approximately 240-700 thousand tiles on the globe, depending on whether the real Earth surface area is used (240K) or whether the map is a cylindrical projection that stretches the poles. Anyway, since our unofficial goal has been around a million tiles, I think it's not entirely impossible to make the tiles slightly smaller. But not smaller than 25km across, in my humble opinion. Perhaps 30km is doable.
                      • A smaller tile size brings some complications: at some point, rivers, cities and possibly other things as well are going to be larger than individual tiles, and that could make the whole system a little bit more complicated. I'm sure these are not insurmountable problems, but they deserve to be thought over, nevertheless.
                      • I wonder if you could elaborate on that multiple map idea? How would the transition from one map to another take place? I foresee a possible problem with situations where the discontinuity between maps creeps into the game, for instance by making a traversal from point A to point B different in cases where they reside on the same map and where they are on adjacent maps.
                      • Different resolutions also create more complexity, because then it would have to be taken into account in every model that implicitly uses tiles. And frankly, I'm not so sure if changing the tile sizes would work... what if the map that has little activity and low resolution in the beginning of the game is the center of action in a later stage of the game? Or vice versa.
                      • Partitioning the map into smaller pieces at the implementation level might be a good idea (for example, in the beginning of the game it could make client-side performance a bit better), but I don't think that several maps add much to the gameplay. In any case I would like to keep an open mind; perhaps I didn't quite understand what you were proposing here...
                      • I might draw the isles again with 30km tiles, just for comparison. But it's going to take a while so don't hold your breath.
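Not from the original posts, but the 240K figure for 50km tiles can be sanity-checked with a quick script. It assumes regular hexagons measured flat-to-flat and an Earth surface area of roughly 510 million km² (the function name is mine, just for illustration):

```python
import math

EARTH_SURFACE_KM2 = 510_000_000  # approximate surface area of the Earth

def hex_tile_count(width_km: float) -> int:
    """Tiles needed to cover the globe with regular hexagons of the
    given flat-to-flat width (hexagon area = sqrt(3)/2 * width^2)."""
    hex_area_km2 = math.sqrt(3) / 2 * width_km ** 2
    return round(EARTH_SURFACE_KM2 / hex_area_km2)

for width in (50, 40, 30, 25):
    print(f"{width} km tiles: ~{hex_tile_count(width):,}")
```

At 50km across this lands near 240 thousand tiles, matching the figure above, and at 25km across it comes out just under the "million tiles" goal.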



                      • #71
                        I wonder if you could elaborate on that multiple map idea? How would the transition from one map to another take place? I foresee a possible problem with situations where the discontinuity between maps creeps into the game, for instance by making a traversal from point A to point B different in cases where they reside on the same map and where they are on adjacent maps.
                        When a unit comes to the edges of a map, it could be allowed to enter a new one. It may not directly help gameplay, but it seems a waste to have a huge map of the world when the game is mainly limited to a small area in the corner (as it might be in some games).

                        Different resolutions also create more complexity, because then it would have to be taken into account in every model that implicitly uses tiles. And frankly, I'm not so sure if changing the tile sizes would work... what if the map that has little activity and low resolution in the beginning of the game is the center of action in a later stage of the game? Or vice versa.
                        I didn't mean changing resolutions between two maps in the same game - I meant that different games could have maps with different levels of detail. This creates some problems of its own, though. Unit movement and whatnot might become a bit of a mess.

                        Partitioning the map to smaller pieces in implementation level might be a good idea (for example, in the beginning of the game it could make the client-side performance a bit better), but I don't think that several maps add to the gameplay much. In any case I would like to keep an open mind, perhaps I didn't quite understand what you were proposing here...
                        Thinking about it, it doesn't seem like a very good idea to me either.
                        If at first you succeed, you should be doing something tougher.



                        • #72
                          Originally posted by Nath
                          When a unit comes to the edges of a map, it could be allowed to enter a new one. It may not directly help gameplay, but it seems a waste to have a huge map of the world when the game is mainly limited to a small area in the corner (as it might be in some games).
                          Hmm. I can see the point. But there's going to be exactly the same amount of information stored anyway, regardless of whether there is one map or several. I think that multiple maps could come in handy in detailed colonization scenarios, though: you could have a map of your mainland and maps of the colonies, while the rest of the world is ignored. I do not think that this is a reason to base the whole map system on multiple maps, though... let's just try to make one at first.
                          I didn't mean changing resolutions between two maps in the same game - I meant that different games could have maps with different levels of detail. This creates some problems of its own, though. Unit movement and whatnot might become a bit of a mess.
                          Well, it's not an entirely hopeless concept. We'd just have to take it into account in design, for instance by having all constants (population per tile, distances, areas) multiplied by the map size (or its square, or some other variable). Also, it might make the game itself a little bit too confusing, though I am not so sure about that.
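As a rough illustration of the "constants multiplied by map size" idea, here is a minimal sketch assuming per-tile constants scale with tile area (the names `MapScale` and `max_pop_per_tile` are made up for this example, not part of any proposal in the thread):

```python
from dataclasses import dataclass

@dataclass
class MapScale:
    tile_width_km: float  # flat-to-flat width of a hex tile

    @property
    def tile_area_km2(self) -> float:
        # area of a regular hexagon: sqrt(3)/2 * width^2
        return 3 ** 0.5 / 2 * self.tile_width_km ** 2

    def max_pop_per_tile(self, density_per_km2: float) -> float:
        # per-tile constants scale with area, i.e. the square of the width
        return density_per_km2 * self.tile_area_km2

coarse = MapScale(50)
fine = MapScale(30)
ratio = coarse.max_pop_per_tile(100) / fine.max_pop_per_tile(100)
# ratio equals (50/30)**2 regardless of the density chosen
```

The point is just that every model referring to "one tile" would read its constants through something like this, so the same rules could drive maps of different detail levels.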

                          Anyway, I made a few more map images. The tile shape in both of them is a little bit wrong, but perhaps closer to reality than in my first image above. This time I fit the hexagons inside 4x5 rectangles (compared to 2x3 previously). In the first one the hexes are a little over 40km across, and in the second one a little less than 30km across. The slightly wrong tile shape makes them look perhaps a bit clunky, but I think they still give some sort of demonstration of what these resolutions mean in practice. Also bear in mind that the UK & Ireland aren't that big on the world map.
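For what it's worth, the 4x5 rectangle really is a closer fit than 2x3: a regular hexagon's width-to-height ratio is sqrt(3)/2 ≈ 0.866, and a small check (mine, not from the thread) shows the relative error dropping from about 23% to under 8%:

```python
import math

# width/height of a regular pointy-top hexagon (flat-to-flat over point-to-point)
TRUE_RATIO = math.sqrt(3) / 2

def ratio_error(w: int, h: int) -> float:
    """Relative error of approximating a regular hexagon with a w x h cell."""
    return abs(w / h - TRUE_RATIO) / TRUE_RATIO

print(ratio_error(2, 3))  # ~0.23 for the old 2x3 cells
print(ratio_error(4, 5))  # ~0.076 for the new 4x5 cells
```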

                          Here's the first one.
                          Attached Files



                          • #73
                            And here's the second.
                            Attached Files



                            • #74
                              I like the second one, i.e. I would like to have as high a resolution as possible.
                              Are you thinking in realistic terms in other areas as well here: how many cities and how much population should that kind of map support? And regions too?



                              • #75
                                NOTE - we are now using a map as Leland described. Currently we do not know how to program a feasible area-based solution (at least I don't).

                                Here is how I imagine the area-based map.
                                Attached Files

