Wonders

  • #61
    Now that would be an elegant solution.

    Originally posted by snoopy369
    Um, the most efficient way is the way I had it originally. Just have each difficulty with an xml option "iNumBad" and order the goodie results by good->bad, then have the scout generate a num from iNumBad+1 to 20 instead of from 1 to 20... One roll, guaranteed good result
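    A minimal sketch of that scheme (assuming the goody table is pre-sorted so that the iNumBad results a given difficulty should never hand out occupy the first slots, and reusing the outcomes / rng() names from the code posted later in this thread):

    Code:
    // Sketch only: `outcomes` is sorted so the iNumBad difficulty-barred
    // results sit at the front of the list; iNumBad comes from the XML.
    var goodyHut(int iNumBad) {
        int r = iNumBad + rng(20 - iNumBad); // one roll over the allowed slice
        return outcomes[r];                  // guaranteed an allowed result
    }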
    1st C3DG Term 7 Science Advisor 1st C3DG Term 8 Domestic Minister
    Templar Science Minister
    AI: I sure wish Jon would hurry up and complete his turn, he's been at it for over 1,200,000 milliseconds now.



    • #62
      Originally posted by Kuciwalker


      Except there are cases where other options (e.g. tech) are unavailable, so simple ordering won't work.
      If it's invalid for other reasons, just null it out (or have a default). Still much more efficient than rerolling... and perfectly moddable (you get to define what's bad!).
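      A rough sketch of that combination (the difficulty filter from above plus nulling anything else that's invalid); DEFAULT_GOODY and the helper names are illustrative, not anything from the actual game code:

      Code:
      // Hypothetical: one roll over the difficulty-filtered slice, then map
      // results that are invalid for other reasons (e.g. the tech is already
      // known) to a default outcome instead of rerolling.
      var goodyHut(int iNumBad) {
          var result = outcomes[iNumBad + rng(20 - iNumBad)]; // no difficulty-barred results
          if(excluded(result)) return DEFAULT_GOODY;          // invalid for another reason -> default
          return result;
      }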
      <Reverend> IRC is just multiplayer notepad.
      I like your SNOOPY POSTER! - While you Wait quote.



      • #63
        Actually, this form of randomness is NOT O(1) even for a single roll
        O(1) is for the simplest possible pseudorandom number between 0 [inclusive] and 1 [exclusive], one that has them all in a big list and wraps around upon hitting the end of the list.

        Behind the scenes, converting that to a range involves a floating point multiplication, O(N * M) [N and M = number of digits in the standard floating point binary format]. N will depend upon the precision of the RNG; M (the binary representation of 20 in a float, before it trails off into repeating 0s at the end) is probably such a small fraction of N that it could instead be represented as O(C * N) = O(N).

        The truncation itself would be another O(N) (N = number of digits in the floating point binary representation past the decimal point for this particular number).

        (This is the whole reason why floating point multiplication was avoided like the plague in time-sensitive operations before the floating point math co-processor came into common use.)
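
        For concreteness, the kind of generator and conversion being described looks roughly like this (a sketch only, with a made-up five-entry table; not the game's actual RNG):

        Code:
        // A precomputed table of values in [0, 1) that wraps around at the end,
        // plus the multiply-and-truncate step that maps a value onto one of the
        // 20 goody results. Illustrative only.
        #include <stddef.h>

        static const double table[] = { 0.137, 0.842, 0.501, 0.993, 0.276 }; /* tiny example table */
        static size_t pos = 0;

        static double next01(void) {                 /* O(1): a single array lookup */
            double u = table[pos];
            pos = (pos + 1) % (sizeof table / sizeof table[0]); /* wrap around at the end */
            return u;
        }

        static int roll20(void) {
            return (int)(next01() * 20.0);           /* the multiplication and truncation at issue */
        }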

        Agreed, however, that when optimizing for speed you run a profiler, find the 20% of the code that's taking 80% of the time, and optimize that.

        Originally posted by Kuciwalker
        It's exactly this attitude that's causing so much modern software to totally unnecessarily hog ridiculous amounts of resources. If the whole game was programmed with that mindset, the game would have twice the minimum specs it currently has.




        That's ridiculous. Optimization properly goes from the most resource-intensive tasks to the least. There are maybe 20 calls in the entire game to a function that is O(1), a SMALL O(1). You seriously think making it a very slightly larger O(1) (maybe 2x or 4x as large) is the sort of thing that would push up minimum specs? Face it, 95% of the processor time is going to be devoted to AI, trade-route calculations, and graphics. Even if you triple the runtime of EVERYTHING in the remaining 5%, you get... a 10% performance hit.

        Code:
        var* outcomes; // global table of the 20 possible goody results
        
        var goodyHut() {
            int n = 0; // number of excluded possibilities
            for(int i = 0; i < 20; i++) if(excluded(outcomes[i])) n++;
            int r = rng(20 - n); // one roll, uniform over the valid outcomes
            for(int i = 0; i < 20; i++)
                if(excluded(outcomes[i])) r++;        // push the target index past each excluded entry
                else if(r == i) return outcomes[i];
            return -1; // everything was excluded
        }
        1st C3DG Term 7 Science Advisor 1st C3DG Term 8 Domestic Minister
        Templar Science Minister
        AI: I sure wish Jon would hurry up and complete his turn, he's been at it for over 1,200,000 milliseconds now.



        • #64
          Originally posted by Kuciwalker
          It's exactly this attitude that's causing so much modern software to totally unnecessarily hog ridiculous amounts of resources. If the whole game was programmed with that mindset, the game would have twice the minimum specs it currently has.




          That's ridiculous. Optimization properly goes from the most resource-intensive tasks to the least. There are maybe 20 calls in the entire game to a function that is O(1), a SMALL O(1). You seriously think making it a very slightly larger O(1) (maybe 2x or 4x as large) is the sort of thing that would push up minimum specs? Face it, 95% of the processor time is going to be devoted to AI, trade-route calculations, and graphics. Even if you triple the runtime of EVERYTHING in the remaining 5%, you get... a 10% performance hit.

          Code:
          var* outcomes; // global table of the 20 possible goody results
          
          var goodyHut() {
              int n = 0; // number of excluded possibilities
              for(int i = 0; i < 20; i++) if(excluded(outcomes[i])) n++;
              int r = rng(20 - n); // one roll, uniform over the valid outcomes
              for(int i = 0; i < 20; i++)
                  if(excluded(outcomes[i])) r++;        // push the target index past each excluded entry
                  else if(r == i) return outcomes[i];
              return -1; // everything was excluded
          }
          That code is quite inefficient (and nonrandom, if I'm reading it right) ... even assuming we have to accept the 'excluded list', essentially we have two options. (Excuse the code, I don't use C normally, so treat this as pseudocode; in particular the excluded() validity check clearly needs a meaningful implementation.)

          Version A: Fastest, not truly random
          Code:
          var* outcomes;
          
          var goodyHut() {
            int r = rng(20); // single roll
            for(int tries = 0; tries < 20; tries++) { // at most one full pass around the list
              if(excluded(outcomes[r])) {
                r++;
                if(r >= 20) r = 0; // wrap around at the end of the list
              }
              else return outcomes[r];
            } /* for loop */
            return -1; // everything was excluded
          } /* function goodyHut */
          It has the disadvantage of not being truly random (as does yours): an outcome sitting just after a run of excluded slots inherits their probability and gets picked more often. That's not a huge deal, though; mine is no more random, after all. If you want it truly random, you would do:

          Version B: Truly random, still pretty fast
          Code:
          var* outcomes; // global variable
          
          var goodyHut() {
              int n = 0;             // number of valid possibilities
              var outcomes_okay[20]; // scratch list holding only the valid outcomes
              for(int i = 0; i < 20; i++)
              {
                if(!excluded(outcomes[i]))
                {
                  outcomes_okay[n] = outcomes[i];
                  n++;
                }
              } // for i
              if(n == 0) return -1;  // everything was excluded
              int r = rng(n);        // one roll over the valid outcomes only
              return outcomes_okay[r];
          }
          Which would only go through one loop of 20 instead of 2, and would get a truly random event (yours would tend to choose events immediately after excluded events).

          I'm not sure if the latter is faster than rerolling, though. Going through 20 checks and then rolling once, as opposed to rolling once, checking, rolling again, checking, etc. ... I can't imagine that would be faster than just rerolling occasionally.
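
          Back-of-the-envelope for the reroll cost (a sketch, assuming each roll is independent and uniform over the 20 slots):

          E[\text{rolls}] = \frac{20}{20 - n}

          so with n = 5 excluded outcomes that is 20/15, about 1.33 rolls on average, versus a flat 20 validity checks for the filtered-list version.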

          Ultimately, I think that having an ordered list is optimal if you only have one exclusion reason that you don't want nulled; if you have more than one exclusion reason and don't want to just null the other excluded results, I suspect rerolling is by far the fastest.

          Elegant != fastest, and as long as the code is readable and does accurately what it is intended to do, fastest >>> elegant every time.
          Last edited by snoopy369; October 30, 2007, 19:35.
          <Reverend> IRC is just multiplayer notepad.
          I like your SNOOPY POSTER! - While you Wait quote.



          • #65
            Originally posted by joncnunn
            Actually, this form of randomness is NOT O(1) even for a single roll
            O(1) is for the simplest possible pseudorandom number between 0 [inclusive] and 1 [exclusive], one that has them all in a big list and wraps around upon hitting the end of the list.

            Behind the scenes, converting that to a range involves a floating point multiplication, O(N * M) [N and M = number of digits in the standard floating point binary format]. N will depend upon the precision of the RNG; M (the binary representation of 20 in a float, before it trails off into repeating 0s at the end) is probably such a small fraction of N that it could instead be represented as O(C * N) = O(N).

            The truncation itself would be another O(N) (N = number of digits in the floating point binary representation past the decimal point for this particular number).

            (This is the whole reason why floating point multiplication was avoided like the plague in time-sensitive operations before the floating point math co-processor came into common use.)

            Agreed, however, that when optimizing for speed you run a profiler, find the 20% of the code that's taking 80% of the time, and optimize that.
            I think some of that is solved by getSorenRandNum and the other random-number seed lists simplifying this sort of math; but certainly more than one calculation occurs.
            <Reverend> IRC is just multiplayer notepad.
            I like your SNOOPY POSTER! - While you Wait quote.



            • #66
              That code is quite inefficient (and nonrandom, if I'm reading it right)


              It is random. It's not meant to be terribly efficient; it was meant to be an example of how trivial it is to replace the "roll a bunch of dice until we get a hit" mechanism.



              • #67
                Which would only go through one loop of 20 instead of 2, and would get a truly random event (yours would tend to choose events immediately after excluded events).


                Would not. Try it on a few examples, you'll see that it works.



                • #68
                  Originally posted by joncnunn
                  Actually, this form of randomness is NOT O(1) even for a single roll


                  It is, the number of possible outcomes is fixed.

                  Behind the scenes, converting that to a range involves a floating point multiplication, O(N * M) [N and M = number of digits in the standard floating point binary format]. N will depend upon the precision of the RNG; M (the binary representation of 20 in a float, before it trails off into repeating 0s at the end) is probably such a small fraction of N that it could instead be represented as O(C * N) = O(N).


                  ...

                  Dude, you fundamentally misunderstand the concept of Big-O. It's only meaningful for functions that can grow arbitrarily large. The number of bits in the floating-point representation is fixed. Multiplication takes constant time on an x86 processor.

                  edit: and why would I use a floating-point RNG to generate a random integer? It's easy to write a decent rng that uses only integer ops (a minimal sketch is below).

                  edit2: also, even if I were using floating point multiplication somewhere in here, it wouldn't be O(N*M). Floating point multiplication has the same complexity as integer multiplication, and therefore the best known algorithm is something like O(N^1.5) or so (where N is the number of bits in the representation).

                  edit3: wiki says the fastest known algorithm is actually O(N^~1.46)

                  edit4: it occurred to me this morning that N should actually be the number of digits in the mantissa.
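
                  (For illustration, a minimal integer-only generator of the kind meant here: a 32-bit linear congruential generator with the Numerical Recipes constants, reduced to [0, 20) with a modulo. A sketch, not the game's actual RNG.)

                  Code:
                  #include <stdint.h>

                  static uint32_t state = 12345u;              /* seed */

                  static uint32_t lcg_next(void) {
                      state = 1664525u * state + 1013904223u;  /* integer multiply-add, wraps mod 2^32 */
                      return state;
                  }

                  static int roll(int n) {
                      return (int)(lcg_next() % (uint32_t)n);  /* slight modulo bias; negligible for 20 outcomes */
                  }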
                  Last edited by Kuciwalker; October 31, 2007, 08:58.



                  • #69
                    Since the amount of time to run the popGoodyHut() function (or whatever it's called) is bounded above by a constant, it's O(1), by the definition of O(f(n)).
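
                    For reference, that is just the definition at work (a sketch, with T(n) standing for the function's running time and C for its constant upper bound):

                    f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \text{ such that } f(n) \le c\, g(n) \text{ for all } n \ge n_0

                    T(n) \le C \text{ for every input} \;\Rightarrow\; T(n) \le C \cdot 1 \;\Rightarrow\; T(n) = O(1)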



                    • #70
                      Originally posted by Kuciwalker
                      Which would only go through one loop of 20 instead of 2, and would get a truly random event (yours would tend to choose events immediately after excluded events).


                      Would not. Try it on a few examples, you'll see that it works.
                      Yeah, I missed the r++ part (or its point). Still, I think it is less efficient than just having the for loop build a new outcomes list ... and rerolling in a loop is still more efficient than either, since you're not going to loop very many times.
                      <Reverend> IRC is just multiplayer notepad.
                      I like your SNOOPY POSTER! - While you Wait quote.



                      • #71
                        Elegant != fastest, and as long as the code is readable and does accurately what it is intended to do, fastest >>> elegant every time.


                        Don't lecture me on something like that; I finished a lab the other day on optimizing loops for cache performance, compiler optimization, etc., and I ballooned an 18-line function (for evaluating a bivariate polynomial) into ~160 lines.



                        • #72
                          Version B: Truly random, still pretty fast


                          Notice how many more operations per iteration you do, including a call to rng() in every iteration (that can be reduced, BTW). With unrolling, my two loops are probably as efficient as, or more efficient than, your large loop.
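
                          (For what it's worth, a sketch of what unrolling the first of those two loops could look like; a hypothetical 4x unroll, not anything from the actual code:)

                          Code:
                          // 4x unrolling of the exclusion-counting loop from the earlier snippet;
                          // 20 is a multiple of 4, so no leftover iterations need handling.
                          int n = 0;
                          for(int i = 0; i < 20; i += 4) {
                              n += excluded(outcomes[i])     ? 1 : 0;
                              n += excluded(outcomes[i + 1]) ? 1 : 0;
                              n += excluded(outcomes[i + 2]) ? 1 : 0;
                              n += excluded(outcomes[i + 3]) ? 1 : 0;
                          }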



                          • #73
                            Ironically, the fastest correct algorithm (so long as more than one hut possibility remains and rng() isn't prohibitively expensive) is to just reroll (with no cap) until you roll a valid number.
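
                            A minimal sketch of that, reusing the outcomes / excluded() / rng() names from the snippets above (and assuming, as stated, that at least one valid outcome remains):

                            Code:
                            // Reroll with no cap: keep rolling until a valid result comes up.
                            // With n of the 20 entries excluded, the expected number of rolls is 20/(20 - n).
                            var goodyHut() {
                                while(1) {
                                    int r = rng(20);
                                    if(!excluded(outcomes[r])) return outcomes[r];
                                }
                            }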



                            • #74
                              Well we've truly lost the elegance on this thread...



                              • #75
                                Originally posted by Kuciwalker
                                Version B: Truly random, still pretty fast


                                Notice how many more operations per iteration you do, including a call to rng() in every iteration (that can be reduced, BTW). With unrolling, my two loops are probably as efficient as, or more efficient than, your large loop.
                                I'm only calling rng() once in version B... ? Maybe I coded it badly, but it goes:
                                * Create new outcomes set with only valid outcomes
                                * Generate random number from 1 to max_valid_outcomes
                                * Take that outcome from valid outcomes subset.
                                <Reverend> IRC is just multiplayer notepad.
                                I like your SNOOPY POSTER! - While you Wait quote.
