Polymorphic Artificial Intelligence


  • Polymorphic Artificial Intelligence

    I've been thinking about the concept recently.

    I think this will eventually be a direction for the artificial intelligence of future games, at least if we want the AI to be really smart. Basically, the concept is this: at first the artificial intelligence is really stupid - or uses predefined rules, like today. (Simple commands like walk and shoot.)

    Then it will gain experience, and adapt. For example, it will grasp the concept of investment - take an Energy Bank in SMAC. It's supposed to enhance your economy, but the AI doesn't feel like hurrying it, since spending credits would seem to defeat the purpose of gaining credits, right? Then it realises that after a few turns the facility will pay for itself: you spend a bit, you gain a bit more.

    Then it learns that placing it in a well-developed base is a good idea. It writes this to its rulebook. From now on, it will act on this fact. It will also realise that sacrificing a bit of growth to divert production to minerals for that Tree Farm will eventually lead to more nutrients from forests - more growth. Then it finds, for example, the right and wrong times to do this, and adjusts accordingly.
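
    To make the investment idea concrete, here is a minimal sketch in Python of the payback reasoning - the cost, bonus and income figures are invented for illustration, not actual SMAC values:

    ```python
    # How many turns until a hurried facility repays the credits spent on it?
    # The numbers used below are illustrative, not real SMAC values.

    def payback_turns(hurry_cost, extra_income_per_turn):
        """Turns needed before the facility has paid for itself."""
        if extra_income_per_turn <= 0:
            return float("inf")            # never pays for itself
        return hurry_cost / extra_income_per_turn

    # An Energy Bank hurried for 80 credits that adds 10 credits per turn
    # in a well-developed base repays itself in 8 turns...
    print(payback_turns(80, 10))   # 8.0
    # ...while in a poor base yielding 1 extra credit per turn it takes 80.
    print(payback_turns(80, 1))    # 80.0
    ```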

    After a while, you have a decent AI. Instead of programming hundreds of rules and still missing some tactics, it learns. Then it learns how to attack on the flanks. How to attack by surprise. How to do other things.

    Then there's another possibility: if you agree, the learned rules are uploaded to some central server and shared. You also receive rules for the AI from elsewhere, enhancing it further. Then it might get really personal - it recognises your playing style. "From how you move your mind worms, my algorithms say you must be Player So-and-So. You always preferred the Weather Paradigm over the HGP - that building of the HGP is a decoy!"

    Perhaps it will appear first in chess programs, e.g. learning how to beat certain openings, especially in Fischer random chess.

    Or, best of all, make it integrated. It will realise principles in one game and apply them to another. This could be helped by one standard protocol: one game uses the same AI protocol as another, just as we all use TCP/IP to surf the internet. Of course, this is another big step. The AI has to discern that the game you just started has different rules, and that (say it's an FPS) since you, as a tank, are immune to bullets, you should slaughter all non-anti-tank infantry, and either take out ASAP (or avoid) the anti-tank stuff, like mines and whatnot.

    Then you switch to an RTS-style game. Your heavy cavalry are virtually immune to standard infantry, save for some counter-unit. Same principle, different scenario. The trick is getting the AI to recognise it, and to know how to apply it. (It will not help if, as a tank, it goes searching for pikemen!) Discernment, again...
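
    A rough sketch of how that shared principle might be represented so it carries across games: describe units by abstract properties instead of game-specific names. Everything below - the classes, the units, the stats - is made up for illustration:

    ```python
    # One learned rule - "avoid the dedicated counter to my unit class" -
    # applied to both the FPS tank and the RTS heavy cavalry.
    from dataclasses import dataclass

    @dataclass
    class Unit:
        name: str
        unit_class: str      # e.g. "armoured"
        counters: set        # classes this unit is strong against

    def is_threat(me, other):
        """True if `other` is a dedicated counter to my class."""
        return me.unit_class in other.counters

    tank     = Unit("tank",           "armoured", {"infantry"})
    rifleman = Unit("rifleman",       "infantry", set())
    at_mine  = Unit("anti-tank mine", "static",   {"armoured"})
    cavalry  = Unit("heavy cavalry",  "armoured", {"infantry"})
    pikeman  = Unit("pikeman",        "infantry", {"armoured"})

    for attacker in (tank, cavalry):
        for target in (rifleman, at_mine, pikeman):
            verdict = "avoid" if is_threat(attacker, target) else "attack"
            print(f"{attacker.name} vs {target.name}: {verdict}")
    ```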

    I think it will be more of a program running in the background which does all this polymorphic computation, while the games just feed it information and ask it what to do (much as they poll your keyboard to see whether someone pressed A and the character should therefore move left).
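
    That background-process idea might look something like this: a game sends the current state to the AI service and gets an action back. The interface here (host, port, message format) is entirely made up:

    ```python
    # The game feeds the AI process observations and asks what to do,
    # much as it polls the keyboard for input.
    import json
    import socket

    def ask_ai(state, host="127.0.0.1", port=9999):
        """Send the current game state to the AI process, return its chosen action."""
        with socket.create_connection((host, port)) as conn:
            conn.sendall(json.dumps(state).encode() + b"\n")
            reply = conn.makefile().readline()
        return json.loads(reply)

    # A game would call something like:
    #   action = ask_ai({"game": "smac", "unit": "mindworm-7", "visible": [...]})
    #   # e.g. {"move": "north"} or {"build": "energy bank"}
    ```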

    Of course, making such an effort open source would have distinct advantages over keeping it proprietary... I'm not sure how many people have had this idea, but if it does happen, it will probably appear under the GPL.

    Well, all this is speculation of course. But perhaps we'll see it in ten or fifteen years or so. Actually, I had an idea to try it now, because I was writing a Weiqi-style game (but with hexagons and four players, complete with diplomacy - if enemy pieces, even of different colours, surround all six sides of one piece, that piece is gone; of course, one might declare himself temporarily friendly and therefore not cut off "life support") and I thought that if I was going to write an AI from scratch, I might as well try now.
    Arise ye starvelings from your slumbers; arise ye prisoners of want
    The reason for revolt now thunders; and at last ends the age of "can't"
    Away with all your superstitions -servile masses, arise, arise!
    We'll change forthwith the old conditions And spurn the dust to win the prize

  • #2
    Polymorphic Software only gives Heavy Artillery. I doubt it will be researched anytime soon.
    Contraria sunt Complementa. -- Niels Bohr
    Mods: SMAniaC (SMAC) & Planetfall (Civ4)


    • #3
      Interesting stuff

      I just tried to find a link you might like but haven't been able to so far (I'll keep looking and post it when I find it).
      But have you heard of the experiment in artificial intelligence that was being run at Cambridge University? It's been going for years, but the basic outline was that they had a maze with some obstacles in it. They put some robotic 'mice' in it, and these robots ran a simple program: move forward until you hit an obstacle, then remember where the obstacle was and retry.
      That's my super-simplified version of it.
      After a while they ended up with a program that was rewriting and improving itself, and they were going through thousands of these generational improvements every day. There was sufficient worry about the program 'escaping' that it had to be kept off-line on an isolated network!
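
      A toy version of that "remember the obstacle and retry" loop might look like this - my own simplification in Python, not the actual experiment:

      ```python
      # Walk greedily toward the goal; remember cells that lead nowhere and retry.
      def solve(maze, start, goal):
          known_blocked = set()
          while True:
              pos, path = start, [start]
              while pos != goal:
                  x, y = pos
                  options = [(x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1)]
                  options = [p for p in options
                             if p in maze and p not in known_blocked and p not in path]
                  if not options:
                      known_blocked.add(pos)   # remember: this cell is a dead end
                      break                    # retry from the start
                  pos = min(options, key=lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
                  path.append(pos)
              else:
                  return path                  # reached the goal

      # A 3x3 grid with one blocked cell in the middle.
      maze = {(x, y) for x in range(3) for y in range(3)} - {(1, 1)}
      print(solve(maze, (0, 0), (2, 2)))
      ```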

      I haven't heard what progress they have made in recent years, but it was possible their experiment could evolve a self-improving computer, one that learnt and adjusted its strategy.
      If I find the link I'll post it, as it sounds a bit like what you're talking about.
      'The very basis of the liberal idea – the belief of individual freedom is what causes the chaos' - William Kristol, son of the founder of neo-conservatism, talking about neo-con ideology and its agenda for you. info here. prove me wrong.

      Bush's Republican = Neo-con for all intents and purposes. be afraid.


      • #4
        I'm afraid what you say is far too vague to be helpful.
        AI learning works and has been working for a long time. Things like neural nets can learn things, automatically or under supervision, for instance.
        The problem with games is you have to recognize things. For instance:
        Then it learns that placing it in a well-developed base is a good idea. It writes this to its rulebook. From now on, it will act on this fact.
        How does it recognize what a base is? What a developed base is? Is it really a good idea to remember it? Sometimes you have to forget everything in order to get a better strategy.
        The biggest problem with learning things is that you have to avoid learning by heart. For instance, if you learn that building knights is good and someone mods the knight unit, then you're left with a bad rule, and the system will have to re-learn everything. If, on the other hand, instead of learning 'knights are good' you learn 'units with a move of 2 and a big attack are good', the rule survives the mod.
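
        In code, the difference might look something like this (the unit stats are invented):

        ```python
        # A rule keyed to a unit's *name* breaks when the unit is modded;
        # a rule keyed to its *attributes* survives the change.
        units_after_mod = {
            "knight":   {"move": 1, "attack": 1},   # a mod nerfed knights
            "crusader": {"move": 2, "attack": 5},   # ...and added a strong new unit
        }

        def good_by_name(name):          # rule learned "by heart"
            return name == "knight"

        def good_by_features(stats):     # rule learned on attributes
            return stats["move"] >= 2 and stats["attack"] >= 4

        for name, stats in units_after_mod.items():
            print(name, good_by_name(name), good_by_features(stats))
        # The name rule still endorses the nerfed knight and ignores the crusader;
        # the feature rule drops the knight and picks up the crusader.
        ```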
        In order to learn, you need to react to perceived situations with choices. This means you have to see things. If you can't recognize it, you can't play against it. This will not change with an adaptive learning algorithm or anything like that. The reactions themselves are usually readily available.
        But then you have different levels for the ai - strategic vs. tactical, for instance - and you'd have to teach both levels, or choose a number of levels. If you pick a single level, that's a choice. If you pick two, that's another one. If you let the ai learn how many levels are good, that's a possibility, but it's unlikely you'll ever have the time to process everything and think of all inputs to yield a good ai.
        I think a goal-oriented ai is much better than either rules or some kind of learning algorithm, and is the strong point in a good ai. I can't see how saying 'let it learn' makes a good ai if you don't tell it correctly what it must learn ('how to win' being far too vague in any game more complex than chess - and even in chess there are lots of evaluation functions for a given position).
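
        A crude illustration of the goal-oriented idea: score candidate actions by how much they advance the current goal, rather than walking a flat rule list. Goals, actions and scores below are all invented:

        ```python
        def choose(goal, actions):
            """Pick the action whose estimated contribution to `goal` is highest."""
            return max(actions, key=lambda a: actions[a].get(goal, 0.0))

        actions = {
            "build energy bank": {"grow economy": 0.8, "defend capital": 0.0},
            "build garrison":    {"grow economy": 0.0, "defend capital": 0.9},
            "explore":           {"grow economy": 0.2, "defend capital": 0.1},
        }
        print(choose("grow economy", actions))    # build energy bank
        print(choose("defend capital", actions))  # build garrison
        ```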
        Clash of Civilization team member
        (a civ-like game whose goal is low micromanagement and good AI)
        web site http://clash.apolyton.net/frame/index.shtml and forum here on apolyton)


        • #5
          In programming terms, a polymorphic design is code that is capable of working with more than one type: e.g. the same code can be used for numbers and for characters, because the language supports polymorphic typing. Otherwise you would need two separate pieces of code.
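
          For instance, in Python one comparison routine serves both numbers and strings - a small example of what "the same code used for different types" means:

          ```python
          def largest(items):
              """Works for any type that supports comparison - no duplicate code."""
              result = items[0]
              for item in items[1:]:
                  if item > result:
                      result = item
              return result

          print(largest([3, 1, 4, 1, 5]))           # 5
          print(largest(["apple", "pear", "fig"]))  # pear
          ```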

          Well, it's a cool concept, but what you are talking about is probably machine learning, which is even more interesting. There is current research on this concept related to logical databases. The database stores information "learned" by the computer, and there is a logic programming language which processes it, queries it and creates new entries - it gets even more complicated than that... They can store new code generated by the program. These languages are not very nice to work with, but they are really powerful when used in a polymorphic way against large databases.


          Well, it is then possible for the program to write new code, and even rewrite itself... that's a scary thing, like in the Terminator movies. I'm not really into this, so I don't know about the most recent breakthroughs. The only thing I know for sure is that it has very little to do with heavy artillery.
          My words are backed with hard coconuts.


          • #6
            Sometimes you have to forget everything in order to get a better strategy.


            Exactly. Just as sometimes even the greatest mathematicians set aside the basic operators and question the very nature of number - which is how they logically arrived at the existence of imaginary numbers.

            Well anyway, it is vague, because all learning has to be general.

            I think a goal-oriented ai is much better than either rules or some kind of learning algorithm, and is the strong point in a good ai. I can't see how saying 'let it learn' makes a good ai if you don't tell it correctly what it must learn


            The AI struggles at first, and we guide it along initially. You have to teach it how to teach itself. After that, when it is more capable, we don't have to.

            Just as we tell our babies that a crayon is red, but we shouldn't guide their hands in whatever they draw. We can give them an example of something one can draw, but telling them what to draw and guiding them by hand is a no-no. Telling an AI that a strong attack with movement 2 is good is superficial - you have to move on to the basic principles of warfare.

            It will eventually store these things and evaluate.

            It will choose. Objective: knock out the enemy. Choice: slow but high-defense, medium-attack hoplite infantry? Or high-speed, high-attack, low-defense lightning cavalry?

            When does each apply? Why is the infantry sometimes better? It will learn very superficial things at first, then it will start to link them. It will adapt to take advantage of bonuses. It will remember to think twice before attacking anything on rocky squares, or consider choosing another target (a bit superficial); then later, it will evaluate whether it should hurry something or not. Along those lines.
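
            Something like this toy evaluation, where the situation rather than a fixed rule picks the unit (the stats and weights are invented):

            ```python
            units = {
                "hoplite":           {"attack": 5, "defense": 8, "speed": 1},
                "lightning cavalry": {"attack": 9, "defense": 2, "speed": 3},
            }

            def score(u, need_speed, enemy_on_rocky):
                # rocky squares favour the defender, so the effective attack is halved
                terrain = 0.5 if enemy_on_rocky else 1.0
                return u["attack"] * terrain + u["speed"] * need_speed + u["defense"] * 0.6

            def pick(need_speed, enemy_on_rocky):
                return max(units, key=lambda n: score(units[n], need_speed, enemy_on_rocky))

            print(pick(need_speed=2.0, enemy_on_rocky=False))  # distant target: lightning cavalry
            print(pick(need_speed=0.0, enemy_on_rocky=True))   # dug-in target next door: hoplite
            ```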

            but it's unlikely you'll ever have the time to process everything and think of all inputs to yield a good ai.


            I don't process everything. The idea is an AI that adapts, so the programmers don't have to write countless superficial rules - move there in this position, move this in that position. Rather, it will evaluate. And if the AI made a mistake that you only saw too late, once it started to win ("agh! I should have attacked the unit in that square, not that one"), and you reload to correct it and gain the upper hand, guess what it will do next time.

            How does it recognize what a base is? What a developed base is? Is it really a good idea to remember it?


            It will - it will build up to it. It will recognise it not superficially, but in principle. It will realise the mathematical principle that in multiplicative situations, like facilities that multiply labs output, the larger the underlying value, the better. It will apply this to all games. It will eventually notice that every time it tried to place the facility in a bad base, it failed, and when it placed it in a good base, it did better.

            It writes this in its rulebook.

            Then it realises that when it placed a Research Hospital in a base poor in energy but rich in minerals, it was built quickly but didn't pay off so well, whereas in a mineral-poor but energy-rich base, the Research Hospital yielded a better game.

            It learns: place it in high-energy bases. It learns: hurrying is better in this situation. The next step is linking these. Of course, it will not know what energy is; it just knows that if the value is high, it succeeds. So the next step is for the code to understand why. Then it works out how to get high energy.
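
            The rulebook loop, very roughly, might be nothing more than this (the observed games here are made up):

            ```python
            from collections import defaultdict

            history = []   # (facility_was_in_high_energy_base, game_went_well)

            def record(high_energy, success):
                history.append((high_energy, success))

            def learned_rule():
                outcomes = defaultdict(list)
                for high_energy, success in history:
                    outcomes[high_energy].append(success)
                rate = {k: sum(v) / len(v) for k, v in outcomes.items()}
                better = max(rate, key=rate.get)
                return (f"place it in {'high' if better else 'low'}-energy bases "
                        f"(observed success rate {rate[better]:.0%})")

            # A handful of observed games:
            for high, ok in [(True, True), (True, True), (False, False), (True, False), (False, False)]:
                record(high, ok)
            print(learned_rule())   # place it in high-energy bases (observed success rate 67%)
            ```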

            Discernment isn't an easy thing - the discernment to recognise is not an easy thing to program. But once it's laid down, there are no barriers to advancement. Then, of course, playing single-player games starts to become meaningful, because you're breeding a little entity with the ability to one day become sentient.
            Arise ye starvelings from your slumbers; arise ye prisoners of want
            The reason for revolt now thunders; and at last ends the age of "can't"
            Away with all your superstitions -servile masses, arise, arise!
            We'll change forthwith the old conditions And spurn the dust to win the prize


            • #7
              This reminds me of a game I bought when I was in 5th grade (for reference, I'm 22 now). In this game, a simple grid of either 3x3, 4x4 or 5x5 was displayed, and the player and computer would take turns placing blue and red spheres. The trick here, though, was that the player decided when he had won or when the computer had won. Based on what the player said was a winning position, the computer attempted to formulate the rules of the game.
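
              That game was presumably doing something like hypothesis elimination - keep a pool of candidate win conditions and discard any that disagree with the positions the player declares as wins. The candidate rules below are invented:

              ```python
              candidates = {
                  "three in a row": lambda cells: any(
                      all((x + i, y) in cells for i in range(3)) or
                      all((x, y + i) in cells for i in range(3))
                      for (x, y) in cells),
                  "owns a corner":  lambda cells: bool(cells & {(0, 0), (0, 2), (2, 0), (2, 2)}),
                  "four pieces":    lambda cells: len(cells) >= 4,
              }

              def observe(blue_cells, player_says_win):
                  """Drop every candidate rule that contradicts the player's verdict."""
                  for name, rule in list(candidates.items()):
                      if rule(blue_cells) != player_says_win:
                          del candidates[name]

              observe({(0, 0), (1, 0), (2, 0)}, player_says_win=True)           # a row, declared a win
              observe({(0, 0), (1, 1), (2, 2), (0, 2)}, player_says_win=False)  # corners but no row, not a win
              print(list(candidates))   # ['three in a row']
              ```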

              While interesting in this context, there is a problem with having this sort of intelligence in a real computer game. Computers can calculate the optimal moves with much more accuracy, while humans are better at creating new strategies. Once the computer has learned the optimal moveset, it will always be unbeatable. Remember that the trick with building game AI nowadays is to make it convincing in its miscalculations; having an AI that could destroy the human player (without cheating) isn't really that difficult.


              • #8
                place it in high-energy bases.
                Again, I do not see how you can tell it what a base is or what energy is. It must learn it. There are two ways to learn: by oneself, or through a teacher. Getting an ai to behave properly without a teacher seems very hard to me. Having a teacher is what most games do right now, except most games don't evolve after being finished.
                Just for comparison, it takes maybe 8 years for a human being to be able to learn to play civ. I doubt an 8-year-old would play civ, but who knows. Do you think you can get an ai to learn the same things faster?
                How would you get the ai to decide that 'knocking out the enemy' is a possible choice? If it is something it should learn by itself, not only will it take a long time to find out, but it will probably be unable to explain its reasoning and strategy to you, which, in terms of debugging and convergence of the program, may prove hard and maybe impossible to overcome.
                Clash of Civilization team member
                (a civ-like game whose goal is low micromanagement and good AI)
                web site http://clash.apolyton.net/frame/index.shtml and forum here on apolyton)


                • #9
                  Wrong forum?


                  • #10
                    AI isn't going to be anything more than if-then statements for a long time.


                    • #11
                      Originally posted by Whoha
                      AI isn't going to be anything more than if-then statements for a long time.
                      It's already far more.

                      /me is taking an AI class this year


                      • #12
                        AI isn't going to be anything more than if-then statements for a long time.


                        Well, yes, but now the ifs are very large, because the variables are now adaptable.

                        Again, I do not see how you can tell it what a base is or energy is. It must learn it.


                        Yes, yes, it must learn. Obviously.

                        But it will observe the variables of the game. It will not initially know what a base is. Then it will observe that, in its memory, games where the 'energy' variable of a 'base' is high tend to be won...
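
                        In other words, something like this: the program doesn't know what "energy" means, it just notices which logged variable best separates the games it won from the games it lost (the game logs below are invented):

                        ```python
                        games = [
                            {"base.energy": 12, "base.minerals": 3, "won": True},
                            {"base.energy": 10, "base.minerals": 9, "won": True},
                            {"base.energy":  2, "base.minerals": 8, "won": False},
                            {"base.energy":  3, "base.minerals": 2, "won": False},
                        ]

                        def separation(var):
                            """Gap between the variable's average in won and lost games."""
                            wins   = [g[var] for g in games if g["won"]]
                            losses = [g[var] for g in games if not g["won"]]
                            return abs(sum(wins) / len(wins) - sum(losses) / len(losses))

                        variables = [v for v in games[0] if v != "won"]
                        print(max(variables, key=separation))   # base.energy
                        ```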

                        Do you think you can get an ai learn the same things faster?


                        That's only because eight-year-olds have to learn science, deal with peer pressure, sleep, and goodness knows what else.
                        Arise ye starvelings from your slumbers; arise ye prisoners of want
                        The reason for revolt now thunders; and at last ends the age of "can't"
                        Away with all your superstitions -servile masses, arise, arise!
                        We'll change forthwith the old conditions And spurn the dust to win the prize


                        • #13
                          The most notable game with an AI that 'learns', or at least something close to it, is SuperPower.

                          The game's AI is rather random and erratic.


                          • #14
                            I've been working at a firm which developed neural network software and solutions. The first thing we did was provide preprocessing so the NN could cope with the quantity of data. Presenting the data in the right way helps the program a lot, and that's true of all machine learning algorithms. For instance, if there is a linear relation between variables, it's easy to see. If the relation is logarithmic, you'd better also feed it the logarithms of the variables so that it is able to see the relation. If you're working on, e.g., image processing, there are lots of invariants in the picture which you can preprocess away. For instance, in movement detection you will start by subtracting the two images rather than giving the program the two images and letting it learn to do the subtraction. It would learn it eventually, but it would probably be doing the subtraction pixel by pixel and get one or two pixels wrong. A program will always behave better if you feed it some intelligence to start with. Thinking a program can learn everything from scratch is very optimistic in my opinion.
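
                            Two tiny illustrations of that preprocessing point, on synthetic data: hand the learner the transformed quantity instead of hoping it rediscovers the transformation.

                            ```python
                            import math

                            # A logarithmic relation looks curved to a linear model...
                            xs = [1, 10, 100, 1000]
                            ys = [math.log10(x) for x in xs]
                            # ...but becomes trivially linear once log(x) is also fed in as a feature.
                            features = [(x, math.log10(x)) for x in xs]
                            print(list(zip(features, ys)))

                            # Movement detection: give the learner the difference of two frames
                            # rather than both raw frames.
                            frame_a = [[0, 0, 1], [0, 0, 0]]
                            frame_b = [[0, 0, 0], [0, 1, 0]]
                            diff = [[b - a for a, b in zip(ra, rb)] for ra, rb in zip(frame_a, frame_b)]
                            print(diff)   # non-zero entries show where something moved
                            ```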
                            And an 8-year-old doesn't have to learn just physics. You start as a baby and must learn to see and recognize patterns like faces (which are somewhat hard-wired). Face recognition is not really complete until around 3 years (show a bald person to a 2-year-old and ask him to point to the top of the head: the child will be surprised by the lack of a hairline and have trouble processing the information). This input processing - learning to see, learning to hear and make out words - takes years. Then learning to speak even simple words, or to control motions, takes months. And humans are hard-wired to learn that. A program which has to learn what the information stored in a 100 KB (civ2) or 1 MB (civ3) file means will need a lot of time if it's not given guidance.
                            I'm looking at this from a pragmatic point of view, that is: could I use some machine learning algorithms in my own game? I will let the ai decide by simulating fights to see whether a fight turns out well or not. But such a fight absolutely has to be simulated rather than learnt, because it takes a lot of variables into account and it's unlikely the same situation will happen again unless you train the program millions of times on the same set of data. And then you must make sure it didn't learn something which happens to be right in this situation but is generally silly.
                            For instance, it can decide that it needs an attack value of 100 + opponent value - (opponent value squared)/100. This may work very well until the opponent value suddenly exceeds 100, which the ai has never seen before. It will then start behaving like a moron and will have to relearn everything. With a program where you let everything run and learn by itself, like neural nets (which is the machine learning technique I know best), it's very hard to know what the program is doing and to help it learn more.
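
                            The extrapolation trap is easy to demonstrate with the quoted formula itself:

                            ```python
                            def needed_attack(opponent_value):
                                return 100 + opponent_value - opponent_value ** 2 / 100

                            for v in (20, 60, 100, 150, 200):
                                print(v, needed_attack(v))
                            # 20 -> 116, 60 -> 124, 100 -> 100  (plausible inside the training range)
                            # 150 -> 25, 200 -> -100            (nonsense once the range is exceeded)
                            ```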
                            The search space in civ is virtually infinite (you can have an unlimited number of units in a square, for instance). It's so huge that the program cannot see all situations, and hoping that the program will evolve a strategy ex nihilo, rather than simple tactical rules, looks like an act of faith to me.
                            Clash of Civilization team member
                            (a civ-like game whose goal is low micromanagement and good AI)
                            web site http://clash.apolyton.net/frame/index.shtml and forum here on apolyton)


                            • #15
                              You are talking about heuristic algorithms.

                              Interestingly, some very old games used heuristics instead of hard-coded AI. The game Castles for old Atari computers comes to mind. It's quite a simple strategy game; the interesting part is that there were a few computer players, each trained to be better at one aspect.

                              How does it recognize what a base is? What a developed base is?
                              Quantifying situations is a tough job, especially quantifying strategic situations. That's why computer players are uniformly bad at TBS games.

                              Is it really a good idea to remember it? Sometimes you have to forget everything in order to get a better strategy.
                              That will depend on how good the AI is, won't it?

                              The biggest problem with learning things is you have to avoid learning by heart. For instance, if you learn that building knights is good and someone mods the knight unit, then you're left with a bad rule, and the system will have to re-learn everything.
                              Yes and no. Modding a game is changing the rules. So if the game can be modded, the programmer must know this and account for it. Though here again, heuristics will do better than hard-coded routines.
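
                              For instance, a heuristic that reads the (possibly modded) unit table instead of hard-coding "build knights" keeps working after the rules change. The unit table and weights here are invented:

                              ```python
                              rules = {
                                  "knight":   {"attack": 4, "move": 2, "cost": 40},
                                  "pikeman":  {"attack": 2, "move": 1, "cost": 20},
                                  "catapult": {"attack": 6, "move": 1, "cost": 55},
                              }

                              def best_attacker(rules):
                                  """Heuristic: attack per cost, weighted a little by mobility."""
                                  return max(rules, key=lambda u: rules[u]["attack"] * (1 + 0.2 * rules[u]["move"])
                                                                  / rules[u]["cost"])

                              print(best_attacker(rules))       # knight
                              rules["knight"]["attack"] = 1     # a mod nerfs knights...
                              print(best_attacker(rules))       # ...and the heuristic moves on (catapult)
                              ```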
                              (\__/) 07/07/1937 - Never forget
                              (='.'=) "Claims demand evidence; extraordinary claims demand extraordinary evidence." -- Carl Sagan
                              (")_(") "Starting the fire from within."
