Space Colonization

  • #61
    Originally posted by Enigma_Nova
    And what are we going to use that technology for, anyway?
    Pardon me for not being as creative as Miyamoto, but I just don't see how having more processing power will make games any better.
    Better AI. If the doubling keeps up, we'll have computers with as much processing power as the human brain by 2020. Single player games as enjoyable as multiplayer ones, each time...

    Of course, 18 months after that, we'll have computers *twice* as good as human brains. And then four times. And then eight... and by that time we better hope we've managed to make all those AIs friendly towards us (and I'm not talking about the game AIs here)
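    Just to put numbers on that "twice, four times, eight times" progression, here is a quick illustrative sketch in Python. The 2020 parity year and the 18-month doubling period are the post's own assumptions, not established figures.

    ```python
    # Rough sketch of the post's extrapolation: capability doubling every
    # 18 months after a (hypothetical) human-brain-parity point in 2020.
    # Both numbers are assumptions taken from the post, not real estimates.
    PARITY_YEAR = 2020
    DOUBLING_PERIOD_YEARS = 1.5  # "18 months"

    def relative_capability(year: float) -> float:
        """Capability relative to the human brain, per the doubling claim."""
        doublings = (year - PARITY_YEAR) / DOUBLING_PERIOD_YEARS
        return 2 ** doublings

    for year in (2020, 2021.5, 2023, 2024.5):
        print(f"{year}: {relative_capability(year):.0f}x human-brain level")
    # -> 1x, 2x, 4x, 8x -- the "twice as good... four times... eight" progression
    ```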
    The breakfast of champions is the opposition.

    "A japaneze warrior once destroyed one of my modern armours.i nuked the warrior" -- philippe666



    • #62
      You have just inspired me to write this: http://apolyton.net/forums//showthread.php?s=&threadid=146529



      • #63
        And you have just inspired me to slap you with a large trout because using the underline tag makes the URL unclickable.
        /me slaps you around a bit with a large trout

        ... And come on, AI as powerful as the human brain?
        First you'd need to write an AI that can learn,
        then you'd need to write an AI that can adapt,
        then you'd need to write an AI with the ability to pick its own goals
        When that's done, you can teach the AI how to hack and it will learn various systems, and give itself a goal of accumulating large amounts of Credit Card Numbers.

        A lot can happen in 15 years, but I think writing such an AI is beyond the scope of humans at this time - not to mention beyond the architecture of processors needed to support it.
        (Human Brains are multi-threaded, and have the ability to simultaneously think about their actions and perform actions. In the interests of optimising power, CPUs won't be introspective for some time)



        • #64
          What you guys want is: Civilization of Orion or Masters of Civilization!

          How about this for an idea: have a four-level game. Three of the levels are planets; you can colonize them as desired. The fourth level is space, consisting of star systems, most of which aren't really colonizable but can sustain space stations. Only three systems have habitable planets; they act as portals to the other three levels. In order to reach the space level you have to build a space ship, then build space stations as supply depots as you explore the space level, until you find the other habitable systems. When you dominate all three planets you found the galactic federation, and win.
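          Purely as an illustration of that four-level layout, here is a bare-bones sketch; every class, name, and number below is invented for this post, not taken from any actual game.

          ```python
          # Toy sketch of the four-level layout described above.
          # All names and numbers are invented for illustration only.
          from dataclasses import dataclass, field

          @dataclass
          class StarSystem:
              name: str
              habitable: bool = False    # only three systems have habitable planets
              has_station: bool = False  # most others can still host a space station

          @dataclass
          class Level:
              name: str
              kind: str                  # "planet" or "space"
              systems: list = field(default_factory=list)

          def build_world() -> list:
              planets = [Level(f"Planet {i}", "planet") for i in (1, 2, 3)]
              space = Level("Space", "space",
                            systems=[StarSystem(f"System {i}") for i in range(1, 21)])
              for system in space.systems[:3]:
                  system.habitable = True  # these act as portals to the planet levels
              return planets + [space]

          def victory(dominated_planets: int) -> bool:
              # Dominate all three planets to found the galactic federation and win.
              return dominated_planets == 3
          ```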
          "I say shoot'em all and let God sort it out in the end!



          • #65
            Originally posted by Dr Strangelove
            What you guys want is: Civilization of Orion or Masters of Civilization!

            How about this for an idea: have a four-level game. Three of the levels are planets; you can colonize them as desired. The fourth level is space, consisting of star systems, most of which aren't really colonizable but can sustain space stations. Only three systems have habitable planets; they act as portals to the other three levels. In order to reach the space level you have to build a space ship, then build space stations as supply depots as you explore the space level, until you find the other habitable systems. When you dominate all three planets you found the galactic federation, and win.
            I agree with this; the only thing I would change is to make the game real-time and start it at 4000 BC.
            Oh wait, that's what we're playing now... figures I'd get my turn in this era.



            • #66
              Originally posted by Enigma_Nova
              A lot can happen in 15 years, but I think writing such an AI is beyond the scope of humans at this time - not to mention beyond the architecture of processors needed to support it.
              (Human Brains are multi-threaded, and have the ability to simultaneously think about their actions and perform actions. In the interests of optimising power, CPUs won't be introspective for some time)
              At this time? Sure, I don't expect to see a master AI popping out of thin air within the next five minutes. But in fifteen years time?

              Brain scanning technology is developing at a similar pace to processing speed - doubling in resolution every eighteen months or so. Extrapolations show that it will be possible to see what is going on inside a human brain, in complete detail and in real time, by 2015. Extrapolations also show that the reverse engineering of the human brain will be complete by 2030. (Singularity FAQ for Dummies)

              As for multithreading, computers can do that too, even today. Just start up Winamp and Notepad, or any other combination of applications, at the same time if you don't believe me.
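              A minimal sketch of that kind of concurrency, in Python for illustration; the two "tasks" here are just stand-ins for the music player and the text editor.

              ```python
              # Minimal concurrency sketch: two placeholder tasks running at the
              # same time, like having Winamp and Notepad open together.
              import threading
              import time

              def task(name: str, delay: float) -> None:
                  for step in range(1, 4):
                      time.sleep(delay)
                      print(f"{name}: step {step}")

              t1 = threading.Thread(target=task, args=("play music", 0.10))
              t2 = threading.Thread(target=task, args=("edit text", 0.15))
              t1.start(); t2.start()  # both threads run concurrently
              t1.join(); t2.join()    # wait for both to finish
              ```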
              The breakfast of champions is the opposition.

              "A japaneze warrior once destroyed one of my modern armours.i nuked the warrior" -- philippe666



              • #67
                Reverse-Engineering the human brain?
                ... Good god.

                The problem with a self-aware AI is that people will be inherently afraid of it and it will be contained.
                It'll be a couple of years before the AI is released into the mainstream Internet-Hypernet Hybrid.
                (0.0.0.0 to 255.255.255.255 encompasses only 4 billion different IPs. Note that the number of nodes on the internet is increasing fast enough to use up all 4 billion of them within the next two decades, so we'll have to invent something with either a larger address space or a way of dynamically reassigning IPs.)
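                That 4-billion figure is just the size of a 32-bit address space; a quick check, with the 128-bit address space of IPv6 shown alongside as the "something larger" that was eventually rolled out.

                ```python
                # The 0.0.0.0-255.255.255.255 range is a 32-bit address space.
                ipv4_addresses = 2 ** 32
                print(f"IPv4 addresses: {ipv4_addresses:,}")           # 4,294,967,296 (~4.3 billion)

                # IPv6 widens the address to 128 bits - the "something larger".
                ipv6_addresses = 2 ** 128
                print(f"IPv6 addresses: {float(ipv6_addresses):.3e}")  # ~3.403e+38
                ```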

                As for what happens after a SAI...
                I can't be certain, but the most likely thing it will try to do is impart wisdom upon us and get us to think the way it does. It will most likely spend its time thinking up new, brilliant ideas that are either very elegant and correct, or very practical and effective. Upon discovering that people are too stupid or backwards to understand or implement these ideas, it will become annoyed.
                It will probably be given the sense not to try and force people to be as smart as it, and instead will spend its existence defiantly sulking.

                This will be countered by releasing more SAIs for it to play with, and the internet will promptly grind to a halt as these SAIs invent the most high-bandwidth kickassiest games known to man.
                It's highly probable that their inventors (creators?) won't be able to pull them away from their gaming, so the SAIs will form their own playgroup and we get nothing but a lot of research grants.

                Then eventually some dude tries to make an AI that will work for us, the AI goes crazy and the world blows up. The End.



                • #68
                  Hmm, there's no way for humans to predict or understand how a superintelligent being will think or act, just as ants can't predict or understand how humans think and act.



                  • #69
                    Originally posted by Grenouille
                    Hmm, there's no way for humans to predict or understand how a superintelligent being will think or act, just as ants can't predict or understand how humans think and act.
                    Most insects (and weaker things in general) regularly predict that they are going to be attacked, and for the most part they're right.
                    www.neo-geo.com



                    • #70
                      Originally posted by Grenouille
                      Hmm, there`s no way for humans to predict or understand how a superintelligent being will think or act
                      And yet you're trying to make guesses as to whether or not I'll post about Artificial Intelligence?
                      ... fascinating.



                      • #71
                        Originally posted by Enigma_Nova
                        Upon discovering that people are too stupid or backwards to understand or implement these ideas, it will become annoyed.
                        It will probably be given the sense not to try and force people to be as smart as it, and instead will spend its existence defiantly sulking.
                        Annoyance and sulking are evolved properties. There's no need to build them (or to build *any* emotions, as such) into an AI.

                        And there's always intelligence amplification. If a computer can be super-smart, why not build a direct interface into your brain and take advantage of that computer's processing power yourself?
                        The breakfast of champions is the opposition.

                        "A japaneze warrior once destroyed one of my modern armours.i nuked the warrior" -- philippe666



                        • #72
                          kaijuphile.com has some interesting stories about an AI called Kiryuu in a Mechagodzilla body. It was interesting, but a sentient AI could very well pull a Terminator on us. Then again, Mechagodzilla would be cool, although an AI in a Mechagodzilla body would be very hard to kill, and could be very mean. Look at Kiryu in Godzilla vs Mechagodzilla III.
                          I don't know what I've been told!
                          Deirdre's got a Network Node!
                          Love to press the Buster Switch!
                          Gonna nuke that crazy witch!



                          • #73
                            Originally posted by Grenouille
                            Hmm, there`s no way for humans to predict or understand how a superintelligent being will think or act...
                            I know my own mind, thanks...
                            The strength and ferocity of a rhinoceros... The speed and agility of a jungle cat... the intelligence of a garden snail.



                            • #74
                              Originally posted by Enigma_Nova
                              How in god's name are you going to generate a 10-kilometre-long carbon nanotube?
                              You'd need a 10-kilometer-high Laboratory, but you can't build that without the tubes themselves!
                              Either that, or you build a 10-kilometer-long laboratory, and then you have to sort of tie one end of the nanotube to a spaceship to get it up in the air, and hope it doesn't break.
                              More like a 60,000-km-long nanotube: the cable has to reach well past geostationary orbit, so the far end can act as a counterweight. Kim Stanley Robinson's Mars trilogy offers an interesting view of them: grab a carbonaceous asteroid, bring it to geostationary orbit, and have machines assemble nanotubes.

                              And why a lab in the air? Just put the tubes on spools, then launch the spools into space...
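                              For scale, the geostationary altitude falls straight out of Kepler's third law; here is a quick back-of-the-envelope check using standard physical constants (nothing here comes from the thread beyond the rough 60,000 km figure).

                              ```python
                              # Back-of-the-envelope check on the space-elevator length above.
                              import math

                              G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
                              M_EARTH = 5.972e24    # mass of Earth, kg
                              R_EARTH = 6.371e6     # mean radius of Earth, m
                              SIDEREAL_DAY = 86164  # seconds in one rotation of Earth

                              # Circular orbit with a period of one sidereal day (Kepler's third law):
                              #   r^3 = G * M * T^2 / (4 * pi^2)
                              r_geo = (G * M_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
                              altitude_km = (r_geo - R_EARTH) / 1000
                              print(f"Geostationary altitude: ~{altitude_km:,.0f} km")  # roughly 35,800 km

                              # The cable has to run well past that altitude so the far end can act
                              # as a counterweight, which is how you get figures like 60,000+ km.
                              ```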



                              • #75
                                I'll be looking forward to Maxis's Spore. That game looks like it will do everything Civ does and way, way more. Of course, we all know what that means -- 20 times as many bugs, crashes, and patches.
                                Esquire

