The Altera Centauri collection has been brought up to date by Darsnan. It comprises every decent scenario he's been able to find anywhere on the web, going back over 20 years.
25 themes/skins/styles are now available to members. Check the select drop-down at the bottom-left of each page.
Call To Power 2 Cradle 3+ mod in progress: https://apolyton.net/forum/other-games/call-to-power-2/ctp2-creation/9437883-making-cradle-3-fully-compatible-with-the-apolyton-edition
The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots.
Remember: when nobody needs a job, then all of us are equal. And this is the promise of idealists through history. Unlimited consumption without effort. God bless the machines
Remember: when nobody needs a job, then all of us are equal. And this is the promise of idealists through history. Unlimited consumption without effort. God bless the machines
The unlimited part sounds a bit off, but how about a Basic Income?
No, I did not steal that from somebody on Something Awful.
Why should there be unlimited consumption? When the corporations replace employees with robots will the employees own the robots? I think not. The employees will get unemployment compensation for as long as legally mandated, then they can go move in with Mom and Dad. I imagine that people will continue in management and the professions, at least in a supervisory role. For the rest of you - Soylent Green is People!
"I say shoot'em all and let God sort it out in the end!
I'll just hack the banking system to give me an unlimited credit line.
“It is no use trying to 'see through' first principles. If you see through everything, then everything is transparent. But a wholly transparent world is an invisible world. To 'see through' all things is the same as not to see.”
what happens when the super intelligent robots decide they could get along much better without we sacks of skin and bones.
While this is a possibility, it assumes something rather naive: that AI will have motivations that translate cleanly into human intention. There's no particularly good reason why that would be the case. What is more worrisome to the AI enthusiasts is something called subgoal stomp, where short-sighted programming puts too much weight on a subgoal, leading an AI to "stomp" over more important goals.
Subgoal stomp happens in humans. For example, evolution made sugars taste really ****ing good, because they're a great source of calories and it's important to hoard them in case of shortages. But when calories are no longer scarce, evolution's short-sighted design leads to subgoal stomp, causing us to consume calories in excess, leading to diseases of affluence. So we end up killing ourselves, but not because of any malice or evil intentions.
With AI, the thought is not that AI will suddenly develop the urge to kill all humans, but that some calculus may cause an AI to lose sight of the value of humans in favor of another goal, even if that other goal is ultimately subservient to the goal of human welfare. So we all get turned into paperclips, as TMM's comic references.
Comment