It's not that hard a question, at least if the goal is significant gains over the extremes. We have many, many examples of how someone who has never had any responsibility and/or incentives tends toward socially and self-destructive behavior. It's equally obvious that trapping someone (physically, or via debt and the threat of poverty) in a job they hate for most of their life harms their enjoyment of life, can lead to anti-social and self-destructive behavior, and at best only serves to increase productivity. Since productivity is no longer an issue, we can decrease the hours, improve working conditions, drop the jobs people hate, and, as long as we avoid eliminating all responsibility and incentives, be sure to make at least some gains.
The problem with Industry 4.0 is not primarily financial
Originally posted by Proteus_MST View Post
Actually, Star Trek contradicts itself with regard to the non-existence of money.
I remember that in Star Trek II or III McCoy bought glasses from an antique shop for James T. Kirk.
If there is no money, how can he buy something ... and how can something like a "shop" exist?
I also *seem* to remember other inconsistencies. Plus, they had interstellar space travel, so they could always spend significant time doing science, colonizing/exploring/contacting, or having little space wars vs. the Klingons, Romulans, etc. Without that, we're pretty much screwed.
Blah
What do you think will happen when they come up with real-life holodecks?
I am not delusional! Now if you'll excuse me, I'm gonna go dance with the purple wombat who's playing show-tunes in my coffee cup!
Rules are like eggs. They're fun when thrown out the window!
Difference is irrelevant when dosage is higher than recommended!
The problem with industry 4.0 is that it's Turtles All The Way Down. Everything will be built and maintained by robots, which will be built and maintained by other robots, which will be built and maintained by still other robots . . . somebody's going to need to supervise, repair, design, coordinate, etc. all these damn machines, to say nothing of acquiring all the resources required to build and run them. In many cases it will remain more practical to have humans do the work.
Originally posted by Elok View Post
The problem with industry 4.0 is that it's Turtles All The Way Down. Everything will be built and maintained by robots, which will be built and maintained by other robots, which will be built and maintained by still other robots . . . somebody's going to need to supervise, repair, design, coordinate, etc. all these damn machines, to say nothing of acquiring all the resources required to build and run them. In many cases it will remain more practical to have humans do the work.
I remember the autonomous mobile swords in "Screamers", which were self-replicating, eventually evolved, and in the end even turned against the side of the war that initially created the first version of them.
Tamsin (Lost Girl): "I am the Harbinger of Death. I arrive on winds of blessed air. Air that you no longer deserve."
Tamsin (Lost Girl): "He has fallen in battle and I must take him to the Einherjar in Valhalla"
Eh, I'm more worried about the problems already on the horizon than the purely speculative. The twenty-first century is going to see some very impressive disruption just from climate change combined with population growth. Combine that with escalating dysfunction in most of the world's nuclear-armed powers, and . . .
https://backchannel.com/the-myth-of-...i-59282b686c62
On a related note. Sums up, clarifies, and adds to a lot of the doubts I've long had about the singularity rapture.
Originally posted by Elok View Post
https://backchannel.com/the-myth-of-...i-59282b686c62
On a related note. Sums up, clarifies, and adds to a lot of the doubts I've long had about the singularity rapture.
1: You don't need an AI that is good at philosophy, morality, raising kids, cooking food, and arguing on the internet to be superhuman at coordinating millions of armed drones. In fact, an AI with a narrow field of focus is more likely to be dangerous than one which at least thinks about morality and ethics. Militaries and terrorist groups are sure to hamstring their AIs in that regard; to them amorality is a "feature", not a bug.
2: This is a matter of will, not possibility. If we want to do it, whether it's in 50 years or 500, as long as we get that time (operating at a modern-or-better tech level) as a species ... we will do it.
3: Whether or not it can be done in silicon (it can, though it may or may not be feasible) is irrelevant, as biological and/or quantum computing is where we are headed. (Biological computing obviously has the potential; we've proven that ourselves.)
4: "You can't make an infinitely powerful AI, so you can't make an AI more powerful than humans." ... Abort, Retry, Fail?
5: The more common worry among the names he drops is that the AI itself becomes the main problem. But he's operating under the misconception that because humans can't do something, nothing else can do it. Computers are already immensely helpful at solving many problems ... because they can do some things immensely better than any human can. There are really only a few problems left ... other than the stuff we inflict on ourselves intentionally. An AI doesn't have to simulate the universe to find a cure for a disease; it just has to simulate how the disease interacts with the human body and find the best ways to break those interactions (toy sketch below).
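To make the "simulate the interaction, not the universe" point concrete, here is a minimal sketch. Everything in it is a made-up assumption for illustration: the pathogen/immune dynamics, the drug_effect parameter, and the toxicity penalty are toy stand-ins rather than any real biological or pharmacological model. It only shows the pattern: run a cheap simulation of the relevant interaction, then search over candidate interventions for the one that best breaks it.

```python
# Toy illustration only: invented pathogen/immune dynamics and a hypothetical
# drug_effect parameter -- not a real model of any disease or drug.

def simulate_infection(drug_effect, days=30, dt=0.1):
    """Crude pathogen-vs-immune model; returns the peak pathogen load reached."""
    pathogen, immune = 1.0, 0.1
    peak = pathogen
    for _ in range(int(days / dt)):
        growth = 0.8 * pathogen * (1 - drug_effect)  # hypothetical drug slows replication
        clearance = 0.5 * pathogen * immune          # immune response clears pathogen
        pathogen = max(pathogen + (growth - clearance) * dt, 0.0)
        immune += 0.1 * pathogen * dt                # immune response ramps up with load
        peak = max(peak, pathogen)
    return peak


def find_best_dose(candidate_doses):
    """Grid-search candidate doses for the lowest peak load plus a toxicity penalty."""
    def score(dose):
        toxicity_penalty = 2.0 * dose                # pretend higher doses are harmful
        return simulate_infection(drug_effect=dose) + toxicity_penalty
    return min(candidate_doses, key=score)


if __name__ == "__main__":
    doses = [i / 20 for i in range(21)]              # 0.00, 0.05, ..., 1.00
    best = find_best_dose(doses)
    print(f"best simulated dose: {best:.2f} "
          f"(peak pathogen load: {simulate_infection(best):.2f})")
```

The point of the toy is that the search only needs a model of the disease-body interaction, not of the wider world, to rank interventions.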
----------------------------
"First, simulations and models can only be faster than their subjects because they leave something out."
Just to quickly drive the final nail into the coffin of this misconception ... simulate a DOS game on a modern computer. A simulation can be faster than the original if you give it more processing power and/or more efficient algorithms. It doesn't have to be lossy.
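As a minimal sketch of that claim, the toy interpreter below assumes a made-up 1 MHz machine where every instruction costs a fixed 4 clock cycles (both figures are invented for the example). It reproduces the machine's behaviour exactly, cycle for cycle, yet on a reasonably modern host it usually finishes well before the original hardware would have, the same way a DOS emulator can outrun the original PC. Nothing is approximated away; the speedup comes purely from the host having more processing power.

```python
# Toy illustration: a bit-exact interpreter for an invented 1 MHz accumulator
# machine with an assumed fixed cost of 4 clock cycles per instruction.

import time

ORIGINAL_CLOCK_HZ = 1_000_000   # pretend the "real" machine runs at 1 MHz
CYCLES_PER_INSTRUCTION = 4      # assumed fixed cost per instruction

# Tiny program as (opcode, operand) pairs: add 3 to the accumulator 200,000 times.
PROGRAM = [
    ("LOAD", 200_000),  # counter = 200,000
    ("ADD", 3),         # acc = (acc + 3) mod 2^16
    ("DEC", None),      # counter -= 1
    ("JNZ", 1),         # jump back to the ADD while counter != 0
    ("HALT", None),
]


def run(program):
    """Execute the program exactly, counting every simulated cycle."""
    acc, counter, pc, cycles = 0, 0, 0, 0
    while True:
        op, arg = program[pc]
        cycles += CYCLES_PER_INSTRUCTION
        if op == "LOAD":
            counter = arg
        elif op == "ADD":
            acc = (acc + arg) & 0xFFFF  # exact 16-bit wraparound, nothing left out
        elif op == "DEC":
            counter -= 1
        elif op == "JNZ":
            if counter != 0:
                pc = arg
                continue
        elif op == "HALT":
            return acc, cycles
        pc += 1


start = time.perf_counter()
acc, cycles = run(PROGRAM)
host_seconds = time.perf_counter() - start
original_seconds = cycles / ORIGINAL_CLOCK_HZ  # time the original hardware would need

print(f"result={acc}, simulated cycles={cycles}")
print(f"original hardware: {original_seconds:.2f}s, this host: {host_seconds:.2f}s")
```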
As for the "we are in a simulation" speculation ... that's all it is. We can't disprove it, and of course there's no evidence to suggest it is the case. Trying to disprove it presupposes we know something about the "real" reality, which of course we know nothing about and can't even know whether it exists.
But it certainly is possible to create a "perfect" (meaning accepted by the subject as reality) first-person simulation using the laws of physics of this reality. The effects of some chemicals, injuries, and mental illnesses prove it, as does our understanding of the senses and how the mind interprets them.
We should ignore Matrix theories because they're onanistic, impossible to prove or disprove, and couldn't be addressed even if proven. It's like a religion where you can't even pray--crummy techno-deism. Who has time to think about such nonsense?
For your previous post, note that I'm headed out of town tomorrow and probably won't be able to respond further. But from the top:
1 and 2 don't appear to be addressing what he said in the respective points. I don't think he mentions morality or childrearing anywhere in the article, and it's not clear what 2 is responding to.
3: We have proven that biological computing has the ability to make something as smart as the smartest humans, no smarter. And as he said, intelligence is not a single thing. Different brains are optimized for different arrays of tasks; X number of neurons can have their capacity distributed to accomplish Y tasks more or less efficiently.
4: His point is, again, that we have no evidence that it's possible to increase intellect all that significantly beyond our own.
5: In order to simulate how a disease interacts with the human body reliably, the computer has to understand all relevant aspects of human body systems. Which is to say, it would have to have very nearly solved the problem already (or had it solved for it), which is his point. A crapton of observational and inferential legwork has to be done before the computer can even get started.
Every tool ever invented is better than humans at something. Cars are better at moving quickly. Wheelbarrows are better at holding stuff. Scissors are better at neatly dividing things. And computers are better at certain specialized tasks to which humans are not suited. No doubt we will continue inventing new tools in the future.