Chimpanzees granted 'legal persons' status to defend their rights in court
Originally posted by Elok:
Yes, I've run into that article, but how? How, exactly, are computers going to displace us, unless somebody's planning to create self-replicating superbots or give them control of nukes, etc.? Has anyone specified?

Well, there are a lot of possible scenarios when you get to human-level AI. You don't have to have weapons systems to do damage. Viruses already cause a lot of problems. A human-level AI could be like a hacker who writes a virus, only that hacker is the virus, replicating itself or portions of itself on every device (or group of devices, depending on how many resources it requires).
Originally posted by Jon Miller:
Why? I agree that they would be a plus once you have real machine intelligence. But why are they necessarily a plus or a step forward in creating it? I predict that some portion of the basic research required will not require any physical computers at all.

I didn't say it's necessary. It may or may not be.

It's a plus because we don't know how much processing power is necessary. Given the unknown, the more processing power we have, the more likely we are to have enough. So it's a plus in that regard.

Also, more processing power will allow it to do whatever it's doing faster. Since by definition it's a self-improving algorithm, the more it can process, the more it can improve. This holds true down to whatever minimal level at which it has any self-improving capability. It even holds for us in the meantime, letting us test more possibilities faster.

Waiting an extra few days for something to compile to see if it's **** or not is hardly efficient.
Originally posted by Aeson:
I didn't say it's necessary. It may or may not be. It's a plus because we don't know how much processing power is necessary. Given the unknown, the more processing power we have, the more likely we are to have enough. Also, more processing power will allow it to do whatever it's doing faster. Since by definition it's a self-improving algorithm, the more it can process, the more it can improve. Waiting an extra few days for something to compile to see if it's **** or not is hardly efficient.

You are thinking of it as a step towards real AI. It may or may not be.

Now it is a step towards nightmare super AI, but it isn't a meaningful step until we make meaningful steps towards real AI.

The nightmare super-AI scenario might be an issue if we make a relatively stupid (say, toddler-level) AI, think it is very useful, and hook it up to everything (I expect we would know enough not to connect an adult-level AI to everything). Before we reached chimp-level AI, I would be joining those who were preaching the dangers of AI.
JM

Jon Miller
I AM.CANADIAN
GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.
Originally posted by kentonio:
Very scientific of you, you're a credit to your profession.

I only have so much time to devote to public service, and I prefer to do so in situations where I can make a difference.

JM

(Also, I am not particularly strong at communicating.)
Example of the irrationality which I find so frustrating:
"I sat in a San Francisco conference room a few months ago as 14 staffers at the charity recommendation group GiveWell discussed the ways in which artificial intelligence — extreme, world-transforming, human-level artificial intelligence — could destroy the world. Not just as idle chatter, mind you. They were trying to work out whether it's worthwhile to direct money — lots of it — toward preventing AI from destroying us all, money that otherwise could go to fighting poverty in sub-Saharan Africa."
JM
Originally posted by Jon Miller:
Example of the irrationality which I find so frustrating:
"I sat in a San Francisco conference room a few months ago as 14 staffers at the charity recommendation group GiveWell discussed the ways in which artificial intelligence — extreme, world-transforming, human-level artificial intelligence — could destroy the world. Not just as idle chatter, mind you. They were trying to work out whether it's worthwhile to direct money — lots of it — toward preventing AI from destroying us all, money that otherwise could go to fighting poverty in sub-Saharan Africa."
JM

"Just look at GiveWell's analysis of the Against Malaria Foundation, its current top-rated charity."

"Biosecurity thus poses a very similar challenge as criminal justice reform and monetary policy. Estimating the magnitude of impact for a philanthropic intervention is difficult bordering on untenable."

I'm not seeing the problem here. The potential threat of AI was discussed (and with a long-term view, not short-term), and it seems they came to a rather reasonable conclusion.
Originally posted by kentonio:
Very scientific of you, you're a credit to your profession.

Arguing with you is almost as bad as arguing with Asher. In order for Jon to be willing to discuss with you, you have to be willing to listen.

If there is no sound in space, how come you can hear the lasers?
){ :|:& };:
The current state of the art in AI and ML is something called "deep learning", which is essentially an automated means of doing something called "feature extraction". Normally we have to manually design features of the data for our algorithms to search for and recognize patterns from; if the input is a photograph, this might mean edge-detection algorithms, for example. This is somewhat related to ideas like PCA or MDS, for those of you who are up on your linear algebra. Basically, we have to manually reduce the dimensionality of the data by constraining the input space to something the computer can efficiently regress on.
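To make that manual dimensionality reduction concrete, here's a minimal sketch of PCA via the SVD in Python with NumPy. The data is random and the choice of two components is arbitrary; it's purely an illustration of the "hand-crafted features first, learner second" pipeline, not anyone's actual system.

```python
import numpy as np

# Toy sketch of manual dimensionality reduction: project 100 samples
# of 50-dimensional data onto their top two principal components
# before handing them to a learning algorithm.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))            # raw, high-dimensional data

Xc = X - X.mean(axis=0)                   # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:2].T                  # top-2 principal components

print(features.shape)                     # (100, 2)
```

The learner then regresses on the 2-dimensional `features` instead of the raw 50-dimensional input, which is exactly the constraint on the input space described above.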
This may be the first step towards creating "real" AI as the futurists in this thread think of it, but if it is, it's a very, very small first step. I should note that the ideas behind deep learning aren't new; what is new are things like GPGPU (running programs on graphics cards), which make it possible to compute these algorithms relatively efficiently.
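What "learning the features instead of designing them" means can also be shown in miniature. This is a toy sketch, not how any real deep-learning system works: the classic hand-designed step-edge detector for a 1-D signal is the finite-difference filter [-1, 1], and plain gradient descent recovers essentially the same filter from labeled examples.

```python
import numpy as np

# Hand-designed vs. learned features, in miniature. The hand-designed
# edge detector for a 1-D signal is the finite-difference filter [-1, 1].
# Here a 2-tap filter is learned from data by plain gradient descent
# and converges to essentially the same thing.
rng = np.random.default_rng(1)

pairs = rng.normal(size=(500, 2))        # windows of adjacent samples
target = pairs[:, 1] - pairs[:, 0]       # "edge strength" labels

w = np.zeros(2)                          # the 2-tap filter to learn
for _ in range(2000):
    pred = pairs @ w                     # filter response per window
    grad = 2 * pairs.T @ (pred - target) / len(pairs)
    w -= 0.1 * grad                      # gradient-descent step

print(np.round(w, 2))                    # ~ [-1.  1.]
```

Deep networks stack many such learned filters with nonlinearities in between; this is just the one-filter, linear version of the idea, and it's the part GPGPU makes cheap to do at scale.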
Originally posted by Hauldren Collider:
Arguing with you is almost as bad as arguing with Asher. In order for Jon to be willing to discuss with you, you have to be willing to listen.
I wasn't born with enough middle fingers.
[Brandon Roderick? You mean Brock's Toadie?][Hanged from Yggdrasil]
Apparently it's his first time.

Scouse Git (2), La Fayette, Adam Smith, Solomwi and Loinburger will not be forgotten.
"Remember the night we broke the windows in this old house? This is what I wished for..."
2015 APOLYTON FANTASY FOOTBALL CHAMPION!
Originally posted by Jon Miller:
You are thinking of it as a step towards real AI. It may or may not be. Now it is a step towards nightmare super AI, but it isn't a meaningful step until we make meaningful steps towards real AI.

The entire hypothetical is one where human-level AI is at some point developed. What steps that will take are still unknown.

It could be that someone turns on a relatively simple learning/self-modifying algorithm and it does the rest itself.

It could be that we learn a little from many such algorithms, each of which eventually fails to progress past some point, and are able to create a hybrid expert system/learning algorithm that sustains itself at progressively higher and higher levels. (At some point, though, to get to the hypothetical, it has to be able to break free from such a development cycle. Otherwise it doesn't qualify.)

Or somewhere in between.

Originally posted by Jon Miller:
The nightmare super AI scenario might be an issue if we make a relatively stupid (say, toddler-level) AI and think it is very useful and hook it up to everything (I expect we would know enough not to connect an adult-level AI to everything). Before we reached chimp-level AI I would be joining those who were preaching the dangers of AI.

It's good you recognize the danger. From the sound of it, you're basically saying the same thing as Hawking and I am about the danger such an AI poses. (You're just assuming a timeframe, which I haven't. I'm not sure if Hawking has.)

You're displaying a lot more faith in humanity than I would.

Last edited by Aeson; April 24, 2015, 10:55.