Chimpanzees granted 'legal persons' status to defend their rights in court
Fantastic news. When is Kentonio applying for citizenship?

Scouse Git (2) La Fayette Adam Smith Solomwi and Loinburger will not be forgotten.
"Remember the night we broke the windows in this old house? This is what I wished for..."
2015 APOLYTON FANTASY FOOTBALL CHAMPION!
When are you going to stick your nose into a jar of horseradish mixed with powdered glass and inhale as hard as you can?
Originally posted by Hauldren Collider
"Artificial intelligence" as a term is a misnomer. To my knowledge there is currently no serious research toward the kind of fanciful science-fiction intelligence Kentonio or Elok are describing. In the early days of the field, that was the ultimate goal: a machine that could learn, just like a human. The Turing test and so on. Nowadays the field of artificial intelligence is really more a milieu of algorithms with useful applications in computer vision, speech recognition, language processing, and similar tasks involving interaction with the real world or with human communication. These algorithms run the gamut from simple graph-search algorithms to function optimization to statistical models. Much of the interesting work in AI is really just understanding how to represent input data and reduce the dimensionality of the input into something manageable and interpretable.
I have studied this subject extensively; modern artificial intelligence is more or less the intersection of statistics and computer science, my two fields of study.
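The "simple graph-search algorithms" end of that spectrum can be sketched in a few lines. The graph below is an invented toy, not anything from a real AI system:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: return a shortest path from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy directed graph (illustrative only)
roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(roads, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

Nothing here "learns" anything; it is an exhaustive, provably correct search, which is exactly the point being made about that end of the field.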
Originally posted by Hauldren Collider
Even machine learning, which has advanced in tremendous strides over the last decade, is not really learning as you or I would think of it. To draw a contrast between how a human child learns and how a machine learning algorithm learns: say you are walking down the street with your child and you see a dog. You point to the dog and say, "doggy!" The child now has a very good idea of what a dog is, from a single positive example and no negative examples. The child may then see a cat, say "doggy!", and you correct him with "kitty!" The child can probably now recognize a dog even at a different size, of a different breed, and at any angle. By contrast, you could take the best machine learning algorithm with the latest heuristics and sophisticated statistical or nonparametric models (ANNs, random forests, whatever), train it with 100,000 positive and negative examples, and it'll maybe do better than a coin flip when you ask it, "is there a dog in this picture?"
Why is the child able to do so much better than the computer? Simple: The kid's cheating. We have a million+ years of evolution granting us instincts on how to recognize everyday things and comprehend the world around us. I suspect someday we'll be able to get the same or similar ability onto a computer, but until then advantage humans.
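The many-examples point can be made concrete with a deliberately crude sketch: a nearest-centroid classifier on invented 2-D "features" (nothing here resembles a real vision system), whose accuracy depends on how many labelled examples it is fed:

```python
import random

random.seed(42)

def make_examples(label, n):
    """Draw n noisy 2-D feature vectors for a class ("dog"=1, "cat"=0)."""
    cx, cy = (1.0, 1.0) if label == 1 else (-1.0, -1.0)
    return [((cx + random.gauss(0, 1.0), cy + random.gauss(0, 1.0)), label)
            for _ in range(n)]

def train(examples):
    """Fit a nearest-centroid classifier: one mean point per class."""
    model = {}
    for lab in (0, 1):
        pts = [pt for pt, l in examples if l == lab]
        model[lab] = (sum(x for x, _ in pts) / len(pts),
                      sum(y for _, y in pts) / len(pts))
    return model

def predict(model, pt):
    def dist2(c):
        return (pt[0] - c[0]) ** 2 + (pt[1] - c[1]) ** 2
    return min(model, key=lambda lab: dist2(model[lab]))

def accuracy(model, test_set):
    return sum(predict(model, pt) == lab for pt, lab in test_set) / len(test_set)

test_set = make_examples(1, 500) + make_examples(0, 500)
small = train(make_examples(1, 2) + make_examples(0, 2))      # 4 examples
big   = train(make_examples(1, 500) + make_examples(0, 500))  # 1000 examples
print(accuracy(small, test_set), accuracy(big, test_set))
```

With two examples per class the fitted centroids can land almost anywhere; with a thousand they sit near the true class means. The child, as the post says, needs one.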
Originally posted by Hauldren Collider
Any program which is capable of "learning" in even the most trivial sense is self-altering. I suspect, Kentonio, that you are not really familiar with the concept of self-altering programs. It is not like a human waking up one day and saying, "I am tired of being a software engineer. I am going to learn to play piano." It is more like a human exercising on a bench press and getting stronger arms.
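The bench-press sense of "self-altering" can be shown directly: an online perceptron whose each training example nudges its weights, and nothing else. The data (an AND gate) and the learning rate are invented for illustration:

```python
def perceptron_update(weights, bias, features, label, lr=0.1):
    """One online learning step: adjust parameters only if we got it wrong."""
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    prediction = 1 if activation > 0 else 0
    if prediction != label:
        delta = label - prediction          # +1 or -1
        weights = [w + lr * delta * x for w, x in zip(weights, features)]
        bias = bias + lr * delta
    return weights, bias

# AND gate as toy data; the program rewrites its own parameters, nothing else.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(20):                         # a few passes over the data
    for x, y in data:
        w, b = perceptron_update(w, b, x, y)

predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

The program "got stronger" at its one concrete task. It did not, and cannot, decide to take up the piano.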
Google search is self-altering: every time you type a phrase into it, it remembers that phrase and uses it in future predictions. This already leads to occasionally surprising behavior that we would be unlikely to anticipate. What is important to understand is that machine learning algorithms are built around extremely concrete goals. Programs optimize themselves towards these goals; they are not abstract. There are algorithms for what statisticians refer to as "unsupervised learning", which is closely related to density estimation, but these too do not "learn" in the human sense. They merely identify patterns in unlabelled data within a set of tuned constraints. Often these algorithms are able to recover useful information, but your computational genomics software isn't going to start looking at a set of alleles and suddenly decide that it wants to get married to that sexy Japanese inflatable lovebot.
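Unsupervised learning in that narrow sense can be sketched as well: Lloyd's k-means on unlabelled numbers, with the constraint (k=2 clusters) tuned in advance. The readings and the choice of k are invented for illustration:

```python
def kmeans_1d(values, k=2, iters=20):
    """Lloyd's algorithm on a list of numbers: return k cluster centers, sorted."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]  # crude init
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return sorted(centers)

# Two obvious clumps in unlabelled data; no one told the program "two kinds".
readings = [1.0, 1.2, 0.8, 1.1, 9.9, 10.2, 10.0, 9.8]
print(kmeans_1d(readings))
```

The algorithm recovers real structure (two clumps near 1 and 10), but only the structure its fixed objective lets it look for.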
When are you going to stick your nose into a jar of horseradish mixed with powdered glass and inhale as hard as you can?
Originally posted by kentonio
It doesn't have to be an intelligence that thinks like a human; it could be an intelligence that is completely alien to us and is simply a product of emergent behaviour. The more complex AI becomes, the higher the likelihood of emergent behaviour appearing. It doesn't need to be a self-aware system; it could be something as simple as a system that concludes that, based on available data, the most logical course of action is one that as a by-product also ****s humans quite severely.
No, not advantage humans. All you did there was pick out an example where the programming of humans performs a task more efficiently than a machine currently can. Change that example to something like looking at a vast database and picking out the mathematical patterns, and suddenly we're firmly on the 'advantage computers' side of the court.

If there is no sound in space, how come you can hear the lasers?
){ :|:& };:
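The "advantage computers" case is easy to demonstrate: bury a mathematical pattern in ten thousand noisy readings and let ordinary least squares dig it back out in milliseconds. The dataset below is synthetic, made up purely to illustrate the point:

```python
import random

random.seed(7)

# Synthetic "vast database": 10,000 noisy readings hiding y = 3.5x + 12.
rows = [(x, 3.5 * x + 12.0 + random.gauss(0, 5.0)) for x in range(10_000)]

def fit_line(points):
    """Ordinary least squares for a single predictor."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_line(rows)
print(round(slope, 2), round(intercept, 1))  # recovers roughly 3.5 and 12
```

No human eyeballing ten thousand rows gets anywhere near that speed or precision, which is exactly the asymmetry being argued here.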
Originally posted by Elok
When are you going to stick your nose into a jar of horseradish mixed with powdered glass and inhale as hard as you can?

Click here if you're having trouble sleeping.
"We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld
Originally posted by Hauldren Collider
No, Kentonio, "emergent" behavior is not a thing. At best, what we see are results that are surprising, which is honestly comforting, because if ML algorithms didn't come up with results we did not already expect, they would be fairly useless.
Originally posted by Hauldren Collider
Kentonio, you are missing something here. Everything is a vast database of data. A set of images from a video camera is a bunch of 0s and 1s, just like a bunch of readings from a particle accelerator. The difference lies in whether there is an obvious mathematical interpretation or we have to fall back on probabilistic systems. I can write an algorithm to sort a list of numbers that is provably correct, but no one knows how to write a provably correct algorithm to read handwritten English lettering. We use these machine learning (really statistical) techniques because it is much easier to build something that is probably correct and can improve with training than to come up with something that provably gets the right answer every time.
The concern that people like Hawking are raising is not about the danger of AI today, but about the dangers they foresee as these systems and the computational power behind them drive inexorably forward, and as we place them in charge of increasingly essential systems and infrastructure. Dismiss that at your peril.
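The "provably correct" side of the contrast drawn above is worth seeing next to the probabilistic one: a sorting routine whose postcondition can be stated and checked exactly. No amount of extra training makes it better; it is simply right, every time:

```python
def insertion_sort(items):
    """Return a new sorted list. Loop invariant: result[:i+1] is sorted
    after the i-th pass, which is what makes correctness provable."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]   # shift larger elements right
            j -= 1
        result[j + 1] = key
    return result

print(insertion_sort([5, 3, 8, 1, 9, 2]))  # → [1, 2, 3, 5, 8, 9]
```

A handwriting recognizer offers no such guarantee: only a probability of being right, improved (not proven) by more training data.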
Kentonio, I think you may just lack the background to understand what I am saying, particularly the distinction I drew between ordinary provable algorithms and ML/AI techniques. Yes, I know your phone can recognize handwriting. Usually. It's a probabilistic system. Christ, dude, I write these things. I know how they work.