Is everything a religion?


  • #76
    Originally posted by Lorizael View Post

    For some problems that machines are able to solve, X amount of time might be so long as to be meaningless. Humans, being made of nothing more than goo, are also in principle predictable, even if we are currently extremely bad at predicting what humans will do, because 100 billion neurons is a lot and we understand very little about how human intelligence works at present.
    We do understand very little.

    That is why I pointed to the 302-neuron worm simulations as the only possibly real movement towards artificial intelligence. To fully model a human (which means modeling not just the 100 billion neurons, but also the microbes and all of the other systems) would require a possibly impossible computer system (and if not impossible, then one suitably far in the future as to be 'distant').

    And the discovery that loss functions work well with models but not with humans (or animals) is a crucial insight, which I think you are ignoring.

    JM
    Jon Miller-
    I AM.CANADIAN
    GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.

    Comment


    • #77
      Originally posted by Lorizael View Post
      If problem-solving machines were predictable, we wouldn't need them.
      They're predictable in the sense that you can rely on them to do exactly what they are supposed to do, outside of failures and glitches. I would say that chess is "something smart people do" in the sense that it's not something that comes very naturally to human beings. It requires sustained effort and practice to acquire skill. A custom machine designed specifically to play chess is naturally going to at least have the potential to play better. But if you made even a very simple rule change, how well would the computer adjust? For example, if you made it so that pawns could move backwards one space as well as forwards? I think a human would adjust better.
      1011 1100
      Pyrebound--a free online serial fantasy novel

      Comment


      • #78
        Originally posted by Jon Miller View Post
        I start playing chess.

        You could argue that my decision to start playing chess is the same sort of thing as the chess algorithm's 'decision' to move the pawn to K3, but it isn't. The chess algorithm is evaluating a loss function when it 'decides' to move the pawn to K3, and I am (most likely) not doing so when I decide to play chess, because all evidence points towards me not evaluating a loss function when I make most decisions.

        This isn't an inefficiency with Nature (Nature after all is very efficient) and this is not "shorts for some, pants for others".... this is a fundamental and crucial difference.

        It is because what I (and even dogs) do is what you need to be able to do in an open environment (like the one we experience in nature), whereas evaluating a loss function is applicable only to the constrained space of models (which is where the chess or Go application lives).

        Even a 'self-driving car' is still working in an extremely constrained space of models, not in the open space that exists in nature.

        JM
        You actually don't know what you are doing when you are moving a pawn to K3.
        Your overlying neural networks (the ones that form your consciousness) may believe they made the decision for this or that reason, but in truth the decision (to move the pawn to K3) has already been made by underlying (subconscious) neural connections before you consciously "decided" to do so.

        There are several experiments in neurobiology that suggest as much (i.e. that our consciousness is made to believe that it decides consciously when the true decision has been made subconsciously).

        So the subconscious parts of your brain involved in playing chess may actually arrive at their decision via something like a loss function, but your consciousness doesn't notice it.
        Tamsin (Lost Girl): "I am the Harbinger of Death. I arrive on winds of blessed air. Air that you no longer deserve."
        Tamsin (Lost Girl): "He has fallen in battle and I must take him to the Einherjar in Valhalla"

        Comment


        • #79
          Originally posted by Proteus_MST View Post
          You actually don't know what you are doing when you are moving a pawn to K3.
          Your overlying neural networks (the ones that form your consciousness) may believe they made the decision for this or that reason, but in truth the decision (to move the pawn to K3) has already been made by underlying (subconscious) neural connections before you consciously "decided" to do so.

          There are several experiments in neurobiology that suggest as much (i.e. that our consciousness is made to believe that it decides consciously when the true decision has been made subconsciously).
          I don't disagree with this last statement.

          What we call neural networks in Machine Learning have very little to do with real biological neural networks, so your association there doesn't make sense (you are just relying upon an artifact of language).
          So the subconscious parts of your brain involved in playing chess may actually arrive at their decision via something like a loss function, but your consciousness doesn't notice it.
          Evidence points to it not being an evaluation of a loss function in general (the specific case of playing chess and moving pawn to K3 might be one of the cases where our subconscious runs a model and evaluates a loss function, but the evidence is against this being the case in general even in 'rational' areas).
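          To make the distinction concrete, here is a toy sketch of what "evaluating a loss function" looks like for a chess-style program: every candidate move is scored by a fixed evaluation function, and the highest-scoring one is selected mechanically. This is purely illustrative; the piece values, positions, and function names are invented, and real engines additionally search deep game trees.

```python
# A chess-style engine "decides" by scoring each candidate move with an
# evaluation function and mechanically picking the best score. Here the
# evaluation is a toy material count; real engines use far richer
# functions plus deep search, but the principle is the same.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Score a position as (our material) minus (opponent material)."""
    ours, theirs = position
    return sum(PIECE_VALUES[p] for p in ours) - sum(PIECE_VALUES[p] for p in theirs)

def choose_move(moves):
    """moves: move name -> resulting position. Return the move whose
    resulting position maximizes the evaluation function."""
    return max(moves, key=lambda m: evaluate(moves[m]))

# Two candidate moves: one quiet move, one that wins a rook.
moves = {
    "pawn to e3":       (["Q", "R", "P"], ["Q", "R"]),
    "queen takes rook": (["Q", "R", "P"], ["Q"]),
}
print(choose_move(moves))  # the engine mechanically picks the higher score
```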

          JM
          Jon Miller-
          I AM.CANADIAN
          GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.

          Comment


          • #80
            Originally posted by Proteus_MST View Post

            You actually don't know what you are doing when you are moving a pawn to K3.
            Your overlying neural networks (the ones that form your consciousness) may believe they made the decision for this or that reason, but in truth the decision (to move the pawn to K3) has already been made by underlying (subconscious) neural connections before you consciously "decided" to do so.

            There are several experiments in neurobiology that suggest as much (i.e. that our consciousness is made to believe that it decides consciously when the true decision has been made subconsciously).

            So the subconscious parts of your brain involved in playing chess may actually arrive at their decision via something like a loss function, but your consciousness doesn't notice it.
            How can subjectivity not be related to consciousness? What you are saying is that a "being" can make subjective decisions without being conscious.
            I drank beer. I like beer. I still like beer. ... Do you like beer Senator?
            - Justice Brett Kavanaugh

            Comment


            • #81
              Originally posted by Jon Miller View Post
              I don't disagree with this last statement.

              What we call neural networks in Machine Learning have very little to do with real biological neural networks, so your association there doesn't make sense (you are just relying upon an artifact of language).

              ...
              While neural transmission in a biological neural network is, of course, more complicated than in an ANN (the biological network has presynaptic neurotransmitter depots, postsynaptic neurotransmitter-activated ion channels, in some cases cascades of ion-channel activation, and so on, as well as speed differences along different neural pathways, whereas in an ANN every "neuron" just has a simple number representing the strength of its connectivity with the underlying neuronal layers), learning in both nevertheless functions along similar lines: "successful" neural pathways are strengthened, whereas "unsuccessful" neural pathways are weakened.
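              The "strengthen successful, weaken unsuccessful pathways" idea can be sketched with the simplest possible ANN, a single perceptron. This is a minimal illustration of the artificial side only (weights nudged up or down by an error signal), not a claim about how biological learning works; the task and numbers are invented.

```python
# A single artificial neuron trained with the perceptron rule: each
# connection is just a number (a weight), nudged up when the neuron's
# output was too weak and down when it was too strong.

def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection strengths
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out          # +1: strengthen, -1: weaken
            w[0] += lr * err * x1       # only active pathways change
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR: pathways that contribute to correct outputs persist.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(samples)
predict = lambda x1, x2: 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 1, 1, 1]
```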

              I don't see a reason why future ANNs (with enough processing power, a training/learning mode that is always active (unlike now, with distinct training and performance modes), and perhaps something that emulates global parameters such as hormone levels) shouldn't become so similar to a natural neural network that they may develop artificial sentience.

              As said, we are talking about future development here, not the current state of the art (which, by the way, has improved considerably: no one would have thought before that a computer would be able to beat a professional dan player in Go, especially considering that, unlike in chess, there are so many possible moves each turn that ordinary evaluation of possibility trees becomes computationally too costly, particularly in the important opening moves, which may define the strategies and counterstrategies of the whole game. And now a DCNN has in fact defeated a pro dan player of the highest level, who worked with the help of colleagues, and it didn't defeat him narrowly but decisively).
              Tamsin (Lost Girl): "I am the Harbinger of Death. I arrive on winds of blessed air. Air that you no longer deserve."
              Tamsin (Lost Girl): "He has fallen in battle and I must take him to the Einherjar in Valhalla"

              Comment


              • #82
                Originally posted by Kidicious View Post

                How can subjectivity not be related to consciousness? What you are saying is that a "being" can make subjective decisions without being conscious.
                I actually say the contrary, i.e. that decisions we "believe" to be subjective actually aren't, but that the underlying (subconscious) network/state machine (which in reality "makes the decisions" via neural activation levels/states) just lets our consciousness believe that we made a certain decision for subjective reasons.
                Tamsin (Lost Girl): "I am the Harbinger of Death. I arrive on winds of blessed air. Air that you no longer deserve."
                Tamsin (Lost Girl): "He has fallen in battle and I must take him to the Einherjar in Valhalla"

                Comment


                • #83
                  Originally posted by Proteus_MST View Post

                  I actually say the contrary, i.e. that decisions we "believe" to be subjective actually aren't, but that the underlying (subconscious) network/state machine (which in reality "makes the decisions" via neural activation levels/states) just lets our consciousness believe that we made a certain decision for subjective reasons.
                  So how would AI decide to play chess or make other decisions that we "believe" are subjective?
                  I drank beer. I like beer. I still like beer. ... Do you like beer Senator?
                  - Justice Brett Kavanaugh

                  Comment


                  • #84
                    Originally posted by Proteus_MST View Post

                    While neural transmission in a biological neural network is, of course, more complicated than in an ANN (the biological network has presynaptic neurotransmitter depots, postsynaptic neurotransmitter-activated ion channels, in some cases cascades of ion-channel activation, and so on, as well as speed differences along different neural pathways, whereas in an ANN every "neuron" just has a simple number representing the strength of its connectivity with the underlying neuronal layers), learning in both nevertheless functions along similar lines: "successful" neural pathways are strengthened, whereas "unsuccessful" neural pathways are weakened.

                    I don't see a reason why future ANNs (with enough processing power, a training/learning mode that is always active (unlike now, with distinct training and performance modes), and perhaps something that emulates global parameters such as hormone levels) shouldn't become so similar to a natural neural network that they may develop artificial sentience.
                    Because our real neural networks don't seem to be based on optimizing a loss function in the way that our computer neural networks are, the additional components of biological neural networks appear to play a crucial role in how they function and learn. We need something other than a loss function, which means our current efforts are barking up the wrong tree.

                    Basically, just because connections increase/decrease in strength does not mean that it is equivalent in some general way to a biological neural network.


                    As said, we are talking about future development here, not the current state of the art (which, by the way, has improved considerably: no one would have thought before that a computer would be able to beat a professional dan player in Go, especially considering that, unlike in chess, there are so many possible moves each turn that ordinary evaluation of possibility trees becomes computationally too costly, particularly in the important opening moves, which may define the strategies and counterstrategies of the whole game. And now a DCNN has in fact defeated a pro dan player of the highest level, who worked with the help of colleagues, and it didn't defeat him narrowly but decisively).
                    I am well aware of that, as about 30% of my research effort is now in DCNNs (first paper out this year, with additional papers coming out next year).

                    But considering the constrained model space, I don't think it should have been such a surprise. Yes, humans might think the complexity level is vastly different... but the constrained model space is actually quite similar.

                    JM
                    Jon Miller-
                    I AM.CANADIAN
                    GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.

                    Comment


                    • #85
                      Someone decides to play chess because they feel bored, or because they desire skill and social status. Everyone here seems to be operating under the assumption that feelings are the product of evolution (myself included, for argument's sake). So how can AI think in a human-like way without evolving over a billion years?
                      I drank beer. I like beer. I still like beer. ... Do you like beer Senator?
                      - Justice Brett Kavanaugh

                      Comment


                      • #86
                        Originally posted by Kidicious View Post

                        So how would AI decide to play chess or make other decisions that we "believe" are subjective?
                        Playing Chess, for example:
                        A lack of activation niveau/virtual neurotransmitter level in its virtual limbic system, associated with "fun", plus the evaluation of a function that lets it "decide" that currently the most efficient way to get "fun" is playing chess (for example because another chess player is nearby, or because playing chess currently offers the highest increase in "fun" per time unit for this AI).
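                        That "fun-driven" decision rule can be sketched hypothetically as follows; the function names, activity list, and all numbers are invented for illustration, and no claim is made that any real system works this way.

```python
# A hypothetical agent that tracks a virtual "fun" level (like a
# depleted neurotransmitter depot) and, when the level is low, picks
# whichever available activity offers the highest fun gain per time unit.

def choose_activity(fun_level, activities):
    """activities: name -> (fun_gain, time_cost). Return None when the
    agent is satiated, otherwise the activity maximizing fun per time."""
    if fun_level >= 0.5:  # satiated: no drive to act
        return None
    return max(activities, key=lambda a: activities[a][0] / activities[a][1])

options = {
    "play chess":   (8.0, 2.0),  # high fun, but takes a while
    "watch clouds": (2.0, 1.0),
    "idle":         (0.1, 1.0),
}
print(choose_activity(fun_level=0.2, activities=options))  # play chess
```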

                        An equivalent motivator (associated, among other things, with fun/pleasure) in humans and other animals would be the limbic system, with its dopamine levels and D2 receptors.
                        In fact, if you implant an electrode into a rat's brain, into the nucleus accumbens septi (NAcS, a part of the limbic system), and give it a button which, when pressed, delivers electrical stimulation to this region, it will stop eating, stop sleeping, and press the button constantly.
                        (No, equivalent experiments haven't been done on humans so far, for ethical reasons, and because hammering a microelectrode into a brain (yes, one can call it hammering, since it has to get through the dura mater, which surrounds the brain and is very resilient, as the name already suggests) can cause irreparable damage to the regions it passes through. But I am sure that if such experiments were allowed, one would find willing subjects.)
                        Tamsin (Lost Girl): "I am the Harbinger of Death. I arrive on winds of blessed air. Air that you no longer deserve."
                        Tamsin (Lost Girl): "He has fallen in battle and I must take him to the Einherjar in Valhalla"

                        Comment


                        • #87
                          Originally posted by Proteus_MST View Post

                          Playing Chess, for example:
                          A lack of activation niveau/virtual neurotransmitter level in its virtual limbic system, associated with "fun", plus the evaluation of a function that lets it "decide" that currently the most efficient way to get "fun" is playing chess (for example because another chess player is nearby, or because playing chess currently offers the highest increase in "fun" per time unit for this AI).

                          An equivalent motivator (associated, among other things, with fun/pleasure) in humans and other animals would be the limbic system, with its dopamine levels and D2 receptors.
                          In fact, if you implant an electrode into a rat's brain, into the nucleus accumbens septi (NAcS, a part of the limbic system), and give it a button which, when pressed, delivers electrical stimulation to this region, it will stop eating, stop sleeping, and press the button constantly.
                          (No, equivalent experiments haven't been done on humans so far, for ethical reasons, and because hammering a microelectrode into a brain (yes, one can call it hammering, since it has to get through the dura mater, which surrounds the brain and is very resilient, as the name already suggests) can cause irreparable damage to the regions it passes through. But I am sure that if such experiments were allowed, one would find willing subjects.)
                          Humans are very different from rats. I think some experiments might confuse you about that, because you will end up basing your conclusion on conjecture. I've seen it before.

                          I believe in free will. I think JM does as well, but he might not always bring that up with some of his colleagues. You told him that humans don't make subjective decisions. I don't know if he agrees with that.

                          I guess I will just ask you whether you are insisting that AI can make decisions that we don't program it to make, and also whether such a decision is caused by something like human evolution. Because I don't know about any non-biological evolution.
                          I drank beer. I like beer. I still like beer. ... Do you like beer Senator?
                          - Justice Brett Kavanaugh

                          Comment


                          • #88
                            Originally posted by Kidicious View Post

                            Humans are very different from rats. I think some experiments might confuse you about that, because you will end up basing your conclusion on conjecture. I've seen it before.
                            Yes, rats differ in their brain structure.
                            But the neural correlates in rats have their equivalents in the human brain.

                            You also shouldn't confuse the experiments regarding the NAcS in rats with the experiments I mentioned regarding free will; the latter were performed on humans. More exactly, I referred to Matsuhashi et al. (2008).
                            Opinions among neuroscientists regarding free will are divided, and many are of the opinion that free will is an illusion.

                            Also look at hypnosis experiments:
                            People who are hypnotized to perform a certain action at a later time (for example, to take an umbrella with them despite it being a sunny day with no rain in sight) will do so and, if asked why, will give an answer that makes sense to them. None of these answers involves "my free will was impeded"; instead, they all think they consciously chose the action that was implanted in them by the hypnosis (without consciously knowing it was implanted).

                            Also look at rabies:
                            It is just a "simple" virus, but it can nevertheless alter the human brain in a way that impinges on what you call "free will".
                            It creates (among other things) a fear of water and increases aggressiveness in people who have contracted it, both of which are useful to the virus.

                            And last but not least, look at drug addiction and brain damage:
                            Both can severely alter the behavior of a person.


                            So, all in all, those experiments and observations suggest to me as well that free will may actually be just an illusion, and our actions just the output of the hypercomplex state machine that is our brain (which, however, because of its complexity, cannot be easily predicted in most cases).


                            Originally posted by Kidicious View Post
                            I believe in free will. I think JM does as well, but he might not always bring that up with some of his colleagues. You told him that humans don't make subjective decisions. I don't know if he agrees with that.

                            I guess I will just ask you whether you are insisting that AI can make decisions that we don't program it to make, and also whether such a decision is caused by something like human evolution. Because I don't know about any non-biological evolution.
                            Well, I may point you to the robots that were designed by a genetic algorithm:
                            https://i.materialise.com/blog/genet...uman-designer/


                            In really simple terms, the genetic algorithm was "told" what tasks the robot should perform (and had a physics engine to test designs) and then created a solution without human interference.
                            (To be more exact: the algorithm started with a randomized parental population of "genetic setups" defining different possible robot shapes; each generation was virtually tested for fitness in the task, and a certain percentage of the best-performing robots were chosen to create the next generation via recombination and mutation. After a huge number of generations, the best results were used as blueprints for real robots (or, in the case of the Fraunhofer Institute experiments, 3D-printed by the algorithm itself).)
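                            The loop just described (random parental population, virtual fitness test, selection of the best, recombination and mutation) can be sketched in a few lines. This is a toy illustration: the "genome" is a list of numbers standing in for shape parameters, and the fitness function is an invented stand-in for the physics simulation used in the real experiments.

```python
import random

# Minimal genetic algorithm: evolve a list of numbers toward a target.
TARGET = [3.0, 1.0, 4.0, 1.0, 5.0]  # invented "ideal" parameters

def fitness(genome):
    # higher is better; 0 would be a perfect match
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=200, mutation_rate=0.2):
    random.seed(0)  # deterministic for the example
    # randomized parental population
    pop = [[random.uniform(0, 6) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]              # best 20% reproduce
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))  # recombination
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:     # mutation
                i = random.randrange(len(child))
                child[i] += random.gauss(0, 0.3)
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print([round(g, 1) for g in best])  # should land close to TARGET
```

No step in the loop needs a human in it, which is the point of the post: well-understood rules, surprising designs.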

                            The algorithm and its rules are well understood, but it is nevertheless able to perform creative design work with surprising results (designs that probably no human designer would have produced, but which nevertheless outperform the work of human designers).
                            Tamsin (Lost Girl): "I am the Harbinger of Death. I arrive on winds of blessed air. Air that you no longer deserve."
                            Tamsin (Lost Girl): "He has fallen in battle and I must take him to the Einherjar in Valhalla"

                            Comment


                            • #89
                              Originally posted by Proteus_MST View Post

                              Yes, rats differ in their brain structure.
                              But the neural correlates in rats have their equivalents in the human brain.

                              You also shouldn't confuse the experiments regarding the NAcS in rats with the experiments I mentioned regarding free will; the latter were performed on humans. More exactly, I referred to Matsuhashi et al. (2008).
                              Opinions among neuroscientists regarding free will are divided, and many are of the opinion that free will is an illusion.

                              Also look at hypnosis experiments:
                              People who are hypnotized to perform a certain action at a later time (for example, to take an umbrella with them despite it being a sunny day with no rain in sight) will do so and, if asked why, will give an answer that makes sense to them. None of these answers involves "my free will was impeded"; instead, they all think they consciously chose the action that was implanted in them by the hypnosis (without consciously knowing it was implanted).

                              Also look at rabies:
                              It is just a "simple" virus, but it can nevertheless alter the human brain in a way that impinges on what you call "free will".
                              It creates (among other things) a fear of water and increases aggressiveness in people who have contracted it, both of which are useful to the virus.

                              And last but not least, look at drug addiction and brain damage:
                              Both can severely alter the behavior of a person.


                              So, all in all, those experiments and observations suggest to me as well that free will may actually be just an illusion, and our actions just the output of the hypercomplex state machine that is our brain (which, however, because of its complexity, cannot be easily predicted in most cases).
                              I would never let anyone hypnotize me. I believe that takes away your free will. It might even be a sin because maybe you become a kind of slave. There is a person at my church who uses hypnosis on people who have guilt and things like that. It's disturbing to me. I believe that those things can be worked out without hypnosis and maybe some guilt is good. But some people these days don't want to have any bad feelings.

                              I am an ex-smoker. I smoked for over 20 years. The only reason I didn't quit sooner was because it was too easy. When it became hard I quit. I believe this is free will. I say this because I know smokers who say they can't quit, but they can. All they have to do is do it. You can decide to believe anything, and that is not just a matter of chemical reactions. That is free will. But if you believe it is chemical reactions then you won't use it.

                              Well, I may point you to the robots that were designed by a genetic algorithm:
                              https://i.materialise.com/blog/genet...uman-designer/


                              In really simple terms, the genetic algorithm was "told" what tasks the robot should perform (and had a physics engine to test designs) and then created a solution without human interference.
                              (To be more exact: the algorithm started with a randomized parental population of "genetic setups" defining different possible robot shapes; each generation was virtually tested for fitness in the task, and a certain percentage of the best-performing robots were chosen to create the next generation via recombination and mutation. After a huge number of generations, the best results were used as blueprints for real robots (or, in the case of the Fraunhofer Institute experiments, 3D-printed by the algorithm itself).)

                              The algorithm and its rules are well understood, but it is nevertheless able to perform creative design work with surprising results (designs that probably no human designer would have produced, but which nevertheless outperform the work of human designers).
                              This just sounds like simulated evolution to me. It sounds like a very good way to make better robots. But are they trying to make robots that are human-like in their thinking, or just better at a given task? Take driving, for example: humans weren't created/evolved for driving. Maybe a robot can be a better driver if we can make it so. But such a robot doesn't need to think like a human, and I think it probably won't.
                              I drank beer. I like beer. I still like beer. ... Do you like beer Senator?
                              - Justice Brett Kavanaugh

                              Comment


                              • #90
                                Regarding the Matsuhashi study: it shows that decision making is done subconsciously, but of course it can't prove that it's impossible to make a decision consciously. It seems a bit far-fetched to me to think that a decision to quit smoking is made subconsciously when your physical body is telling you to smoke. Can you explain that?
                                I drank beer. I like beer. I still like beer. ... Do you like beer Senator?
                                - Justice Brett Kavanaugh

                                Comment
