     (7519.21)
    Human beings are the sum of our parts and inputs and memories. Hook a powerful enough processor up to a big enough memory bank and feed it a constant stream of sensory input and you essentially have the hardware of a human 'mind.' Then you just have to sit back and wait for the mind to develop. The 'software' flows naturally as the 'being' grows and develops over time. Even the most brilliant human beings started off as mewling infants, and any AI will likely be no different. To think someone is just one day going to put all the necessary pieces together, flip a switch and have a 'mind' is a gross simplification of how the only minds we know of operate and grow. We have nowhere near the understanding we would need of how our own minds work to replicate them, let alone create something better.
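
    (A purely toy sketch of that 'processor + memory + sensory stream' picture, just to make it concrete; every name, rule and number below is invented for illustration and is nobody's actual design:)

        import random

        # Toy 'mind': a processor loop, a memory bank, and a sensory stream.
        # The update rule is a stand-in, not a claim about real cognition.
        memory = {}  # how often each 'percept' has been experienced

        def sense():
            # Stand-in for a constant stream of sensory input.
            return (random.choice("ab"), random.choice("xy"))

        def step():
            percept = sense()
            memory[percept] = memory.get(percept, 0) + 1
            # Nothing switches on fully formed; behaviour can only come
            # from whatever input has accumulated so far.
            return max(memory, key=memory.get)

        for _ in range(1000):  # the 'growing up' phase, vastly compressed
            step()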

    @William George, 256

    There is a problem in dealing with models where at a certain level of detail, when the model begins to resemble that which is being modeled, you no longer understand exactly how it is working. Somewhere between rat brains and human brains the level of complexity is going to get to a point where we don't really know how these things are doing what they do anymore.

    Proceeding in AI research and having the balls to say you will have a functioning mind within a decade is lunacy. When you get right down to it, this guy doesn't even know exactly what it is he is trying to create.
    • CommentAuthorWinterman
    • CommentTimeJan 20th 2010 edited
     (7519.22)
    I'm going to refer to this device as the Torture Box from now on.

    A sentient disembodied artificial analog for a human brain/mind? I hope they meet with a stunning, soul-crushing failure with this one because, if they are successful, the poor bastard they trap in that virtual box will go mad in fairly short order.

    There's a hell of a lot more to being human than just thinking things, as anyone who's been paralyzed, in a coma or locked up in solitary confinement will tell you.

    Fucking idiots.
    • CommentAuthorPhranky
    • CommentTimeJan 20th 2010 edited
     (7519.23)
    What's with the language? I'm pretty sure the people who are intelligent enough to create an artificial brain and the means to transfer someone's consciousness would be smart enough to realise those problems. Just give it the ability to receive (and act upon) sensory input in a human-like fashion and you're halfway there... but even still that machine wouldn't have the genetic predispositions that influence personality and decision making so it still wouldn't behave like a human.

    Transferring a human mind into a machine (whilst retaining the pre-existing personality) would be impossible. It would end up being something new.

    This is a bit of a pointless discussion anyway, given that no one has done any of the things we're talking about, nor has anyone theorised a way of doing them.
    • CommentAuthorWinterman
    • CommentTimeJan 21st 2010 edited
     (7519.24)
    The language is well chosen and apropos.

    Like many scientists (a profession I greatly admire), these morons are apparently so focused on their desired outcome that they have forgotten that there is, in fact, no separation of body and mind. That apparent separation is an illusion of consciousness. There is no separation. Your body and mind are one thing. Irreducible.

    If they were to create a mind in their Torture Box, either a copy of an existing one or a new one that they could teach, there are millions of bits of evolutionary hardwiring that add up to a human mind, involving everything from processing sensory input to responses to internal chemical changes such as puberty, that simply could not be duplicated because no one can adequately quantify them all. And many of them have to be experienced in the proper sequence over time.

    Just because one has a genius for robotics or software or imaging tech, there's no reason to assume that same genius extends to biology, psychology or sociology. In fact, specialization being what it is, it is usually perfectly reasonable to assume that a scientist who is a genius in one field is a moron in another. She or he simply wouldn't have had the time to become equally proficient at both. Indeed, it's often the very talents that make one proficient in one thing that make one unable to grasp another.

    If we assume that one of the definitions of Life is sentience, and we further assume that these idiots will succeed in creating their human mind analog, i.e. making it capable not only of independent thought but of emotions like "love" (something so complex and volatile it's laughable that anyone thinks it would be a good thing to make a computer capable of it), then we can assume that the creature they make will be, literally, the most miserable one in existence, as it will think itself human but be totally unable to function as a human being.

    That, to me, is a definition of hell. So they will be creating hell for an innocent sentient simply because they think it would be neat to do. A new life form whose only function is to suffer. Neat. Awesome. No thanks.

    And that makes them fucking idiots.

    And, frankly, any model of an organic brain that doesn't include the near-constant chemical changes (for which there can be no electronic analog) isn't really a model. Just a sketch.

    I think this is an ugly thing to try and I hope they fail abjectly.
    • CommentAuthorFlabyo
    • CommentTimeJan 21st 2010
     (7519.25)
    @winterman - they're not going to create a sentient. People who've actually done some AI research will tell you that it's well beyond the realms of the possible for our lifetimes.

    It's one of those things that the everyman seems to think is easy, just a matter of creating some new tech, but in reality it's orders of magnitude harder than the hardest problems in other scientific fields. We don't know how our brains work. Period. We don't even really know how the brains of many of the planet's simplest organisms work. And these people think they can replicate it? What they're trying to do is analogous to saying 'I've taken a bunch of photos of the space shuttle, and now I'm going to use those to build one, and it'll fly and be identical to the real thing'.

    If I live to be a hundred, I still won't be living in a world where machine intelligence exists.
    • CommentAuthorthud
    • CommentTimeFeb 6th 2010 edited
     (7519.26)
    If it's supposed to be like human intelligence, then it should have as its basis the same kind of chemical-organic multi-processor. Then sensory input would work, and there could be learning, and eventually even reasoning. The chemical brain takes a long time to do this, and we still don't know how, but it seems that if you could construct similar devices you might eventually have similar results.
    I doubt if a computer could imitate the way anyone thinks, or how they experience anything, but it might still be possible to use a computer to save information (the same way I can now), only in a more sophisticated way.
    You could probably get away with using any kind of memory (but lots of it) as long as there was an interface like a bridge from analog to digital.
    This might be possible by working in both directions towards the center: (1) how a human brain could use a digital one, and (2) how a digital system could be used to stimulate an organic one, like a direct optical interface to the brain. This is where parallel processors would have to be developed to mimic nerve signals.
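
    (To put 'mimicking nerve signals' in concrete terms: the usual textbook abstraction is a leaky integrate-and-fire neuron, sketched below. The constants are arbitrary example values, not measurements from real neurons:)

        # Minimal leaky integrate-and-fire neuron: leak a little charge each
        # tick, integrate the incoming current, fire and reset past a threshold.
        def simulate(inputs, threshold=1.0, leak=0.9):
            v = 0.0       # membrane potential
            spikes = []
            for t, current in enumerate(inputs):
                v = v * leak + current
                if v >= threshold:
                    spikes.append(t)
                    v = 0.0
            return spikes

        print(simulate([0.3] * 20))  # steady input -> a regular spike train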

    If placing a conscious mind inside a construct was ever possible, I think it would work through having such a bridge built so that a person could access the "other" side. Something like adding another wing to the house, one that you would have to practice and learn how to enter, like finding a new part of your brain to expand into. This might take years of familiarity (like learning another language), but if it worked, you might eventually be able to retire into this new, constructed extension with your personality intact. Or you might find that you had gained the ability to control machines directly with your thoughts.
    Not by pushing a button. But by working at it and growing into it.
    After reading Gibson's Mona Lisa Overdrive, I began to wonder how this could possibly work, and whether it might be more than just science fiction. Think of those who might benefit from such immersion!
    • CommentAuthorDario
    • CommentTimeFeb 6th 2010
     (7519.27)
    @Winterman

    If that's your idea of admiration, I'd hate to hear you discussing those for whom you have no regard.
    • CommentAuthorFinagle
    • CommentTimeFeb 8th 2010 edited
     (7519.28)
    I have serious doubts about the ability to top-down design any sort of workable artificial intelligence. A while back I got heavily into autopoiesis theory, which begins from the basic insight that perception and purpose are inextricably intertwined, and both arise from need. This isn't much of a new insight philosophically speaking, as it dates back at least to Heidegger on the theory side.

    The central intuition is that perception and desire, purpose, are one and the same. An organism doesn't assemble a discrete series of perceptions - a smell, a round red shape, smooth texture - and say, deductively, "A-ha! An apple!" Instead, the mind perceives the apple standing out against the background of the rest of existence due to our need for it, as food. The need creates the perception, which leads to the purpose.

    To evolve a truly workable autonomous organism, you have to make it *need* something. For that reason I'd have to argue that something like EATR is a far better model for the organization of an emergent intelligence than any neural net simulation, no matter how complex and fuzzy the logic of the rules used to try to instantiate a mind. Minds are not piles of rules or expert systems, but purposeful agents that actively work to create and differentiate an object from the background of undifferentiated Being, based on the nature of biological need and desire. Stuff like flocking rules and bacterial growth systems are great for simulating self-organization behavior, but the poor AI guys just keep thinking that if they make an ever more complex cloud of agents with proper flocking rules, they'll get a mind out of it.
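
    (For anyone unfamiliar, 'flocking rules' means something like Reynolds' boids: each agent follows a couple of simple steering rules and group behaviour falls out. A bare-bones sketch, with made-up weights and the neighbourhood simplified to the whole flock:)

        import random

        # Boids-style flocking, stripped to cohesion + alignment; the
        # 'neighbourhood' is the whole flock to keep the sketch short.
        agents = [{"pos": random.uniform(0, 100), "vel": random.uniform(-1, 1)}
                  for _ in range(20)]

        def step():
            centre = sum(a["pos"] for a in agents) / len(agents)
            heading = sum(a["vel"] for a in agents) / len(agents)
            for a in agents:
                a["vel"] += 0.01 * (centre - a["pos"])   # cohesion
                a["vel"] += 0.10 * (heading - a["vel"])  # alignment
                a["pos"] += a["vel"]

        for _ in range(100):
            step()  # coordinated motion emerges; a mind does not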
    • CommentAuthorNygaard
    • CommentTimeFeb 9th 2010
     (7519.29)
    Confusing cloud of objections. The idea fascinates me, though. To dispel Winterman's Torture Box, imagine that they first develop a way to generate a massive amount of fake input (there's no way they can do that in eight years, right? But let's pretend, so that we don't have to reject the idea right away). That should do away with Stygmata's philosophical argument as well - if you can fake a human brain and all the input it needs to keep ticking, you can also fake any craving and motivation you could imagine. And a working mind demonstrably doesn't stop or go crazy just because it can't get input, though it's obviously not happy or comfortable. I remember reading about sensory deprivation experiments causing the brain to start generating its own input, resulting in hallucinations in the style of giant purple spiders on the walls and imaginary friends called Wilson.

    Which leads up to a more difficult objection - development. Those deprivation-induced hallucinations are products of a developed, functional mind running on empty, recycling stuff it's been fed with before. I bet that won't happen if you throw a pile of virtual brain cells in a bucket and kick them around a bit, even if you replicate the structure and operating rules of real brainmeat. I don't see how minds are really "evolutionary" in any meaningful way; your brain won't listen to its genes, since brain cells rarely divide or change. Like the rest of your body, once the shop opens, phenotype kicks in, and genotype doesn't really apply anymore. But you can't go to the other extreme either - just wire up a fully functioning mind, and throw the switch on a person with language, personality and an appreciation of Shakespeare. (Unless you had a way of perfectly copying the state of an existing mind, which amounts to the same thing, or possibly to cheating.) To really build a functioning virtual brain, you would need to simulate the neural pruning process and everything that goes with it. While I haven't got the hard data, I can't help but imagine that would need some titanic leaps - even bigger than those for the torture-box-avoiding-device - in both raw processing power, to handle the massive number crunching involved, and developmental psychology, to figure out the mechanics. (Of course, with enough processing power, and that massive sensory faker we imagined, maybe you could just fast-forward development and have a fully formed mind within minutes.)
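
    (For concreteness, the crudest possible cartoon of that pruning idea: overgrow connections, reinforce the ones 'experience' actually uses, cull the rest. Every number here is invented for the demo:)

        import random

        # Cartoon synaptic pruning: start overconnected, strengthen used
        # links, then cull whatever experience never reinforced.
        synapses = {(i, j): 0.5 for i in range(50) for j in range(50) if i != j}

        for _ in range(5000):                     # the 'experience' phase
            used = random.choice(list(synapses))  # stand-in for real activity
            synapses[used] = min(1.0, synapses[used] + 0.05)

        kept = {k: w for k, w in synapses.items() if w > 0.6}
        print(len(synapses), "->", len(kept))     # rarely-used links are gone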

    Even so, while utterly unfeasible right now, there's no theoretical reason it can't be done, and done ethically. It's not superscience - I can't catch it breaking any laws of physics as we know them. Our own existence is proof enough of that.
    • CommentAuthorFlabyo
    • CommentTimeFeb 9th 2010
     (7519.30)
    As for breaking the laws of physics, someone told me the other day that some neurologists are starting to whisper that the human brain might actually rely on quite a bit of quantum physics to work.

    As if the task isn't hard enough already...
    • CommentAuthorroadscum
    • CommentTimeFeb 9th 2010
     (7519.31)
    It has already been done and we are merely its dream.

    I believe Stanislaw Lem wrote a short story along those lines - hang on, let me have a look, yes, here we are - in 'The Cyberiad', the third machine's tale in the 'Tale of the Three Storytelling Machines'. Now that could have been one for the 'Apocalypse' thread - stray piece of pottery...
     (7519.32)
    @Flabyo: I'm pretty sure the quantum brain hypothesis has been floating around for a while, and doesn't have a whole lot of support (I think it shares a lot with the Intelligent Design movement, actually). I had a friend in high school who was absolutely positive that the human brain was a quantum system. I think the only real support for the idea is certain reflex actions. IIRC the time it takes for an electrical signal to go from your brain to your hand is around half a second, but under certain circumstances (e.g. somebody just kicked a soccer ball at your face) your hands will react to protect your head before the 'oh shit soccer ball' signal can possibly have gotten to them.

    My whole problem with this idea is that (when I've encountered it, anyway) it's generally floated as being exclusive to the human brain. But if the human nervous system uses quantum tunnelling to do its thing, then so must the nervous system of at least every other mammal (and probably the rest of the vertebrates, too). And if that were the case we'd have a hell of a lot more data about it.