This discussion has been inactive for longer than 5 days, and doesn't want to be resurrected.
  1.  (10114.1)
    Going on the assumption that pretty much all of you here know what the Singularity is; if not, I'm sure a few moments on the wic-of-pedia will remedy that.

    Technology is growing at an astounding rate. Having come this far in a mere 40 years, we are quite close to becoming completely dependent on computers.

    We are in a race to see who creates the first true artificial intelligence. Nanobots are becoming closer to reality than science fiction. Tech is becoming cheaper and more widespread, and we are comfortable with the buzzing of the constant current we are bombarded with on a daily basis.

    So will the world go to war when true artificial intelligence is born? Will the world be torn apart by robots that deem our puny meat brains outmoded and insignificant?

    I for one am running the other way, far far from the Cylons, thank you.
  2.  (10114.2)
    Charlie Stross wrote up a pretty good thing about why he doesn't think the Singularity can ever happen. I can't say I completely agree with him, because he ignores or forgets about the second path to singularity in Vinge's own essay. The backbone of Stross's dismissal is that creating a super-human AI is impossible, because creating a human AI is pretty much impossible, but in the original Vinge essay (and pretty much all of Vinge's singularity fiction), the super-human intelligence that leads to singularity is an enhanced organic human intelligence, not a 100% artificial one. And it seems like we're not that far off from being able to do that. We practically already use smart phones to augment our brains (quick external memory storage, etc), and with stuff like Google changing how we remember, it seems like all we need is a way to dump the information directly into our brains (if only temporarily) and we're there.

    Will there be a robopocalypse, or a Kurzweilian 'moving beyond the need for a physical body?' Certainly not. But an 'all watched over by catatonic cyborgs of loving grace (who are in turn watched over by the people who feed them and change their diapers)...' Maybe.
    • CommentTimeAug 4th 2011
    When it comes to the Singularity, I always go back to this:

  3.  (10114.4)
    As a curmudgeonly old cynic I've always had a gut feeling that the singularity will either be a big disappointment, or won't happen at all. It has the feel of one of those things like nuclear fusion that are perpetually 20 years away.

    That said, I'd love to be pleasantly surprised. Keep beavering away all you cheery little transhumanists!
    • CommentTimeAug 4th 2011
    A couple of years ago I snagged the domain "" specifically to post an angry refutation of the concept along the same lines that @oldhat's go-to comic takes, but I haven't got it together yet.

    Aside from the human reality of the vast majority of people, who couldn't give fuckall for AI because they hustle face-to-face lives doing menial work in very low-tech environments, there is also the fact that even the very high-tech sliver of civilization is hopelessly non-interoperable.

    My career is in software development and support on a corporate enterprise level. I work closely with professionals who have been involved in Big Information. Federal systems with millions of simultaneous users. Actual real AI research. Backbone systems that support the activity of multiple large corporate clients.

    From my own closely observed personal experience, the goddam robot revolution is an absurdist fantasy. Even with billions of dollars in potential profit to thousands of highly motivated, technically focused public and private companies, even with that kind of motivation, nobody can get any vaguely complex software system to function reliably with any other complex software system for any length of time at all. The minute, literally the very minute, human technical support staff take their eye off the operation of these kinds of systems they unravel in spectacular fashion.

    Think about what would somehow be required for an AI-driven conquest of civilization... even if that AI is software-enhanced biology. Multiple non-equal complex software systems, written by different agents at cross purposes using different codebases and different design paradigms, would somehow have to be arranged to communicate smoothly with each other for extended periods of time without human intervention. And in order to remain functional, each separate system would have to be patching bugs and vulnerabilities, expanding storage, devising new indexing schemes to keep relevant access to databases, and exchanging data in incompatible formats over unsecured transmission vectors; and, if they want to remain viable, somehow also controlling machinery to generate electricity, manufacturing literally millions of different sorts of physical components from rare metals and other resources (as well as innovating improvements in them), assembling them into things like servers and routers and fiber optic cables, transporting and installing them, maintaining them...

    I mean, holy shit, right?

    It is not possible. Conceptually, factually, not possible.
    • CommentTimeAug 4th 2011 edited
    Just in regards to the Stross essay and AI specifically, I don't understand where in his essay he thinks he's arguing that super-human AI is impossible, since he's not making any hardware or tech arguments, just market arguments. It's weird.


    I have been thinking a lot about consciousness and intelligence, and I think what should be kept in mind when talking about AI is that consciousness and intelligence are maybe not what make humans different from animals. Once you get past reptiles, a lot of animals have intelligence and self-awareness, and achieve those things with much simpler and more efficient brain structures than our own. About five kinds of mammals can recognize themselves in mirrors, and a lot of birds are just as smart: specifically, corvids can solve problems by creating new tools out of the objects around them. Rooks can remember, recognize, and teach each other. Lots of dumb animals can display emotions. If you imagine yourself stranded in the jungle, never speaking again and roaming around for food, you are imagining yourself living the life of a smart bird. There's not much difference.

    In other words, what I think lately is that the humanizing elements of our brain are those which focus on memory: our big big human neocortices just add more long-term memory, and our outlandishly complex mid-brains just allow for more vigorous interaction between short- and long-term memory; from this comes our capacity to run high-end software like language and culture. From this, we can have civilization. Without civilization, we aren't using the unique parts of our brain. So the things that make us smarter than donkeys are the same things that were first solved by computers, as far as I understand computers.

    So the hurdle to AI is maybe the crazy basement-level stuff that neurons can do even in really, really pared-down systems, like the brains of arthropods. How does a bee, whose brain is just a few noodles of nuclear clumps, manage to send navigation instructions through dance?! When you split neurological systems into their smallest parts is when their power becomes the most unfathomably awesome. This is also the region, as far as I understand it, whose mechanisms science is furthest from grasping.


    I propose that the frontier to AI is best seen by looking at biological brains upside down:

    extended memory > computers already do this
    all other signal processing / signal-to-meaning ranges > computers can't do this yet

    But I think it is just a software problem, which is to say that if we figured out how computer hardware needed to be different to achieve what neurons do (something like needing to have pathway preference growth between memory locations), it would be pretty easy to build, but programming the software requires solving problems that seem more daunting to me.
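    (A toy sketch of what "pathway preference growth" could look like in software - this is purely my own illustration with made-up names, not anything from a real system: a Hebbian-style rule where a connection gets a little stronger every time both of its ends are used together, so well-worn routes between memory locations become the preferred ones.)

```python
# Toy Hebbian-style "pathway preference" sketch: connections that fire
# together get strengthened, so often-used routes between memory
# locations become preferred over time. Purely illustrative.

class PathwayNet:
    def __init__(self, n, rate=0.1):
        self.n = n
        self.rate = rate
        # weight[i][j] = preference for the pathway from node i to node j
        self.weight = [[0.0] * n for _ in range(n)]

    def co_activate(self, i, j):
        """Strengthen the i -> j pathway each time both nodes fire together."""
        self.weight[i][j] += self.rate * (1.0 - self.weight[i][j])

    def preferred_next(self, i):
        """Follow the strongest outgoing pathway from node i."""
        return max(range(self.n), key=lambda j: self.weight[i][j])

net = PathwayNet(4)
for _ in range(10):
    net.co_activate(0, 2)   # this route gets used often...
net.co_activate(0, 1)       # ...this one only once
print(net.preferred_next(0))  # the well-worn pathway wins: 2
```

    The update is deliberately saturating (it approaches 1.0 rather than growing forever), which is one simple way to keep a "preference" bounded.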

    In either case it seems implicitly possible, but maybe more so by accident than intention.

    That thought also leads back to the fact that the other route to superhuman intelligence is just to build off biological hardware with incremental networking integration. If the hard part is already floating around pollinating flowers, then we can probably figure out how to strap that to a server. I think everything Vinge wrote about that is still on point. Basically, progress in this realm is defined as getting past the superficial level of thought sharing via language that we've always had - "Hey, I think love is good and stuff": what does all that mean? Instead of just improving the speed of superficial thought delivery, we need to overcome the very ancient human limits of never really understanding each other and not being able to quickly learn what others have learned on an expert level. When the internet isn't just porting language and images around, but clumps of ego, then the internet users will be superhuman.

    Which isn't necessarily so exclusive as Campbell makes out. The developing world is adopting networking tech (cellphones and micro-marketing) more quickly than large cross-sections of the American populace (old white people).
  4.  (10114.7)
  5.  (10114.8)
    Corvids -specifically crows and ravens- can recognize and remember faces and hold grudges (against a particular human) for about 10 years. O,O. Apparently this grudging (and permits) is part of why avian researchers prefer to use pigeons, who only hold grudges for about a year.
    Different ravens in different regions have different 'dialects', very complex social systems, and possibly language (though the studies aren't done for this one). Avians also have their own rules for syntax in calls.

    There's obvious intelligence and culture there. Just not /human/.

    Though I'm not sure how long-term it goes. 10 years of memory, learning able to be passed on to chicks via the chicks observing...
    • CommentAuthorRenThing
    • CommentTimeAug 5th 2011

    Also keep in mind that ravens, in the mask/grudge study, were able to teach other ravens about the masks so that they too carried on the grudge, even after the individual crows who had been tagged and pissed off were gone.
  6.  (10114.10)
    @old hat <3

    I am interested in how we as a species evolved technology so fast, in such a very, very short time... I know that for the singularity to even remotely become a factual entity in its own right we would have to make quite a lot of advances in all of the sciences. Old hat's posted comic opens up the obvious and pure fact that the entire world needs to be caught up on fundamental needs like clean water, let alone working electricity and a working knowledge of infrastructure etc., and that wouldn't even remotely put everyone at equal footing.

    We are so far from understanding what the human brain actually is and does. I think intellectually we are still in our infant stages of development, given the amount of time we have been technologically "advanced". I'd hope we are able to get to the point where computers can speed up our intellect and enhance our memories.

    I personally have a deep-seated fear of robotics (the Roomba freaks the crap outta me), and things like Juels (link for video) worry me, since he asked questions on his own (the fact he asked about death and dreaming makes me wanna run the other way). I know he is far from being a true AI, but something that ponders and is made of machine parts is too close for comfort.

    I'm worried that we will not come close enough to understanding the workings of the mind to teach a machine properly, and it would go wrong. How do we expect to create a thinking machine that reasons when we do not yet understand how we reason in our own minds?
    • CommentTimeAug 5th 2011 edited
    For what purpose do we need a machine that reasons, that emotes? Machines are tools for more efficiently performing linear, rule-based 'left-brain' tasks. Intuitive, creative, emotional 'right-brain' tasks are what humans excel at - the challenges in getting a machine even remotely up to our level of performance in this regard just aren't commercially viable enough to bother with, whereas the pay-off for better computational/information-processing power is infinitely useful to our increasingly technological existence. Machines that possess enough awareness to extrapolate the circumstances in which to initiate one of their list of machine-tasks without being told to do so are something that we need (Grandpa has shat himself. Initiate Grandpa Cleaning Procedure.). Machines that aesthetically decide that humans are smelly, inefficient, brutish creatures that make the world function poorly (Grandpa has shat himself. Grandpa's functionality is compromised. Grandpa is negatively impacting on the experiences of other nearby nipple-beasts. Solution: Ctrl-Shift-Delete Grandpa) are not particularly what we need, so we're not really likely to fund development down that path...
  7.  (10114.12)
    @adampark personally I think that we will never need an AI that reasons or emotes, or an intelligent AI at all for that matter. But the developers creating AI, who are being well funded by large corporations around the world, are looking to make the AI as close to a human as possible; therefore it would have to have reason and emotion. They have created AIs to learn child development and are trying to teach the thing to love....
  8.  (10114.13)
    @Ren _ I haven't read the study, it just got referenced a *lot* in what I've read. Got a link to it somewhere? :)
    • CommentTimeAug 5th 2011
    You're all missing the basic premise of The Singularity: technology will be so advanced that it will be indistinguishable from magic (to borrow a phrase). Poor people won't matter because magic. AI will be perfectly developed because magic. AI will be necessary because without it, The Singularity doesn't happen.

    The Singularity is the Ray Kurzweil happy version of Judgement Day. It's always some point in the future, just about to happen, looming. Well OF COURSE it wouldn't apply to starving children in Sudan UNTIL THE DAY THAT IT DOES.

    It's all handwaving bullshit.

    HAVING SAID THAT it makes for an interesting thought experiment, and Templesmith's Singularity 7 was a mighty fine story about how machines and biology could evolve into one existence. What if all this godlike power gets into the hands of someone like, say, Dick Cheney? Or even Dave, the dude down the street that seems pretty cool but then just sort of goes weird one day? What if it all goes Skynet before we get a chance to fuse with technology?
    • CommentAuthorRenThing
    • CommentTimeAug 6th 2011
  9.  (10114.16)

    -well. Why would the singularity happen everywhere at the same speed? Technological advances don't. If It Did Occur, I'd guess it'd start in more developed countries and then trickle down to the poorer places...
    granted, I'm forgetting about Magic.
  10.  (10114.17)
    I think a type of singularity will happen sans AI; our advances are gaining speed. The utopian principle behind it I completely ignore (as nice as it would be): our lifetime will not see the whole world on equal enough footing for it to be a worldwide phenomenon; it would happen in the areas that thrive on technology as a booming business. We as humans are so dead set on the next big thing that companies are crawling all over themselves to try and develop more and more. Technology will be so fully integrated into our lives that no aspect of our daily comings and goings won't rely on tech. We are very close to it being that way now. We had a mini singularity after the first computer was invented; it took a mere 40 years to get this far, and that is an astounding rate. There is no reason why another rollover will not happen again very soon, in much the same way and just as fast.
    I ignore the "because magic" aspect of the theory but quite a bit of it has grounding in the real world and makes sense for the ebb and flow of new inventions.
    Figuring out wet ware type tech would be a likely candidate to start the snowball rolling.
  11.  (10114.18)
    It's interesting; some of the comments in this thread make me wonder if parasitism is actually a control mechanism that prevents disruptive, runaway evolution from destabilizing the ecosystem...true of robots as of humans.
      CommentAuthorJon Wake
    • CommentTimeAug 8th 2011
    Yeah, evolution doesn't really work like that. Intelligent design does though.
    • CommentAuthorFlabyo
    • CommentTimeAug 8th 2011
    Last time I was involved in the academic side of AI (many years ago) there were basically two camps.

    The 'classicists', who attempted to program computers that replicated the 'processes' of human thinking, hoping that if you got it complex enough you'd have intelligence. These are the chaps who developed things like expert systems and blackboard reasoning.
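    (For anyone who hasn't met one: an expert system is basically a pile of if-then rules plus a loop that keeps applying them until nothing new can be concluded. A toy forward-chaining sketch, with rules I made up purely for illustration:)

```python
# Toy forward-chaining expert system: keep applying if-then rules
# until no rule adds a new fact. The "classicist" style in miniature:
# intelligence as explicitly programmed reasoning steps.

rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "makes_tools"}, "is_corvid_smart"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # fire a rule when all its conditions are known facts
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# derives is_bird, then is_corvid_smart, from the starting facts
print(forward_chain({"has_feathers", "makes_tools"}, rules))
```

    Real systems of this era added conflict resolution, certainty factors, and explanation traces on top, but the core loop really was about this simple.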

    And on the other side were the 'neural' guys, who attempted to replicate the biological structures of the brain electronically, in the hope that if they got it complex enough intelligence would emerge. These guys gave us neural nets and other fancy machine learning tools.
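    (And the 'neural' camp in miniature: a classic single-neuron perceptron that learns a rule from examples instead of having it programmed in. My own toy example, not taken from any particular system:)

```python
# Toy perceptron: the "neural" approach in miniature. Instead of
# hand-written rules, a single artificial neuron adjusts its weights
# from examples until its behavior matches the training data.

def train_perceptron(samples, epochs=20, rate=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # nudge the weights toward the correct answer
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b += rate * err
    return w, b

# Learn logical OR from its truth table.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 1, 1, 1]
```

    The famous catch, which stalled this camp for years, is that a single neuron like this can only learn linearly separable rules (OR yes, XOR no); getting past that took multi-layer nets.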

    But no-one is even *close* to actual artificial intelligence yet, mainly because we don't even really know why *we're* intelligent. The real thing we're going to learn from AI research is how our biological computers (and those of other animals) give rise to intelligence in the first place.

    I just can't see the kind of AI that sci-fi talks about ever coming along.
