    •  
      CommentAuthorLuke
    • CommentTimeJan 22nd 2008
     (591.1)
    Awesome research here. They evolved 50 generations of robots in an environment with "food" and "poison" sources that charged and drained their batteries respectively, with gene-like programming elements and mutation between generations.
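
    For a sense of the shape of the thing, here's a toy evolutionary loop in Python - all the numbers, names and the stand-in trial function are mine, not the researchers':

        import random

        GENOME_LEN = 30        # 'gene-like' parameters per robot controller
        POP_SIZE = 20
        MUTATION_RATE = 0.05

        def random_genome():
            return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

        def mutate(genome):
            # Each gene has a small chance of being perturbed between generations.
            return [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else g
                    for g in genome]

        def run_trial(genome):
            # Toy stand-in for the arena: the robot meets a series of sources and
            # decides whether to approach based on a noisy sensor weighted by its
            # genes. Food charges the battery, poison drains it.
            charge = 100.0
            for _ in range(50):
                is_food = random.random() < 0.5
                sensor = (1.0 if is_food else -1.0) + random.gauss(0, 0.5)
                if sum(g * sensor for g in genome) > 0:   # chooses to approach
                    charge += 10.0 if is_food else -10.0
            return charge

        population = [random_genome() for _ in range(POP_SIZE)]
        for generation in range(50):
            ranked = sorted(population, key=run_trial, reverse=True)
            survivors = ranked[:POP_SIZE // 2]    # the best chargers breed
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(POP_SIZE - len(survivors))]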

    By the end they found co-operating societies of robots telling each other where food was - but some robots turned to digital duplicity, lighting up to inform other robots that a poison source was food, then silently mooching off to get the food themselves. Also, martyr-bots eating poison and lighting up to warn off their fellows.

    Amazing stuff.
    •  
      CommentAuthorJoe Paoli
    • CommentTimeJan 22nd 2008 edited
     (591.2)
    Truly stunning and fascinating.

    If I weren't already an agnostic, I'd be horrified at this kind of research and its implications for morality, religion, and the soul. Wouldn't it be foundation-shaking for a diehard theist?
    •  
      CommentAuthorAriana
    • CommentTimeJan 22nd 2008 edited
     (591.3)
    Oh, nice... probably accidental, but nice:
    Some robots, though, were veritable heroes. They signaled danger and died to save other robots. “Sometimes,” Floreano says, “you see that in nature—an animal that emits a cry when it sees a predator; it gets eaten, and the others get away—but I never expected to see this in robots.”
    • CommentAuthorFlabyo
    • CommentTimeJan 22nd 2008
     (591.4)
    The really cool thing about genetic algorithms like this is that the 'rules' can be really simple yet you still get extremely complex behaviours.

    I've seen a similar experiment that was based around robots chasing each other. The robots started off with random brains, basically doing random stuff based on their sensor input, and the researchers bred from the ones that looked like they were chasing others and the ones that looked like they were running away. After a good number of generations they had situations where one 'chasee' would make a suicide run at the chasers to let the others get away. Wish I could find the link.
    •  
      CommentAuthorAlan Tyson
    • CommentTimeJan 22nd 2008
     (591.5)
    This is extremely interesting. Do they have any idea what caused some robots to be heroes and others to be bastards?
    • CommentAuthorFlabyo
    • CommentTimeJan 22nd 2008
     (591.6)
    Depending on the complexity of the bit of the brain they're randomizing and 'breeding', you generally can't look at it and say 'this is why it's doing it'. Like any neural net system, opening it up and looking at the inside of the black box gives you no real insight into how it's working.

    Another story, possibly apocryphal, runs that someone was using a system a little like this to design an electronic circuit. Given the inputs, the desired matching outputs, and the technical specs of all the available components, the system would throw the components together randomly until it built a circuit that achieved the mapping. Some of the resulting circuits didn't work when run through a simulation, but did when built for real. The side effects of known tolerance failures in the components had been built into the solutions - essentially, a circuit only worked with the exact real components the system had been told it could use.

    You can do this sort of stuff entirely in software, but it's always more fun to use robots and real world sensors to throw a nice bit of physical randomness into it. Gets you bigger grants too.
    •  
      CommentAuthorLuke
    • CommentTimeJan 22nd 2008
     (591.7)
    The 'success condition' used for selecting whose programming would make it to the next generation was the remaining charge in each robot's battery at the end of the cycle - so any strategy that leads to an individual acquiring a lot of charge will eventually be discovered. Most of the effective strategies were co-operative, where everyone works together to get maximum charge - but being a bastard works too, because not only do you get charge, you cause other robots to get less, so their code is less likely to be chosen for the next generation. Of course the individual robots don't know that, but that's the point of this kind of work - as Flabyo says, the net doesn't necessarily know how something works, but it can explore the space of possible solutions and find it anyway.
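
    To make that concrete, one common way to do the selection step is fitness-proportionate ('roulette-wheel') selection - a sketch only, and not necessarily the scheme the researchers actually used:

        import random

        def select_parent(population, charges):
            # Each robot's chance of breeding is proportional to its share of
            # the total remaining charge (assumed non-negative). A cheat wins
            # twice over: its own charge goes up AND everyone else's goes down,
            # so the same absolute charge buys a bigger slice of the wheel.
            total = sum(charges)
            pick = random.uniform(0, total)
            running = 0.0
            for robot, charge in zip(population, charges):
                running += charge
                if running >= pick:
                    return robot
            return population[-1]   # guard against floating-point shortfall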

    Presumably if the evolution had been continued with the cheats, they would have developed strains that ignored fellow robots' input - or some kind of hyper-bastard able to double-bluff the competition.
    •  
      CommentAuthorCyman
    • CommentTimeJan 22nd 2008
     (591.8)
    Baffling stuff. Maybe this can explain what comprises good and evil in terms of chemistry?
    • CommentAuthorStefanJ
    • CommentTimeJan 22nd 2008
     (591.9)
    I want to see these bots evolve sex. That will really liven things up.

    Also, I'd be curious if they eventually evolve bastard detection, and a sense of moral outrage.

    Then we can have episodes of Robot Battles where they don't need a twit with a remote control to keep the things moving.
    • CommentAuthorMark W
    • CommentTimeJan 22nd 2008
     (591.10)
    Presumably if the evolution had been continued with the cheats, they would have developed strains that ignored fellow robots' input - or some kind of hyper-bastard able to double-bluff the competition.


    A hyper-bastard robot, eh? Hey Warren, they're trying to code you into silicon!
    •  
      CommentAuthorwilliac
    • CommentTimeJan 22nd 2008 edited
     (591.11)
    Rodney Brooks talks about this kind of emergence in Fast, Cheap & Out of Control (clip) where he has robots with very simple instructions acting like ants (no clip).

    The Radio Lab episode about emergence is pretty interesting too, but sadly doesn't talk about robots.

    edit: Apparently swarming robots (to examine emergent behavior) have gone open source.
    •  
     (591.12)
    @StefanJ - the evolution of moral outrage and bastard detection is not something we want to see in robots. We'd all be so fucked. Although I'm also entertained by the idea of an army of Daily Mailbots.
    •  
      CommentAuthorV
    • CommentTimeJan 28th 2008
     (591.13)
    Flabyo said: "Like any neural net system, opening it up and looking at the inside of the black box gives you no real insight into how it's working."

    This is not strictly true.
    While it is not trivial, there are techniques for network interpretation. The techniques do not give you the answer directly, but they give you enough that you can do some mathematical puzzle solving on your own.
    I have even done this sort of work myself and published the results.
    As you might expect, it gets more difficult as your network gets larger, but difficult is not impossible.
    This is also why, when people intend to interpret the network, they will go out of their way to use algorithms that keep it as small as possible.
    That said, many people do not interpret their networks at all, for various reasons (like having other goals in mind, etc.).
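
    To give a flavour of what I mean, here is about the simplest interpretation technique there is - input-sensitivity probing - sketched on a toy two-input network. Purely illustrative, and nothing to do with my published work:

        import math

        def make_net(w_hidden, w_out):
            # A tiny fixed feed-forward net: tanh hidden units, linear output.
            def net(inputs):
                hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)))
                          for row in w_hidden]
                return sum(w * h for w, h in zip(w_out, hidden))
            return net

        def sensitivity(net, inputs, eps=1e-4):
            # Nudge each input and see how much the output moves. Large values
            # flag inputs the network actually uses - not a full explanation,
            # but a starting point for the puzzle solving.
            base = net(inputs)
            scores = []
            for i in range(len(inputs)):
                bumped = list(inputs)
                bumped[i] += eps
                scores.append(abs(net(bumped) - base) / eps)
            return scores

        net = make_net([[0.9, 0.05], [0.7, 0.0]], [1.0, 1.0])
        print(sensitivity(net, [0.5, 0.5]))   # the net mostly ignores input 2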

    I'm with you on the coolness of emergent behaviour and unexpected results from breaking out of simulation work. I'm trying to include more of that myself.

    I hope you don't mind my being a bit pedantic. That is just a statement I see often and it kind of bugs me.

    @williac Is it a reference to the whole parable of the ant thing? Sometimes complexity of behaviour comes from the environment. In the parable, a human observer sees extremely complicated patterns as an ant moves along a beach, but observed up close, at the ant's own scale, there are just nonuniformities in the surface that the human doesn't see but that present large obstacles to the ant. So the ant is actually responding in a simple, straightforward way that looks complicated from afar, because the surroundings themselves are complicated.
    This is part of the idea behind the Braitenberg vehicles (thought experiments, from Braitenberg's 1984 book) and Grey Walter's tortoises (actually built back in the late 1940s). Well, I suppose Braitenberg focussed a little more on our tendencies to anthropomorphise, but some later work that references and actually builds variations of the vehicles goes more the way of behaviour emerging from environmental interaction.
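
    And because they're fun: a Braitenberg vehicle is almost embarrassingly simple to write down. Here's a toy version of the classic 'fear' versus 'aggression' wiring (my numbers, purely illustrative):

        import math

        def light_at(px, py, light):
            # Sensor reading falls off with squared distance to the source.
            lx, ly = light
            return 1.0 / (1.0 + (px - lx) ** 2 + (py - ly) ** 2)

        def step(x, y, heading, light, crossed=False, dt=0.1):
            # Two light sensors, offset left and right of the heading, each
            # wired straight to one wheel. That wiring choice is the whole
            # 'brain'.
            left = light_at(x + math.cos(heading + 0.5),
                            y + math.sin(heading + 0.5), light)
            right = light_at(x + math.cos(heading - 0.5),
                             y + math.sin(heading - 0.5), light)
            if crossed:   # 'aggression': each sensor drives the opposite wheel
                left_wheel, right_wheel = right, left
            else:         # 'fear': each sensor drives the wheel on its own side
                left_wheel, right_wheel = left, right
            speed = (left_wheel + right_wheel) / 2.0
            heading += (right_wheel - left_wheel) * 5.0 * dt   # imbalance turns it
            return (x + math.cos(heading) * speed * dt,
                    y + math.sin(heading) * speed * dt,
                    heading)

        x, y, h = 0.0, 0.0, 0.0
        for _ in range(500):
            x, y, h = step(x, y, h, light=(3.0, 1.0), crossed=True)
        # crossed=True charges at the light; crossed=False shies away from it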
    • CommentAuthorFlabyo
    • CommentTimeJan 28th 2008
     (591.14)
    @Vanessa

    No, by all means correct me. I haven't done any serious work with neural nets since my degree, and that was 10 years ago, so my memory of it all is a little hazy.

    There's not a lot of room for neural stuff in videogame AI; not knowing how long something will take to evolve into something useable tends to scare the moneymen too much. I have seen genetic algorithms used to compute the fastest line around a track for a racing game. That was OK because we tended to start it off with a line placed by the level designer, and the results after only a few generations were usually good enough for the game.
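
    In outline it was something like this - a toy sketch with a fake lap-time cost, not our actual code:

        import random

        def mutate_line(offsets, rate=0.1):
            # Jitter the lateral offset at a few waypoints along the track.
            return [o + random.gauss(0, 0.3) if random.random() < rate else o
                    for o in offsets]

        def lap_time(offsets):
            # Fake cost standing in for a real lap simulation: sharper changes
            # in offset mean tighter corners and slower speeds.
            return sum(abs(b - a) for a, b in zip(offsets, offsets[1:]))

        def refine(designer_line, generations=20, brood=30):
            # Start from the level designer's hand-placed line and keep the
            # best mutant each generation - a few generations is usually
            # 'good enough'.
            best = designer_line
            for _ in range(generations):
                candidates = [mutate_line(best) for _ in range(brood)] + [best]
                best = min(candidates, key=lap_time)
            return best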

    Still has the problem that it tends to find and exploit bugs, of course (if you've ever done any rigid body physics simulation on a computer, you'll know the sort of thing I mean; it's hard to explain). That's the price you pay for effectively allowing random choice through the whole problem space.

    Emergence of unexpected behaviours from seemingly simple bits of AI is one of the things I truly love about my job, though. Nine times out of ten you end up having to stick in code to stop it happening, but sometimes something truly glorious comes along. If you've ever played Black & White 2, some utterly fantastic bits of emergence cropped up while we were making that...
    •  
      CommentAuthorLuke
    • CommentTimeJan 29th 2008
     (591.15)
    In a perfect world that 'problem' would be a blessing - you could let the neural network keep finding the bugs, and keep fixing them until your system was perfect. In the real world, where "actually finishing projects within a reasonable amount of time" matters, it just leads to you shouting at the computer "Yes I KNOW you can just clip through the floor there, but don't!"

    That's what I love about neural nets in AI - they can explore the whole solution space based only on the conceptions of good and bad you've given them, and the results can be thought processes that are truly alien.