A Play about a University Talk in 2012

Noble Ape is now known as the ApeSDK.

Author: I was baptized by fire: six hours of workshops and then this, all in one day, basically. So I flew in and then it was workshops, workshops, workshops. So, I'm here to demonstrate my Noble Ape Simulation. But before I do, I'd like to talk a little bit about Biota, which is a site that I've been the editor of for about, I don't know, six years now. We basically go out and interview historical artificial life folk.

Speaker 2: So you gave a presentation on the last day? And the closing comments about Biota.

Author: Briefly. Yes.

Speaker 2: I was there but... You were there but...

Speaker 5: I was still there... Yeah. Fading. Fading rapidly.

Author: Okay. So a little bit of background on Noble Ape. There are many ways to look at Noble Ape, and this is probably the easiest way to give an initial demonstration. The Noble Ape simulation creates a landscape environment, initially. There are a number of simulation components that are all brought together, but it starts by creating a random fractal landscape. There's a weather simulation that moves over the landscape, which creates cloud patterns, rain and these kinds of elements. There's a biological simulation, which is based loosely on quantum mechanics: at every point, there's a probability associated with the underlying shape of the landscape. The surface area, the height, the moving and total sunlight, the rainfall and the salt content form a stack of probabilities, and the species in the simulation are represented probabilistically against that stack. A noise map is then placed on top. In the case of growing plants, it's a growing noise map to show the plants growing. In the case of beetles, mice, these kinds of things, it's a moving noise map to track their movements.

But when I started developing Noble Ape... This is a nice metric projection of the selected agent. Let me run it; it might make it a little easier. Oh. Sorry. Let me run it. Well, I'll talk to it. So what else? Let me put the weather over the top of this. This is probably the business-card end of the simulation, in terms of not getting much out of it. There are also three different kinds of intelligent agent simulations going on. The first you can see here is what I call the cognitive simulation, which was based on an early agar simulation that I did, and on information transfer.

Stepping back, I started developing Noble Ape in 1996. At the time, I was using 68000 processors and early XT and AT machines. The reason I used a very simple biological simulation was so I didn't have to do a lot of computation for the biology: the apes could interrogate the environment at the points where they were at any given time, rather than the simulation running a large-scale biological model. There's a full-time developer called Bob Mottram, in the UK. He's an industrial roboticist by profession. He's added components. I'll talk about his stuff in...
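
[Editor's note: the following is a minimal sketch of the probability-stack idea just described, not the actual ApeSDK code; the type names, field names and weighting scheme are all hypothetical.]

```c
/* Hypothetical per-point terrain properties forming the probability stack. */
typedef struct {
    double surface_area;   /* local surface area of the fractal landscape */
    double height;
    double sunlight;       /* moving and total sunlight folded into one value here */
    double rainfall;
    double salt;
} terrain_point;

/* Hypothetical per-species weights over that stack. */
typedef struct {
    double w_surface, w_height, w_sun, w_rain, w_salt;
} species_weights;

/* Probability (0..1) that a species is present at this point:
   a weighted combination of the underlying terrain probabilities. */
static double species_probability(const terrain_point *p, const species_weights *s)
{
    double v = s->w_surface * p->surface_area
             + s->w_height  * p->height
             + s->w_sun     * p->sunlight
             + s->w_rain    * p->rainfall
             + s->w_salt    * p->salt;
    if (v < 0.0) v = 0.0;
    if (v > 1.0) v = 1.0;
    return v;
}

/* A noise map turns the probability into an observable density:
   a growing noise map for plants, a moving one for beetles or mice. */
static double observed_density(const terrain_point *p, const species_weights *s,
                               double noise /* 0..1, from the noise map */)
{
    return species_probability(p, s) * noise;
}
```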

Speaker 5: So just terminology... So, each agent, you're calling each one of those an ape?

Author: So there's a wide variety of things that we'll get into, in terms of what in the simulation makes them an ape, including fur and these kinds of things, but I'll talk about that in a minute. In terms of the history of the simulation, I came to it... I was studying physics and philosophy at the time, and I was interested in ideas of the mind, in simulating ideas of consciousness, and in how these agents, in a very rich simulated environment, would have social interactions and these kinds of things, and form societies. That was the initial idea behind the simulation, and it's gone in a number of different directions since then.

Author: In terms of the intelligent agent model, as I was mentioning, I had an early agar simulation that I had developed prior to Noble Ape. In fact, Noble Ape was really a bringing together of a wide variety of bits and pieces of software that I'd written at the time. But I was interested in the idea of information transfer through the agar, as population densities moved through the agar and fed on it. I resolved that down to two competing formulae to describe what I used for information transfer, initially in the two-dimensional agar simulation, which I used to simulate the reactive cognitive processes. But then I moved it to a three-dimensional simulation. I wrote about this in a book called "Nature Inspired Informatics". So if any of you are interested, I can pass on the reference material associated with that. But the idea basically was that there was an internal representation and an external representation. There are sensors and actuators, which I have drawn in the past, but it just adds visual noise, basically. You can almost see which end the sensors and actuators are coming in on.

Speaker 5: So the revolving thing above is the internal representation that, that agent has?

Author: It's the first of these. There have been two additional ones layered on, and at this point, it's probably easiest to go to the command line version to talk more about that. So let's get rid of that version. This is a command line version I've been running for some time. When Bob Mottram first came to the simulation (he comes from industrial robotics, but he's very interested in social robotics), he added some of the social models, including drive theory. So what you see here, well, I can show you all the apes in the simulation, but you see a particular ape in the simulation. They have double-barrelled names because they're noble apes. It's a small joke. That way you can track them. You can also track family structures and things like that through the names.

Speaker 5: But even the progenitors, do they start off with a couple or did you seed them with a bunch?

Author: The initial conditions are really interesting, both with regards to this and with regards to what we'll get into associated with the simulation of language. This is initially random. The language is not initially random. I've had this discussion back and forth with Bob Mottram, because it's a lot more interesting, from my perspective at least, when their language is initially randomized too, but we'll get into that in a minute. So yes, the initial conditions are important here. All the apes start off in a condition of maturity. I think this one, yes, this one has a population of 95: 81 adults and 14 juveniles. So I've run this for...

Speaker 5: So, this is what a starting position would be like?

Author: No, it's not. I've run this for a period of time because it's a little boring initially, and when I've demoed it up until now, I've had to run it for at least 10 days to start getting things populating. This is one that I cooked a little earlier, so to speak; it's been running for 43 days now, at 7am in the morning. In terms of the kinds of simulations that are internally represented, there's an internal social simulation, which produces a social graph, which is far more interesting to observe than the raw interactions. There's a brain code simulation, which simulates their external language: when two apes meet, there's mutual execution of their language code, which represents their external language. They also have an internal language, which is the same language, just represented internally, that they run both against a representation of themselves and against representations of other apes they may meet in the simulation. So typically, if one is born and then suckled and nurtured (and all this is in there), then they will obviously have a very strong tie to their mother in that circumstance. If the mother is taken away and another mother-like ape is brought in, then they have an identical relationship to her after a certain period of time. So the interactions generate this.
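
[Editor's note: a minimal sketch of the mutual language-code execution described above, assuming byte-string languages; the function names, the byte-averaging rule and the fixed code size are hypothetical, not the real brain code.]

```c
#define CODE_SIZE 128

/* Each ape carries an external byte string (what it "says") and an
   internal byte string (what it runs against representations, including itself). */
typedef struct {
    unsigned char external[CODE_SIZE];
    unsigned char internal[CODE_SIZE];
} ape_language;

/* One execution step: the listener's internal code is perturbed by the
   speaker's external code. The same routine applies whether the speaker
   is a real ape met in the landscape or an internally stored representation. */
static void run_language(unsigned char *listener_internal,
                         const unsigned char *speaker_external)
{
    int i;
    for (i = 0; i < CODE_SIZE; i++) {
        listener_internal[i] =
            (unsigned char)((listener_internal[i] + speaker_external[i]) >> 1);
    }
}

/* Two apes meet: mutual execution of their external language,
   then each ape reconciles its own speech with its internal state. */
static void apes_meet(ape_language *a, ape_language *b)
{
    run_language(a->internal, b->external);   /* a listens to b */
    run_language(b->internal, a->external);   /* b listens to a */
    run_language(a->external, a->internal);   /* a stabilizes its own speech */
    run_language(b->external, b->internal);   /* b stabilizes its own speech */
}
```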

Speaker 5: To introduce it, can you say a little bit about what their internal structures are?

Author: So let me show you. Let me show you. So the internal structure of the ape, particularly... I think this should do it for us. Yes. So this is what's currently represented by the internal structure. There's the location, all the basic stuff associated with their position in the simulation. The speed that they're travelling, their internal energy. These were all the initial variables, along with whether they're speaking, whether they're uttering anything. They have an internal random seed, which is used for distribution, so the simulation can be distributed over multiple processes and the apes maintain their coherency. Their date of birth... Where were we? Their brain location, which is dynamically stored. An overarching set of state variables associated with their cognitive simulation specifically. They have... This comes into Breazeal's work, with the references to crowding and posture. They collect various objects and things like that, so they have an inventory. Bob Mottram added the idea of honor early on, which is a social currency relating to parasites and grooming, so that is included as well.

Speaker 2: So just to jump in, most of these, it sounds like they're fairly abstract...

Author: Let me go down. There's actually quite a bit more. There are also familial relationships and familial genetics, which are basically stored to be referenced, not stored to actually be used. Bob recently introduced a vascular simulation. There's a metabolism simulation, which has various other effects, so you've got heart rate, breathing, these kinds of things. Based on where they meet and eat and greet, they have an idea of territories as well, which is another thing that Bob did specifically. There was an earlier version of territories associated just with the simple cognitive simulation, but with Bob's background in social robotics and these kinds of things, we liked the harder focus.
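
[Editor's note: a hypothetical record collecting the ape-state fields listed in the last two answers; the real structure and types differ, this is only to make the list concrete.]

```c
typedef struct {
    /* original variables */
    int x, y;                   /* location in the landscape */
    int facing;                 /* direction of travel */
    int speed;
    int energy;
    int speaking;               /* whether the ape is uttering anything */
    unsigned int random_seed;   /* per-ape randomness; keeps multi-process runs coherent */
    long date_of_birth;
    void *brain;                /* brain code, dynamically stored */
    unsigned int state;         /* overarching state flags for the cognitive simulation */

    /* social robotics additions */
    int posture;                /* crowding and posture, after Breazeal's work */
    int inventory[8];           /* objects collected */
    int honor;                  /* social currency from parasites and grooming */

    /* later additions */
    int mother_id, father_id;   /* familial genetics, stored for reference */
    int heart_rate, breathing;  /* vascular and metabolism simulation */
    int territory[4];           /* regions of the map the ape treats as its own */
} noble_ape;
```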

Speaker 5: So, just say a little bit about the territory, since that's something we'd be interested in. So when they're first seeded or born or something...

Author: They have no territory.

Speaker 5: Something anywhere and so on. So what is it that determines a territory?

Author: So, again, I prefer to talk to the code on this because it changes dynamically, irrespective of me sitting here. But what it means, from my recollection, is that if they have points of meeting and eating and these kinds of things where they see other apes, it reinforces a notion of a territory, because they have a social graph representation associated with social meetings. So eventually, this becomes a referential thing where there is just an agreed-upon territory from that. But the view of territory through that has changed quite a bit. So it's something...

Speaker 5: So as they're sort of moving around, they come across another for the first time, there's an interaction, that's then located in their social graph, it becomes a point there. Meet someone else. But something about the nature of the interaction will determine whether that becomes my territory, or is that how...

Speaker 2: And then, just to follow up on that, when you have that variable, I guess you would call it, or object, the territorial information, that's sort of a predefined data structure that can take certain values and...

Author: I think it's a structure that represents a block. It's not a point; it's a region on the map, basically, that's described in that fashion.

Speaker 2: I see.

Author: And really, it's a shorthand. One of the things that interests me about the simulation is the fact that with most simulations, you throw things in and then you can pull bits out, these kinds of things. And my view from prior history, prior to Bob's work going in, is that territory is very much a shorthand that can be described through other things as well. So, for example, through the social graph, you have this idea of meeting points, and there's also referential code around that. Through the brain code, through the bytecode that they run, it's possible for them to link other events that occurred in that place, or to link other places with an individual, or a wide variety of these kinds of combinations, in terms of linking the variables to other things that happened, either involving those places or involving those individuals. And this is part of the brain code execution. So...
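
[Editor's note: a hypothetical sketch of territory as a region reinforced by meeting and eating events, as described above. Dividing the map into a fixed grid of blocks and the threshold test are both illustrative choices, not the actual code.]

```c
#define MAP_BLOCKS 16   /* the landscape divided into 16 x 16 blocks */

typedef struct {
    unsigned int weight[MAP_BLOCKS][MAP_BLOCKS];
} territory_map;

/* Called on a meeting, eating or greeting event at landscape position (x, y),
   where the landscape is map_size units across. */
static void territory_reinforce(territory_map *t, int x, int y, int map_size)
{
    int bx = (x * MAP_BLOCKS) / map_size;
    int by = (y * MAP_BLOCKS) / map_size;
    t->weight[by][bx]++;
}

/* A block counts as "my territory" once it has been reinforced enough. */
static int in_territory(const territory_map *t, int x, int y, int map_size,
                        unsigned int threshold)
{
    int bx = (x * MAP_BLOCKS) / map_size;
    int by = (y * MAP_BLOCKS) / map_size;
    return t->weight[by][bx] >= threshold;
}
```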

Speaker 5: It has to be something that associates the social meetings with the geographical location.

Author: Certainly.

Speaker 9: So you're simulating, these are all separate simulations within the framework? So these are different algorithms or models that, or simulations that are modeling things that we think go on in apes?

Author: Certainly.

Speaker 9: So you're modeling everything from physio... You're simulating everything from the physiology and morphology, to the behavior and cognition and inner brain works, but these are all being done... A lot of these are being done through separate modules that do or don't interact?

Author: They interact, but they don't have to. They're replaceable, in the sense that they can either be zeroed or removed, basically. But yeah, the only... Exactly.

Speaker 9: Right, they're on and off at your leisure...

Author: Certainly.

Speaker 9: But once they're running, they do... The outputs of one of these parts of the model or simulation do input into the other parts?

Author: Certainly. There's a problem with the idea of levels, and I don't like using the term "levels" to describe the simulation's interrelationships, because there are some components which are horizontal in terms of the communications and some components that are vertical. But yes, it's designed basically so things can be removed and put back. And certainly, in terms of long-term testing, that's what we've done to...

Speaker 9: And anytime someone comes up with what they think is a new or interesting behavior or a part of physiology, they add, they model it and then try to get it into the simulation?

Author: Certainly, Certainly.

Speaker 2: So then what can change, what processes or what operations can change the dynamics of the interaction? Do you have learning processes? Do you have evolutionary processes? Maybe you're getting there.

[laughter]

Author: Let me say that there is variation, selection and inheritance within the simulation, and all of these things can be changed accordingly. I would probably casually use the E word, because it does contain variation, selection and inheritance. In terms of perhaps biological studies or these kinds of things, that's certainly something that can be expanded. And the very simple genome that I maintain currently, I think, can be expanded greatly.

Speaker 5: But currently, are any of these things represented on a genome that could be modified...

Author: Certainly, yeah, certainly. So that's...

Speaker 5: That's a pretty...

Author: Yeah.

Speaker 5: And it also seems, given there are lifespans here, I mean there are lots of things? You don't have many generations. I mean you've run this 10 days, how many generations do you have?

Author: I typically run it for between 500 and 1000 simulated years. They live typically from 16 up to maybe, in extreme cases, 40 or 48 years.

Speaker 5: Not many generations...

Author: Well, certainly yes, but there's enough for social phenomena to arise. And I think that's the thing that interests me most about it. I think that's quite interesting in and of itself. But certainly, doing it in a kind of organic, industry-simulation sense versus something that can be scientifically used are two quite distinct things. My interest in coming and meeting folks such as yourselves is certainly in putting as much science in as possible, and hopefully in it being of some benefit, basically.

Speaker 6: How would you model the genetics of the... You clearly have juveniles and adults. The juveniles grow into adults, so you're also simulating or modeling in some way the developmental processes.

Author: Certainly.

Speaker 6: What are the underlying genetics for something like that in general? Do you have regulatory mechanisms where parts of the genome are turned on and turned off at particular times?

Author: Well, I mean, in terms of sexual things, yes. In terms of various diet things, I believe so as well. If there are more things that can be included, by all means, but certainly, in terms of sexual things, mating preferences, these kinds of things, clearly.

Speaker 6: And things like life history traits, age of reproduction and these sorts of things are integrated implicitly?

Author: Certainly. But they can be added, associated with explicit maturation genetics and these kinds of things, too. The genetic model is relatively simple, but I think it could be expanded; it does take into account all these kinds of factors. In fact, Bob's influence was actually... I mean, he's done amazing work over the past three years. But to expand the genome and also start putting in... what is it? It's a word I've only learned being here... Pleiotropic? Like multiple...

Speaker 6: Pleiotropic.

Author: Pleiotropic, yes. So, that was a phenomenon that I wanted to put in early on, and with the limited genome set it just falls out as a consequence. But I think if the genome were expanded sufficiently, so that some genes were pleiotropic and some weren't, that would be interesting as well. So, I mean, I'm interested in expanding all these parts, basically.

Speaker 6: You can imagine a system where each of these algorithms is coded by a separate gene or a separate set of genes that all comprise the genome, where these things can actually have recombination and whatnot in sexual circumstances, or you can imagine a general genome where one gene incorporates many of these. And it seems to me, with as much stuff as you have in this, that's gonna be a really sophisticated thing to work out.

Author: Well, I mean, my own bias is through the... The reason that I included pleiotropy in there early on was basically to maximize the speed of demonstrable evolution. So that may be an artificial constraint that I put in there just for my own interests. But if it requires expanding and what have you, I'm more than happy to do that as well. So in terms of their language specifically, let's... So, they have multiple internal... Well, they're just byte strings associated with the language, and then they have one external byte string associated with the language. So, let's run this for 10 minutes, say, to see it run.

Author: So it's based on code, as all these things seem to be, with the addition of sensors and actuators that relate to everything from, as I described, referential settings and movement, to things like rumors that they can pass on amongst each other. So implicitly, within the communication, well, through the code language, they do describe relationships between the apes. And a phenomenon that I noted just before attending ALIFE, and talked about at the conference when people approached me, was the association of rumored or false parents. So you see, there's a notion of epic, which represents the apes that are most being talked about, because it tracks, basically, when they talk about other apes. What I found, particularly even within 10-day runs, but more typically over 20 to 50-day runs, is that you would have an ape with a really high epic number, and when you looked inside the ape, it would have...

Speaker 5: Can you just talk through when you say epic number...

Author: Unfortunately, that's the problem with running a simulation: I've got to wait till it stops before I can... But I'll come to it. It's a scalar value that refers to how much an ape is being talked about. So you'll end up with a kind of top-10 list of the apes in the simulation that are most being talked about.
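
[Editor's note: a hypothetical sketch of the epic number as described: a per-ape counter bumped whenever that ape is mentioned in conversation, then sorted into a top-10 list. Array sizes and names are illustrative.]

```c
#define MAX_APES 256
#define TOP_N    10

static unsigned int epic[MAX_APES];

/* Called whenever one ape refers to ape `subject` while talking. */
static void epic_mention(int subject)
{
    epic[subject]++;
}

/* Fill `out` with the ids of the most-talked-about apes, highest first. */
static void epic_top(int out[TOP_N], int num_apes)
{
    unsigned char used[MAX_APES] = {0};
    int rank, i;
    for (rank = 0; rank < TOP_N; rank++) {
        int best = -1;
        for (i = 0; i < num_apes; i++) {
            if (!used[i] && (best < 0 || epic[i] > epic[best])) {
                best = i;
            }
        }
        out[rank] = best;
        if (best >= 0) used[best] = 1;
    }
}
```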

Speaker 9: So it's basically the gossip column? [laughter]

Author: Mm-hmm. So it's catching references within the brain associated with specific apes. And what you see in the ape that's typically at the top, in some of these circumstances, is that they have... Usually, it's a false father who has been inserted in some way, and their representations of typically both their parents, but at least always their mothers, in what I've observed, become... Well, they have... Sorry, I should point those out... They have both friend and enemy relationships, which are associated with good and bad interactions. They have notions of things like brandishing and flowering, and smiling and all this kind of stuff. So it can be facial interactions, it can be some communication that they've had, or a wide variety of other factors, because they maintain internal representations of apes. So for all that you see here listed as friends and enemies, there'll be an internal language representation of that ape, almost like an internal simulation associated with that ape.

Speaker 2: Can you say a little bit how that works? So they've got some slots into which they can say, "These are my friends, these are my enemies." So they're representing those as categories that you've placed there? Is that right? So those categories are already...

Author: It's based on... Yeah, again, it's social robotics, so it's based on interactions that they've had, which are then immediately classified as bad or good, and weighted accordingly.

Speaker 2: Alright. So those categories are in there. And then there's something associated with the interactions that led you to say, that was positive, that was negative. So, with enough positives, put them in the friend camp, and enough negatives, put them in the enemy camp?

Author: Yes, but this can be set off both by external interactions and also by internal things that the representation they run against themselves produces. The notion of the language being universal means that when two of them meet and have an external conversation, exactly the same process happens with regard to their internal representations. So they maintain an internal representation of themselves and an internal representation of all the apes listed, basically. And they run the code as if they were having an external conversation, but they're having an internal conversation with the representation of the ape. So, the hypothesis, where the...
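
[Editor's note: a hypothetical sketch of the friend/enemy classification described above: each interaction, whether a real external meeting or an internal one run against a stored representation, is judged good or bad and folded into a signed scalar whose sign decides friend versus enemy. The weighting is illustrative.]

```c
typedef enum { STRANGER, FRIEND, ENEMY } relation;

typedef struct {
    int affect;        /* signed running total of good and bad interactions */
    relation status;
} social_link;

/* good = 1 for a positive interaction, 0 for a negative one;
   weight lets some interactions (attacks, food sharing) count for more. */
static void record_interaction(social_link *link, int good, int weight)
{
    link->affect += good ? weight : -weight;

    if (link->affect > 0)      link->status = FRIEND;
    else if (link->affect < 0) link->status = ENEMY;
    else                       link->status = STRANGER;
}
```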

Speaker 5: Those happen automatically in parallel or they can diverge?

Author: Well, in terms of meeting, they obviously happen at the point of meeting. But internally, per cycle, I think, and this is something in the code that's probably going to change, they will have, as you see here, their relationship attention. So when the relationship attention is itself, it's running an internal representation of itself, in this case against its external representation, so it can kind of talk about itself. But when it says grandson, for example, that is a representation which in this case isn't represented by a specific ape, but it could be represented by some other representation that it's holding.

Author: It's difficult when you start with initial conditions where you have apes that are already living and have to have some kind of previous relationships. So I think in that case specifically, it's a kind of false grandparent or whatever that's just simulated there. But if they actually had the interactions and produced... If they actually were a grandchild or something like that, then it would be represented in their internal...

Speaker 6: So how do these things go? I mean, all I see there, right now I see relationship attention. So that's referring to their internal representation and what they're representing at that moment? So this guy here is representing itself, it's thinking about itself, is that the way you put it forward... Okay. And let's say Flora interacts with Thora up there, who happens to be... Well, Eudora happens to be her daughter and enemy now, I assume...

Author: Yeah. Very negative relationship. Yes. Yes.

Speaker 6: Mother-daughter relationship.

Author: Exactly. Yes. Yes. But she likes the son.

Speaker 9: So how is it, I mean just so I understand what's going on, how is it that when they meet, whatever interactions they have, how does that get modeled in the representation? So is it just automatic at first? So whatever they're having to do, they're representing simultaneously?

Author: Yes, initially. But then through interaction, basically, it gets recalibrated. The initial conditions are always difficult. But initially, what will happen, from my recollection of the code, is that the familial elements they're describing here are actually represented as kind of false familial elements, sort of pre-generated, or they're actually referencing things within their friends and enemies lists in some kind of perceived social structure. That part of the code I'm not particularly clear on, which is why I'm recording this: it goes back to Bob, and then he gives me the answers in these circumstances. So I work on Noble Ape part-time. He works on it full-time.

Speaker 6: So this kind of reminds me of rehearsal, which is argued for in a lot of animals learning cognitive abilities, and that is, you don't... You go and do it, and then you can basically rehearse it in your head without actually acting it out. And that sounds similar to this: they have these interactions and then they replay them in their mind. But this has the effect of reinforcing anything that they might glean from that interaction.

Author: Exactly. There's a condition of stability that's attained, which you can actually see in the two, because you've got the external and the internal. So you can see conditions of stability in the code where, basically, they'll have an experience that produces one reaction, and then they work against their internal representation to produce stability in what the external reaction has produced.

Speaker 6: And does that save you... I mean, so in animals, the argument is that, that saves them actually having to interact with the world and put themselves at risk, over and over and over again to learn something or to make those associations.

Author: Certainly.

Speaker 6: Is that the purpose or is that just sort of...

Author: It was one of the purposes, yes, very much so. Also, you find through running the simulation that they will maintain internal representations of apes that have passed away. So in long-term runs of the simulation... I mean, I've seen it within 20 days' worth of runs, because there'll always be an ape that drowns, and typically that might be an ape that is known in some regard... They will continue to maintain internal representations of those apes...

Speaker 6: Of the drowned ape?

Author: Exactly.

Speaker 6: Interesting.

Author: And in terms of what deeper stuff comes through, obviously I'm still gathering the results associated with that. But these kinds of interactions... And unfortunately, this was my last writing on this, because Bob changes the code relatively frequently, but originally there was just a single internal and a single external language string, basically, running continuously. And what happened through that was interesting, but not dynamic enough for the stuff that Bob was looking to model. So he then created multiple internal language strings, mapped initially onto familial characteristics, but now also onto external enemies and friends and these kinds of things.

Author: But there's the idea of really strong, not necessarily deity-like relationships, but certainly a mythology, particularly associated with heavy nemesis apes and eulogized apes, for want of better terminology. And the fact that two or three generations after these apes have passed away, they can still be referred to quite heavily. So let me show you what epic is. It's very simple: it's literally just a scalar representation of how much each ape is talked about. But what you'll find, particularly with deceased apes that have had a lot of interaction, is that they'll continue to maintain high epic numbers. So they'll continue to be talked about even after they've passed away.

Speaker 6: That's really interesting because I mean, you can maybe think that an ape that's dead would have an influence later on. And so you have something that's not doing anything anymore, but it's still sort of continuing in some fashion.

Author: Well, it is doing something. That's the nature of the internal simulation: it continues to do things even after its passing, basically, because it continues to be represented. And the more apes that have a representation of it... Now, these representations can be completely skewed. They don't have to be the same internal representation at all. In fact, it's perfectly feasible, particularly in the case of nemesis apes, that it'll be a bad ape for a wide variety of reasons, depending on the internals of the ape that's maintained it...

Speaker 6: And possible... And good in others.

Author: Certainly.

Speaker 6: So if there are dominance hierarchies and you're on the better end of it, you might keep an epic number that's high, that's good. And then those that are at the bottom of that hierarchy could have an epic number that is high but bad.

Speaker 5: Well, presumably a high epic... Higher in the list means that you're talked about more than anyone, period.

Author: It doesn't have a good, bad...

Speaker 6: Regardless of the context. They're being thought about really, I mean, not so much even talked about...

Speaker 5: So there's a difference between what's going on in the representation and what you're doing as a result of that. So is epic what you're talking about or is it what you're thinking about?

Speaker 6: Or is it the sum total?

Author: This is why the recorder's here because I'm not absolutely precisely sure. I think it's actually spoken in terms of an external rather than internal. And that's what makes it even more interesting because if it's just an internal representation, then yes, it would be. But these apes are actually talking about this ape, which probably reinforces the propagation of the ape being maintained in conversation, basically.

Speaker 6: And that's even more interesting in the case of apes that have passed. I mean, why continue to talk about an entity that no longer exists? It's one thing to think about it as a memory, but to sit around chit-chatting about it seems odd.

Author: Well, I mean, what I've observed is that if you run the simulation for long periods of time, you get clans, you get clans just by naming conventions. So you can see genetically that they're maintaining territories and also breeding. But what you find through that environment as well is exactly this... Because they're all sticking to the same territories, they're actively communicating. The references to the deceased ape from a few generations ago are actually part of the commonality, which is also fundamentally part of the genetics and...

Speaker 5: Can you just show me one of those social maps, do you have a representation of that?

Author: The social graph is a particularly... It's not in any of these versions, but the social graph, if you can imagine it, is a point graph, basically. And what you end up with is social clusters over time. What's particularly interesting is when one ape is ejected from the social cluster, because then you see a shift of apes making decisions about whether they move from the dominant social cluster to this new radical ape or not. And visually, it's beautiful. It's, yeah, one of the views I would certainly...

Speaker Unknown: But you can't show it to us.

Author: Apparently, no. It's not in the version that I have here...

Speaker 5: Especially since your point here seems to be, to model social interactions. I don't yet see, I mean here I'm starting to see a little bit about that, but since you said this is tied to territories and so on, I was just curious to see how those...

Author: You don't want a spatial representation as part of the social graph though. It would be, you could have some linking, but we...

Speaker 5: So it doesn't match? The territories don't match?

Author: To some extent they do, but initially, and particularly early on in the simulation, they're still in a kind of exploratory phase. Their territorial areas are not really well defined. So what you see in the social graph is more the kinds of interactions that occur as they traverse the landscape. But to see it graphically, it's better to have it represented out in its own space, so they have the freedom to actually move around in social clusters.

Speaker 6: But in the actual environment to interact, there's a spatial component to that? I can't interact with an ape that's 6 miles from me?

Author: Certainly.

Speaker 6: Okay. So there's inherently a spatial component in the interaction space?

Author: You can think about it that way. I mean, the apes can communicate through the language, as described. But yes, they need to be within close proximity in order to...

Speaker 6: Yeah. If you're modeling a spatial part. If you're not modeling space, real physical space, by any means, then that's irrelevant, but it seems like you're doing that. So if you are doing that, then there's obviously gonna be a spatial component to any communication that they have, I would imagine.

Author: Certainly. But yeah, it's a direct...

Speaker 2: Direct communication, you can communicate indirectly by word of mouth...

Author: Certainly.

Speaker 6: I have a really...

Speaker 5: Some of these things are islands? So I would imagine if you looked at the social graph there, that would have its own little network? It couldn't have connected to the other one because they never had any physical interaction.

Author: Well, that was... I've tracked apes over...

Speaker 5: Do they swim?

Author: Yeah, they swim. So the other thing with the...

Speaker 2: There was one that went swimming two hours ago. 10 minutes ago. 19.

Author: 19. 19.

Speaker Unknown: Yeah, but do they have parameters on how far they can swim?

Author: Yes, they do. They do.

Author: In fact, the early success, if you can imagine, 16 years ago, when this was released on the early internet, was typically with undergraduate students who wanted to teach their apes to swim, but more importantly, to drown the apes. So there was a kind of early brutalization that propagated very rapidly, in terms of getting the simulation notoriety: trying desperately to drown all the apes in the shortest amount of time and all these kinds of things. So yes, if there's anything to say about Noble Ape in terms of visualization and these kinds of things, I mean, I've had unimaginable amounts of success with the simulation for a wide variety of reasons, but visualization is one of them.

Author: Early on, well, in 2003, because it would compile on multiple compilers, Apple picked it up. It was originally with two engineers, and then it spread internally within Apple, and then they demonstrated it at WWDC and put it on the CD-ROM that came with every Mac they sold from 2003, basically as part of the Apple CHUD toolkit. And then Intel picked it up through the movement from AltiVec to SSE. So, in terms of Intel...

Speaker 5: As just a demo or something...

Author: No, no, no.

Speaker 5: Well, why did they put them on there?

Author: Because there are ideas... There are certain optimizations that you can do as processors change, with vector processing, with multi-threading, and they used Noble Ape for that. There's ape brain cycles per second, which I'll bring up, which was the metric that they used to track optimization. By using threading models, vector processing models and various tuning characteristics within Noble Ape, they were able to demonstrate, both internally and with third parties, optimizations that the third parties or internal teams could then use. So here's Noble Ape, here's the ape brain cycles per second. If you implement this method, then you get this improvement in ape brain cycles per second; implement this in your code and you'll get the same improvement, basically. And that's how they used it. Intel was a slightly smaller group of engineers than Apple, but at least they had a... As you entered an engineering team, you'd be given a project with Noble Ape just to test your chops at optimization. And a couple of years ago, I gave a talk at Intel and saw one of these teams, basically. The manager had first brought Noble Ape into Intel format...
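
[Editor's note: a hypothetical sketch of how an "ape brain cycles per second" figure could be measured, to make the metric concrete. brain_cycle() stands in for whatever the real per-ape brain update is; this is not Apple's or Intel's harness.]

```c
#include <time.h>

#define NUM_APES 256

extern void brain_cycle(int ape);   /* assumed per-ape brain update */

/* Run the brain update for every ape repeatedly for roughly `seconds`
   of processor time and report updates per second. A vectorized or
   threaded brain_cycle() shows up directly as a higher number here. */
double ape_brain_cycles_per_second(double seconds)
{
    clock_t start = clock();
    unsigned long cycles = 0;

    while ((double)(clock() - start) / CLOCKS_PER_SEC < seconds) {
        int ape;
        for (ape = 0; ape < NUM_APES; ape++) {
            brain_cycle(ape);
        }
        cycles += NUM_APES;
    }
    return cycles / seconds;
}
```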

Speaker 2: So they're not trying to simulate apes, they're just using as a pure sort of diagnostic tool?

Author: Yeah. They want something that touches a sufficient number of their internals and is also scalable. So in the case of Intel, they would occasionally pass me back code where a number of processes would be involved, for example. So clearly, they would have some internal processor testing. They would ramp up the number of processes, see how they could get the ape brain cycles per second to improve, and see what changes needed to be made. So, I don't know if I'm speaking Swahili here, but Grand Central Dispatch, and Intel's internal Atom development associated with optimized processing, they basically used Noble Ape for those as well.

Speaker 2: Wow, interesting.

Speaker 5: So they're not messing with the code or making a development, they're just using it then... Okay.

Author: Purely utilitarian. Exactly. But yeah, still an amazing...

Speaker 2: So what would be in those... Just I'm interested in this just 'cause that's kind of unexpected twist, I didn't expect that that's how this was being used. But what's the mapping between the apes in the simulation and the elements of whatever they're modeling and trying to optimize? I mean, what would be the...

Author: Well it's to do with...

Speaker 2: What would apes correspond to in the...

Author: It's to do with processing. So for example, the cognitive simulation, which is modeled for each of the apes, has various internal mathematics. I mean, it's two competing formulae, calculated continuously, basically. And because it touches both near memory and far memory, and does so in a variety of fashions, it provided a good enough metric for what they were looking for to start. Then as processors changed and memory mapping changed and what have you, they could do optimizations based on that.

Author: But Apple was also interested in the real-time graphics. It wasn't just the multiple compilers; the real-time graphics were important to them as well, because they wanted to show that certain internal mathematical things didn't have to affect the graphics, that you could do the two in parallel. And certainly, early on, prior to my interaction with Apple, tuning the graphics and getting them real-time and reasonable in terms of general interaction was very important for me, and that goes back to the discussion of the 68000 processor architecture, things like that.

Author: And prior to Apple picking it up, I first went into Apple in 1998, I think, and they were interested. In particular, I had a first-person perspective view, or first-ape perspective view, of the environment in real time, which they liked because it was faster than some of their technology. So my relationship with both Apple and Intel has been purely for them to utilize it for whatever itches they want to scratch. And in terms of meaningful contribution...

Speaker 2: That you're associated with this?

Author: Yeah, so in terms of meaningful things, the contributions back really didn't involve that much development; it was purely that they were looking at particular architectures, wanting to... But at the same time, they displayed it to hundreds, thousands of engineers. And although I couldn't attend the WWDC conferences, it was pretty cool to watch the videos.

Speaker 2: Yeah.

Author: And get a sense of the audience participation and this kind of stuff.

Speaker 2: So I have a really basic question about how... So you've got state transitions. You've got overt ones as they interact with the environment, but there are also internal state transitions. What causes a transition? I mean, some of it is just stuff that happens to you. You drown. A flood comes or something. But in terms of the agency of the ape, when it has to do something, what is the value function, let's say, that drives state transitions, and how does that value function change?

Author: Aside from what I've represented with honor and these kinds of scalar values, it all now leads into the language. And that's Bob's current work; that's what he's working on. It doesn't have to lead into the language, it can lead into other things as well, but every aspect now leads into the language, with the view that the language provides the most dynamic means of changing state and the most...

Speaker 2: So language can be a stimulus, it can be... But you need a motivation, you need a reason, a value function. That's why I ask about value functions, frankly. Why should you respond to a word? Is there a fitness? Is there a learning signal? A reward?

Author: So the drives provide a motivation, but the drives are interesting because they're not equal in any way. I mean, fatigue and hunger are basically dominant drives. But I mean, if that's what you're looking for...

Speaker 2: That'd be an example. Okay.

Author: Yeah.

Speaker 5: Those are the only two that you've got?

Author: No, there's sex and social as well. So hunger, fatigue, sex and social.

Speaker 2: Okay. So that... So hunger, fatigue. All right. And so, those are somehow integrated, all of them? And to then determine a state transition?

Author: Certainly.

Speaker 2: I see.

Speaker 5: So if you're... Have a hunger score of zero, you could be faced with food and you're not hungry, so you don't interact with food? If you get a high hunger score, you've got food in front of you, and that'll motivate you to do something in order to get it? Yeah. Okay. Is that the way it works? So it's the relationship of that...

Author: Basically. I mean, there are other elements to it as well, but I think that's the easiest way to categorize it.
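
[Editor's note: a hypothetical sketch of drive-weighted behavior using the four drives named above (hunger, fatigue, sex, social): the dominant drive picks the next broad action. The real selection is richer; this only illustrates the "high hunger plus food nearby means eat" shape.]

```c
typedef struct {
    int hunger, fatigue, sex, social;   /* 0..255 */
} drives;

typedef enum { ACT_WANDER, ACT_EAT, ACT_REST, ACT_MATE, ACT_SEEK_APES } action;

static action choose_action(const drives *d, int food_nearby, int mate_nearby)
{
    /* fatigue and hunger dominate, as described above */
    if (d->fatigue >= d->hunger && d->fatigue > d->sex && d->fatigue > d->social)
        return ACT_REST;
    if (d->hunger > d->sex && d->hunger > d->social)
        return food_nearby ? ACT_EAT : ACT_WANDER;
    if (d->sex > d->social)
        return mate_nearby ? ACT_MATE : ACT_SEEK_APES;
    return ACT_SEEK_APES;   /* high social drive: seek out known apes in familiar places */
}
```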

Speaker 5: And so how does it work for these other ones? Like this one is really horny? All that high sex...

Author: Yes. That's relatively standard amongst the Noble Apes.

Speaker 5: Social interactions, do not care about life...

Author: Yes.

Speaker 5: Don't care about food, social interactions have to stop?

Author: She has children already. So the ape is well-defined. Okay? So yes.

Speaker 5: So in an interaction, they're pretty broad at this point now. So an interaction, well I suppose you can exchange... Can you exchange food, so if someone has food, so hunger could come into play in something like that. So how do the other ones work? What are the other things that would be a state change that you would do if you have a high fatigue or a low social or whatever?

Author: Well, in terms of motivation, fatigue is an interesting one. I think it just causes them to slow down, basically. I really can't speak heavily to these because of the particularly abstract nature of the code. But in terms of fatigue, yes, they slow down. In terms of the social drive, they may be more interested in seeking out other apes that they know, or apes that they see in terms of their interactions. There is a representation... Well, it's associated as well with their friendship groups and things like that. They will try to seek out apes that they've seen, that they're friendly with, in familiar places. So that brings together the...

Speaker 2: Is that like a loneliness score?

Speaker 6: No, but there are only four scores here. I'm just trying to get some of the details of this. How do you translate a drive that I have, with a number associated, which can change, how does that translate into what I do...

Speaker Unknown: Actions...

Speaker 6: Then changes something...

Author: Well, as there are episodic memories, there are also social memories, and the social memories serve the social drive in particular, although there are sexual components to them as well. So if they have a social memory and a particularly high social need, then they may find, based on their location, a representation of the social memory, which will potentially cause them to return. I mean, it's never returning to the exact point, but it's going back to the area, with the view that other apes may be congregating there. This is a relatively abstract example. I'm not specifically sure.

Speaker 2: The way I'm starting to understand this is that the state space is absolutely gigantic because you can have all these locations, you can have the history you've had, the memories, the interactions you've had, and you have these internal and external representations and words coming in. And you may not ever visit the same state, in a specific sense...

Author: Without question, yeah.

Speaker 2: But there might be a hierarchical way of thinking about state, too: whether you're in the group or out of the group, or on an island, in water, out of water. But state is also partly going to be where your motivational levels are. So those partly define, it seems to me, maybe I'm wrong about this, but your drives partly define your current state, and your drive levels in conjunction with new information are going to influence whether you make a decision to make a state transition.

Author: Certainly. Yes.

Speaker 2: And I assume, too, that the drives can be... They can sort of jointly influence a state transition...

Author: Certainly.

Speaker 2: It's not just one at a time?

Author: Certainly.

Speaker 2: Okay. I think that's sort of helping me understand this...

Author: The states that are described here, in terms of just basic things like moving and these kinds of things, are very, very rough. They're clearly... And this is something that interests me about narrative, in terms of combining it all together and then making something that's human-readable, because you can then add more depth in describing the states.

Speaker 9: Well, ultimately they're gonna define... The apes will define the state for you.

Author: Certainly.

Speaker 9: I mean, that's what's gonna happen. I mean, you can call this or that a state, but how they integrate all that information and act upon it is what's gonna define the actual state for them.

Speaker 5: But there are some things that are explicitly predefined. Eat a twig, eat vegetation, pick up a twig. I mean, those are things that are presumably hard-coded in there. And other things, I assume, would be emergent, 'cause you've got ways of being close or not just by virtue of where you are. And if location is something they wind up tracking in their representation, and they say we congregate where apes congregate, that's a territorial thing which emerged from the interactions.

Author: Certainly.

Speaker 2: Well, you can imagine memories, and ideas about where food is, also being emergent. Yes. They're in the environment, they have the experience of eating a twig, but now they replay that in their mind over and over again. And maybe there's some kind of locale associated with it. So now they have a place and an action paired, a state-action pairing for that, and that is emergent.

Speaker 6: Well, I don't know, can they track something like that? I mean, and association, how are associations...

Author: In terms of... Well, they're both episodic and social. The social associations relate to apes, basically. The episodic ones also relate to apes, but with the location. They both have locations, but they're represented differently.

Speaker 2: By episodic, you mean the classic definition of an episodic memory: What, when, where memory?

Author: Mm-hmm.

Speaker 2: Okay.

Author: And the social graph elements always refer to... So it's actually the last interaction that you had with a specific ape, and the associated brain code, the internal representation of that ape, is associated with that memory as well. So...

Speaker 5: So how... Just to sort of see how this could work. So I've interacted with Flora before in a certain place. And while interacting with her, we exchanged food or picked up a twig. [chuckle] So how is it that those associations are built or not?

Author: So you have a positive... So in that case you'd probably have both... Well, maybe you wouldn't have an episodic memory, because it's associated with the person specifically. If they put the twig down and then you picked it up, it would probably only be episodic. But if she passed you the twig, there would be a social representation, which means that she would be there, and it would have some impact on her social code that the ape was running internally as well. So that would be a positive influence, which would move Flora up in the friends category. And you would also have the opportunity to take that experience away and run it against Flora's internal representation, basically.

Speaker 5: I guess I still don't understand how it works. So I can imagine you've got something that's a representation of... Well, this is Flora we're seeing... It's Eudora? Flora now has a representation of Eudora, having now met her. That's retained as a memory. Okay? But presumably, you can't have everything that you've ever done in the vicinity of Eudora...

Author: Exactly.

Speaker 5: Maintained.

Author: Exactly.

Speaker 5: That's what I'm asking, for associations to be...

Author: They're created in social graph memories. So when you have that interaction with Eudora, and then you have another interaction with Eudora, the new interaction becomes the social graph memory, but the executed code associated with that experience and the experiences prior is also retained, associated with that memory. So the social graph captures both the interaction and the legacy history, in a code sense, as well.

Speaker 5: But again...

Speaker 2: All history?

Speaker 5: So many things that could have happened. And they're not all retained.

Author: They're not all retained.

Speaker 5: That's what I'm asking now. In my representation now of Eudora, I've got her, I'm Flora. I've got a new code. And I did some things in the past with Eudora, including meeting in certain places or exchanging twigs or swimming.

Speaker Unknown: Or drowning.

Speaker 5: But all of those things, to be usable have to be...

Author: Retainable. Certainly.

Speaker 5: Associated and then retained? So that's the question... I could see how my score, my social score for Eudora, goes up or down. I give her a positive if she gave me food, I give her a negative if she attacked me... Whatever...

Author: The bit that I'm trying to describe is the block of language associated with Eudora. While the most recent interaction is what's retained in the social graph, in memory, all your previous linguistic interaction, including the social interaction, is retained almost like an internal program associated with Eudora. And yes, it may contain a representation of previous events. It won't contain a representation of all the previous events. It may contain a representation of an event that was particularly important when run against your internal representation. So, in the notion of creating a stable language, you would run your internal representation against Eudora's, and that could retain some of the information associated with the history. But it is... I can show you rather than... I think this...

Speaker 2: Can I ask you real quick? Can apes talk to themselves? Can they reinforce again?

Author: Certainly. So this, yes. They reinforce from their external language to their internal... So what you're seeing here is actually their external language and their internal language run against each other, which then stabilize accordingly.

Speaker 6: Wait, so this is external and that's internal?

Author: Yes.

Speaker Unknown: Okay. But they're not matched?

Author: No, that's because we've stopped it at a point of transition. So they may not match; there may be instabilities in the code. But sometimes... You certainly see when they do match.

Speaker Unknown: It doesn't look like any of the... Well, maybe just... I don't see it.

Author: So this here is what is retained in the social graph. You have this information as well, associated with just the last interaction, and then the representation of the ape and the attraction. The friend-or-foe rating, which is a scalar all the way through, is represented as positive or negative. There's some abstract belief, a level of familiarity, the relationship that you have with the ape. But what I've been talking about is the local brain code that that ape is represented by, which will continue to transition as you have future meetings. The event itself will just be the most recent social interaction. However, some of these things, like friend or foe, belief, attraction, are maintained over multiple experiences as well...

Speaker 2: And updated...

Speaker 6: This is... well, a factor that's just updated, though. It's not an actual episodic experience being held by itself to recount.

Author: It can be accessed through the brain code, though. So irrespective of what the brain code is running... Whether you call them sensors or actuators is immaterial, but there are things within the brain code that can access this independently of the formal way that you get to it through the social graph. So there are certain operators, I think they're sense operators, that are able to reach into this, get this information, and re-inject it back...

Speaker 2: So I guess we're all asking the same question. How accurate and how far back does that memory go? Is it basically everything?

Author: Well, it's dynamic. So, no, it refines over time, because the actual memory is linguistic. It's not associated with events. Only the most recent event is stored as an event, but within that, there are scalar values that will change through multiple interactions too. So it's a combination of events specifically, a series of scalar interactions, plus the language associated with the ape specifically.
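
[Editor's note: a hypothetical sketch of a social graph entry as just described: the most recent interaction, a handful of scalars maintained across many meetings, and the block of local brain code that stands in for the other ape. Field names and sizes are illustrative.]

```c
#define LOCAL_CODE_SIZE 128

typedef struct {
    int           ape_id;
    unsigned long last_met;           /* time of the most recent interaction */
    int           last_interaction;   /* what happened at that meeting */
    int           x, y;               /* where it happened */

    /* scalars refined over multiple experiences */
    int friend_or_foe;                /* signed: positive = friend, negative = enemy */
    int attraction;
    int belief;
    int familiarity;
    int relationship;                 /* mother, grandson, mate, ... */

    /* the internal "program" that represents this ape, re-run and
       refined as future meetings happen */
    unsigned char local_brain_code[LOCAL_CODE_SIZE];
} social_graph_entry;
```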

Speaker 5: Where's the language?

Author: That's the brain code.

Speaker 5: The brain code?

Author: Yeah. So this will be a structure similar to this, basically.

Speaker 2: Okay. And in this simulation, there's nothing like... The language thing is really complex and really well developed, but something like just physical interaction with the environment doesn't have these same characteristics. So if I find resource at XY in the world, is that updated and kept around in the same way that the language thing is?

Author: That's episodic.

Speaker 2: Okay.

Author: So...

Speaker 2: That's strictly...

Author: Well, episodic also... I mean, the episodic memory that's captured contains social interactions as well.

Speaker 2: Okay.

Author: But basically, typically, in a resource sense, for example, picking up twigs and things like that, that's episodic memory associated with location.

Speaker 2: Okay.

Author: Now... Things like being poisoned by particular kinds of shellfish, and these kinds of preferences too, have a language representation rather than anything deeper than that. But the hope is that the language representation will be enough to get them thinking twice before they again eat the shellfish that poisoned them.

Speaker 6: How do they make those associations? It's usually time-delayed. They don't get sick that very second. I mean, how do they know to associate the shellfish with...

Author: Well, that's interesting. I think with shellfish, they get sick relatively rapidly, with eating other things they don't necessarily. And toxicology is something else that Bob has added specifically associated with foods, but also parasitic diseases and things like that. So yeah, that was another piece that was added, but the time delay factor is very important.

Speaker 6: There are huge literatures about this stuff, gustatory associations versus visual associations. I mean, there's a huge biological literature on this and...

Author: Certainly.

Speaker 6: It's interesting to understand how closely this is modeled on what are currently the accepted models in those fields.

Author: Yeah, so there's an open question associated with the language as well, and this is one of my interests too: whether the language is complicated enough. I mean, it's certainly complicated in terms of the sensors and actuators that go into it, but whether, just as a language form, it needs additional complexity in there. There's a scripting language that I wrote for the simulation prior to the brain code, called ApeScript, which is a C-like language. And there's a kind of shared overlap between ApeScript and the brain code that I'm interested in exploring, with the difference that rather than being bytecodes, it could be up to a 32-bit address space, and could actually do quite a bit more, I think, in terms of these representations.

Author: It's probably just a simple switch, based on the way the sensors and the actuators are positioned currently. So my hope, within the next six months, is to have, rather than this kind of abstract bytecode, almost a C-like language that's being changed dynamically to show this kind of information.
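
[Editor's note: a hypothetical sketch of a tiny brain-code interpreter: a flat byte array of instructions plus a few "sense" and "actuate" operations that reach out into the rest of the ape (social graph, drives) and inject values back. ApeScript and the real brain code are richer; the opcodes, register count and ape_sense/ape_actuate hooks here are all assumptions.]

```c
#define BRAIN_CODE_SIZE 128
#define REGISTERS       4

enum { OP_NOP, OP_ADD, OP_COPY, OP_SENSE, OP_ACT, OP_JUMP };

typedef struct {
    unsigned char code[BRAIN_CODE_SIZE];
    unsigned char reg[REGISTERS];
} brain;

extern unsigned char ape_sense(int channel);            /* e.g. friend-or-foe, hunger */
extern void          ape_actuate(int channel, int val); /* e.g. move, utter */

/* Execute `steps` three-byte instructions: opcode, register, value. */
static void brain_step(brain *b, int steps)
{
    int pc = 0;
    while (steps-- > 0) {
        unsigned char op = b->code[pc % BRAIN_CODE_SIZE];
        unsigned char a  = b->code[(pc + 1) % BRAIN_CODE_SIZE] % REGISTERS;
        unsigned char v  = b->code[(pc + 2) % BRAIN_CODE_SIZE];
        switch (op % 6) {
        case OP_ADD:   b->reg[a] = (unsigned char)(b->reg[a] + v); break;
        case OP_COPY:  b->reg[a] = b->reg[v % REGISTERS];          break;
        case OP_SENSE: b->reg[a] = ape_sense(v);                   break;
        case OP_ACT:   ape_actuate(v, b->reg[a]);                  break;
        case OP_JUMP:  pc += v;                                    break;
        default:       break;
        }
        pc += 3;
    }
}
```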

Speaker 6: In regard to this, there's obviously a huge psychological literature on human language, and on what some of the basic rules are thought to be: in order to have language, to evolve language, what is necessary? And I wonder if this is based on those sorts of... Like, what's being done with this, is it in any way testing those kinds of hypotheses?

Author: Well, the reason that I introduced language, with Bob's assistance, actually came from spending a lot of time with a linguist.

Speaker 6: Right.

Author: And certainly his view was that everything we are is our language, basically; a relatively extreme view, but his view nonetheless. And the lack of language, the lack of the kind of language I've described, in Noble Ape was a serious flaw for him. And I thought, this kind of stuff is relatively easy to throw in there with the right kind of sensors and actuators. And the weighting on the sensors and actuators has been part of the overall tuning of this. But yes, certainly, I mean, through a single interaction specifically. But yeah, very much so.

Speaker 6: I mean, I think our application of this kind of stuff would be just that: testing current hypotheses about how things either evolve or develop, or just are the way they are. And the closer these simulations are to... The closer their implementation is to ideas about these things in the field, the more useful this tool will end up being. The more non-germane and arbitrary the code is, the harder it's gonna be to relate it to biological concepts.

Author: So one of the other benefits of it being open source is that I get between... Well, it's typically towards the lower end, but between two and 12 students and engineers contacting me per month, wanting to work on the code for something.

Speaker 6: Wow.

Author: So one of the reasons that I came to ALIFE was that I have this influx of human power that is looking to do something with Noble Ape specifically. So for example, the termites, the temporal polyethism, were immediate fits, I thought, and there were a few others. The secondary mapping, or interaction-produced secondary mapping, is an immediate fit because it's one of the early tests that I used with Noble Ape for the initial cognitive simulation. So it's taking the literature, taking the human interest, and putting the two together, with the view that these are undergraduate or graduate students who are looking to get into this kind of stuff anyway, and that on the engineering end there are fascinating engineering problems associated with this kind of stuff as well. So it's just putting latent energy towards an interesting purpose, which is one of the reasons that I'm here.

Speaker 6: Right. To recruit more of that human force?

Author: Well, in some regard, but actually it's more of a matchmaking role than anything: you can take a wide variety of papers and what have you and just pass them on. But if you have the people who actually wrote the papers interested and aware that this is going on, it's much more productive, I think. And in terms of that kind of interaction, these kinds of things, I think it's a net positive.

Speaker 5: I'm just wondering how we should proceed for time, 'cause our meeting time is already up now. And then I was thinking we could meet, or if...

Speaker 2: Yeah, it turns out I'm jumping between things and I have to go do some administrative stuff, so I'm not gonna be able to join in later on, but feel free.

Speaker 5: Well, then, we do need to have our meeting... 'Cause I was wondering if we could just reverse the time...

Speaker 2: Yeah, no, I've gotta be up in that site at 2:45.

Speaker 5: Okay. So unfortunately, then we'll have to call it an end to this and...

Author: Not a problem.

Speaker 5: I can talk afterwards...

Author: Terrific. Looking forward to it.

Speaker 5: But boy, this is a quick look into a very deep pool. Thank you very much.

Speaker 6: Well thank you.

Speaker 5: Appreciate your time.

Author: Thank you. It's been wonderful coming.

Speaker 2: Thanks a lot.