The Prison-House of Language
From the Editor
by Jeff Georgeson
When you think about AI (general AI, not generative) acting human, the conversation tends to be all about emotions, feelings, and personalities. Memory isn’t a concern. For some reason, we’re perfectly happy for our “human-like” AI to have computer-like memory systems: perfect, accessible, speedy. For my game AI engines, I created not only a personality system but also a memory system that more closely mimicked human memory, moving some memories from short- to long-term storage, misremembering some things, and forgetting others completely. Now, even in a game, this isn’t what most programmers want; NPCs need to impart certain pieces of information to the players, and you wouldn’t want to run the risk that they give misremembered “facts,” or forget to give the information entirely. In fact, creating such an engine was more about experimenting with it for its own sake.
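For the curious, here is a minimal sketch of what a memory system like that might look like. It is not the engine described above; the class names, decay rates, and distortion chance are all invented for illustration, assuming a simple probabilistic model.

import random

class Memory:
    """A single remembered fact, with a strength that decays over time."""
    def __init__(self, fact):
        self.fact = fact
        self.strength = 1.0      # decays each tick; long-term memories decay slower
        self.long_term = False   # set True once recalled often enough
        self.recalls = 0

class NPCMemory:
    """Hypothetical sketch of a human-ish memory store for an NPC."""
    DECAY = 0.1            # short-term decay per tick (invented value)
    LT_DECAY = 0.01        # long-term decay per tick (invented value)
    PROMOTE_AFTER = 3      # recalls needed to move to long-term memory
    DISTORT_CHANCE = 0.05  # chance a recall subtly rewrites the memory

    def __init__(self):
        self.memories = []

    def remember(self, fact):
        self.memories.append(Memory(fact))

    def recall(self, keyword):
        """Return the first matching memory, possibly distorting it."""
        for m in self.memories:
            if keyword in m.fact:
                m.recalls += 1
                m.strength = 1.0
                if m.recalls >= self.PROMOTE_AFTER:
                    m.long_term = True
                if random.random() < self.DISTORT_CHANCE:
                    m.fact += " (or so I recall)"  # stand-in for real distortion
                return m.fact
        return None  # forgotten, or never known

    def tick(self):
        """Advance time: decay strengths and forget what falls to zero."""
        for m in self.memories:
            m.strength -= self.LT_DECAY if m.long_term else self.DECAY
        self.memories = [m for m in self.memories if m.strength > 0.0]

Run enough ticks without recall and a memory simply disappears; recall the same fact often enough and it persists, though increasingly distorted.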
One of the things you notice when running such systems in what amounts to an empty box is that the NPCs basically create their own reality with their memories, foggily remembered or not. And within that blank and malleable space, the reality becomes, well, real to them and to any others inside the same box; there’s really nothing to distinguish what I know (or think) is the “real” memory originally entered into the system from the new “real” memory they’ve created and accepted.
This is to be expected, really, for the NPCs, isolated as they are. But would you expect it in “real” life, with “real” humans? Unfortunately, we seem to be running the same experiment ourselves, and unexpectedly with very similar results.
Although at one time the development of social media seemed to be leading us (along with the rest of the Internet) toward better and more widespread knowledge, it has also led us to more and more fragmented realities. We’ve become, or always were, able to put ourselves in boxes much like the ones my NPCs were stuck in, and then, within that isolation, accept whatever “reality” is fed into them. Worse, the vast knowledge that does still flow around the ’Net, and could help us all build a better future, doesn’t seem able to penetrate these self-made social isolation tanks; in fact, attempts to introduce facts into these systems just make the “walls” thicker. We believe our misrememberings (or “lies”) all the more aggressively (a whole other psychological phenomenon beyond explaining here), and in some cases the isolation and the lies become even more hyperbolic.
I guess my memory system didn’t go far enough, really; the NPCs didn’t develop this aggressive refusal to learn new ideas or revise old ones, and thus my engines were not as human-like as I thought. But maybe, in this case, I’d rather they weren’t. And maybe the future AI creatures won’t destroy us because we are emotional, illogical creatures, but instead because we’ve fragmented into tiny groups of flat-earthers who refuse to learn anything new or contradictory to our established beliefs. Maybe they won’t so much destroy us as wall us off, so that eventually the situation I created with the NPCs will be reversed, and we’ll be the ones stuck in blank little boxes.
And we probably won’t even notice.
On that grim thought, happy holidays,
Jeff Georgeson
Managing Editor
Penumbric