vol viii, issue 1
From the Editor
by
Jeff Georgeson
Confessions of an AI game developer ...


Well, the first confession is that that grammatically dodgy first line is an exaggeration. I am not an AI. Nor have I been a very successful game developer. But I have developed AI engines for games, and as you’ll see, directed exaggeration seems to be the name of the AI game.

When I wrote back in October 2020 (https://www.penumbric.com/archives/October2k20/aAI.html) that “the big companies don't appear to be focused on any sort of strong AI systems,” that was indeed how things stood—data-driven algorithms meant to ferret out the best way to target consumers or voters were the Big Thing, and any sort of General Artificial Intelligence (GAI, or “Strong AI,” as little as that term seems to be used anymore) was way off the radar. The closest such attempts I could find were, to be honest, in game engines such as mine, which attempted to mimic human personalities and memories for NPCs. And, to continue being honest, my engines were simplistic, although they presented an opportunity to be far better than the typical NPCs and chatbots of the time. I sold these engines through Unity, a game dev platform whose terms, unfortunately, meant you couldn’t know who was buying your asset or what they were using it for; you just had to hope the buyer would actually register their interest with you (or, alternatively, find a problem with your asset and ask for help). Add to this an unhealthy problem with Unity asset piracy and, well, to this day I have no idea how many copies of my “strong-ish” AI engines are out there, nor what they’ve been used for.

Does that sound scary? Horrifying? Disgusting? Ethically queasy? I totally agree. However, as much as my ego would like to believe, on some level, that the current sudden surge in GAI-ish developments is my fault, I know that really a) no big company would have any need for my engines for either personality or memory systems, and b) as mentioned before, my engines were (and are) pretty simplistic, and I haven’t updated them in years.

But ... remember that fear and disgust you may have just felt? Rev it back up again, for now we do indeed have megacorporations like Google and Meta and Microsoft and whatever Elon Musk is all slouching toward General AI through things like ChatGPT and Sora and Kling and a number of others. We’ve been worried about ChatGPT for its development as a replacement for writers (or writing), and about various image creators for their creation of deepfake images and videos. But with the latest releases—ChatGPT-4o especially—we’re getting closer to the event horizon. Yes, THAT event horizon.

ChatGPT-4o now allows for more “intelligence and advanced tools”—so not just writing an essay for you, but interacting with you in a more human-like fashion. It can “look” at images and decipher text on them; it can “chat about photos you take”; and in the near future it will have “natural, real-time voice conversation and the ability to converse with ChatGPT via real-time video” (according to the OpenAI website, https://openai.com/index/gpt-4o-and-more-tools-to-chatgpt-free/?ref=upstract.com). And it now has a memory! It will analyze your conversations with it and create “memories” of the things it thinks you like, don’t like, etc.—not just obvious things like you telling it “I don’t like beef,” but inferring that you don’t like beef from the clues you give it over time. You can also create individual “GPTs” that have their own specific knowledge sets and so forth to be “more helpful in [your] daily life, at specific tasks, at work, or at home.”

This may sound ... IDK, innocuous? Not very advanced? But there was a reason I developed a memory system as my second engine rather than just relying on the fact that computers, of course, have hard drives that store hard data: in order to have realistic conversations, you need the AI not only to remember facts, but to make inferences based on those facts (I went a step further and made mine able to forget or misremember things, which OpenAI hasn’t done). And the next step after creating “personal” GPT assistants is to make them even more personal—giving them warmth, say, or limited feelings of some kind (if only so they can make better inferences about their users’ feelings). I know from experience that you can create a system to mimic human personality and emotion, even as a solo developer. Imagine what a company with comparatively limitless resources can do.
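To sketch what I mean by that kind of memory system (a purely illustrative toy—the class and method names below are invented for this essay, not taken from my engines or from any real API): each fact carries a confidence score that decays over time, and recall can return the truth, a distortion, or nothing at all.

```python
import random

class NPCMemory:
    """Toy NPC memory: facts have a confidence that fades with time,
    and recall can remember correctly, misremember, or forget.
    Everything here is hypothetical, for illustration only."""

    def __init__(self, decay=0.9, seed=42):
        self.facts = {}              # topic -> (value, confidence)
        self.decay = decay
        self.rng = random.Random(seed)

    def observe(self, topic, value):
        # Repeated observations reinforce confidence toward 1.0.
        _, conf = self.facts.get(topic, (value, 0.0))
        self.facts[topic] = (value, min(1.0, conf + 0.5))

    def tick(self):
        # Time passes: every memory fades a little.
        for topic, (value, conf) in self.facts.items():
            self.facts[topic] = (value, conf * self.decay)

    def recall(self, topic, distortions=()):
        # Roll against current confidence: high confidence means
        # accurate recall; low confidence risks distortion or a blank.
        if topic not in self.facts:
            return None
        value, conf = self.facts[topic]
        roll = self.rng.random()
        if roll < conf:
            return value                               # remembered
        if distortions and roll < conf + 0.2:
            return self.rng.choice(list(distortions))  # misremembered
        return None                                    # forgotten
```

The point of the toy is the shape, not the numbers: once a memory is a scored, decaying record rather than a row in a database, “inference,” “forgetting,” and “misremembering” all become tunable behaviors instead of bugs.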

The funny thing is that AI companies have started using “strong” AI terms for even their “weak” AI products: They are happy to tell you about ChatGPT’s “intelligence,” for example. It’s like some bizarre attempt to throw the public’s scifi-based fears of robots taking over the world back in their faces—a kind of conditioning that gets us so used to “intelligent” AI systems that we basically ignore the latest conversational AI that hates us a little bit more for our stupid questions, or that likes dogs but not other pets, or that develops its own political leanings. Exaggerate now so that we ignore the real danger later. It seems to work for Far Right politicians; why not for AI? (And now I sorta want to ask ChatGPT to write me an essay using its very best Donald Trump impersonation, just to see what it does.)

And of course this is ignoring the ethical daemons of creating something that has feelings and a continuous sense of “being.” Are we creating a new caste system? Should we even be trying to do this? It is an ethical issue I’ve been wrestling with for years, and luckily my own limited capabilities mean I’ll never actually create such a being—but do companies like OpenAI or Meta have the ability to wrestle with such ideas? Or does each individual within the company think “I don’t have the ability to create such a being, so I don’t need to worry”?

I worry that we’ll worry too late. About a lot of things. It’s our way—see, e.g., climate change. And then we’ll just reach that point where we aren’t around to worry about it at all. (Now that’s an exaggeration—or is it?)

Jeff Georgeson
Managing Editor
Penumbric
