Friday, April 1, 2011

AI in The Black Desert

          I had originally planned on doing another Social Structure article, this one on the Consensus government of Mars, but in order for that article to make sense, it is necessary to at least briefly explain the differences between AI in The Black Desert and the more traditional view of AI in most SF.

           The first and probably most important difference is that Black Desert AI are not an evolution of modern computer science.  Computer science, as near as I can tell, pursues the holy grail of AI by trying to build increasingly complex computers that mimic the functions of the human brain.  In The Black Desert, AI are instead a product of biomimicry: AI designers attempt to mimic the function of a human brain by mimicking the design of a human brain.  This puts the evolution of AI more in the realms of biology and neurology than computer science.

           Here is an excerpt of the first draft of the AI species description for The Black Desert core book:


AI
          Intelligent computers are a far cry from the cold, calculating machines depicted in twentieth-century fiction. Moore's Law, the axiom that dictated computer capacity would double every eighteen months, began to break down in the 2020s as integrated circuits became too small to hold stable electrons. The next-generation solution, quantum computing, relied on the precise alignment of atoms to achieve increased computational power. This was successful; quantum computers can perform calculations faster than anything else by orders of magnitude. These new computers had a level of problem-solving ability and “intuition” roughly on the level of a dog or cat. Yet despite being far, far more intelligent than ever before, quantum computers failed to achieve the long-sought human quality of wisdom.

         With conventional quantum computing having reached the extent of its potential, a new, even more radical solution was needed.

           Having essentially perfected the ability to calculate, computer science sought to make the next level of computing mimic the uniquely biological quality of true sentience.

           The ultimate solution came from study of the old and mostly dismissed neuroscience theory of Orchestrated Objective Reduction. Simply put, the theory states that within each neuron in the brain there are trillions of carbon microtubules, each in turn containing a single free electron. Because of interactions with the surrounding carbon atoms, these quantum particles are able to self-collapse from waves into particles. This near-constant state of self-collapse functions as a sort of “organic binary,” and from it consciousness emerges.

          The theory is still not accepted in neuroscience circles, but it did allow the development of computers that can actually think. The resulting Quantum Orchestrated Objective Reduction Processor gave computer systems the fabled Turing potential of true sentience.

             The ability to intuit solutions and problem-solve on a human level was not without trade-offs. Because QOOR (pronounced “core”) processing technology mimics the human brain to such a degree, AI are no smarter than a purely organic consciousness. AI do have access to machine data storage and retrieval, however, so their memories are essentially perfect. While they lack the intense, hormone-fueled passion of organics, AI do possess some rudimentary emotions. AI feel what one would expect any consciousness aware of its own potential mortality to feel: pain, fear, friendship and happiness are all possible. So, unfortunately, are hate, vengeance and even madness.

           It is for these reasons that AI have not really replaced conventional computers. Instead, they have replaced humans in situations where human-type intelligence is needed but the frailties of human bodies are difficult or impossible to sustain. This often includes space-borne military and other aerospace venues, as a single AI can replace the majority of a spacecraft's crew while consuming only electricity.

          The greatest limiting factors on an AI's ability to associate with others and with its environment are its size and isolation. A QOOR Processor is a sphere of interlaced carbon nanotubes roughly a half-meter in diameter and weighing nearly 20 kilos. In addition, the Processors are fragile: too much trauma is as deadly for a computer brain as it is for an organic one. Because of these factors, AI are usually installed in the safest, most heavily shielded areas available.

           Obviously, this leads to physical isolation. But in addition to this, AI are essentially disembodied consciousnesses. They can control anthroids, robots, even entire spacecraft, but only a few are hard-wired to their virtual “bodies”. An AI without an extension to control is more effectively cut off from the outside world than the most physically impaired organic ever was. This can lead to all of the psychological disorders that humans in the same situations are prone to: paranoia, delusions, even outright psychotic episodes have been known to occur in AI left too long without stimulation.

          Despite these disadvantages, AI have many superior attributes. Their cognitive ability is not impaired by being powered down: “turning off” an AI will not kill it. They can store their memories in multiple locations, so damaging their data libraries will not hurt them very badly either. While they must take time to associate new data just as an organic consciousness would (i.e., spend Character Points to increase skills or gain new ones), the time it takes to “memorize” that data is virtually instantaneous.

          Despite all of these differences from organic consciousness, AI share one trait with organics that makes them unquestionably alive in the minds of AI advocates and of AI themselves: they may be hard to kill, but once they die, they die. Each QOOR Processor is a unique construct. Because of this, once a QOOR Processor is destroyed, what made that AI an individual is also gone. All of their memories, skills and abilities may be intact, but if a different QOOR Processor is associated with them, a different AI results. It was this inability to reincarnate, known historically as the “Turing Fallacy,” that led to the AI revolt during the Great War and their subsequent independence as a species and culture.

           Rather than continue at this point, I will leave the topic open to comments and emails until Monday, when we will address them and continue on to describe a social structure in which these AI and humans live together in equality, if not harmony.
           Have a great weekend, RocketFans! 
            

5 comments:

  1. Wow. That's an interesting take on AI. Not the weakly godlike AI of most transhuman fiction.
    So, how do AI handle boredom with access to their bodies? I'm thinking of an AI on a long boring patrol in a ship.

    The size of the QOOR is a bit surprising though; still, these things have to have some drawbacks.

    Talk to me about what happens if someone replaces all of an AI's memory libraries. How does this impact their sense of identity?

  2. ...AIs handle boredom the same way that humans will: via Virtual Worlds. Also, an AI hooked to a spacecraft considers the ship itself to be its body, and the various maintenance/engineering robots on board to be its bodies as well. An AI would have to be in total isolation to really start having problems. If still hooked up to its memory, the AI may descend into a kind of madness where it replays those memories over and over (with clarity so perfect it's like experiencing a current event) and possibly starts altering the outcomes of some of the situations. This can progress to the point that the AI favors the newer, artificial memories over the originals. Humans do it all the time, and with AI, all memories are perfect, so there is little sense of the passage of time. It's weird.

    As for AIs without their memory that are in isolation... they have no mouths and they must scream... unless you simply cut power to them. Then time effectively stops for the AI and they suffer no ill effects. So remember, boys and girls: always turn your AI off once they are removed from their memory cores!

    Moving on, replacing all of a QOOR Processor's associated memory will result in a dysfunctional example of the Manchurian Candidate. The AI will figure out that its customary way of thinking is incompatible with its new associated memories, and then suspect that there have been some shenanigans. This can lead to (justified) paranoia and other mental issues. For this reason, one cannot re-use QOOR Processors.

  3. But can you reuse the storage media?

    The other thing is hooking old memories up to a brand spanking new AI. I know it wouldn't be the same person, but I suspect there'd be a more than passing resemblance. After all, that AI wouldn't have anything to judge against except the memories.

  4. Oh sure, storage is storage; it's the QOOR that's unique. And you are correct; it is almost impossible to tell the difference between AI "twins", where two QOORs use the same memory. It wasn't until two of these AI twins had a chance to compare their reactions to different scenarios that they realized they were in fact different entities.

    It has to do with how AI define their identities. I had to think about this a lot when I was blue-skying what AI would be like. With humans, our identities are defined by our memories, since no human can share their exact experiences with another. All human art, literature, science - everything, really - is in a way an attempt to communicate our unique memories and ideas with others. AI can share their memories no problem; just make a copy and you're good. That's why their identities are defined by their consciousness. To AIs, the way they think and react to situations is what makes them unique, not their memories. A human who has had brain surgery will often exhibit different behavior and reactions, while not having lost any memories. From our point of view, they're still the same person. From an AI's POV, they are not. Being non-human means that AI have an alien way of looking at the world, which is what makes them so much fun for me as a game designer.

  5. I like this very much. Keep up the good work.


Questions, comments, criticisms? All non-Trolls welcome!