I finally found time to play the video game F.E.A.R. a few months ago, and I was initially impressed by the artificial intelligence (AI). The enemy soldiers appeared to work together towards a common goal (i.e., finding and killing my character); they seemed to use realistic tactics, such as taking cover and providing suppressing fire for advancing teammates; and they would flank me. But after a certain point, the AI seemed more like what I have taken to calling "artificial imitation intelligence" ("AII"). "AII" does not try to be AI; it is an unintelligent system that tries to imitate intelligence. The test for "AII" is fairly simple: Does someone think they are encountering intelligence at least part of the time?
F.E.A.R. fascinated me, even though it became obvious that what appeared so smart at first was really quite simplistic. By the time I was halfway through the mission, I was "gaming the system" and using the "AII"'s predictability against it. For example, in areas where I knew the computer would try to split the team to flank me, I took the chance to fight only half the team at a time, then quickly repositioned myself to intercept the flanking group. In other areas, I would deliberately trigger the "hunt and kill" mode so that I could face individual soldiers on a search sweep instead of the full team. For the first time in my decades of video game playing, it felt like the computer was attempting a true strategy rather than weighing the utility of possible outcomes like chess programs do or throwing resources into the breach like most other games do.
My curiosity was piqued, so I looked into the F.E.A.R. algorithm and discovered a paper called "Three States and a Plan: The A.I. of F.E.A.R." by Jeff Orkin, who worked on F.E.A.R. and is with M.I.T.'s Cognitive Machines Group. He describes an innovative use of state machines to imitate intelligence. I believe he is ignoring F.E.A.R.'s shortcomings when he categorizes this as AI. The state machine system used is "mechanical," not "intelligent," and it explains the monotonous predictability that I saw. I am not disparaging the F.E.A.R. team's efforts -- it was the closest thing to AI that I have ever seen in a video game -- but the mechanism he describes is simply not intelligent. It replicates the actions of an intelligent, aware system without trying to replicate the intelligence or awareness (thus, "AII").
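To illustrate why a state-machine approach reads as "mechanical" rather than intelligent, here is a minimal sketch of my own invention (it is not F.E.A.R.'s actual code, and the state names and triggers are hypothetical): a handful of states plus fixed transition rules. The same inputs always produce the same transition, which is exactly the predictability a player can exploit.

```python
# Minimal finite-state machine for a hypothetical enemy soldier.
# Illustrative sketch only -- not the actual F.E.A.R. implementation.

class SoldierFSM:
    def __init__(self):
        self.state = "patrol"

    def update(self, sees_player, under_fire, ally_flanking):
        # Every rule here is fixed and deterministic: once you learn
        # the triggers, you can force any transition you like.
        if self.state == "patrol" and sees_player:
            self.state = "take_cover"
        elif self.state == "take_cover":
            self.state = "suppress" if ally_flanking else "advance"
        elif self.state == "suppress" and not under_fire:
            self.state = "advance"
        elif self.state == "advance" and not sees_player:
            self.state = "hunt"  # the "hunt and kill" sweep mode
        return self.state

bot = SoldierFSM()
print(bot.update(sees_player=True, under_fire=False, ally_flanking=False))  # take_cover
print(bot.update(sees_player=True, under_fire=False, ally_flanking=True))   # suppress
```

The observable behavior (take cover, suppress, advance) looks tactical, but nothing in the machine reasons about the player; it only routes between a few hand-authored states.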
There are two important statements in Orkin's paper that deserve to be quoted. The first is in relation to the notion of complex squad behaviors:
Now let's look at our complex behaviors. The truth is, we actually did not have any complex behaviors at all in F.E.A.R. Dynamic situations emerge out of the interplay between the squad level decision making, and the individual A.I.'s decision making, and often create the illusion of more complex squad behavior than what actually exists!
The second interesting nugget is about the verbal communication between enemy soldiers:
A gamer posting to an internet forum expressed that they he [sic] was impressed that the A.I. seem to actually understand each other's verbal communication. "Not only do they give each other orders, but they actually DO what they're told!" Of course the reality is that it's all smoke and mirrors, and really all decisions about what to say are made after the fact, once the squad behavior has decided what the A.I. are going to do.
The F.E.A.R. team stumbled upon the heart of the "AII" concept: appearance is everything. The user does not care if you have a "Mechanical Turk" system where a human appears to be a machine; if you have a true AI; or if the system is rolling dice and making things up randomly. All that matters is the appearance of intelligence. In F.E.A.R., the state machines provided just enough depth to make it seem like the system was making true decisions. In reality, the system merely transitions from one state to another based on what amounts to a routing protocol with weighted paths.
A blog post that recently floored me is Andrew Doull's summary of his approach to AI in Unangband, a "roguelike" game that he is developing. Roguelike games are fairly simplistic on many levels; in terms of graphics and sound, they are at approximately 1983 levels of technology (colored ASCII characters). This frees the developers to concentrate on other aspects of the game, and Doull seems to be putting his time into the AI. I find it fascinating that he is not focusing on providing a great or perfect AI -- he is trying to make it more entertaining to play against. The perception of AI matters more than truly achieving AI. I suggest that you read Doull's six-part series to get a great overview of just how difficult even "AII" (let alone true AI) is to program.
On a side note, I found the F.E.A.R. system intriguing because of the Windows Workflow Foundation (WWF) functionality introduced alongside Windows Vista. WWF ties together presentation logic and workflow logic in XAML, which is connected to "behind the scenes" code in .NET. Theoretically, it should be fairly easy to take the principles espoused in Orkin's paper and implement them using WWF in actual applications.
It seems as though proper AI was largely discredited as a serious business goal some time ago. No one could quite figure out precisely what AI was even supposed to be or what defined it. Some people blamed the state of hardware, while others blamed the programming languages for the lack of progress. The big nail in AI's coffin as a business objective was a lack of demand: it became apparent that it is cheaper to pay people to guide software than it is to develop AI for any given software system.
The one exception to this rule is video games. Aside from games explicitly designed to be multiplayer (and even many multiplayer games have computer-controlled "players"), the quality of the "AII" plays a large part in how well a game is received. The fact that the multibillion-dollar video game industry (which has a lot of R&D money floating around) has to settle for decent "AII" says a lot about the chances of true AI hitting the market any time soon.
Justin James is the Lead Architect for Conigent.