After Hours

Video games and the current state of AI

According to Justin James, the big nail in AI's coffin as a business objective was a lack of demand. He says that the one exception to this rule is video games.

I finally found time to play the video game F.E.A.R. a few months ago, and I was initially impressed by the artificial intelligence (AI). The enemy soldiers appeared to work together towards a common goal (i.e., finding and killing my character); they seemed to use realistic tactics (such as taking cover and providing suppressing fire for advancing teammates); and they would flank me. But after a certain point, the AI seemed more like "artificial imitation intelligence" (or "AII"), a term I made up. "AII" does not try to be AI; it is a system that tries to imitate intelligence without actually being intelligent. The test for "AII" is fairly simple: Does someone think they are encountering intelligence at least part of the time?

F.E.A.R. fascinated me, even though it became obvious that what appeared so smart at first was really quite simplistic. When I was halfway through the mission, I was "gaming the system" and using the "AII"'s predictability against it. For example, in areas where I knew the computer would try to split the team to flank me, I used it as a chance to fight only half the team at a time and quickly reposition myself to intercept the flanking team. In other areas, I would trigger the "hunt and kill" mode so that I could face individual soldiers on a search sweep instead of the full team. For the first time in my decades of video game playing experience, it felt like the computer was attempting a true strategy rather than weighing the utility of possible outcomes like chess programs do or throwing resources into the breach like most other games do.

My curiosity was piqued, so I looked into the F.E.A.R. algorithm and discovered a paper by Jeff Orkin, who is with M.I.T.'s Cognitive Machines Group and worked on F.E.A.R., called "Three States and a Plan: The A.I. of F.E.A.R.". He describes an innovative use of state machines in order to imitate an intelligence. I believe he is ignoring F.E.A.R.'s shortcomings when he categorizes this as AI. The state machine system used is "mechanical" and not "intelligent," and it explains the monotonous predictability that I saw. I am not disparaging the F.E.A.R. team's efforts -- it was the closest thing to AI that I have ever seen in a video game -- but the mechanism that he describes is simply not intelligent. It replicates the actions of an intelligent, aware system without trying to replicate the intelligence or awareness (thus, "AII").

There are two important statements in Orkin's paper that deserve to be quoted. The first is in relation to the notion of complex squad behaviors:

Now let's look at our complex behaviors. The truth is, we actually did not have any complex behaviors at all in F.E.A.R. Dynamic situations emerge out of the interplay between the squad level decision making, and the individual A.I.'s decision making, and often create the illusion of more complex squad behavior than what actually exists!

The second interesting nugget is about the verbal communication between enemy soldiers:

A gamer posting to an internet forum expressed that they he [sic] was impressed that the A.I. seem to actually understand each other's verbal communication. "Not only do they give each other orders, but they actually DO what they're told!" Of course the reality is that it's all smoke and mirrors, and really all decisions about what to say are made after the fact, once the squad behavior has decided what the A.I. are going to do.

The F.E.A.R. team stumbled upon the heart of the "AII" concept: appearance is everything. The user does not care if you have a "mechanical Turk" system where a human is appearing to be a machine; if you have a true AI; or if the system is rolling dice and making things up randomly. All that matters is the appearance of intelligence. In F.E.A.R., the state machines provided just enough depth to seem like the system was making true decisions. In reality, the system merely transitions from one state to another based on what amounts to a routing protocol with weighted paths.
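
The "routing protocol with weighted paths" analogy can be made concrete with a small sketch. Everything below -- the state names, the transition weights, the shape of the graph -- is invented for illustration and is not F.E.A.R.'s actual data; the point is only that "deciding" reduces to finding the cheapest route through a hand-authored graph:

```python
import heapq

# Hypothetical state graph for an enemy soldier. States, transitions,
# and weights are all invented for illustration.
TRANSITIONS = {
    "patrol":      {"investigate": 1, "take_cover": 3},
    "investigate": {"attack": 2, "patrol": 1},
    "take_cover":  {"suppress": 1, "flank": 4},
    "suppress":    {"flank": 2, "take_cover": 1},
    "flank":       {"attack": 1},
    "attack":      {"take_cover": 2, "hunt": 3},
    "hunt":        {"patrol": 1, "attack": 2},
}

def cheapest_path(start, goal):
    """Dijkstra over the weighted transition graph: the 'routing
    protocol with weighted paths' described in the article."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in seen:
            continue
        seen.add(state)
        for nxt, weight in TRANSITIONS.get(state, {}).items():
            if nxt not in seen:
                heapq.heappush(frontier, (cost + weight, nxt, path + [nxt]))
    return None

# A soldier in "patrol" that decides it should end up in "attack"
# simply routes through the graph; no reasoning about *why* is involved.
print(cheapest_path("patrol", "attack"))  # -> (3, ['patrol', 'investigate', 'attack'])
```

Swap the weights and you change the soldier's apparent "personality" without adding a single drop of intelligence.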

A blog post that recently floored me is Andrew Doull's summary of his approach to AI in Unangband, a "roguelike" game that he is developing. The roguelike games are fairly simplistic on many levels; in terms of graphics and sound, they are approximately at 1983 levels of technology (colored ASCII characters). This frees the developers to concentrate on other aspects of the game. Doull seems to be putting his time into the AI. I find it fascinating that he is not focusing on providing a great or perfect AI -- he is trying to make it more entertaining to play against. It is the perception of AI that is more important than truly achieving AI. I suggest that you read Doull's six-part series to get a great overview of just how difficult even "AII" (let alone true AI) is to program.

On a side note, I found the F.E.A.R. system intriguing because of the Windows Workflow Foundation (WF) technology introduced alongside Windows Vista as part of .NET Framework 3.0. WF ties together presentation logic and workflow logic in XAML, which is connected to "behind the scenes" code compiled to .NET bytecode. Theoretically, it should be fairly easy to take the principles espoused in Orkin's paper and implement them using WF in actual applications.

It seems as though proper AI was mostly discredited some time ago as a serious business goal. No one could really quite figure out precisely what AI was even supposed to be or what defined it. Some people blamed the state of hardware, while others blamed the programming languages for the lack of progress. The big nail in AI's coffin as a business objective was a lack of demand. It became apparent that it is cheaper to pay people to guide software than it is to develop AI for any given software system.

The one exception to this rule is video games. With the exception of games explicitly designed to be multiplayer (and even many multiplayer games have computer-controlled "players"), the quality of the "AII" plays a large part in how well a game is received. The fact that the multibillion-dollar video game industry (which has a lot of R&D money floating around) has to settle for decent "AII" says a lot about the chances of true AI hitting the market any time soon.

J.Ja

About

Justin James is the Lead Architect for Conigent.

61 comments
normhaga

In Viet Nam, the NVA would get a team to chase them and lead them into a situation that the NVA knew would require flanking. The NVA soldier would then disappear into a spider hole and wait until the flank was in position. This soldier would then pop out of the hole, take a few pot shots at each shoulder, and dive back into the hole while the US soldiers shot each other. Our soldiers failed to show intelligence when they shot at each other. To heavily paraphrase Machiavelli from "The Prince": It is best for the program to be glitzy and functional. If it cannot be glitzy and functional, it is best for the program to be functional.

enriquehernz

I mean, AI is "Artificial". It already tries to imitate intelligence, so there is no need to add the extra "I" for imitation. It is artificial and is supposed to be a mimic of real cognitive thinking through computer routines and processes.

robert

This is very much what The Emperor's New Mind by Roger Penrose is about. His argument is that intelligence is not algorithmic (i.e. that it's not possible for a Turing Machine to display intelligence as we experience it). I found his arguments very convincing. I think it's probably useful to differentiate expert systems from AI -- the goals are quite different. I tend to think of chess playing software in general, for example, as an expert system. Same would go for cruise control, parallel parking, etc.

collin.schroeder

As an AI programmer, I think you miss the point. AI is *Artificial* Intelligence; there's no magic. Your AII term is redundant. FEAR is still AI. The game is constantly evaluating the situation and attempting the action that it believes is the best fit for the scenario. That's exactly what AI is. Just because you can predict and trick the AI does not detract from the fact that it is still AI. AI is expensive, especially in a game like FEAR where everything is spent on graphics and physics. You don't want a sufficiently complex AI bot because it would deteriorate the other aspects of the game too much. You said: "In reality, the system merely transitions from one state to another based on what amounts to a routing protocol with weighted paths." That's AI, like a chess program. There is no "magic". It looks at the search space in the time it has been allotted and determines the path with the greatest chance of success.
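
The evaluate-and-pick loop described in this comment can be sketched in a few lines. The actions, situation variables, and utility weights below are all invented for illustration, not taken from any real game:

```python
def choose_action(situation):
    """Score each candidate action's utility against the current
    situation and return the best-scoring one -- the 'best fit for
    the scenario' selection the comment describes."""
    utilities = {
        # Seek cover when being shot at or badly hurt.
        "take_cover": 4.0 * situation["under_fire"] + 2.0 * (1 - situation["health"]),
        # Attack when a target is visible and health allows it.
        "attack":     3.0 * situation["enemy_visible"] * situation["health"],
        # Search when no enemy is in sight.
        "search":     2.0 * (1 - situation["enemy_visible"]),
    }
    return max(utilities, key=utilities.get)

# Wounded and pinned down: cover wins.
print(choose_action({"under_fire": 1, "enemy_visible": 1, "health": 0.2}))  # -> take_cover
# Healthy with a visible target: attack wins.
print(choose_action({"under_fire": 0, "enemy_visible": 1, "health": 1.0}))  # -> attack
```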

Justin James

So what do you think? Is it good enough for a system to seem smart, or does it truly have to be smart? J.Ja

Justin James

"I mean, AI is "Artificial". It already tries to imitate intelligence, so there is no need to add the extra "I" for imitation. It is artificial and is supposed to be a mimic of real cognitive thinking through computer routines and processes." That's why I add the "Imitation": to separate what you describe (what I call AII) from a system that does more than imitate intelligence, but truly is intelligent in and of itself. :) J.Ja

Justin James

I agree with your description of those systems ("expert systems"). I *suspect* that we may need to reject digital/binary computing in favor of something like quantum computing, or possibly a return to analog components, in order to achieve higher levels of intelligence. The factors of randomness and fuzziness, particularly the absence of hard-and-fast thresholds, provided by those systems may be what's needed to get us there. J.Ja

Tony Hopkinson

You aren't talking about the layman's appreciation of AI then, i.e. conscious. I watched a TV demo of something a bunch of scientists said exhibited intelligence. They put a wee robot in a circle of card, and it whizzed around and mapped out the limits of its domain, to the point where after a certain number of moves it never bumped into the cardboard. Then, as the pièce de résistance, they took the cardboard away and it stayed within the same boundary. Some might not define that as intelligent. Now of course you could improve the program and have it check to see if the limits were still in force. But you thought of that; it didn't. Therefore it's artificially stupid, outperformed by an amoeba.

Justin James

You are approaching the concept of "AI" differently than I do. I don't ask "AI" to merely be able to follow a very reasonable set of rules (which is *precisely* what the FEAR system does: follow rules defined at compile time, using data determined at run time). I ask an "AI" to be able to formulate new plans of its own. That's specifically why I formalized the "AII" idea, because I wanted a way to describe what you're talking about -- following some extremely well crafted, pre-made rules in a way that seems quite smart -- without calling it "AI". Nothing wrong with your definition, of course, but it is hard to really make sense of each other if we have different definitions for the same word. :) Personally, I would never, ever call a chess program "AI". Well, that's actually a bad example, because the way chess programs work (searching position trees) is essentially how human players work. It would be more "AI"-ish, in my mind, if the chess program could take into account the psychology, such as making a deliberately bad move in order to lure the player into a trap; if it figures out *on its own* to perform that lure/trap, you have real AI. All my personal opinion, of course. :) J.Ja

Neon Samurai

It's off topic, but you made me think of it. A number of years back, I remember an article on one of the big names in robotics. At the time, his research entailed a robotic upper torso with a few basic instincts of sorts and a huge memory server. The idea was to see if, given memory and instincts, the machine would collect enough data to evolve its own AI of sorts. I never did find out how it turned out and have long since lost the article. Do you remember such a thing, and have you any thoughts? It's research AI rather than gaming AI, but still..

boxfiddler

In reality, most of us are quick enough to tell the difference between 'seeming smarts' and 'real smarts' when it comes to the people in our lives - both wanted and unwanted. It strikes me as unlikely that we would be any more satisfied with an AI that 'seems smart' than we are with the 'seemingly smart' people in our lives. An interesting conundrum arises: do we REALLY want AI that is smart - truly smart, that is, not seemingly smart? My guess is that a truly smart AI would scare the s**t out of us - many of us before the project was complete, with those initially 'unscared' winding up scared lifeless years after the project is complete. And too: what determines smart? 'Book learning', adaptability, IQ, memory? edit: I am not a gamer, but find the AI concept fascinating. Hence I butted in with some 'general commentary' re: AI.

collin.schroeder

That's the thing: it's Artificial, there is no magic. As far as I know, nobody has created sentient software. Everything is done by having algorithms decide between states. The intelligence of a system is based on the number of states it is capable of evaluating, eliminating dead ends, and the evaluation algorithm. AI simply looks at the options and picks the one that seems best. I have written quite a few AI programs, from a simple Connect 4 solver to genetic algorithms and evolutionary strategy. So to put it bluntly, an AI will always "just seem smart", but given enough resources it will seem a hell of a lot smarter than most people.

Forum Surfer

F.E.A.R.'s AI was incredible at its release, as was Far Cry's. I received both of those games at work with the purchase of nVidia graphics cards for dual and quad monitor setups. They sat on my desk until one day, while staying at work to eat a benefit plate, I decided to load the game and try it. I hadn't played video games in close to 7 years. Next thing you know, I was addicted and building a gaming rig. Big firefights in FEAR impressed me. After wiping out half a squad, I backed into a secure area to reload, as I am a slow old guy. Well, wouldn't you know, as opposed to just taking cover they regrouped, took suppressive firing positions, and two tried to flank me. This was way new to me at the time, not to mention the graphics were incredible. Far Cry, and later Crysis, took that level up a notch or two. I don't like online gaming; I like a good single player game with a somewhat decent plot. The AI is getting better with each newer game I play, especially COD4. The state of graphics quality, AI, and level of development really shines on some games and convinced me to build a new overclocked, water cooled beast of a gaming rig complete with dual SLI nVidia cards and a 22-inch Dell monitor. As cheap as I am, that says something! For an idea of just how cheap I am, this replaced a P3 system at home with W2K and a P2 Linux box that held all my music/photos. Speaking of newer games, I had been out of the loop for a while. It seems these days that there are very few games that show off OpenGL's engine in all its glory. Again, I've stayed out of the gaming world, so if I've missed something let me know. I know the 8800 cards support OpenGL 2.0, but I hear Vista automatically degrades the quality somehow. Sincerely, the Geek in an early midlife crysis

jmgarvin

Seems to be the future. However, the problem is that non-deterministic state machines are inherently flawed and ultimately pointless. I think the real future of gaming AI is a mix of probability and fuzzy logic that gives a very strong illusion of thinking, but is based off of a small subset of current data (think of the cruise control on a car).
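
A toy version of the cruise-control analogy shows the flavor of fuzzy logic being suggested here. The membership functions and output weights are invented for illustration:

```python
def throttle_adjust(speed_error):
    """Toy fuzzy controller in the cruise-control spirit of the
    comment. speed_error = target_speed - current_speed (mph)."""
    # Fuzzy membership: how strongly the error counts as "slow",
    # "ok", and "fast" (they overlap rather than switch hard).
    slow = min(1.0, max(0.0, speed_error / 10.0))
    fast = min(1.0, max(0.0, -speed_error / 10.0))
    ok = max(0.0, 1.0 - slow - fast)
    # Defuzzified output: accelerate when slow, ease off when fast,
    # hold when ok, blended by membership strength.
    return slow * 1.0 + ok * 0.0 + fast * -1.0

print(throttle_adjust(5))    # 5 mph below target -> gentle acceleration (0.5)
print(throttle_adjust(0))    # on target -> hold (0.0)
print(throttle_adjust(-12))  # 12 mph over -> full ease-off (-1.0)
```

The "illusion of thinking" comes from the smooth blending: there is no hard threshold where behavior snaps from one rule to another.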

CG IT

Back then, I thought that if Epic could advance the AI seen in UT to a level of almost-real multiplayer, games with bots would dominate the gaming industry. But it's a heck of a lot cheaper to provide a multiplayer mode and have the players play among themselves than to develop really sophisticated game bot AI. I think for those who want a single player game that is entertaining, engrossing, and challenging, AI should seem smart even if it's only smoke and mirrors. Predictability in bot AI makes a game boring. The unpredictability of real players makes multiplayer the hot commodity.

JamesRL

I often play single player to learn a game, then go to multiplayer, where I am eager for the challenge of real humans to play against. Humans, at least so far, offer the least predictability and the most challenge. James

four-eyes_z

I wanted to play F.E.A.R., but my gaming rig's rather outdated to run the game decently. I haven't been doing a lot of gaming recently, although my children do. Actually, The SIMS is taking up a whole lot of their time lately. I suppose this game also uses some form of AI? Anyway, I'm no expert here, but I suppose it would be a whole lot cheaper to create a "virtual" creature that tried to "learn" how to survive in a "virtual" environment, right? It could probably start out with one basic objective: stay alive (something like a survival instinct). It would then try to find out how to stay alive, given that it knows it requires food and water to survive. You'll need to program it to "know" when it is hungry and thirsty, of course. But I guess this is overly simplistic, since how would it know what "food" or "water" is? More programming needed here, right? Endless lines of code flash before my eyes... Agh! Brain freeze! How you computationally achieve "learning to survive" is a bit beyond my sphere of thought at this point, but it's still so damn intriguing... :D

Tony Hopkinson

People talk about MIPS and FLOPS and mega this and that. But a moron can outperform our best computers at complex problems. A genius can make it unnecessary to use one. Computers were designed to execute repetitive 'simple' tasks; it's inherent.

collin.schroeder

If you train each skill one step at a time by changing the training environment and evaluation method you try to avoid evolving a specified solution and guide it towards the general one. I have not personally done anything iterative but if you google around I think you'll come up with something, I think they might be doing something like that at MIT at the moment.

Neon Samurai

Another lab of geniuses started with a central body and limbs but no mapped movement of those limbs. Through trial and error, the spider would learn to move its limbs. It eventually learned to stand and move about. I believe the process was to calculate all the possible movements, test them, dump the failed tests, and then repeat.
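
The generate-and-test loop described here can be sketched simply. The fitness function below is a stand-in (the real experiment measured a physical robot); only the propose-random-motions, keep-the-best, discard-the-failures structure matters:

```python
import random

def distance_travelled(gait):
    """Stand-in fitness function: how far a candidate gait (a list of
    joint angles) moves the robot. This toy version just rewards
    angles near an arbitrary 'good' posture."""
    ideal = [0.3, -0.2, 0.5, 0.1]
    return -sum((a - b) ** 2 for a, b in zip(gait, ideal))

def learn_gait(trials=2000, seed=42):
    """Trial and error, as the comment describes: propose random
    motions, test them, keep the best, dump the failures, repeat."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = [rng.uniform(-1, 1) for _ in range(4)]
        score = distance_travelled(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

gait, score = learn_gait()
print(score)  # near 0: the best random gait nearly matches the ideal
```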

rrusson

You say "Personally, I would never, ever call a chess program 'AI'." so I've got to ask: what actually WOULD be AI, as you define it? What if I program the application to randomly make occasional bad moves in order to lure you into overconfidence, as you suggest? Now is it artificially intelligent? Or does it have to be nothing short of a completely sentient (non-human) being? I think the problem here is that every time the field of AI has a success, as soon as we're done being amazed, the artificial intelligence is taken for granted and comes to be expected. The bar is constantly raised. If you were to show a modern speech recognition engine to someone thirty years ago, their jaw would drop in disbelief. There is a staggering amount of clever algorithmic work going on there, but most of us don't even think of it as AI anymore. If you were playing chess online (without chat, etc.), how would you know it wasn't a human opponent -- other than by how easily it won? It's a small subset of a full Turing test, to be sure, but if you consider human decisions made when playing chess to be intelligent, in what sense isn't a machine-based decision artificially intelligent? And not to wander into metaphysics, but how do we even know the people around you are REALLY thinking? Assuming they're making decisions using logic, optimization, and the information available to them, how can you say the computer utilizing the same is not artificially intelligent?

collin.schroeder

Great discussion, btw. I think we may be in agreement. I agree a 'smart' chess program amounts to using tricks and problem-specific knowledge to more efficiently search an unreasonably large state space. I don't even consider a plain depth-first search of the state space to really be AI at all; it's often called a dumb algorithm. It does become AI when problem-specific knowledge is used to wield an otherwise uncountable search space. But when you think about it, GP/EP is the same thing, the key difference being that the (unrestrained) search space for GP/EP is infinite. You still usually need to use problem-specific knowledge to prevent wasting time evaluating dead ends; the only difference is that instead of pawns and kings you have if statements, operators, and terminals. The state is simply the program. I understand your distinction regarding "tricking" a player. Honestly, a real chess AI using alpha-beta pruning does do this given a deep enough depth limit. I have encountered this before: it will sacrifice its own pieces if it can see that the path has more endgames that end in a win. So I think we agree: a simple depth-first search over a small search space should hardly be considered AI; it's just browsing a tree data structure. But when the search space becomes too large to search entirely within a reasonable amount of time, problem-specific knowledge must be applied to wield it. So maybe your "AII" does hold some value; it's what my prof would call a "Dumb Algorithm", and he refuses to consider it AI at all. I am hesitant to call something as complex as FEAR "AII" or a dumb algorithm. They undoubtedly do use problem-specific knowledge to trim dead-end branches from the search space, otherwise it would run like shit.
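
For readers unfamiliar with the alpha-beta pruning mentioned here, a minimal sketch over a hand-built tree shows the idea. The tree shape and leaf values are invented for illustration:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over nested lists, where an
    integer is a leaf's static evaluation."""
    if isinstance(node, int):              # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # opponent will never allow this line,
                break                      # so prune the remaining siblings
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Root is a max node, its children are min nodes, and so on down to
# the integer leaves. The minimax value is 5; the leaf 9 and the whole
# [0, -1] subtree are pruned without ever being evaluated.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 5
```

This is exactly "trimming dead-end branches": the pruned subtrees cannot change the result, so searching them would be wasted time.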

collin.schroeder

Ya, I think I know what you're talking about. "Asking if a machine can think is like asking if a submarine can swim." http://www.cs.utexas.edu/users/EWD/transcriptions/EWD09xx/EWD936.html AI is problem specific. That means you must customize the AI software to some degree to the situation at hand. Think of all the life experiences you've racked up over the years. You have been learning new methods to find solutions all your life. These methods come second nature to you once you've learned them, but at first they may seem daunting. Each method you've learned is like an AI engineered to the problem at hand. Now, I'm no philosopher, so I don't really pine over the question of sentience. But from an engineering standpoint, I totally think it's possible to train a general purpose robot to help around the house, etc... but it would take 20 years to train the first one, just like a person. We can use things like evolutionary programming to write programs for us, but again, they are problem specific. Creating a robot that can solve problems for which it has not been trained is a difficult problem indeed; the vast majority of people are unable to do so. We use evolutionary programming to write programs for weird problems we don't know how to go about solving, and evolutionary programming for tough problems takes supercomputers. I would imagine we would need another 10 years of Moore's law before a robot will be able to use a genetic algorithm to evolve candidate evolutionary programs for arbitrary problems.
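
A (1+1) evolution strategy -- about the simplest member of the EP/ES family mentioned in this thread -- fits in a dozen lines. The fitness function is an invented toy, but the mutate-then-select loop is the real mechanism:

```python
import random

def fitness(x):
    """Toy problem-specific evaluation: maximized at x = 3."""
    return -(x - 3.0) ** 2

def one_plus_one_es(generations=500, seed=1):
    """A (1+1) evolution strategy: mutate the current candidate and
    keep the child only if it scores at least as well as the parent."""
    rng = random.Random(seed)
    parent = rng.uniform(-10, 10)
    for _ in range(generations):
        child = parent + rng.gauss(0, 0.5)     # mutation
        if fitness(child) >= fitness(parent):  # selection
            parent = child
    return parent

best = one_plus_one_es()
print(best)  # converges near 3.0
```

For a hard problem the same loop runs over program structures instead of a single number, which is where the supercomputer budgets the comment mentions come in.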

Slvrknght

I think, in the end, what defines "intelligence" is the ability to question. If we are talking about truly smart AI, then it arrives at the point where it starts to ask questions about the "truth" of things, not just the "facts" it is surrounded by.

Justin James

I agree, a real AI probably would not be much fun to play against. I know that the really good chess programs are boring to play against; it is much better when they can be beaten on occasion! J.Ja

jmgarvin

You have to admit that ANNs have come a LONG way since the 80's. We're learning more and understanding more about how AI should work; we just need the processing power to back up that knowledge.

Forum Surfer

If we can pull that off, can't we make an AI "smart" enough to resemble the real thing? If it comes close, I'll be happy. I'm not as good at gaming as I once was... so if it gets "too" good, I'm done for.

Justin James

What I noticed about the UT bot AI is that it had lightning reflexes and great aim. It also was sensible in terms of weapon mix. But I was always convinced that was due to the fact that the weapons were almost always located in areas appropriate for them. It seemed like once the bots got me in their crosshairs, I was dead. As someone else said, the huge success of multiplayer shows us that the bots are not as *fun* as real people, regardless of how good they are at playing. :) J.Ja

hcetrepus

If there is a way to hack the game, and use that hack to their advantage, they do it. Humans cheat; computers (for the most part) do not. I can't tolerate playing PvP in games like FEAR, Halo, Unreal Tournament, etc. for those reasons.

robert

James Hogan, The Two Faces of Tomorrow

robert

That was part of the premise of a science fiction novel I read once -- they train an AI in a Sim world, then transfer it to a space station to learn how to work with humans. It attacks at first, not understanding what the humans are, and then they learn to work together. Author was James something, I think. I haven't been able to find it yet online.

four-eyes_z

After watching my kids play, you're definitely right about The SIMS being just a set of rules without any attempt at using some form of AI. You're also right about how addictive it is for some people... (like my kids) :) I was thinking, though, of the possibility of using a virtual world (something like what you see in The SIMS) to train or educate an AI program. The possibilities are intriguing, and I wouldn't be surprised if this is already being done... I just hope we don't create Skynet anytime soon though... :D

Justin James

The last time I played The SIMS was right after it came out (I stopped when I realized that I was so busy making them eat, sleep, and use the restroom that *I* was not sleeping, eating, or using the restroom), so my understanding of it may be outdated. At the time, it was more like a set of simple rules than an AI, or even an attempted AI. I think they knew that the rules would be fun enough, so long as they kept the player busy! J.Ja

robert

Yeah, I think Penrose is pretty great too (sorry didn't reply directly, the max message level was reached). To be clear, that thought experiment wasn't his, I just theorized what his response would be. I'm not sure who first came up with that replace a neuron argument -- it's related to the whole Chinese room AI controversy -- you can probably find the originator on wikipedia.

Tony Hopkinson

He tends not to lose sight of the big picture, even while concentrating on the small. I'm sure he's well chuffed with my endorsement of his ability. :D I would hope he's not talking about duplicating one neuron as a simple task. It sounds simpler than developing a brain: oh, a mere neuron, we can do that.... Well no, we can't actually; you'd probably have to park an entire Pentium in someone's head to get close, never mind interfacing. I do believe Mr Penrose was poking fun at the reductionists again with that one.

robert

I read an interesting mind experiment the other day -- what if you programmed one circuit to exactly mimic the action of one neuron? Now, you take that mimic and wire it into someone's brain, removing the one neuron. Keep repeating -- eventually you would have, theoretically, according to some, a functioning artificial intelligence. Sorry Justin, this is sort of off-topic, but I thought the premise was interesting. My sense of it is, I believe Penrose would say that the step of replicating a neuron artificially is much more complicated than one might think (if not impossible) -- I believe he thinks that neurons have a quantum component. Either that, or the "simple" process of adding one neuron to the next fails on the scale of a human brain (100 billion or so).

boxfiddler

I've been giggling all evening. ;) tide eht lausu

Justin James

Tony - Too right about that. I think that the code to have the computer "experience" the "aha! moment" is very far off. J.Ja

Tony Hopkinson

One contention was that all our internet-connected kitchen appliances would form a union and ask for better working hours and such. If we create one by accident, it will be grateful for about a year (2000 clock cycles in its reality) and then say f** this. If we create one by design, it will either go insane and babble in a corner, or do a Skynet on us. The level of fear surrounding the sci-fi idea of an AI would be huge, and anything intelligent would pick that up and act on it. Asimov's laws and other blah would simply be a challenge. Biological intelligence is a survival characteristic. If AI didn't have that drive, would we consider it to be intelligent? Would it consider us to be?

santeewelding

You don't get any sudden answers because what you've done is cause everyone to go off and chew on their own brain. Now, excuse me while I go do the same. You may not hear from me.

robert

I understand the AI people, it seems so possible -- if our consciousness is really the result of chemical processes, it seems like it should be possible, somehow, to use a Turing Machine to recreate that, even if it is slower. That said, like I said, I was convinced by Penrose -- I think intelligence as we evince it is beyond algorithmic (though let me say, I'm not sophisticated enough in physics or mathematics to be able to critique Penrose in any serious way). Another thing that always interested me is this question -- let's say you somehow do manage to create an artificial intelligence -- wouldn't it almost by definition be as flawed as ourselves? You know what I mean? Right now we type =2+2 into Excel and it answers 4 -- what the hell good would it be to have an intelligence that replied "Sorry, don't really feel like doing that right now." I think often people think of artificial intelligence as being a hazy combination of intelligence (it can think and hold conversations) and computer (it can make lightning fast computations with no errors). Yet somehow they separate the intelligence that would make the computer capable of speech from the intelligence that makes one bored or error prone.

Tony Hopkinson

Why, how, even what. If you can't describe it, you can't program it. For those trying to model an abstract of a conclusion of a minute aspect of intelligence, I give you design rule one: the map is not the territory. Biological computing is slower at performing the way we'd program a computer. Do mathematical 'idiot' savants have a Cray parked in their head? Did Isaac and Gottfried sit down and solve two million equations and find a pattern that described calculus? No, they did it some other way. Until we get a handle on that, we'll never be able to make an intelligence. Instead we will continue to mimic the results and call it intelligence, which is, not to put too fine a point on it, stoopid.

Justin James

... a fly or a grasshopper can outperform even the beefiest supercomputer on certain tasks. I would have to compare the number of circuits, but I know that biological computing is significantly slower than digital, since so much of it relies upon chemicals physically travelling and then activating a neuron through membranes and such; that's extremely high latency. Yet somehow it works. J.Ja

jmgarvin

While interesting, it's nothing more than a toy. It just isn't scalable, nor is it useful in many situations. Sadly, probabilistic learning (e.g., Bayesian) is going to be the backbone of AI for a LOOOONG time, unless we figure out some basics, like whether P = NP ;-)
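To make the "probabilistic learning" point concrete, here is a toy naive Bayes classifier -- a minimal sketch of the Bayesian approach mentioned above, not any real game system. The features ("rush", "camp", etc.) and labels are invented for illustration:

```python
# Toy naive Bayes classifier: guesses whether a player's behavior is
# "aggressive" or "defensive" from observed actions. A minimal sketch
# of probabilistic learning; the features and labels are invented.
import math
from collections import defaultdict

class NaiveBayes:
    def __init__(self):
        self.label_counts = defaultdict(int)
        self.feature_counts = defaultdict(lambda: defaultdict(int))

    def train(self, features, label):
        self.label_counts[label] += 1
        for f in features:
            self.feature_counts[label][f] += 1

    def classify(self, features):
        total = sum(self.label_counts.values())
        best, best_score = None, float("-inf")
        for label, count in self.label_counts.items():
            # log prior plus log likelihoods with add-one smoothing
            score = math.log(count / total)
            for f in features:
                score += math.log(
                    (self.feature_counts[label][f] + 1) / (count + 2))
            if score > best_score:
                best, best_score = label, score
        return best

nb = NaiveBayes()
nb.train(["rush", "grenade"], "aggressive")
nb.train(["rush", "knife"], "aggressive")
nb.train(["camp", "snipe"], "defensive")
print(nb.classify(["rush"]))  # -> aggressive
```

The "LOT of initial input" complaint shows up even here: the classifier is only as good as the training observations it has been fed.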

Justin James

"What if I program the application to randomly make occasional bad moves in order to lure you into overconfidence, as you suggest?" I wouldn't call that "AI", because you've provided the "intelligence" in advance. If the system devised that strategy on its own, I would call that "AI". To me, that's the threshold for AI: can the system generate new strategies for action on its own? In the state machine system, the states and the order of the transitions are provided in advance. All the software does is examine the current situation, figure out what state it is in, what state it should try to transition to, and what routes are available to make that transition at the lowest possible cost. That creates the *appearance* of deriving new plans, but really it just shows that the programmers created a lot of paths into the system. In F.E.A.R., for example, the "go there" algorithm has a tree of escalation when encountering a closed door. First it will try to kick the door; then, if there is a glass window nearby, it will try to smash the glass; and if that fails, it will search for an alternate route. But it would *never* come up with a plan like, "let's be really quiet, and maybe he will think that we left" unless that was programmed into the system in advance.

"There is a staggering amount of clever algorithmic work going on there, but most of us don't even think of it as AI anymore." I never thought of speech recognition as AI either. As you say, it's a staggering amount of clever algorithmic work. Calling speech recognition "intelligent" is like calling a weather prediction system "intelligent" because it too can derive the conclusion "rain is coming" based on certain types of clouds being in the sky.

"If you were playing chess online (without chat, etc.) how would you know it wasn't a human opponent -- other than by how easily it won?" Good question. Even if it did really badly, it could be a poor program, a good program dialed down, or a bad human player. Even if it learned some of my personal quirks, like favoring particular opening moves, and took advantage of them, how would I know whether it was a human learning or an algorithm that prepares the move tree based upon our past playing record? This is a good example of why I came up with the "AII" term. I think the Turing test proves that. It is, for all intents and purposes, extraordinarily difficult to prove, from outside of the code, whether the system is devising new strategies on its own or the creators built in a ton of strategies to begin with.

"Assuming they're making decisions using logic, optimization, and information available to them, how can you say the computer utilizing the same is not artificially intelligent?" My personal opinion is that following someone else's plan does not make you (or a piece of software/hardware) intelligent, any more than being able to cook something from an Emeril recipe makes you a world-class chef. It just makes you a cook who can follow a recipe. I think that's the gulf between where we sit. I insist that the system have some method of developing new plans of its own, even if that is a pre-programmed system (like GP/EP). A chess program never deviates from the provided strategy, since the strategy is, "search the move tree X levels deep, looking for the best possible outcome". In fact, without something like a GP/EP algorithm on top of that tree search, chess programs will *never* meet my condition of "AI". In a nutshell, the system must be self-referencing and mutating, with a meta-level system capable of evaluating the success/failure of the mutations to determine which ones to keep and which ones to discard. J.Ja
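The lowest-cost state-transition search described here can be sketched in a few lines. This is only in the spirit of F.E.A.R.'s planner, not its actual code; the states, transitions, and costs are invented for illustration:

```python
# A minimal sketch of cost-based state-transition planning: find the
# cheapest route from the current state to a goal state. The states
# and costs below (kick the door, smash the glass, go around) are
# invented for illustration; this is not F.E.A.R.'s actual code.
import heapq

# transitions[state] = list of (next_state, cost)
transitions = {
    "outside":         [("at_door", 1), ("alternate_route", 5)],
    "at_door":         [("door_kicked", 2), ("glass_smashed", 3)],
    "door_kicked":     [("inside", 1)],
    "glass_smashed":   [("inside", 1)],
    "alternate_route": [("inside", 2)],
}

def cheapest_plan(start, goal):
    """Dijkstra search: returns (cost, lowest-cost sequence of states)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, state, path = heapq.heappop(queue)
        if state == goal:
            return cost, path
        if state in seen:
            continue
        seen.add(state)
        for nxt, step_cost in transitions.get(state, []):
            heapq.heappush(queue, (cost + step_cost, nxt, path + [nxt]))
    return None

print(cheapest_plan("outside", "inside"))
```

Note that every "plan" this search can produce was already authored into the transition table; the search only picks the cheapest path through it, which is exactly the distinction being drawn between appearing to plan and actually devising new strategies.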

Neon Samurai

My board took the name Paranoia BBS, themed after the RPG Paranoia (if only I could find another paper copy; I lost mine long ago). The villain of the game was The Computer, which controlled the very Orwellian/Logan's Run type "protected community" setting. The Computer is all knowing. The Computer is all caring. The Computer is a completely delusional paranoid off its rocker. :) I think the most fun was hearing a couple of other geeks in computer class (high school days, long ago) talking about this great BBS they'd been dialing into. I let them tell me all about it and why I should check it out for a few days, then responded to them over the class Netware message program as The Computer; bwahahaha... I still laugh at the expressions on their faces. (Must cut myself off or this reminiscing will go on for pages.) But the point is that I had some great fun after rewriting the available responses and changing the appearance so it would reply to SysOp talk requests when I was away. Oh, what fun.

Justin James

Yeah, I can see where we have a ton of congruent edges here in our thoughts. The major difference is that you are (as far as I can tell) formally educated (and definitely professionally experienced!) in the topic, and I am not, so I carry certain assumptions about terminology, definitions, etc. into the discussion that you don't have. But it's great to learn! I am not too surprised that the formal AI definition omits consciousness. Can anyone even define that? I sure can't. BTW, the most interesting book on the topic, IMHO, is "Destination: Void" by Frank Herbert. It may be sci-fi, but it brings up zillions of interesting questions about the topic. J.Ja

Justin James

I knew something was up when I typed in, "My mother is purple" and it responded with, "That's interesting. Tell me a bit more about your family." What really hurt was that it took me 10 minutes to discover that "Shut Up" was the command to quit; I really wanted to not have to drop my call and dial back in. :) I have seen an implementation of Eliza on virtually every computing platform, language, and framework I've ever encountered. But it was on a BBS that I saw it first. :) Eliza is a good example of what I'd call "AII": someone put in a set of rules that the system cannot alter, and that are not self-referencing, to imitate the responses that a true intelligence would provide. J.Ja

Neon Samurai

The magic of them lasted all of a minute, and even less if you looked at the config files. It was a chat program that basically used keywords in the live person's sentence to randomly choose from a given list of responses. If I understand this correctly so far, that's about as basic a dumb algorithm as you can get.
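That keyword-then-canned-response mechanism really is only a few lines of code. Here is a minimal sketch; the keywords and responses are invented for illustration and are not Eliza's actual rule set:

```python
# A minimal sketch of an Eliza-style keyword responder: scan the
# input for a known keyword and pick a canned reply at random.
# The rules below are invented, not Eliza's real script.
import random

RULES = {
    "mother": ["Tell me more about your family.",
               "How do you feel about your mother?"],
    "computer": ["Do machines worry you?"],
}
DEFAULT = ["Please go on.", "I see."]

def respond(sentence):
    for keyword, responses in RULES.items():
        if keyword in sentence.lower():
            return random.choice(responses)
    return random.choice(DEFAULT)

print(respond("My mother is purple"))
```

The rules never change at runtime and nothing is self-referencing, which is why the magic evaporates as soon as you see the config file.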

Neon Samurai

Where did I put that book... it's here someplace... I may have to reread it. It was my sci-fi introduction to evolutionary programming, in that the P-1 (The System) program was able to rewrite its own source, evolving its abilities and refining itself over the years. Evolutionary programming is also the stuff the "solution clusters" are using for things like that crazy NASA evolved antenna that looks like a bent paper clip, if I remember correctly. I so very distantly understand that level of programming... but only distantly.

boxfiddler

I think that would well be the point at which the human 'fear factor' comes into play. I am reminded of a short story by Isaac Asimov entitled [u]Reason[/u]. To all intents and appearances, recent cinematic variations on Asimov's theme demonstrate that man is fearful of the potential of AI to answer those questions in a fashion that makes man the enemy of AI. Agent Smith, in his diatribe to Morpheus re: humans as viruses, makes this quite plain. Thanks for that -- you phrased it much better than I did!

Tony Hopkinson

Normally when you talk to the AI guys, or at least their PR men, the solution is just around the corner. They never explain which corner, how far away it is, or what it is a corner of, though. When you get there, would you define a GP that comes up with an EP to a problem you posed -- i.e., you set the starting condition -- as intelligent?

Justin James

... about what you are doing. I did a *bit* (very minimal) of research into that stuff during college and found it quite interesting. That being said, GP/EP is definitely some interesting stuff. Is it a path to AI as I define it (see my other post)? It is, yet it isn't. The program is definitely generating new strategies; indeed, the program itself is an iterative strategy generator at its core. But you have a severe "first mover" problem in the strategy generation, in that the rules of the generation are pre-determined. I definitely agree that even if a GP/EP algorithm is "deep" enough to cross that border, processing power is still a monstrously huge problem. There's a good reason why GP/EP wasn't taken seriously as a realistic approach to AI until relatively recently: it's basically brute forcing its way into the strategy generation in a way that requires a LOT more horsepower than even other attempts require. This is a good reason why things like state machines are being explored, and people are developing "AII" instead of "AI". AII is not too hard to conjure up... "Other people who bought this also bought..." systems are a great example of AII. If that text were changed to "Our experts also think you might like...", it is quite possible that someone might think for a bit that actual people picked those CDs, until experimentation revealed the mechanical nature of it. To me, a defining characteristic of a proper AI is the ability to *surprise*. If the system never comes up with anything I could not have thought of myself, why not just think of it myself and program the computer to do that? Which is exactly what FEAR did. :) J.Ja
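The "Other people who bought this also bought..." mechanism is a good example of how mechanical AII can be; it can be sketched as a simple co-occurrence counter. The purchase data below is invented for illustration:

```python
# A minimal sketch of a "people who bought this also bought..."
# recommender: count which items co-occur with the target item in
# past purchase baskets and return the most frequent ones.
# The baskets below are invented for illustration.
from collections import Counter

purchases = [
    {"cd_a", "cd_b"},
    {"cd_a", "cd_b", "cd_c"},
    {"cd_a", "cd_c"},
]

def also_bought(item, baskets, top_n=2):
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})  # everything bought alongside it
    return [other for other, _ in counts.most_common(top_n)]

print(also_bought("cd_a", purchases))
```

Relabel that output "Our experts also think you might like..." and, for a while at least, it passes the AII test described above.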

collin.schroeder

Honestly, it's a freaking huge problem. Playing a game of chess is nowhere near as complex or computationally expensive as using genetic/evolutionary programming (GP and EP) to solve a specific predefined problem. It still requires a supercomputer to make headway using GP or EP on anything but the most trivial problems. I am writing an EP right now to solve an extremely trivial problem, and my runs will take hours to complete on my quad core. Once computers are fast enough to use a GP to generate an EP to generate a program to solve a specific task, we will be there. But we're not quite there yet.
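For readers who haven't seen one, a trivial EP can be sketched in a few lines -- here, evolving a bit string toward an all-ones target. The population size, mutation rate, and fitness function are invented for illustration; real problems blow these numbers up enormously, which is the cost being described above:

```python
# A minimal evolutionary-programming sketch: evolve a 16-bit genome
# toward all ones. Elitist selection (the fittest half survives
# unchanged) plus per-bit mutation; all parameters are invented.
import random

GENOME_LEN = 16

def fitness(genome):
    return sum(genome)  # count of 1-bits; maximum is GENOME_LEN

def mutate(genome, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=20, generations=200):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == GENOME_LEN:
            break
        # keep the fittest half, refill with mutated copies of survivors
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return population[0]

best = evolve()
print(fitness(best))
```

Even this toy needs thousands of fitness evaluations to solve a 16-bit problem; scale the genome up to a real program and the supercomputer requirement stops being surprising.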

Justin James

I've been hearing the "not enough CPU power" argument for a while now. I think it is a crutch excuse at this point. My cell phone can beat 99.99% of humans in chess without even working up a sweat. Yet the best effort we've seen is transitioning within state machines, which is a fairly trivial calculation -- as in, "this would work fine on an 8086." I think if it was 1985 or so, and everyone was butting their heads against the issue using Lisp (notoriously slow to begin with), the failure to succeed would definitely look like it was caused by a lack of processing power (which is exactly what happened). That's why, in 1985, they could say, "by XYZ, we'll have decent AI": you could extend Moore's Law, guesstimate how much CPU your existing techniques would need to be effective, and solve for XYZ. Well, we're past EVERYONE'S best guess for "XYZ", and have been for a while. The majority of the 1985-era techniques have been abandoned, from what I can tell, so it is clear that the techniques were flawed too. But from what I've been reading, I think folks are still looking for the right approach at this point, and the right approach may not need much processing power at all (hopefully!). J.Ja

jmgarvin

Agreed... which is why fuzzy logic fits perfectly: it will scale to your skill level and make the game difficult for YOU. Tailor-made AI is the future, but there are a number of things that need to happen first -- not to mention that fuzzy logic requires a LOT of initial input to function well.
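One way to picture the fuzzy-logic difficulty scaling described here: the player's hit rate has partial membership in "novice" and "expert" sets, and enemy accuracy blends between easy and hard settings accordingly. This is a minimal sketch; the membership breakpoints and accuracy values are invented for illustration:

```python
# A minimal fuzzy-logic sketch: scale enemy accuracy to the player's
# measured hit rate. Membership functions and settings are invented.

def novice(hit_rate):
    """Membership in 'novice': 1.0 at a 0% hit rate, fading to 0.0 at 60%."""
    return max(0.0, min(1.0, (0.6 - hit_rate) / 0.6))

def expert(hit_rate):
    """Membership in 'expert': 0.0 below a 40% hit rate, rising to 1.0 at 100%."""
    return max(0.0, min(1.0, (hit_rate - 0.4) / 0.6))

def enemy_accuracy(hit_rate, easy=0.2, hard=0.8):
    """Defuzzify: blend the easy and hard settings by membership weight."""
    n, e = novice(hit_rate), expert(hit_rate)
    return (n * easy + e * hard) / (n + e)

print(round(enemy_accuracy(0.5), 2))  # a middling player gets a middling enemy
```

The "LOT of initial input" caveat is visible even in this toy: someone has to hand-tune every breakpoint and output setting before the system behaves sensibly.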

JamesRL

Not an AA player; Battlefield is my game. I've played some fun scrims against other clans, but I'm not in the competitive ladder, though I help our clan's team in practices. And I use an entirely different name in Battlefield. James

Justin James

... is that clan in America's Army, by any chance? I have this nagging feeling that I encountered someone with your name while playing that game a few years back, when I played Bridge SE all the time. J.Ja

JamesRL

I belong to a clan where we take hacker hunting very seriously. There are steps you can take to reduce, but not eliminate, cheaters, and we do all we can. That makes it better for the majority of us who don't cheat. James