In a mostly grateful comment on my review of his
Raging Tiger and
Falklands War 1982,
Curt Pangracs writes that I underrate the AI in those games when I refer to it as "lackluster".
A few days later, veteran wargame reviewer
Jim Cobb writes an editorial on Combat Sim that accuses developers of not spending enough time or energy on the development of competent computer opponents. (Full disclosure: I owe Mr. Cobb a review of
Blitzkrieg II that I *promise* to get around to very soon.)
The "lackluster AI" criticism I levy at the ATF games is, I admit, wargamer boilerplate. Hell, I suspect that the word "lackluster" is found more often in game magazines than in any other press form. As I explain in my reply to Mr. Pangracs, my more general point was that the AI seemed well able to handle the expected and obvious, but not the creative. Cobb's complaints about AI in are along similar lines, only he wants the AI to not just respond to creativity, but be capable of (or programmed to) surprise.
All of this raises the obvious question of what to expect in wargame AI. What makes an opponent believable?
There is one sector of wargaming opinion that holds that, since most real wargamers seek out human opponents, energy spent on the AI is wasted to begin with. To me, this puts the cart before the horse. If wargame AI were, in general, better, there would be less need or desire to seek out humans.
I have a more basic question, provoked by Pangracs' reply to my review. Do we know good AI when we see it? And should we believe what we are told by developers?
Really bad AI is easy enough to recognize. It was very common in early sports management sims, where opposing GMs would never challenge you for big free agents. Early wargames had computer opponents with a good sense of the mathematical value of objectives but a poor sense of geography. The latest offering from Paradox,
Diplomacy, has multiple AI opponents, none of whom are sharp enough to cut soft cheese.
When a game brags about its AI, it's never a good sign. The chaotically stupid
Superpower games were promoted on their realism and "learning" opponent. Make a game complicated enough and it may appear that the AI is learning (people can convince themselves of anything), but even if the AI were good, those games are far too random to tell how good.
But AI that ranges from OK to good is hard to detect. Most computer opponents on higher difficulty settings are just given more advantages; they "cheat" in order to provide a challenge. This is not greater intelligence, of course, so a challenging game is not a sign of a good AI.
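To be concrete about what that kind of "cheating" usually looks like, here is a minimal sketch in Python (the names and numbers are invented, not from any particular game) of a difficulty setting that fattens the AI's resources without touching its decision-making at all:

    from dataclasses import dataclass

    @dataclass
    class Side:
        supply: float
        combat_strength: float

    # Hypothetical difficulty multipliers; the AI's "brain" never changes.
    DIFFICULTY_BONUS = {"easy": 0.8, "normal": 1.0, "hard": 1.3, "brutal": 1.6}

    def apply_difficulty(ai_side: Side, level: str) -> Side:
        """Scale the AI's resources by difficulty; the decision logic is untouched."""
        bonus = DIFFICULTY_BONUS[level]
        return Side(supply=ai_side.supply * bonus,
                    combat_strength=ai_side.combat_strength * bonus)

    print(apply_difficulty(Side(supply=100.0, combat_strength=50.0), "hard"))
    # Side(supply=130.0, combat_strength=65.0) -- same moves, fatter numbers

A "hard" opponent built this way is more challenging without being one bit smarter.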
Wargame AI seems easy to program. There are limited goals defined by the scenario. There are limited resources available and rarely a need to produce more (most "strategic wargames" like
Grigsby's World at War are strategy games to me, not wargames). Include a combat resolution table or a sense of supply assets depreciating through some mathematical thingamajig, and voila.
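Here, roughly, is that "easy" recipe sketched in Python, with every value invented for illustration: score the scenario's objectives, grab the most valuable one, and resolve combat on an odds-based combat results table.

    # Toy combat results table: odds ratio -> outcome (all entries invented).
    CRT = {1: "attacker loss", 2: "exchange", 3: "defender retreats", 4: "defender eliminated"}

    def best_objective(objectives):
        """Naive goal selection: take the objective worth the most victory points."""
        return max(objectives, key=lambda o: o["value"])

    def resolve(attack: float, defense: float) -> str:
        """Clamp the attack/defense odds ratio into the table and look up a result."""
        odds = max(1, min(4, int(attack // max(defense, 1.0))))
        return CRT[odds]

    target = best_objective([{"name": "bridge", "value": 5}, {"name": "town", "value": 12}])
    print(target["name"], resolve(attack=14.0, defense=6.0))   # town exchange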
Apparently not so easy. Even in a wargame as simplistic as
Rome: Total War's battles, the computer opponent can easily be convinced to prioritize its general's uber-power over that same unit's importance to the preservation of the army. Result: suicidal generals who are easily destroyed.
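The failure is easy to reproduce in miniature. Below is a hypothetical Python sketch (the weights and names are mine, not from the actual game) of a target-scoring function in which raw combat power drowns out any penalty for risking an irreplaceable unit:

    def attack_score(unit_power: float, is_general: bool, loss_chance: float) -> float:
        """Buggy weighting: power dominates, preservation barely registers."""
        preservation_penalty = (5.0 if is_general else 1.0) * loss_chance
        return unit_power - 0.1 * preservation_penalty   # penalty too small to matter

    # The general (power 9.0) outscores an ordinary unit (power 3.0) even when he is
    # nearly certain to be lost, so a greedy planner keeps throwing him at the enemy.
    print(attack_score(9.0, True, 0.9))    # 8.55
    print(attack_score(3.0, False, 0.2))   # 2.98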
So what do we expect? An opponent that plays by the historical rules is fine, even though, as Cobb notes, any human opponent who did things purely historically would be beaten, because you're not dumb enough to act historically when you attack him. A computer opponent who has more than one programmed opening and the good sense to know when stalling an advance would work.
As I still struggle with Noble level in
Civilization IV (I love games, but fear I'm not very good at them), I am reminded of one of the most enjoyable wargames I've ever played:
Sid Meier's Gettysburg. I won't deny that it is more fun in multiplayer. One of my MP arch-nemeses and I have many war stories to tell about the times he took a strong position on a hill or when I force-marched reinforcements through the woods to hit his rear. All great times.
But the computer opponent was more than acceptable. It seemed to know how to regroup, when to withdraw its guns and where to withdraw them to; it could scout, would extend its line, would refuse its flanks when in trouble... Sure, with practice I could beat it pretty soundly. But it took a lot of practice.
Does this mean that AI is not all that hard? Probably not. Meier probably had some tricks up his sleeve, or, like many gamers, I have chosen to believe something that is not exactly true.
So maybe we don't really need better AI. We just need to be fooled better.