Learning / Neural AI in Arma / Video Games
#1
Hello! Many people won’t know this, but I went to university and studied how to program and make video games. In my final year I researched learning agents: AI which can be taught and which changes its behaviour based on what it sees.
I thought I would start this topic to discuss the advantages, disadvantages and any questions people have about the possibility of learning agents in video games. Below I have attached some common questions.
Just to explain my background: I have only just graduated, so I am no expert and I do not fully understand all of this; it is based on my current experience and what I have learnt. I could be wrong, and this information will likely be outdated in a few months’ time, as this field is expanding and changing. I am here to learn and share this awesome thing. I am going to do an FAQ and will explain things at a beginner level. I would love for people to get involved, ask questions and talk about this.
Keep it kind, keep it respectful.
Just ask if you want me to respond in more detail, and I will.

So, FAQ:

Is an AI that learns possible? Yes; a few companies have already built some.

Is an AI that learns possible in a video game? Yes. Unity has an SDK for it (the ML-Agents Toolkit), although it is really basic.

Why don’t all games have learning agents? It takes a lot of time and resources to train them; it took me a week just to teach an AI not to shoot a wall, and it would still do it. Current learning technology is still being developed. Also, the current “standard” in video game AI works well, and most of the time that AI is realistic.

What is the best use for a learning agent? A commander. It is the easiest thing for agents to learn: they do not need to learn complex tasks, just strategy, so it is pure learning, which is good.

Would a learning agent be possible in Arma? I do not fully know; I was researching it. You would most likely need to break into the engine and stop Arma’s own AI. The best use case I came up with was the enemy commander controlling what the AI did.

Do I see learning agents in the future of video games? Yes, but I think they will be rarer than I would like. I think it is a really cool feature for certain use cases.
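To make the "commander that learns strategy" idea concrete, here is a minimal sketch of tabular Q-learning, a classic reinforcement-learning algorithm. The states, orders and reward here are invented purely for illustration; they are not from any real Arma project:

```python
import random

# Hypothetical commander states and orders -- invented for illustration.
states = ["enemy_spotted", "taking_fire", "quiet"]
actions = ["advance", "hold", "flank"]

# Q-table: the learnt estimate of long-term reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def choose_action(state):
    # Epsilon-greedy: mostly exploit the best-known order, sometimes explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Standard Q-learning update rule.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One simulated experience: flanking while taking fire paid off (+10 reward).
update("taking_fire", "flank", 10.0, "quiet")
```

After many such updates the table steers the commander towards orders that historically paid off, which is the "pure learning" appeal mentioned above: no animation or pathfinding, just decisions and rewards.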

#2
Are there any notable or AAA games out there that you know of that use this type of AI? Very interesting thread, by the way.

#3
Short answer: no, without a mod there is no current game which actually uses it.
Long answer: yes, some games have been rumoured to use it, and there are projects for most classic or well-known games in which a learning agent plays the game.
There is a difference between an agent playing the game and actually being the AI that players go up against.
Both Dota and StarCraft have had deep learning agents as players, and there was a fighting series where you would train against an AI which imitated you.
Planetary Annihilation apparently has it.
The current issue with NN/learning agents is that the technology is still very early.
There are a lot of people experimenting and building small projects, though.

#4
I think the biggest challenges with AI and machine learning in this context are data collection, reward functions, and probably also the search/state space.

First of all, the current state of the art shows that you need vast amounts of data to have any chance of creating an AI that is equal to, or smarter than, human players. For example, Google's AlphaGo (the AI that beat the world's best Go player multiple times) uses what is called deep learning. Deep learning tries to mimic the way humans learn, by creating many layers of abstractions and repeatedly trying different approaches to see what works best. AlphaGo played thousands, if not millions, of games against itself to learn the best moves in any situation. In the ArmA context: having a kind of self-learning AI requires tons of data, and to get this data the AI needs the ability to run simulations against itself, without human intervention. This way it can learn certain things by itself, for example that shooting through walls is hard and shooting through windows is easier. Of course the AI can learn while we are playing against it, but this won't be fast enough, as there won't be enough data points (playing ArmA in real time takes a long time).
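The self-play loop described above can be sketched in a few lines. Everything here (the toy "game", the random policy, the win/loss rule) is a stand-in for illustration, not AlphaGo's actual pipeline; the point is that data generation needs no human in the loop:

```python
import random

def play_episode(policy):
    """Simulate one game of the agent against itself and return
    (state, action, outcome) tuples -- the raw training data."""
    history, state = [], 0
    for _ in range(10):                  # fixed-length toy episode
        action = policy(state)
        state += action                  # trivial stand-in for game dynamics
        history.append((state, action))
    outcome = 1 if state > 0 else -1     # win/loss signal for the whole game
    return [(s, a, outcome) for s, a in history]

random_policy = lambda s: random.choice([-1, 1])

# No human intervention needed: just run as many episodes as compute allows.
dataset = [sample for _ in range(1000) for sample in play_episode(random_policy)]
print(len(dataset))  # 10000 (state, action, outcome) samples
```

A thousand simulated episodes yield ten thousand training samples in milliseconds, whereas collecting the same amount of data from humans playing ArmA in real time would take days.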

Then, there is the reward function. A reward function in machine learning basically determines whether one action is better than another action. For example, if you have a road navigation AI and it arrives at a junction where it can go left or right, it has to make the correct decision. If the goal is to minimise the distance travelled, and going left results in a 10 km drive while going right results in a 15 km drive, then it is clear that going left offers the best reward (i.e. minimal distance). The way this reward function is defined influences the behaviour of the AI. If you minimise for time instead of distance, and going left is a 20 minute drive while going right is a 10 minute drive, the outcome would be completely different. In the ArmA context: for the AI to know the best action to take at any given point, it needs some kind of reward function. For example, going prone when under fire is worth 10 points, but just standing there and doing nothing is worth -5 points. Of course, the number of actions (shooting, moving, cover, support, etc.) an AI can take at any point in time in ArmA is massive, and it somehow needs to quantify the reward for each action. And then there is the fact that some actions do not offer an immediate reward. For example, when facing two enemies, do you shoot or take cover? You may kill one, but then you could sustain injuries from the other enemy. Taking cover means you kill neither enemy, but you stay alive longer (a delayed reward).
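A reward function of the kind described above is, in the end, just a mapping from (state, action) to a number. This sketch uses the hypothetical point values from the paragraph; the state and action names are invented for illustration:

```python
def reward(state, action):
    """Toy ArmA-style reward function. The states, actions and point
    values are the hypothetical examples from the discussion above."""
    if state == "under_fire":
        if action == "go_prone":
            return 10      # taking cover under fire is rewarded
        if action == "stand_still":
            return -5      # doing nothing is penalised
    return 0               # everything else is neutral for now

# The agent simply prefers whichever action scores highest in its state.
candidate_actions = ["go_prone", "stand_still"]
best = max(candidate_actions, key=lambda a: reward("under_fire", a))
print(best)  # go_prone
```

The hard part is not writing such a function but covering ArmA's enormous action set with sensible values, and handling delayed rewards (like taking cover now to survive later), which simple per-step scoring cannot express.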

And finally, we come to the search/state space. An AI solves problems: it has a certain goal and tries to optimise its actions to achieve this goal. When playing Chess, each piece on the board has a certain pattern in which it moves; the rules of the game are crystal clear. There is only a finite (though large) number of moves, and the entire collection of all possible moves is called the search space. This allows the AI to look ahead, calculate all possible moves and determine which moves have the best outcome (i.e. which move has the highest reward). The more complex the game is, the more states are possible and the larger the search space will be. In the ArmA context: there are so many actions that an AI can take while out in the field, ranging from movement, to engaging enemies, to taking cover, to providing support, and so on. The rules of the game are not nearly as clear as in Chess or Checkers, for example. Because there are so many actions, the search space to find the best action at any point in time is massive, and searching it would require more computational power than any of our PCs could possibly have.
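The "look ahead and calculate all possible moves" idea can be sketched as exhaustive tree search over a toy game (a single number that moves nudge up or down; real game-tree search also alternates between players, which is omitted here for brevity). The point is how quickly the leaf count grows:

```python
def best_move(state, depth, moves, apply_move, score):
    """Exhaustive lookahead: explore every move sequence `depth` plies deep
    and return the first move of the sequence with the best final score."""
    def search(s, d):
        if d == 0:
            return score(s)
        return max(search(apply_move(s, m), d - 1) for m in moves)
    return max(moves, key=lambda m: search(apply_move(state, m), depth - 1))

# Toy game: the state is a number, moves add to it, the score is the state.
moves = [-1, 0, 1]
result = best_move(0, 3, moves, lambda s, m: s + m, lambda s: s)
print(result)  # 1 -- and this tiny tree already has 3**3 = 27 leaves

# With branching factor b and depth d there are b**d leaves: Chess at
# roughly 35 legal moves searched 6 plies deep is already ~1.8 billion
# positions, and an open-world game like ArmA is far worse on both counts.
```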

I agree with what Buxton said about having a more sophisticated AI acting as a commander. When looking at a tactical level or even a strategic level, the search space is reduced massively as the number of possible actions is greatly reduced. There could be a self-learning AI that controls how squads/platoons/companies behave on a macro level (strategic, maybe even tactical). I think (deep) reinforcement learning could have potential there. But having this kind of AI for every soldier in the field would be too difficult at this point in time.
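A rough back-of-the-envelope comparison shows why the commander level is so much more tractable. All the numbers here are invented for illustration; only the combinatorics matter:

```python
# Hypothetical sizes, purely for illustration.
soldiers, actions_per_soldier = 40, 20   # each soldier chooses independently
squads, orders_per_squad = 4, 6          # the commander issues one order per squad

# Joint decision space for a single decision tick at each level.
individual_space = actions_per_soldier ** soldiers   # 20**40, astronomically large
commander_space = orders_per_squad ** squads         # 6**4

print(f"per-soldier joint action space: {individual_space:.2e}")  # ~1.10e+52
print(f"commander order space: {commander_space}")                # 1296
```

Even with made-up numbers the gap is dozens of orders of magnitude, which is why a learning commander looks feasible while per-soldier learning agents do not.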

This topic is super interesting (I'm a researcher in a somewhat related field), but I think we are nowhere near having this type of self learning AI in games that are as complex as ArmA. Games such as DotA and Starcraft have fixed rules, limited actions, and are not open world sandboxes like ArmA.

#5
(02 Aug 20, 06:56) B.De Jong Wrote: I think the biggest challenges with AI and machine learning in this context are data collection, reward functions, and probably also the search/state space.

Buxton Says:
Yeah, +1000 to what De Jong says; from what I know this is all correct. The future for video games will likely be reward-based learning, or analysing what players are doing and reacting to it. The simple way to put it is that having a brain for each agent would get laggy: ten agents running the learning software made my project run at 30 fps, so I am very interested to see what video games will do.
One of the use cases I think we will see is AI players for multiplayer sessions which learn and react, used to fill in a low-population server and let you still enjoy and play the game without knowing the difference.
I think if it is pushed you could use it for anything in video games and any agent; you just need people to start using it.
My prediction is that the first game to really use it will appear around 2021 to 2022, thanks to new technology and all that.


