I'm no AI scientist, but if this stuff was efficient and useful for game design, someone would be using it already.
Indeed.
Not going to spend much time rehashing, since this kind of thread pops up every few months, but to summarize:
- It's not efficient to play hundreds of thousands of campaigns of a PDS game for training; this would require an enormous server farm or the equivalent in cloud compute costs (see the back-of-envelope sketch after this list).
- Pure black-box approaches (e.g. just feeding in video frames) are entirely too inefficient, so the inputs (features) have to be carefully chosen by the AI developer, and so do the outputs; a sketch below shows what hand-picked features look like. A database approach implies exposing all of the game implementation to the AI, which is massively constraining for gameplay programmers.
- The game changes all the time during development, except at the very end, which repeatedly breaks AI that was working. And while retraining an ML algorithm on the new gameplay is itself cheap, the inputs have to be carefully re-chosen each time (if nothing else because computing them for all agents is costly).
- Defining the objective function is effectively impossible (no, it's not only about winning); in essence it's about pissing off as few players as possible, when those players have wildly differing individual expectations. Players assume the AI is bugged whenever they don't understand its reasoning. The reward sketch below illustrates why "winning" alone doesn't cut it.
- On a technical note, this would require reinforcement learning, which is far trickier to get working robustly than supervised learning (the Q-learning sketch at the end shows why).
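
To make the first bullet concrete, here's a back-of-envelope estimate. Every number in it is an assumption picked for illustration, not a measurement from any real project:

```python
# Back-of-envelope cost of self-play training for a grand strategy game.
# All numbers below are illustrative assumptions, not measurements.

campaigns = 200_000          # assumed number of training campaigns
hours_per_campaign = 2.0     # assumed wall-clock hours per accelerated campaign
cores_per_campaign = 4       # assumed CPU cores to simulate one campaign
usd_per_core_hour = 0.05     # assumed cloud price per core-hour

core_hours = campaigns * hours_per_campaign * cores_per_campaign
cost_usd = core_hours * usd_per_core_hour

print(f"{core_hours:,.0f} core-hours, roughly ${cost_usd:,.0f}")
# -> 1,600,000 core-hours, roughly $80,000 for a single training run,
#    and every gameplay change during development invalidates it.
```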
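For the second bullet, here's a minimal sketch of what hand-chosen inputs might look like for one agent. The feature names, the `GameState` accessors, and `extract_features` are all hypothetical placeholders, not anything from a PDS codebase:

```python
from dataclasses import dataclass

@dataclass
class CountryFeatures:
    """Hand-picked inputs for one AI agent. Choosing and maintaining this
    list is the expensive part: every feature must be computed for every
    agent each tick, and any gameplay change can invalidate any of them."""
    relative_army_strength: float   # own strength / strongest neighbour
    treasury_months: float          # months of expenses the treasury covers
    at_war: bool
    n_hostile_borders: int
    stability: float                # normalised -1..1

def extract_features(state, country_id) -> CountryFeatures:
    # `state` and its accessors are hypothetical stand-ins for whatever
    # the game simulation actually exposes.
    me = state.country(country_id)
    neighbours = state.neighbours(country_id)
    strongest = max(neighbours, key=lambda c: c.army_strength)
    return CountryFeatures(
        relative_army_strength=me.army_strength / max(strongest.army_strength, 1.0),
        treasury_months=me.treasury / max(me.monthly_expenses, 1.0),
        at_war=bool(me.wars),
        n_hostile_borders=sum(1 for c in neighbours if c.hostile_to(me)),
        stability=me.stability,
    )
```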
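On the objective function: a sketch of a composite reward shows why "winning" alone isn't it. The terms and weights here are invented for illustration:

```python
# Illustrative only: the metric names and weights are made up.
def reward(metrics: dict) -> float:
    return (
        0.3 * metrics["win_progress"]          # raw competence
        + 0.4 * metrics["plausibility"]        # did moves look humanly explicable?
        - 0.4 * metrics["player_frustration"]  # survey/telemetry proxy
        - 0.2 * metrics["exploit_usage"]       # "optimal" but game-breaking tricks
    )
```

Even granting a function like this, `player_frustration` has no clean ground truth, and different player segments would want different weights, which is the real point of that bullet.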
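And on the last bullet, a minimal tabular Q-learning sketch shows the two things that make RL fragile compared to supervised learning: the training target is bootstrapped from the values being learned (so it moves as training progresses), and behaviour hinges on the exploration schedule. The `env` object is a hypothetical environment following the usual reset/step convention:

```python
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)  # (state, action) -> value estimate
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Exploration: a bad epsilon schedule alone can sink training.
            if random.random() < epsilon:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda a: Q[(s, a)])
            s2, r, done = env.step(a)
            # Bootstrapped target: it shifts as Q itself changes, unlike
            # the fixed labels of supervised learning.
            target = r if done else r + gamma * max(Q[(s2, b)] for b in env.actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```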