A side interest of mine is deep learning. I always remember playing games when I was younger
(particularly Civilization) and, while I enjoyed them, wishing the AI were more challenging.
In Civilization, the AI difficulty levels have, at least in past versions, involved
calibrating the bonuses (cheats) the AI gets against you, rather than actually adjusting the
skill of its tactics. Deep learning has the capacity to create an AI that is both a
challenging adversary (even prohibitively so) and human-like in the errors it makes.
In theory, to adjust how strong a deep neural network is at playing a game, you can just
train it for more or less time (not unlike a human!).
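As a rough sketch of that idea (this is not code from Arcane Fortune or my Go work, just an illustration on a toy model), you can save checkpoints of a network at different points during training and let a hypothetical difficulty setting pick which checkpoint the opponent uses -- earlier checkpoints play weaker:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a game-playing network: logistic regression on a
# synthetic "position evaluation" task.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)

def loss(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def grad(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

# Train with plain gradient descent, saving a checkpoint every so often;
# each checkpoint is a candidate difficulty level (earlier = weaker).
w = np.zeros(8)
checkpoints = {}
for step in range(1, 301):
    w -= 0.5 * grad(w)
    if step in (10, 100, 300):
        checkpoints[step] = w.copy()

# A (hypothetical) difficulty setting just selects a checkpoint.
difficulty_to_step = {"easy": 10, "medium": 100, "hard": 300}
opponent_w = checkpoints[difficulty_to_step["hard"]]
```

A real game would checkpoint a deep network rather than a linear model, but the principle is the same: one training run yields a whole ladder of opponents of increasing strength.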
Anyway, largely unrelated to Arcane Fortune, I have worked on re-implementing some of DeepMind's algorithms for playing Go, and I've had some, at least in my opinion, encouraging results even with the modest hardware that I own. (In general, training the algorithms, which only needs to be done once, is the computationally difficult part -- running the networks for play can be done on pretty much any computer; even phones these days can easily run neural network inference.) Below are some articles I've posted on Medium describing some of my work with Go.