DeepMind's AlphaStar wipes out professional StarCraft 2 players


Using a regular desktop with a single GPU for inference, AlphaStar defeated two human pro players, winning all 10 matches.

So far, though, the AI is limited to playing Protoss against Protoss opponents on a single map.


Once AI starts fixing or generating its own code… Pandora's box is open.


Question: does the AI operate keyboard controls and view the screen like a human player, or is it plugged into the game somehow?


Blizzard released an open API for StarCraft 2. AlphaStar is plugged into the game through that API to gather information and control elements.

It is still confined by the fog of war and can’t see invisible units without detectors.


The reason I asked is that it would then have an advantage in both the speed and the accuracy of issuing commands.


These neural networks are less about code and more about network architecture. There are already things like AutoML that generate an optimal network structure using another neural network.
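
At its core, an architecture search can be surprisingly simple. Here's a toy sketch of the idea (my own illustration, not Google's AutoML; real systems use a learned controller network or evolution rather than this dummy random search and made-up scoring):

```python
import random

# Toy search space: number of layers, units per layer, activation.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    """Draw one candidate network structure from the search space."""
    return {key: rng.choice(options) for key, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training the candidate and measuring validation
    accuracy; here we just score a made-up preference for deeper,
    narrower networks so the example stays self-contained."""
    return arch["num_layers"] * 10 - arch["units"] / 32

def random_search(trials=50, seed=0):
    """Sample candidates and keep the best-scoring architecture."""
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(trials)]
    return max(candidates, key=evaluate)

best = random_search()
print(best)
```

Real NAS replaces `evaluate` with actually training each candidate, which is why the search itself is usually the cheap part and the evaluation the expensive one.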


Discussion by Vox about this linked below. Lots of weird little bits and pieces, like how they gave the AI 200 years of practice. I wonder how many life-years these AIs are going to have before we recognize that they're sentient? (I don't mean they're at that stage yet, but I strongly suspect they'll reach that stage before we're aware of it or willing to accept it):


Both the average and max APM of AlphaStar are lower than a typical professional StarCraft 2 player's, so it is not really faster. However, I think during skirmishes AlphaStar has a higher sustained APM. There are no measurements for that; it's just what I observed in the video. Even then, AlphaStar tops out under 2000 APM.

It probably enjoys higher accuracy when it comes to selecting units, though. There's open-source code for AIs to select units in StarCraft without the mouse. That said, it seems AIs can only issue commands the same way human players can; that is, only the commands mapped to the keyboard are available to them.


DeepMind’s official blog with updated info about the inner workings of AlphaStar.


We need to define actions per minute first. For example, a lot of players keep clicking until a unit reaches its destination, but that's really only one action, and the AI, lacking nervousness and impatience, won't do that. Also, while it is limited by the fog of war, it probably still watches the whole "visible" map at all times; a real human has to click all around continuously to do that.
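
To make that concrete, here's a rough sketch of one possible definition (entirely my own, not Blizzard's or DeepMind's) that collapses nervous re-clicks of the same command into a single effective action:

```python
from dataclasses import dataclass

@dataclass
class Action:
    time: float      # seconds since match start
    command: str     # e.g. "move", "attack", "select"
    target: tuple    # map coordinates or unit ids

def effective_apm(actions, repeat_window=1.0):
    """Count actions per minute, treating an identical command repeated
    within `repeat_window` seconds (nervous re-clicking) as one action.
    The 1-second window is an arbitrary choice for illustration."""
    if not actions:
        return 0.0
    counted = []
    for act in sorted(actions, key=lambda a: a.time):
        if counted and act.command == counted[-1].command \
                and act.target == counted[-1].target \
                and act.time - counted[-1].time <= repeat_window:
            counted[-1] = act  # refresh the window, don't count again
            continue
        counted.append(act)
    duration_min = max(a.time for a in actions) / 60
    return len(counted) / duration_min if duration_min > 0 else float(len(counted))

# Five rapid move clicks to the same spot count as one effective action;
# with two more distinct commands over a 30-second span, that's 3 actions
# in half a minute, i.e. an effective APM of 6.
spam = [Action(t, "move", (10, 20)) for t in (0.0, 0.2, 0.4, 0.6, 0.8)]
distinct = [Action(5.0, "attack", (30, 40)), Action(30.0, "select", (1,))]
print(effective_apm(spam + distinct))
```

A raw APM counter would report 14 for the same replay, which is exactly the gap between "spam clicking" and what the AI actually needs to do.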


It really doesn't work like that. Just because it can carry out a complex action doesn't mean it will understand anything else, let alone a concept as obscure as self-awareness. We would need to give it code that defines self-awareness, or let it evolve by writing its whole codebase from the ground up. Some idiot will do that at some point, but we'll come up with laws that prevent it from happening legally before we get that close.


That’s discussed in the article I linked to above:

During the 10 matches, the AI had one big advantage that a human player doesn’t have: It was able to see all of the parts of the map where it had visibility, while a human player has to manipulate the camera.

DeepMind trained a new version of AlphaStar that had to manipulate the camera, and used the same process — 200 years of training and then selecting the best agents from a tournament. This afternoon, in a live-streamed competitive match, this new AlphaStar AI lost to MaNa — it looked significantly hampered by having to manage the camera itself, and wasn’t able to pull off as many of the spectacular strategies the other versions of AlphaStar had managed in earlier games.

But they assume it’ll overcome that difficulty in a hurry.

I know these are nowhere near sentient yet, but we’re not entirely sure what makes us sentient; we still debate whether or not different animals have self-awareness; and government laws aren’t even close to catching up to the internet. I’m very skeptical that the emergence of self-aware AI is going to be something controlled and managed. I’ve probably read too much sci-fi, but I wouldn’t be surprised if the first self-aware entity we create is an accident that has memories of being gruesomely murdered in some computer game millions of times. (I get the feeling I’m stealing a Black Mirror plot here.)

Again, I assume this is “distant future”, but given how tech can proceed in sudden fits and starts it may be within 20-30 years.


Perhaps the AIs will one day be able to tell us? :slight_smile:

I must say this stuff freaks me out; the rapidity of progress is mind-boggling. I recall building NNs from scratch at university on 68000- and 68020-based computers. They clocked in at something under 1 MFLOPS. That was 30 years ago, and now we have GPUs and ASICs that can perform the same computations 1000 times faster on models 1000 times bigger. What the hell is going to happen in the next 30 years?
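
For what it's worth, the implied growth rate works out like this (a pure back-of-the-envelope extrapolation from the rough figures above, assuming the trend even holds):

```python
import math

years = 30
growth = 1000 * 1000  # 1000x faster hardware running 1000x bigger models

# A ~10^6 increase in effective compute over 30 years implies a
# doubling time of years / log2(growth), about 18 months.
doubling_time = years / math.log2(growth)
print(f"effective compute doubled roughly every {doubling_time:.1f} years")
```

If the same rate continued, the next 30 years would bring another million-fold increase; of course, nothing guarantees the trend survives physics or economics.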


Pretty impressive, although the first AI opponent did some pretty weird stuff; did you see it build that stack of observers? It was also quite a risk-taker, engaging the enemy on ramps. One of the AI opponents (it may also have been the first) did some pretty sloppy scouting. All in all, not bad, but it didn't blow me away. These models still have a long way to go.


The problem is that neural network based AIs aren’t about code. They are about neural network structures.

It's pretty clear that humans have not yet discovered what sort of neural characteristics make something sentient. We probably still haven't ironed out what we mean by sentience.

If by sentience we mean the ability of an entity to have subjective perceptual experiences, then for now it's hard to see sentience in any current neural network.

AlphaStar would have to be able to show itself being happy or unhappy with the match results, or show that it likes or dislikes playing StarCraft, to be called sentient.

Perhaps we haven’t figured out a neural network structure capable of sentience. Since we don’t know what that network would look like, there’s a good chance we might discover it without realizing that we’ve created sentience.


You can probably fake sentience.