AlphaGo uses two deep convolutional neural networks: a policy network that proposes promising moves and a value network that evaluates board positions. During play it combines both networks with Monte Carlo tree search, using the policy network to guide which branches to explore and the value network (together with rollouts) to score the resulting positions.
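The interplay of policy priors, value evaluations, and tree search can be sketched as a minimal PUCT-style loop. This is a simplified illustration, not AlphaGo's actual implementation: the two "networks" below are hypothetical stubs (uniform priors, random evaluations), and the game interface (`legal_moves`, `apply_move`) is supplied by the caller.

```python
import math
import random

def policy_net(state, moves):
    """Stub policy network: uniform prior over legal moves."""
    return {m: 1.0 / len(moves) for m in moves}

def value_net(state):
    """Stub value network: random evaluation in [-1, 1]."""
    return random.uniform(-1.0, 1.0)

class Node:
    def __init__(self, prior):
        self.prior = prior          # P(move) from the policy network
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}          # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def puct_select(node, c_puct=1.0):
    """Pick the child maximizing Q + U, where U favors high-prior,
    low-visit moves (the PUCT rule)."""
    total = sum(ch.visits for ch in node.children.values())
    def score(item):
        _, ch = item
        u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        return ch.q() + u
    return max(node.children.items(), key=score)

def mcts(root_state, legal_moves, apply_move, n_sims=100):
    root = Node(prior=1.0)
    for m, p in policy_net(root_state, legal_moves(root_state)).items():
        root.children[m] = Node(prior=p)
    for _ in range(n_sims):
        node, state, path = root, root_state, []
        # Selection: descend with PUCT until reaching a leaf.
        while node.children:
            move, node = puct_select(node)
            state = apply_move(state, move)
            path.append(node)
        # Expansion: attach children with policy-network priors.
        moves = legal_moves(state)
        for m, p in policy_net(state, moves).items():
            node.children[m] = Node(prior=p)
        # Evaluation: score the leaf with the value network.
        v = value_net(state)
        # Backup: propagate the value up, flipping sign each ply.
        for n in reversed(path):
            n.visits += 1
            n.value_sum += v
            v = -v
    # Play the most-visited root move.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

For example, on a toy game where states are integers, moves are 0 or 1, and the game ends after three plies, `mcts(0, lambda s: [0, 1] if s < 3 else [], lambda s, m: s + 1)` returns one of the two legal root moves.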
AlphaGo beat Fan Hui 2p 5-0 in a five-game match (19×19 boards, no handicap) in October 2015. The news was not announced until January 27th, 2016. At that point it was more than four stones stronger than any other Go-playing program.
The paper describing it: https://storage.googleapis.com/deepmind-data/assets/papers/deepmind-mastering-go.pdf