Posts

Showing posts from 2018

Brief intro to recurrent neural networks

Note: This post is meant to be a brief, intuitive summary of recurrent neural networks. I have adapted this material from the Coursera deep learning course. The value I hope to add is a summary that is (hopefully) easy to understand and can serve as reference or refresher material in the future.

Part 1: Recurrent Neural Networks

Recurrent neural networks are a class of neural networks in which the nodes/neurons form a directed graph along a sequence. They are very effective at tasks such as natural language processing because they have a "memory": they can receive context from previous inputs. They take in input one element at a time and pass information from one step to the next via hidden activations. This information serves as the "memory" or "context" of the network, and is combined with each new input as it is processed.

RNN Cell: ...
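The hidden-state mechanism described above can be sketched as a single RNN cell step in plain NumPy. The weight names (Wax, Waa, Wya) follow the Coursera course's convention; the dimensions here are arbitrary toy values, not anything from the post.

```python
import numpy as np

def rnn_cell_step(x_t, h_prev, Wax, Waa, Wya, ba, by):
    """One step of a simple RNN cell.
    h_t = tanh(Waa @ h_prev + Wax @ x_t + ba)  -- hidden "memory" state
    y_t = softmax(Wya @ h_t + by)              -- prediction at this step
    """
    h_t = np.tanh(Waa @ h_prev + Wax @ x_t + ba)
    z = Wya @ h_t + by
    y_t = np.exp(z - z.max()) / np.exp(z - z.max()).sum()
    return h_t, y_t

# Toy dimensions: input size 3, hidden size 5, output size 2.
rng = np.random.default_rng(0)
x_t = rng.standard_normal(3)
h_prev = np.zeros(5)          # initial "memory" is empty
Wax = rng.standard_normal((5, 3))
Waa = rng.standard_normal((5, 5))
Wya = rng.standard_normal((2, 5))
ba, by = np.zeros(5), np.zeros(2)

h_t, y_t = rnn_cell_step(x_t, h_prev, Wax, Waa, Wya, ba, by)
```

Feeding `h_t` back in as `h_prev` on the next call is exactly how context from earlier inputs reaches later steps.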

Vehicle detection for self driving cars

Building a data pipeline to detect vehicles on the road

I am working on building a data pipeline to detect vehicles in a video feed for a self-driving car. Various computer vision techniques are used, including Histogram of Oriented Gradients (HOG), as well as a sliding window approach combined with a machine-learned classifier. The general steps for creating this data pipeline are as follows:

1. Perform a histogram of oriented gradients (HOG) feature extraction on a labeled training set of images.
2. Use the HOG features to train a supervised classifier (SVM, neural network, etc.).
3. Implement a sliding window technique with windows of various sizes, using the trained classifier to search for vehicles in each image.
4. Create a heat map of recurring detections, apply an overlap threshold to reject false positives, and estimate a bounding box from the detected pixels.

The data...
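Steps 3 and 4 above can be sketched in a few lines of NumPy. The brightness-based "classifier" below is a stand-in for the trained HOG+SVM model (which the excerpt does not show), and the window size, step, and threshold are illustrative values, not the ones from the project.

```python
import numpy as np

def sliding_windows(img_h, img_w, win, step):
    """Yield (top, left) corners of win x win windows across the image."""
    for top in range(0, img_h - win + 1, step):
        for left in range(0, img_w - win + 1, step):
            yield top, left

def heatmap_from_detections(img_shape, detections, win):
    """Add +1 over every positive window; overlapping hits accumulate."""
    heat = np.zeros(img_shape)
    for top, left in detections:
        heat[top:top + win, left:left + win] += 1
    return heat

# Toy image: a bright patch stands in for a vehicle, and a mean-brightness
# test stands in for the trained classifier.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
win, step = 16, 8
detections = [(t, l) for t, l in sliding_windows(64, 64, win, step)
              if img[t:t + win, l:l + win].mean() > 0.5]

heat = heatmap_from_detections(img.shape, detections, win)
mask = heat >= 2   # keep only pixels hit by overlapping detections (false-positive filter)
```

Thresholding on overlap count is what makes the heat map reject one-off spurious hits: a real vehicle is detected by several overlapping windows, a false positive usually by only one.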

building a chess ai - part 4: learning an evaluation function using deep learning (keras)

Note: this post is still in progress. Below I will discuss approaches for training a deep learning chess engine and the results of my implementation.

Approach 1: Train against the outcome of the game

*y* is the outcome of the game (1 is a win for white, 0 is a loss for white, 0.5 is a draw). We want to learn a function *f(p)* that approximates this. *p* is the chess position (the 8x8 board). A full encoding would be an 8x8x12 = 768-dimensional vector (since there are 12 distinct pieces), but that introduces too many degrees of freedom, so instead I use an 8x8x6 = 384-dimensional vector with positive values for squares holding a white piece and negative values for squares holding a black piece. The goal is to learn a function *f(p)* that predicts the winner of the game *y* given a chess position *p*.

The model:

Dataset: http://www.ficsgames.org/download.html -- millions of high-quality games played by grandmasters or international masters.

Objective/Loss Function...
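The 8x8x6 signed encoding described above can be sketched as follows. The `piece_list` input format and the plane ordering are hypothetical choices for illustration; the post's actual feature extraction (e.g. from PGN/FEN data) is not shown in the excerpt.

```python
import numpy as np

# One plane per piece type: pawn, knight, bishop, rook, queen, king.
PLANES = {"p": 0, "n": 1, "b": 2, "r": 3, "q": 4, "k": 5}

def encode_board(piece_list):
    """Encode a position as an 8x8x6 tensor flattened to 384 dimensions:
    +1 for a white piece on a square, -1 for a black piece, 0 otherwise.
    piece_list: iterable of (row, col, piece_letter, is_white) tuples --
    a minimal hypothetical input format, not a full FEN parser.
    """
    board = np.zeros((8, 8, 6))
    for row, col, piece, is_white in piece_list:
        board[row, col, PLANES[piece]] = 1.0 if is_white else -1.0
    return board.reshape(-1)   # the 384-dimensional feature vector p

# Toy position: just the two kings on their starting squares.
p = encode_board([(0, 4, "k", True), (7, 4, "k", False)])
```

A vector like `p` would then be fed to the network *f(p)*, trained with the game outcome *y* as the target.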