Posts

Recommendation Systems - Brief overview of Matrix Factorization

The goal of recommendation systems is to predict the rating that a user will give an item that they haven't seen yet.  One type of recommendation system is known as collaborative filtering, where we try to infer a user's preferences by filling in the missing entries of a user-item matrix. The User-Item Matrix: Suppose we had a user-item matrix like the one below, where each cell holds the rating a given user (u) has given an item (i).  There are several missing entries in this matrix, which represent the ratings we want to predict and eventually use to make recommendations.  If you were working at Netflix, these ratings could be the number of stars a user has given a movie. Matrix Factorization: One way of performing collaborative filtering is via matrix factorization.  In matrix factorization, we try to find two low-rank matrices U and V such that, when multiplied together, they recreate the original sparse matrix. ...
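As a rough illustration of the idea, here is a minimal sketch (the ratings matrix, rank, and hyperparameters below are made up) that factorizes a small user-item matrix into low-rank factors U and V by running stochastic gradient descent over the observed entries only:

```python
import numpy as np

# Hypothetical 4-user x 5-item ratings matrix; 0 marks a missing rating.
R = np.array([
    [5, 3, 0, 1, 0],
    [4, 0, 0, 1, 0],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
], dtype=float)

n_users, n_items = R.shape
k = 2                                          # number of latent factors (rank of U and V)
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
V = rng.normal(scale=0.1, size=(n_items, k))   # item factors

lr, reg = 0.01, 0.02
observed = np.argwhere(R > 0)                  # indices of known ratings

# SGD over the observed entries: nudge U and V to reduce the prediction error.
for epoch in range(200):
    for u, i in observed:
        u_old = U[u].copy()
        err = R[u, i] - U[u] @ V[i]            # error on a known rating
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * u_old - reg * V[i])

# U @ V.T fills in the missing cells with predicted ratings.
print(np.round(U @ V.T, 2))
```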

Using neural embeddings for search ranking and recommendations

This is a summary of a project I worked on while employed as a data scientist / software engineer at Peerspace . Peerspace is a two-sided marketplace for renting unique spaces for meetings, events, off-sites, and the like.  It's essentially like Airbnb, but geared more towards commercial use cases and hourly rentals. (Screenshot: the Peerspace search results page.) While I was at Peerspace, I had the opportunity to work on a lot of pretty interesting things.  This included learning Clojure, a dynamically typed Lisp dialect that runs on the JVM.  In addition, I got to do a lot of work with search ranking and search infrastructure.  One of the search-related projects I worked on was exploring and applying deep learning to improve our search ranking and recommendations. One algorithm I explored was product2vec, which essentially uses deep learning to learn a neural "embedding" for each of the products in a product ...
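The product2vec idea is closely related to word2vec: user sessions play the role of sentences and product IDs play the role of words. Below is a minimal sketch using gensim's Word2Vec, assuming session logs of product IDs; the data and IDs are hypothetical, not Peerspace's actual pipeline:

```python
from gensim.models import Word2Vec

# Hypothetical browsing sessions: each session is an ordered list of product IDs.
sessions = [
    ["space_101", "space_457", "space_032"],
    ["space_457", "space_881", "space_101"],
    ["space_032", "space_101", "space_772"],
]

# Skip-gram (sg=1) learns an embedding for each product from its co-occurrence
# with nearby products in the same session, just as word2vec does for words.
model = Word2Vec(sessions, vector_size=32, window=3, min_count=1, sg=1, epochs=50)

# Nearest neighbors in embedding space can drive "similar spaces" recommendations.
print(model.wv.most_similar("space_101", topn=3))
```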

Brief intro to recurrent neural networks

Note: This post is meant to be a brief and intuitive summary of recurrent neural networks.  I have adapted this material from the Coursera deep learning course.  The value I hope to add is that I have tried to summarize the information in a way that is easy (hopefully) to understand and can serve as reference or refresher material in the future. Part 1: Recurrent Neural Networks: Recurrent neural networks are a class of neural networks in which the nodes/neurons form a directed graph along a sequence.  They are very effective at tasks such as natural language processing because they have a "memory": they can use context from previous inputs.  They take inputs one at a time and pass information from one step to the next via hidden activations.  This information serves as the "memory" or "context" of the network, which is combined with each new input as it is processed. RNN Cell: ...
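To make the "memory" idea concrete, here is a minimal NumPy sketch of a single RNN cell's forward step (the dimensions and inputs are made up): the hidden state h carries context forward and is mixed with the current input at every time step.

```python
import numpy as np

# Hypothetical sizes: 4-dimensional inputs, 3-dimensional hidden state.
input_size, hidden_size = 4, 3
rng = np.random.default_rng(0)
Wxh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden weights
Whh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden weights
bh = np.zeros(hidden_size)

def rnn_cell(x_t, h_prev):
    """One RNN step: combine the current input with the previous hidden state."""
    return np.tanh(Wxh @ x_t + Whh @ h_prev + bh)

# Process a short sequence one input at a time, passing the hidden state along.
h = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):   # a sequence of 5 inputs
    h = rnn_cell(x_t, h)
    print(np.round(h, 3))
```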