Building movie recommender systems with deep learning.

Spotlight on a stage.
Photo by Nick Fewings on Unsplash

In my previous article, I demonstrated how to build shallow recommender systems based on techniques such as matrix factorization using Surprise.

But what if you want to build a recommender system that uses techniques that are more sophisticated than simple matrix factorization? What if you want to build recommender systems with deep learning? What if you want to use a user’s viewing history to predict the next movie that they will watch?

This is where Spotlight, a Python library that uses PyTorch to create recommender systems with deep learning, comes into play. …
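
To give a flavor of what that looks like, here is a minimal sketch of a sequence-based recommender built with Spotlight. The tiny interaction arrays are hypothetical placeholders for a real viewing history:

```python
import numpy as np

from spotlight.interactions import Interactions
from spotlight.sequence.implicit import ImplicitSequenceModel

# Hypothetical viewing histories: user 0 watched movies 1, 2, 3 in order,
# and user 1 watched movies 2, 3, 4 (item IDs start at 1, since 0 is
# reserved for sequence padding).
user_ids = np.array([0, 0, 0, 1, 1, 1], dtype=np.int32)
item_ids = np.array([1, 2, 3, 2, 3, 4], dtype=np.int32)
timestamps = np.array([1, 2, 3, 4, 5, 6], dtype=np.int32)

interactions = Interactions(user_ids, item_ids, timestamps=timestamps)

# Convert the interactions to padded sequences and fit an LSTM-based model.
model = ImplicitSequenceModel(n_iter=10, representation='lstm')
model.fit(interactions.to_sequence())

# Score every movie given a user's most recent viewing sequence.
scores = model.predict(np.array([1, 2, 3], dtype=np.int32))
print(scores.argsort()[::-1][:3])  # IDs of the top-scoring movies
```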


A step-by-step approach to getting started and developing your skills in this rapidly changing field.

Photo by Myriam Jessier on Unsplash

For several years, Data Scientist was ranked as the best job in America by Glassdoor. Today it no longer holds the top spot, but it still sits near the top of the list. It’s no secret that data science is a broad and rapidly growing field, especially as advances in artificial intelligence push the limits of what we previously believed was possible.

If you are reading this article, you probably want to learn data science, or get better at it if you’ve already started. One of the most challenging parts of learning data science is knowing where and how to start. Data science is an interdisciplinary field with so many subfields and newly developed technologies and techniques that it is easy for a beginner to get overwhelmed. …


Using this Python library to build a book recommendation system.

Surprise with confetti.
Photo by Hugo Ruiz on Unsplash

If you’ve ever worked on a data science project, you probably have a default library that you use for standard tasks. Most people will probably use Pandas for data manipulation, Scikit-learn for general-purpose machine learning applications, and TensorFlow or PyTorch for deep learning. But what would you use to build a recommender system? This is where Surprise comes into play.

Surprise is an open-source Python library that makes it easy for developers to build recommender systems with explicit rating data. In this article, I will show you how you can use Surprise to build a book recommendation system using the goodbooks-10k dataset, available on Kaggle under the CC BY-SA 4.0 license. …
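
As a rough preview (assuming the dataset’s ratings.csv has been downloaded, with its user_id, book_id, and rating columns), training a matrix factorization model with Surprise takes only a few lines:

```python
import pandas as pd

from surprise import Dataset, Reader, SVD
from surprise.model_selection import cross_validate

# Assumes ratings.csv from the goodbooks-10k dataset, with columns
# user_id, book_id, and rating (on a 1-5 scale).
ratings = pd.read_csv('ratings.csv')

reader = Reader(rating_scale=(1, 5))
data = Dataset.load_from_df(ratings[['user_id', 'book_id', 'rating']], reader)

# Evaluate an SVD (matrix factorization) model with 5-fold cross-validation.
cross_validate(SVD(), data, measures=['RMSE', 'MAE'], cv=5, verbose=True)

# Fit on the full dataset and predict user 1's rating for book 25.
model = SVD()
model.fit(data.build_full_trainset())
print(model.predict(uid=1, iid=25).est)
```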


An introduction to Facebook’s updated forecasting library.

Stock price prediction with NeuralProphet.
Image created by author using NeuralProphet.

Just recently, Facebook, in collaboration with researchers at Stanford and Monash University, released a new open-source time-series forecasting library called NeuralProphet. NeuralProphet is an extension of Prophet, a forecasting library that was released in 2017 by Facebook’s Core Data Science Team.

NeuralProphet is an upgraded version of Prophet that is built using PyTorch and uses deep learning models such as AR-Net for time-series forecasting. The main benefit of using NeuralProphet is that it features a simple API inspired by Prophet, but gives you access to more sophisticated deep learning models for time-series forecasting.

How to Use NeuralProphet

Installation

You can install NeuralProphet directly with pip using the command below. …
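
Once the library is installed, a basic forecasting workflow is quite compact. Here is a minimal sketch, assuming a pandas DataFrame with the ds (date) and y (value) columns that NeuralProphet expects; the CSV file name is hypothetical:

```python
import pandas as pd
from neuralprophet import NeuralProphet

# Hypothetical CSV of daily stock prices with columns 'ds' (date)
# and 'y' (closing price), the format NeuralProphet expects.
df = pd.read_csv('stock_prices.csv')

model = NeuralProphet()
metrics = model.fit(df, freq='D')

# Extend the dataframe 30 days into the future and predict.
future = model.make_future_dataframe(df, periods=30)
forecast = model.predict(future)
print(forecast[['ds', 'yhat1']].tail())
```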


Exploring the strengths and limitations of this metaphor in the information age.

Oil refinery at night.
Photo by Robin Sommer on Unsplash

“Data is the new oil. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc., to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value.” — Clive Humby, 2006

Clive Humby, a British mathematician and data science entrepreneur, originally coined the phrase “data is the new oil,” and many others have repeated it since. In 2011, Gartner senior vice-president Peter Sondergaard took the concept even further.

“Information is the oil of the 21st century, and analytics is the combustion engine.” …


An introduction to the models that have revolutionized natural language processing in the last few years.

Photo by Arseny Togulev on Unsplash

One innovation that has taken natural language processing to new heights in the last three years is the transformer. And no, I’m not talking about the giant robots that turn into cars in the famous science-fiction film series directed by Michael Bay.

Transformers are deep learning models, typically pretrained in a self-supervised fashion and then fine-tuned on supervised tasks, that are primarily used with text data and have largely replaced recurrent neural networks in natural language processing. The goal of this article is to explain how transformers work and to show you how you can use them in your own machine learning projects.

How Transformers Work

Transformers were originally introduced by researchers at Google in the 2017 NIPS paper “Attention Is All You Need.” Transformers are designed to work on sequence data: they take an input sequence and use it to generate an output sequence one element at a time. …
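
As a quick taste of that sequence-to-sequence behavior, here is a minimal sketch using the Hugging Face transformers library (one popular option, not the only one) to run a pretrained transformer:

```python
from transformers import pipeline

# Load a pretrained sequence-to-sequence transformer (T5) for
# English-to-French translation.
translator = pipeline('translation_en_to_fr', model='t5-small')

# The model consumes the input sequence and generates the output
# sequence one token at a time.
result = translator('Transformers have revolutionized natural language processing.')
print(result[0]['translation_text'])
```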


An in-depth comparison of Keras, PyTorch, and several others.

A complex circuit board.
Photo by Michael Dziedzic on Unsplash

As deep learning has grown in popularity over the last two decades, more and more companies and developers have created frameworks to make deep learning more accessible. Now there are so many deep learning frameworks available that the average deep learning practitioner probably isn’t even aware of all of them. With so many options available, which framework should you pick?

In this article, I will give you a tour of some of the most common Python deep learning frameworks and compare them in a way that allows you to decide which framework is the right one to use in your projects.


It doesn’t replace your job; it only makes it a little easier.

A robot playing piano.
Photo by Possessed Photography on Unsplash

In the past five years, one trend that has made AI more accessible and acted as a driving force behind several companies is automated machine learning (AutoML). Many companies, such as H2O.ai, DataRobot, Google, and SparkCognition, have created tools that automate the process of training machine learning models. All the user has to do is upload the data and select a few configuration options; the AutoML tool then automatically trains and evaluates different machine learning models and hyperparameter combinations and surfaces the best ones.
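
As a concrete sketch, this is roughly what that workflow looks like with H2O’s open-source AutoML; the file name and target column below are hypothetical:

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Load a hypothetical training dataset; 'label' is the target column.
train = h2o.import_file('train.csv')
train['label'] = train['label'].asfactor()  # treat the target as categorical

# Train and evaluate many models automatically, capped at 20 candidates.
aml = H2OAutoML(max_models=20, seed=42)
aml.train(y='label', training_frame=train)

# Inspect the leaderboard of models, ranked by performance.
print(aml.leaderboard)
```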

Does this mean that we no longer need to hire data scientists? No, of course not! In fact, AutoML makes the jobs of data scientists just a little easier by automating a small part of the data science workflow. Even with AutoML, data scientists and machine learning engineers have to do a significant amount of work to solve real-world business problems. The goal of this article is to explain what AutoML can and cannot do for you and how you can use it effectively when applying it to real-world machine learning problems.


Especially when presenting them to a non-technical audience.

Photo by Samuel Pereira on Unsplash

One of the biggest challenges involved in solving business problems using machine learning is effectively explaining your model to a non-technical audience.

For example, if you work as a data scientist in an internship or a full-time position at a company, at some point you may have to present the results of your work to management. Similarly, if you decide to start a business based on machine learning, you will have to explain your models to stakeholders and investors in a way that makes sense. In both situations, your audience may lack a detailed understanding of machine learning algorithms. They probably aren’t concerned with the number of layers in your neural network or the number of trees in your random forest. …


How you can build more robust models using stacking.

A stack of white bricks in the middle of the forest.
Photo by Greg Rosenke on Unsplash

Introduction

In the last two decades, ensemble methods such as random forests and gradient boosting, which combine many instances of the same type of model using voting or weighted averaging to produce strong models, have become extremely popular. However, there is another approach that lets us reap the benefits of different models by combining their individual predictions with a higher-level model.

Stacked generalization, also known as stacking, is a method that trains a meta-model to intelligently combine the predictions of several different base models.

The goal of this article is not only to explain how this competition-winning technique works, but also to demonstrate how you can implement it with just a few lines of code in Scikit-learn.
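
As a preview, here is a minimal, self-contained sketch of stacking with Scikit-learn’s StackingClassifier, using a synthetic dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic binary classification data for demonstration purposes.
X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Two different base models whose predictions the meta-model will combine.
base_models = [
    ('rf', RandomForestClassifier(random_state=42)),
    ('svc', SVC(random_state=42)),
]

# A logistic regression meta-model learns how to weigh the base models'
# cross-validated predictions.
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))
```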

About

Amol Mavuduru

Software Engineer, Former Researcher, and Aspiring Data Scientist
