Natural Language Processing Specialization

  • 4 courses
  • Intermediate
  • >
    4 months (6 hours/week)
  • >
    Younes Bensouda Mourri, Łukasz Kaiser, Eddy Shyu

What you will learn

Use logistic regression, naïve Bayes, and word vectors to implement sentiment analysis, complete analogies & translate words

Use dynamic programming, hidden Markov models, and word embeddings to implement autocorrect, autocomplete & identify part-of-speech tags for words

Use recurrent neural networks, LSTMs, GRUs & Siamese networks in TensorFlow & Trax for sentiment analysis, text generation & named entity recognition

Use encoder-decoder, causal, & self-attention to translate complete sentences, summarize text, build chatbots & answer questions

Skills you will gain

  • Sentiment Analysis
  • Siamese Networks
  • Hidden Markov Model
  • Transformers
  • Attention Models
  • Machine Translation
  • Word Embeddings
  • Locality-Sensitive Hashing
  • Vector Space Models
  • Word2vec
  • Parts-of-Speech Tagging
  • N-gram Language Models

Natural Language Processing (NLP) uses algorithms to understand and manipulate human language. This technology is one of the most broadly applied areas of machine learning. As AI continues to expand, so will the demand for professionals skilled at building models that analyze speech and language, uncover contextual patterns, and produce insights from text and audio.

This specialization will equip you with the state-of-the-art deep learning techniques needed to build cutting-edge NLP systems. By the end of this Specialization, you will be ready to design NLP applications that perform question-answering and sentiment analysis, create tools to translate languages and summarize text, and even build chatbots.


Syllabus

Course 1: Natural Language Processing with Classification and Vector Spaces

In this course, you will: a) Perform sentiment analysis of tweets using logistic regression and then naïve Bayes; b) Use vector space models to discover relationships between words and use PCA to reduce the dimensionality of the vector space and visualize those relationships; and c) Write a simple English-to-French translation algorithm using pre-computed word embeddings and locality sensitive hashing to relate words via approximate k-nearest neighbor search.


Week 1: Logistic Regression for Sentiment Analysis of Tweets

Use a simple method to classify positive or negative sentiment in tweets

Week 2: Naïve Bayes for Sentiment Analysis of Tweets

Use a more advanced model for sentiment analysis
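As a rough sketch of the idea (not the course's assignment code), here is a minimal naïve Bayes sentiment classifier with Laplace smoothing over a tiny, made-up tweet corpus:

```python
import math
from collections import Counter

# Toy labeled tweets (hypothetical data for illustration).
train = [
    ("i love this movie", "pos"),
    ("great happy day", "pos"),
    ("i hate rain", "neg"),
    ("sad terrible day", "neg"),
]

# Count word frequencies per class.
counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.split())

vocab = set(counts["pos"]) | set(counts["neg"])

def log_likelihood(word, label):
    # Laplace smoothing: add 1 to every count so unseen words get nonzero probability.
    total = sum(counts[label].values()) + len(vocab)
    return math.log((counts[label][word] + 1) / total)

def classify(text):
    # Equal class priors here, so just compare summed log-likelihoods.
    scores = {}
    for label in counts:
        scores[label] = sum(log_likelihood(w, label)
                            for w in text.split() if w in vocab)
    return max(scores, key=scores.get)

print(classify("i love sunny day"))  # → pos
```

With realistic data you would also add log-priors for the class frequencies; they cancel here because the toy corpus is balanced.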

Week 3: Vector Space Models

Use vector space models to discover relationships between words and use principal component analysis (PCA) to reduce the dimensionality of the vector space and visualize those relationships
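The classic illustration of such relationships is the word-analogy trick: vector arithmetic plus cosine similarity. A minimal sketch with tiny hand-made vectors (hypothetical values; real embeddings have hundreds of dimensions):

```python
import math

# Tiny hand-made 3-d "embeddings" (hypothetical values for illustration).
vec = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.2, 0.9, 0.1],
    "woman": [0.2, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# king - man + woman should land near queen.
target = [k - m + w for k, m, w in zip(vec["king"], vec["man"], vec["woman"])]
best = max((w for w in vec if w != "king"), key=lambda w: cosine(target, vec[w]))
print(best)  # → queen
```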

Week 4: Word Embeddings and Locality Sensitive Hashing for Machine Translation

Write a simple English-to-French translation algorithm using pre-computed word embeddings and locality sensitive hashing to relate words via approximate k-nearest neighbors search
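One common form of locality sensitive hashing for word vectors is random-hyperplane hashing: each hash bit records which side of a random hyperplane a vector falls on, so nearby vectors tend to share a bucket and only bucket-mates need exact comparison. A minimal sketch (toy vectors, not the course's assignment code):

```python
import random

random.seed(0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def make_planes(n_planes, dim):
    # Each hyperplane is represented by a random normal vector.
    return [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

def hash_vector(v, planes):
    # One bit per plane: which side of the hyperplane v falls on.
    return tuple(int(dot(v, p) >= 0) for p in planes)

planes = make_planes(4, 3)

a = [1.0, 0.9, 0.1]
b = [0.9, 1.0, 0.2]    # close to a: usually lands in the same bucket
c = [-1.0, 0.1, -0.9]  # far from a: usually lands elsewhere

print(hash_vector(a, planes))
print(hash_vector(b, planes))
print(hash_vector(c, planes))
```

Approximate k-NN then searches only within the query's bucket (often across several independent hash tables to improve recall).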

Course 2: Natural Language Processing with Probabilistic Models

In this course, you will: a) Create a simple auto-correct algorithm using minimum edit distance and dynamic programming; b) Apply the Viterbi Algorithm for part-of-speech (POS) tagging, which is important for computational linguistics; c) Write a better auto-complete algorithm using an N-gram language model; and d) Write your own Word2Vec model that uses a neural network to compute word embeddings using a continuous bag-of-words model.


Week 1: Auto-correct using Minimum Edit Distance

Create a simple auto-correct algorithm using minimum edit distance and dynamic programming
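The dynamic-programming core is a table where cell (i, j) holds the cheapest way to turn the first i characters of the source into the first j characters of the target. A minimal sketch, using insert/delete cost 1 and replace cost 2 (one common convention):

```python
def min_edit_distance(src, tgt, ins_cost=1, del_cost=1, rep_cost=2):
    # D[i][j] = minimum cost of converting src[:i] into tgt[:j].
    m, n = len(src), len(tgt)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * del_cost          # delete everything
    for j in range(1, n + 1):
        D[0][j] = j * ins_cost          # insert everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            rep = 0 if src[i - 1] == tgt[j - 1] else rep_cost
            D[i][j] = min(D[i - 1][j] + del_cost,      # delete
                          D[i][j - 1] + ins_cost,      # insert
                          D[i - 1][j - 1] + rep)       # replace (or match)
    return D[m][n]

print(min_edit_distance("play", "stay"))  # → 4 (replace 'p'->'s' and 'l'->'t')
```

An auto-correct system would rank candidate corrections by this distance (and by word probability) and suggest the cheapest likely word.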

Week 2: Part-of-Speech (POS) Tagging

Apply the Viterbi algorithm for POS tagging, which is important for computational linguistics
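The Viterbi algorithm finds the most probable tag sequence under a hidden Markov model by keeping, at each word, only the best path ending in each tag. A minimal sketch over a tiny hand-specified HMM (all probabilities are hypothetical, chosen just for illustration):

```python
import math

states = ["NN", "VB"]
start = {"NN": 0.6, "VB": 0.4}                     # P(tag at position 0)
trans = {"NN": {"NN": 0.3, "VB": 0.7},             # P(next tag | tag)
         "VB": {"NN": 0.3, "VB": 0.7}}
emit  = {"NN": {"fish": 0.5, "can": 0.3, "swim": 0.2},   # P(word | tag)
         "VB": {"fish": 0.2, "can": 0.3, "swim": 0.5}}

def viterbi(words):
    # V[t][s]: log-probability of the best tag path ending in state s at step t.
    V = [{s: math.log(start[s] * emit[s][words[0]]) for s in states}]
    back = []
    for w in words[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            col[s] = V[-1][prev] + math.log(trans[prev][s] * emit[s][w])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # Trace back the best path from the best final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["fish", "can", "swim"]))  # → ['NN', 'VB', 'VB']
```

Log-probabilities are used so that long sentences do not underflow to zero.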

Week 3: N-gram Language Models

Write a better auto-complete algorithm using an N-gram model (similar models are used for translation, determining the author of a text, and speech recognition)
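At its simplest (a bigram model, the N=2 case), auto-complete counts which word most often follows the previous one. A minimal sketch over a made-up corpus:

```python
from collections import Counter, defaultdict

corpus = "i like tea . i like coffee . we like tea .".split()

# Count bigrams: how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

def complete(prev):
    # Suggest the most frequent continuation of `prev`.
    return bigrams[prev].most_common(1)[0][0]

def bigram_prob(prev, word):
    # P(word | prev) by maximum likelihood estimation.
    return bigrams[prev][word] / sum(bigrams[prev].values())

print(complete("like"))            # → tea
print(bigram_prob("like", "tea"))  # → 0.666... (2 of 3 continuations)
```

A real model would use larger N, smoothing for unseen N-grams, and sentence-boundary tokens.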

Week 4: Word2Vec and Stochastic Gradient Descent

Write your own Word2Vec model that uses a neural network to compute word embeddings using a continuous bag-of-words model

Course 3: Natural Language Processing with Sequence Models

In this course, you will: a) Train a neural network with GloVe word embeddings to perform sentiment analysis of tweets; b) Generate synthetic Shakespeare text using a Gated Recurrent Unit (GRU) language model; c) Train a recurrent neural network to perform named entity recognition (NER) using LSTMs with linear layers; and d) Use so-called ‘Siamese’ LSTM models to compare questions in a corpus and identify those that are worded differently but have the same meaning.


Week 1: Sentiment with Neural Nets

Train a neural network with GloVe word embeddings to perform sentiment analysis of tweets

Week 2: Language Generation Models

Generate synthetic Shakespeare text using a Gated Recurrent Unit (GRU) language model

Week 3: Named Entity Recognition (NER)

Train a recurrent neural network to perform NER using LSTMs with linear layers

Week 4: Siamese Networks

Use so-called ‘Siamese’ LSTM models to compare questions in a corpus and identify those that are worded differently but have the same meaning
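A Siamese model runs both questions through the *same* encoder and compares the resulting vectors, typically with cosine similarity against a threshold. A minimal sketch of that comparison step, with a stand-in "encoder" that just averages tiny hand-made word vectors (hypothetical values; a trained LSTM encoder replaces this in practice):

```python
import math

# Tiny hand-made 2-d word vectors (hypothetical values for illustration).
vec = {
    "how": [0.1, 0.3], "old": [0.8, 0.2], "are": [0.2, 0.2],
    "you": [0.3, 0.4], "what": [0.1, 0.4], "is": [0.2, 0.1],
    "your": [0.3, 0.5], "age": [0.9, 0.1], "name": [0.1, 0.9],
}

def encode(question):
    # Stand-in encoder: average the word vectors.
    words = [vec[w] for w in question.split() if w in vec]
    return [sum(d) / len(words) for d in zip(*words)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_duplicate(q1, q2, threshold=0.95):
    # Same encoder for both inputs, then a similarity threshold.
    return cosine(encode(q1), encode(q2)) >= threshold

print(is_duplicate("how old are you", "what is your age"))   # → True
print(is_duplicate("how old are you", "what is your name"))  # → False
```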

Course 4: Natural Language Processing with Attention Models

In this course, you will: a) Translate complete English sentences into German using an encoder-decoder attention model; b) Build a Transformer model to summarize text; c) Use T5 and BERT models to perform question-answering; and d) Build a chatbot using a Reformer model.


Week 1: Neural Machine Translation with Attention

Translate complete English sentences into German using an encoder-decoder attention model
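The core operation behind such models is scaled dot-product attention: score the query against every key, softmax the scores into weights, and take the weighted sum of the values. A minimal single-query sketch in plain Python (toy 2-d vectors, chosen for illustration):

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention for a single query:
    # weights = softmax(q·k / sqrt(d)); output = weighted sum of values.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query matches the first key more closely,
# so the output leans toward the first value.
q = [1.0, 0.0]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, K, V))
```

In a real Transformer this runs over matrices of queries in parallel, with learned projections producing the queries, keys, and values.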

Week 2: Summarization with Transformer Models

Build a Transformer model to summarize text

Week 3: Question-Answering with Transformer Models

Use T5 and BERT models to perform question answering

Week 4: Chatbots with a Reformer Model

Build a chatbot using a Reformer model

Program Instructors

Younes Bensouda Mourri, Teaching Assistant

Mathematical & Computational Sciences, Stanford University, deeplearning.ai

Łukasz Kaiser, Instructor

Staff Research Scientist at Google Brain and Chargé de Recherche at CNRS

Eddy Shyu, Senior Curriculum Developer

Product Lead, DeepLearning.AI


    Frequently Asked Questions

    This Specialization is for students of machine learning or artificial intelligence as well as software engineers looking for a deeper understanding of how NLP models work and how to apply them.

    Learners should have a working knowledge of machine learning, intermediate Python including experience with a deep learning framework (e.g., TensorFlow, Keras), as well as proficiency in calculus, linear algebra, and statistics. If you would like to brush up on these skills, we recommend the Deep Learning Specialization, offered by deeplearning.ai and taught by Andrew Ng.

    This Specialization will equip you with the state-of-the-art deep learning techniques needed to build cutting-edge NLP systems:

    1. Use logistic regression, naïve Bayes, and word vectors to implement sentiment analysis, complete analogies, and translate words, and use locality sensitive hashing for approximate nearest neighbors.
    2. Use dynamic programming, hidden Markov models, and word embeddings to autocorrect misspelled words, autocomplete partial sentences, and identify part-of-speech tags for words.
    3. Use dense and recurrent neural networks, LSTMs, GRUs, and Siamese networks in TensorFlow and Trax to perform advanced sentiment analysis, text generation, named entity recognition, and to identify duplicate questions.
    4. Use encoder-decoder, causal, and self-attention to perform advanced machine translation of complete sentences, text summarization, and question-answering, and to build chatbots. Models covered include T5, BERT, Transformer, Reformer, and more!

    The deeplearning.ai Natural Language Processing Specialization is one-of-a-kind.

    • It teaches cutting-edge techniques drawn from recent academic papers, some of which were first published as recently as 2019.
    • It covers practical methods for handling common NLP use cases (autocorrect, autocomplete), as well as advanced deep learning techniques for chatbots and question-answering.
    • It starts with the foundations and takes you to a stage where you can build state-of-the-art attention models that allow for parallel computing.
    • You will not only use packages but also learn how to build these models from scratch. We walk you through all the steps, from data processing to the finished products you can use in your own projects.
    • You will complete one project every week, for a total of 16 programming assignments, to make sure you understand the concepts.

    This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning.

    Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and the co-author of TensorFlow, the Tensor2Tensor and Trax libraries, and the Transformer paper.

    Both Younes and Łukasz are passionate about increasing access to cutting-edge AI education around the globe by providing instruction and opportunities for practical application so developers can expand their skills.

    This is a Specialization made up of 4 Courses. Course 3 is scheduled for the end of July. Course 4 will launch in September.

    You can enroll in this deeplearning.ai Natural Language Processing Specialization on Coursera. You will watch videos and complete assignments on Coursera as well.

    We recommend taking the courses in the prescribed order for a logical and thorough learning experience.

    A Coursera subscription costs $49 / month. Course #1 and Course #2 of this Specialization are available right now. Course #3 and Course #4 will be available in summer 2020.

    Yes, Coursera provides financial aid to learners who cannot afford the fee. Visit the Coursera Course Page and click on ‘Financial Aid’ beneath the ‘Enroll’ button on the left.

    You can audit the courses in the Specialization for free. Visit the Course Page, click on ‘Enroll’ and then click on ‘Audit’ at the bottom of the page. Note that you will not receive a certificate at the end of the course if you choose to audit it for free instead of purchasing it.

    You will receive a certificate at the end of each course if you pay for the courses and complete the programming assignments. There is a limit of 180 days of certificate eligibility, after which you must re-purchase the course to obtain a certificate. If you audit the course for free, you will not receive a certificate.

    If you complete all four courses in the deeplearning.ai Natural Language Processing Specialization and are subscribed to the Specialization, you will also receive an additional certificate showing that you completed the entire Specialization.

    This Specialization consists of four Courses. At the rate of 5 hours a week, it typically takes 4 weeks to complete each Course.