Getting Started with Stream Processing with Spark Streaming
The Spark Streaming module lets you work with large-scale streaming data using familiar batch processing abstractions. This course starts with how standard transformations and operations are performed on streams, and moves on to more advanced topics.
Traditional distributed systems like Hadoop work on data stored in a file system, and jobs can run for hours, sometimes days. This is a major limitation when processing real-time data such as trends and breaking news. The Spark Streaming module extends the Spark batch infrastructure to handle data for real-time analysis. In this course, Getting Started with Stream Processing with Spark Streaming, you’ll learn the nuances of dealing with streaming data using the same basic Spark transformations and actions that work with batch processing. Next, you’ll explore how to extend machine learning algorithms to work with streams. Finally, you’ll learn the subtle details of how the streaming K-means clustering algorithm helps find patterns in data. By the end of this course, you’ll feel confident in your knowledge, and you can start integrating what you’ve learned into your own projects.
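To give a flavor of the streaming K-means idea mentioned above, here is a minimal plain-Python sketch of the per-batch "forgetful" centroid update that algorithms like Spark MLlib's StreamingKMeans use: each mini-batch of points is assigned to the nearest centroid, and centroids are updated as a weighted average of the old centroid (discounted by a decay factor) and the new batch. All function and variable names here are illustrative, not Spark API.

```python
def streaming_kmeans_update(centers, counts, batch, decay=1.0):
    """One mini-batch update of streaming K-means (illustrative sketch).

    centers: list of centroids, each a list of floats
    counts:  effective point count behind each centroid
    batch:   list of points arriving in this mini-batch
    decay:   forgetfulness factor in [0, 1]; 1.0 remembers all history,
             smaller values weight recent batches more heavily
    """
    dim = len(centers[0])
    sums = [[0.0] * dim for _ in centers]
    batch_counts = [0] * len(centers)

    # Assign each incoming point to its nearest centroid (squared distance).
    for p in batch:
        j = min(range(len(centers)),
                key=lambda i: sum((p[d] - centers[i][d]) ** 2
                                  for d in range(dim)))
        batch_counts[j] += 1
        for d in range(dim):
            sums[j][d] += p[d]

    # Blend old centroid (discounted by decay) with the batch mean.
    new_centers, new_counts = [], []
    for i, c in enumerate(centers):
        n_old = counts[i] * decay
        m = batch_counts[i]
        total = n_old + m
        if total == 0:
            new_centers.append(list(c))
            new_counts.append(0.0)
            continue
        new_centers.append([(c[d] * n_old + sums[i][d]) / total
                            for d in range(dim)])
        new_counts.append(total)
    return new_centers, new_counts
```

For example, with centroids at 0 and 10 (each backed by one prior point, decay 1.0), a batch of two points at 2.0 pulls the first centroid to their combined mean while the second is untouched. In the course itself this update runs on DStreams inside Spark rather than on plain lists.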
Author Name: Janani Ravi
Author Description:
Janani has a Masters degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework. After spending years working in tech in the Bay Area, New York, and Singapore at companies such as Microsoft, Google, and Flipkart, Janani finally decided to combine her love for technology with her passion for teaching. She is now the co-founder of Loonycorn, a content studio focused on providing …
Table of Contents
- Course Overview (2 mins)
- Getting Started with Discretized Streams (42 mins)
- Transforming Blocks of Data with DStreams (33 mins)
- Applying ML Algorithms on DStreams (41 mins)
- Building a Robust Spark Streaming Application (35 mins)