Apache Spark Fundamentals
This course will teach you how to use Apache Spark to analyze your big data at lightning-fast speeds, leaving Hadoop in the dust! For a deeper dive into SQL and Streaming, check out the sequel, Handling Fast Data with Apache Spark SQL and Streaming.
Our ever-connected world is creating data faster than Moore’s law can keep up, forcing us to be smarter about how we analyze it. Hadoop’s MapReduce framework handled batch processing, but modern big data demands have outgrown it. That’s where Apache Spark steps in, boasting speeds 10-100x faster than Hadoop and setting the world record in large-scale sorting. Spark’s general abstraction means it can expand beyond simple batch processing, enabling blazing-fast iterative algorithms and exactly-once streaming semantics. In this course, you’ll learn Spark from the ground up, starting with its history and then building a Wikipedia analysis application that exercises a wide swath of its core API. That core knowledge will make it easier to explore Spark’s other libraries, such as the streaming and SQL APIs. Finally, you’ll learn how to avoid a few of Spark’s commonly encountered rough edges. You will leave this course with a tool belt capable of creating your own performance-maximized Spark application.
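To give a flavor of the kind of core-API work the course builds toward, here is a minimal sketch in Scala of a word count over a plain-text Wikipedia dump using Spark's RDD API. The object name, application name, and input path are illustrative assumptions, not taken from the course materials.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WikipediaWordCount {
  def main(args: Array[String]): Unit = {
    // Hypothetical app name; "local[*]" runs Spark locally on all cores.
    val conf = new SparkConf()
      .setAppName("WikipediaWordCount")
      .setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Hypothetical path to a plain-text Wikipedia dump.
    val lines = sc.textFile("data/wikipedia.txt")

    // Classic word count: split into words, pair each with 1, sum per word.
    val counts = lines
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .map(word => (word.toLowerCase, 1))
      .reduceByKey(_ + _)

    // Print the ten most frequent words.
    counts
      .sortBy({ case (_, count) => count }, ascending = false)
      .take(10)
      .foreach(println)

    sc.stop()
  }
}
```

The course itself covers this API in much more depth; the sketch only shows the transformation-then-action style (flatMap/map/reduceByKey followed by take) that Spark's core API is built around.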
Author Name: Justin Pihony
Author Description:
Justin is a software journeyman, continuously learning and honing his skills. Most of his early professional career was spent in C# and MSSQL, but he loves learning many different languages, especially Scala. This passion for Scala led him to join the Lightbend (formerly Typesafe) team, diving even deeper into the Scala ecosystem. As much as he loves to learn, he also loves to spread his knowledge through teaching and helping others. He is a very active answerer on StackOverflow.