Scalable Programming with Scala and Spark
Use Scala and Spark for data analysis, machine learning and analytics
Course Description
Get your data to fly using Spark and Scala for analytics, machine learning and data science.
If you are an analyst or a data scientist, you're used to juggling multiple systems for working with data: SQL, Python, R, Java, and so on. With Spark, you have a single engine where you can explore and play with large amounts of data, run machine learning algorithms, and then use the same system to productionize your code.
Scala is a general-purpose programming language, like Java or C++. Its functional programming nature and the availability of a REPL environment make it particularly well suited to a distributed computing framework like Spark.
Using Spark and Scala you can analyze and explore your data in an interactive environment with fast feedback. The course will show how to leverage the power of RDDs and DataFrames to manipulate data with ease.
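As a taste of that interactive style, here is a minimal sketch you can run in the plain Scala REPL, no Spark required. Spark's RDD API deliberately mirrors these collection operations (map, filter, reduce), just distributed across a cluster. The flight distances below are made-up sample values.

```scala
// Functional operations on a plain Scala List; RDDs expose the
// same map/filter/reduce vocabulary, just distributed.
val distances = List(337.0, 1235.0, 459.0, 872.0) // hypothetical flight distances (miles)

val longHauls  = distances.filter(_ > 500)        // keep flights over 500 miles
val totalMiles = distances.reduce(_ + _)          // sum via reduce
val average    = totalMiles / distances.size

println(longHauls)  // List(1235.0, 872.0)
println(average)    // 725.75
```

The same three-line pipeline, pointed at an RDD instead of a List, scales to datasets far larger than one machine's memory.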
Learning Outcomes
Use Spark for a variety of analytics and Machine Learning tasks
Understand functional programming constructs in Scala
Implement complex algorithms like PageRank or Music Recommendations
Work with a variety of datasets from Airline delays to Twitter, Web graphs, Social networks and Product Ratings
Use all the different features and libraries of Spark: RDDs, DataFrames, Spark SQL, MLlib, Spark Streaming and GraphX
Write code in Scala REPL environments and build Scala applications with an IDE
Prerequisites:
All examples work with or without Hadoop. If you would like to use Spark with Hadoop, you'll need to have Hadoop installed (either in pseudo-distributed or cluster mode).
The course assumes experience with one of the popular object-oriented programming languages like Java/C++.
Who is this course intended for?
Engineers who want to use a distributed computing engine for batch or stream processing or both
Analysts who want to leverage Spark for analyzing interesting datasets
Data Scientists who want a single engine for analyzing and modelling data as well as productionizing it.
Course Curriculum
Introduction
Introduction to Spark
- What does Donald Rumsfeld have to do with data analysis? (8:46)
- Why is Spark so cool? (12:23)
- An introduction to RDDs - Resilient Distributed Datasets (9:39)
- Built-in libraries for Spark (15:38)
- Installing Spark (11:44)
- The Spark Shell (6:55)
- Transformations and Actions (17:06)
- See it in Action: Munging Airlines Data with Spark (3:44)
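The transformations-and-actions split covered above can be previewed with plain Scala collection views, which are lazy in the same way Spark transformations are: nothing runs until something forces a result. This is a conceptual sketch only, not Spark code; in Spark, forcing happens through an action such as collect() or count().

```scala
// A view, like a Spark transformation, records the computation lazily.
val numbers = (1 to 10).view
val doubled = numbers.map(_ * 2)       // "transformation": not evaluated yet
val evens   = doubled.filter(_ % 4 == 0)

// Forcing the view plays the role of a Spark "action" such as collect().
val result = evens.toList
println(result)  // List(4, 8, 12, 16, 20)
```

Laziness lets Spark see the whole pipeline before running it, so it can fuse steps and avoid materializing intermediate data.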
Resilient Distributed Datasets
- RDD Characteristics: Partitions and Immutability (12:35)
- RDD Characteristics: Lineage, RDDs know where they came from (6:05)
- What can you do with RDDs? (11:08)
- Create your first RDD from a file (14:54)
- Average distance travelled by a flight using map() and reduce() operations (6:59)
- Get delayed flights using filter(), cache data using persist() (6:10)
- Average flight delay in one step using aggregate() (12:21)
- Frequency histogram of delays using countByValue() (2:10)
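The lectures above work through airline-delay data with map(), reduce(), filter(), aggregate() and countByValue(). As a hedged preview with made-up delay values, here are the same computations on a local Scala List; Spark's RDD methods of those names behave analogously, and countByValue() is approximated with groupBy since plain collections lack it.

```scala
// Hypothetical arrival delays in minutes for a handful of flights.
val delays = List(0, 15, 0, 45, 15, 120, 0)

// filter(): keep only the delayed flights.
val delayed = delays.filter(_ > 0)

// aggregate()-style one-pass average: accumulate (sum, count) together.
val (sum, count) = delays.foldLeft((0, 0)) { case ((s, c), d) => (s + d, c + 1) }
val avgDelay = sum.toDouble / count

// countByValue() equivalent: frequency histogram of delay values.
val histogram = delays.groupBy(identity).map { case (k, v) => k -> v.size }

println(delayed)  // List(15, 45, 15, 120)
println(avgDelay)
println(histogram)
```

Computing (sum, count) in one pass is exactly why aggregate() exists on RDDs: it avoids a second trip over distributed data just to get the count.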
Frequently Asked Questions
When does the course start and finish?
The course starts now and never ends! It is a completely self-paced online course - you decide when you start and when you finish.
How long do I have access to the course?
How does lifetime access sound? After enrolling, you have unlimited access to this course for as long as you like - across any and all devices you own.
What if I am unhappy with the course?
We would never want you to be unhappy! If you are unsatisfied with your purchase, contact us in the first 30 days and we will give you a full refund.