This course is a zoom-in, zoom-out, hands-on workout involving Hadoop, MapReduce and the art of thinking parallel.
This course is both broad and deep. It covers the individual components of Hadoop in great detail, and also gives you a higher level picture of how they interact with each other.
This course will get you hands-on with Hadoop very early on. You'll learn how to set up your own cluster using both VMs and the Cloud. All the major features of MapReduce are covered - including advanced topics like Total Sort and Secondary Sort.
MapReduce completely changed the way people thought about processing Big Data. Breaking down any problem into parallelizable units is an art. The examples in this course will train you to "think parallel".
Develop advanced MapReduce applications to process Big Data.
Master the art of "thinking parallel" - how to break up a task into Map/Reduce transformations.
Set up your own mini-Hadoop cluster self-sufficiently, whether it's a single node, a physical cluster, or in the cloud.
Use Hadoop + MapReduce to solve a wide variety of problems: from NLP to Inverted Indices to Recommendations.
Understand HDFS, MapReduce and YARN and how they interact with each other.
Understand the basics of performance tuning and managing your own cluster.
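To make the Map/Reduce decomposition concrete, here is a minimal sketch in plain Java (no Hadoop dependency) of the classic word-count pattern: a map phase that emits (word, 1) pairs, and a shuffle-plus-reduce phase that groups by key and sums. The class and method names are illustrative, not part of the Hadoop API; the real course material uses Hadoop's `Mapper` and `Reducer` classes.

```java
import java.util.*;
import java.util.stream.*;

// Conceptual sketch of the Map/Reduce decomposition, without Hadoop:
// map: one input line -> a list of (word, 1) pairs
// reduce: group the pairs by word and sum the counts per word
public class WordCountSketch {

    // Map phase: split a line into words and emit (word, 1) for each.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Shuffle + reduce phase: group pairs by key, then sum each group.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        return pairs.stream().collect(
                Collectors.groupingBy(Map.Entry::getKey,
                        Collectors.summingInt(Map.Entry::getValue)));
    }

    public static void main(String[] args) {
        List<String> input = List.of("the quick brown fox", "the lazy dog");
        List<Map.Entry<String, Integer>> mapped = input.stream()
                .flatMap(line -> map(line).stream())
                .collect(Collectors.toList());
        Map<String, Integer> counts = reduce(mapped);
        System.out.println(counts.get("the")); // "the" appears in both lines
    }
}
```

The key habit this illustrates is that the map step looks at one record at a time with no shared state, which is exactly what lets a real cluster run thousands of mappers in parallel.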
You'll need an IDE where you can write Java code or open the source code that's shared. IntelliJ and Eclipse are both great options.
You'll need some background in Object-Oriented Programming, preferably in Java. All the source code is in Java, and we dive right in without covering Objects, Classes, etc.
A bit of exposure to Linux/Unix shells would be helpful, but it won't be a blocker.
Who is this course intended for?
Analysts who want to leverage the power of HDFS where traditional databases don't cut it anymore.
Engineers who want to develop complex distributed computing applications to process lots of data.
Data Scientists who want to add MapReduce to their bag of tricks for processing data.