To become an expert in the Big Data Hadoop ecosystem, you need an in-depth understanding of building Spark applications with Scala. This course is designed to help you understand the core concepts of Apache Spark, such as Spark Streaming, RDDs, Spark SQL, DataFrames, Datasets, Spark MLlib, Spark GraphX and the Spark shell. In this course you will learn how to customize Spark applications using Scala.
After completing this course you will be able to:
Understand the core concepts of Apache Spark
Write programs in Scala
Work with Spark on a cluster
Understand key Spark features such as Spark Streaming, RDDs and Spark SQL
Program with Spark MLlib and Spark GraphX
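To give a flavour of the Scala style used throughout the course, here is a minimal sketch of the classic word-count pattern. It uses plain Scala collections standing in for an RDD, so it runs without any Spark dependency; in Spark the same shape appears as `sc.textFile(...).flatMap(...).map(...).reduceByKey(...)` (the object and method names below are illustrative, not from the course itself).

```scala
// Word count over plain Scala collections -- the same map/reduce
// pattern Spark's RDD API expresses with flatMap and reduceByKey.
object WordCount {
  def count(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split("\\s+"))                // split each line into words
      .filter(_.nonEmpty)                      // drop empty tokens
      .groupBy(identity)                       // group identical words together
      .map { case (word, ws) => word -> ws.size } // count each group

  def main(args: Array[String]): Unit = {
    val counts = count(Seq("spark scala", "spark streaming"))
    println(counts("spark")) // the word "spark" appears twice
  }
}
```

Once you are comfortable with this functional style on local collections, moving the same transformations onto a distributed RDD is largely a change of data source, not of programming model.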
There are no formal prerequisites for this course, but a basic knowledge of any programming language, databases, SQL queries and Linux fundamentals will help you complete it more quickly.