Course Overview
Apache Hadoop is open source data management software, and one of the hottest topics across the tech industry, that helps organizations analyze massive volumes of structured and unstructured data. Employed by major websites such as Facebook, Yahoo, and eBay, Hadoop can also be run in a cloud environment such as the Windows Azure HDInsight Service, where you pay only for the computing resources you use.
The class format is 50% lecture and 50% lab. Ten hands-on exercises make up the lab portion of the class, including setting up Hadoop in pseudo-distributed mode, managing files in HDFS, writing MapReduce programs in Java, monitoring Hadoop, and working with Sqoop, Hive, and Pig.
Requirements
- Prior knowledge of core Java and SQL is helpful but not mandatory
Curriculum
- Writing Java code for UDFs
- Writing Java code to connect to Hive and perform CRUD operations using JDBC (a minimal sketch follows)
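To make the JDBC item above concrete, here is a minimal sketch of connecting to Hive from JVM code. It assumes HiveServer2 is listening on localhost:10000 and uses a hypothetical `employees` table; adjust the URL, credentials, and schema for your cluster. The sketch is written in Scala, which this course also covers; the JDBC calls are identical from Java.

```scala
import java.sql.DriverManager

object HiveJdbcDemo {
  def main(args: Array[String]): Unit = {
    // Register the HiveServer2 JDBC driver and open a connection
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hive", "")
    val stmt = conn.createStatement()

    // Create: a simple managed table (hypothetical schema)
    stmt.execute("CREATE TABLE IF NOT EXISTS employees (id INT, name STRING)")

    // Read: run a query and walk the result set
    val rs = stmt.executeQuery("SELECT id, name FROM employees")
    while (rs.next()) println(s"${rs.getInt(1)}\t${rs.getString(2)}")
    // Note: UPDATE and DELETE additionally require a transactional (ACID) Hive table

    rs.close(); stmt.close(); conn.close()
  }
}
```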
- Scala (Duration: 4 Hours; a short language sketch follows the Scala topic list below)
- Introduction to Big Data
- Introduction to Hadoop
- Hadoop Distributed File System (HDFS)
- Working with HDFS
- MapReduce Abstraction
- Programming MapReduce Jobs
- MapReduce Features
- Troubleshooting MapReduce Jobs
- Hive – This module will help you understand Hive concepts, loading and querying data in Hive, and Hive UDFs.
- Hive Background, Hive Use Case, About Hive, Hive vs. Pig, Hive Architecture and Components, Metastore in Hive, Limitations of Hive, Comparison with Traditional Databases
- Hive Data Types and Data Models, Partitions and Buckets, Hive Tables (Managed and External), Importing Data, Querying Data, Managing Outputs, Hive Scripts, Hive UDFs, Hive Demo on a Healthcare Dataset
- Hands On
- Understanding the MapReduce flow behind Hive SQL queries
- Creating a static-partition table (see the sketch after this list)
- Creating a dynamic-partition table
- Loading an unstructured text file into a table using the Regex SerDe
- Loading a JSON file into a table using the JSON SerDe
- Creating a transactional table
- Creating views and indexes
- Creating ORC and Parquet tables and applying compression
- Creating a SequenceFile table
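The distinction between static and dynamic partitioning in the exercises above can be sketched as follows, driving HiveQL through JDBC as before. The `sales` and `staging_sales` tables are hypothetical, and the `INSERT ... VALUES` form assumes Hive 0.14 or later.

```scala
import java.sql.DriverManager

object HivePartitionDemo {
  def main(args: Array[String]): Unit = {
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hive", "")
    val stmt = conn.createStatement()

    // Partitioned table: `dt` is a partition column, not a regular data column
    stmt.execute(
      "CREATE TABLE IF NOT EXISTS sales (id INT, amount DOUBLE) PARTITIONED BY (dt STRING)")

    // Static partition: the partition value is spelled out in the statement
    stmt.execute("INSERT INTO TABLE sales PARTITION (dt='2016-01-01') VALUES (1, 9.99)")

    // Dynamic partition: Hive derives `dt` from the last selected column
    stmt.execute("SET hive.exec.dynamic.partition.mode=nonstrict")
    stmt.execute("INSERT INTO TABLE sales PARTITION (dt) SELECT id, amount, dt FROM staging_sales")

    stmt.close(); conn.close()
  }
}
```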
- Collections
- Types
- Options
- Anonymous Classes
- Special Methods
- Closures and Functions
- Implicits
- For Loops
- Varargs
- Partial Functions
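A compact, illustrative sketch of several topics from this module; all names and values are made up for the example.

```scala
object ScalaTour {
  // Varargs: sum accepts any number of Ints
  def sum(xs: Int*): Int = xs.foldLeft(0)(_ + _)

  // Options: a safe lookup instead of null
  val capitals = Map("France" -> "Paris")
  val maybeCapital: Option[String] = capitals.get("Spain") // None

  // Closures and functions: the function value captures `rate`
  val rate = 0.2
  val withTax: Double => Double = price => price * (1 + rate)

  // Partial functions: defined only for even numbers
  val half: PartialFunction[Int, Int] = { case n if n % 2 == 0 => n / 2 }

  // Implicits: the compiler fills in the implicit parameter
  implicit val greeting: String = "hello"
  def greet(name: String)(implicit g: String): String = s"$g, $name"

  def main(args: Array[String]): Unit = {
    println(sum(1, 2, 3))                      // 6
    println(maybeCapital.getOrElse("unknown")) // unknown
    println(withTax(100.0))                    // ~120.0
    println(List(1, 2, 3, 4).collect(half))    // List(1, 2)
    println(greet("world"))                    // hello, world
  }
}
```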
- Introduction to Spark
- Evolution of distributed systems
- Why do we need a new generation of distributed systems?
- Limitations of MapReduce in Hadoop
- Understanding the need for batch vs. real-time analytics
- Batch Analytics – Hadoop Ecosystem Overview; Real-Time Analytics Options
- Introduction to stream and in-memory analysis
- What is Spark?
- A Brief History: Spark
- Using Scala for Creating Spark Applications
- Invoking the Spark Shell
- Creating the SparkContext (a minimal sketch follows this list)
- Loading a File in the Shell
- Performing Some Basic Operations on Files in the Spark Shell
- Building a Spark Project with sbt
- Running a Spark Project with sbt
- Caching Overview
- Distributed Persistence
- Spark Streaming Overview
- Example: Streaming Word Count
- Testing Tips in Scala
- Performance Tuning Tips in Spark
- Shared Variables: Broadcast Variables
- Shared Variables: Accumulators
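As a reference point for the items above, here is a minimal word-count application using the Spark 1.x-style RDD API; the input path and app name are hypothetical.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Create the SparkContext (local mode here; use your cluster master in practice)
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Load a file and apply a few basic transformations, then one action
    val lines = sc.textFile("hdfs:///data/input.txt") // hypothetical path
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.take(10).foreach(println)
    sc.stop()
  }
}
```

The same statements can be typed line by line into the Spark shell, where `sc` is already provided.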
- Hands On
- Installing Spark
- Installing sbt and Maven for building the project
- Writing code for converting HDFS data into an RDD
- Writing code for performing different transformations and actions
- Understanding the tasks and stages of a Spark job
- Writing code that uses different storage levels and caching
- Creating and using broadcast variables and accumulators (sketched below)
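A minimal sketch of the shared-variable and storage-level exercises, again with a hypothetical input path; `sc.accumulator` is the Spark 1.x API.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object SharedVariablesDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SharedVariablesDemo").setMaster("local[*]"))

    val lookup = sc.broadcast(Map("ERROR" -> 3, "WARN" -> 2, "INFO" -> 1)) // read-only copy shipped to each executor
    val badLines = sc.accumulator(0, "badLines") // counter the driver reads back after an action

    val scored = sc.textFile("hdfs:///data/logs") // hypothetical path
      .map { line =>
        val level = line.takeWhile(_ != ' ')
        lookup.value.getOrElse(level, { badLines += 1; 0 }) // default evaluated only on a miss
      }
      .persist(StorageLevel.MEMORY_AND_DISK) // an explicit storage level instead of plain cache()

    println(s"total score: ${scored.sum()}, unparsed lines: ${badLines.value}")
    sc.stop()
  }
}
```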
- Running SQL Queries Using Spark SQL (a short sketch follows the topic list)
- Starting Point: SQLContext
- Creating DataFrames
- DataFrame Operations
- Running SQL Queries Programmatically
- Interoperating with RDDs
- Inferring the Schema Using Reflection
- Data Sources
- Generic Load/Save Functions
- Save Modes
- Saving to Persistent Tables
- Parquet Files
- Loading Data Programmatically
- Partition Discovery
- Schema Merging
- JSON Datasets
- Hive Tables
- JDBC To Other Databases
- HBase Integration
- Reading Solr Results as a DataFrame
- Troubleshooting
- Performance Tuning
- Caching Data In Memory
- Compatibility with Apache Hive
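A minimal Spark 1.x-style sketch tying several of these topics together: the SQLContext entry point, a DataFrame created from a JSON dataset, programmatic SQL, and a Parquet save with an explicit save mode. The file paths are hypothetical.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object SparkSqlDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SparkSqlDemo").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc) // the Spark 1.x starting point

    // Create a DataFrame from a JSON dataset; the schema is inferred
    val people = sqlContext.read.json("hdfs:///data/people.json")
    people.printSchema()

    // DataFrame operations
    people.select("name").show()
    people.filter(people("age") > 21).show()

    // Running SQL queries programmatically against a temporary table
    people.registerTempTable("people")
    sqlContext.sql("SELECT name, age FROM people WHERE age > 21").show()

    // Generic save with an explicit save mode: write the result as Parquet
    people.write.mode("overwrite").parquet("hdfs:///out/people.parquet")
    sc.stop()
  }
}
```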
- Hands On
- Writing code for creating SparkContext, HiveContext, and HBaseContext objects
- Writing code for running Hive queries using Spark SQL
- Writing code for loading and transforming text file data and converting it into a DataFrame
- Writing code for reading and storing JSON files as DataFrames inside Spark code
- Writing code for reading and storing Parquet files as DataFrames
- Reading and writing data into an RDBMS (MySQL, for example) using Spark SQL (see the sketch after this list)
- Caching DataFrames
- Java code for reading Solr results as a DataFrame
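For the RDBMS exercise, a sketch of reading from and writing to MySQL over JDBC (Spark 1.4+ API); the URL, table names, and credentials are hypothetical, and the MySQL connector JAR must be on the classpath.

```scala
import java.util.Properties
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object JdbcDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("JdbcDemo").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    val props = new Properties()
    props.setProperty("user", "root")       // hypothetical credentials
    props.setProperty("password", "secret")

    // Read a MySQL table into a DataFrame over JDBC
    val orders = sqlContext.read.jdbc("jdbc:mysql://localhost:3306/shop", "orders", props)

    // Cache the DataFrame before reusing it across several queries
    orders.cache()
    orders.groupBy("status").count().show()

    // Write a derived DataFrame back to MySQL
    orders.filter(orders("status") === "OPEN")
      .write.mode("append")
      .jdbc("jdbc:mysql://localhost:3306/shop", "open_orders", props)

    sc.stop()
  }
}
```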
- Spark Streaming (a minimal DStream sketch follows the topic list)
- Micro batch
- Discretized Streams (DStreams)
- Input DStreams and Receivers
- DStream to RDD
- Basic Sources
- Advanced Sources
- Transformations on DStreams
- Output Operations on DStreams
- Design Patterns for using foreachRDD
- DataFrame and SQL Operations
- Checkpointing
- Socket stream
- File Stream
- Stateful operations
- How stateful operations work
- Window Operations
- Join Operations
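A minimal DStream sketch covering micro batches, a socket source, a window operation, and checkpointing (Spark 1.x streaming API); the host, port, and checkpoint path are hypothetical.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    // Two local threads: one for the receiver, one for processing
    val conf = new SparkConf().setAppName("StreamingWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))   // 5-second micro batches
    ssc.checkpoint("hdfs:///checkpoints/streaming")    // enables stateful operations

    // Socket stream source, e.g. fed by `nc -lk 9999`
    val lines = ssc.socketTextStream("localhost", 9999)
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKeyAndWindow(_ + _, Seconds(60), Seconds(10)) // 60s window, sliding every 10s

    counts.print() // output operation: prints a sample of each batch
    ssc.start()
    ssc.awaitTermination()
  }
}
```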
- Tuning Spark
- Spark ML Programming
- Hands On
- Data Loading
- Flume and Sqoop
- Kafka
- Hands On