• Online, Self-Paced
Course Description

Apache Spark is an open-source cluster-computing framework widely used in data science, and it has become a de facto standard for big data processing. In this Skillsoft Aspire course, you will explore the basics of Apache Spark, an analytics engine for working with big data that can run on top of Hadoop. Discover how it supports operations on data both through its own library methods and through SQL, while delivering strong performance.

Learning Objectives

Accessing Data with Spark: An Introduction to Spark

  • Course Overview
  • recognize where Spark fits in with Hadoop and its components
  • describe Spark RDDs and their characteristics, including what makes them resilient and distributed
  • identify the types of operations that are permitted on an RDD and describe how RDD transformations are lazily evaluated
  • distinguish between RDDs and DataFrames and describe the relationship between the two
  • list the crucial components of Spark and the relationships between them and recognize the functions of the Spark Session, Master, and Worker nodes
  • install PySpark and initialize a Spark Context
  • create and load data into an RDD
  • initialize a Spark DataFrame from the contents of an RDD
  • work with Spark DataFrames containing both primitive and structured data types
  • define the contents of a DataFrame using the SQLContext
  • apply the map() function on an RDD to configure a DataFrame with column headers
  • retrieve required data from within a DataFrame and define and apply transformations on a DataFrame
  • convert Spark DataFrames to Pandas DataFrames and vice versa
  • describe basic Spark concepts

Framework Connections

The materials within this course focus on the Knowledge, Skills, and Abilities (KSAs) identified within the Specialty Areas listed below. Click to view Specialty Area details within the interactive National Cybersecurity Workforce Framework.