Data Streaming and Real Time Data Processing Training Course
Course Overview
This program offers a practical and structured introduction to developing real-time data streaming systems. It explores core concepts, architectural patterns, and industry-standard tools utilized for processing continuous data at scale. Participants will acquire the skills to design, implement, and optimize streaming pipelines using modern frameworks. The curriculum advances from foundational theories to hands-on applications, empowering learners to confidently construct production-ready real-time solutions.
Training Format
• Instructor-led sessions with guided explanations
• Concept walkthroughs accompanied by real-world examples
• Hands-on demonstrations and coding exercises
• Progressive labs aligned with daily topics
• Interactive discussions and Q&A sessions
Course Objectives
• Comprehend real-time data streaming concepts and system architecture
• Distinguish between batch and streaming data processing models
• Design scalable and fault-tolerant streaming pipelines
• Utilize distributed streaming tools and frameworks
• Apply event time processing, windowing, and stateful operations
• Build and optimize real-time data solutions tailored to business use cases
This course is available as onsite live training in South Korea or online live training.
Course Outline
Day 1
• Introduction to data streaming concepts
• Fundamentals of batch versus real-time processing
• Basics of event-driven architecture
• Common industry use cases
• Overview of the streaming ecosystem
Day 2
• Streaming architecture design patterns
• Fundamentals of distributed messaging systems
• Understanding producers and consumers (see the sketch after this list)
• Topics, partitions, and data flow
• Data ingestion strategies
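As a preview of the Day 2 material, the sketch below shows a minimal producer/consumer pair using the kafka-python client. The broker address, topic name, and consumer group ID are illustrative assumptions, not prescribed course content.

```python
# Minimal producer/consumer sketch with the kafka-python client.
# Assumes a broker at localhost:9092 and a topic named "events"
# (both illustrative choices).
import json
from kafka import KafkaProducer, KafkaConsumer

# Producer: serialize each event as JSON and publish it to the topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("events", {"sensor_id": 42, "temperature": 21.5})
producer.flush()  # block until buffered records are delivered

# Consumer: join a consumer group and read the topic from the beginning.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="demo-group",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for record in consumer:
    # Each record carries its topic, partition, and offset alongside the value.
    print(record.topic, record.partition, record.offset, record.value)
```

Running several consumers with the same group ID splits the topic's partitions among them, which is the basic unit of parallelism in this model.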
Day 3
• Stream processing concepts and frameworks
• Event time versus processing time
• Windowing techniques and their use cases (see the sketch after this list)
• Stateful stream processing
• Basics of fault tolerance and checkpointing
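The Day 3 concepts fit together in a few lines of PySpark Structured Streaming, one of several frameworks that implement them. In the sketch below, the built-in rate source, the window and watermark durations, and the checkpoint path are illustrative assumptions.

```python
# Event-time windowing with a watermark and checkpointing in PySpark
# Structured Streaming (illustrative sketch).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("windowing-demo").getOrCreate()

# The built-in "rate" source emits (timestamp, value) rows for experimentation.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Group by tumbling 1-minute windows over the event-time column. The
# watermark tells the engine how long to wait for late events before
# a window's state can be finalized and dropped.
counts = (
    events
    .withWatermark("timestamp", "2 minutes")
    .groupBy(window(col("timestamp"), "1 minute"))
    .count()
)

# The checkpoint directory persists offsets and window state, so the
# query can recover from failure without losing or duplicating results.
query = (
    counts.writeStream
    .outputMode("update")
    .format("console")
    .option("checkpointLocation", "/tmp/windowing-checkpoint")
    .start()
)
query.awaitTermination()
```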
Day 4
• Data transformation within streaming pipelines
• ETL and ELT processes in real-time systems
• Schema management and evolution
• Stream joins and enrichment (see the sketch after this list)
• Introduction to cloud-based streaming services
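One common Day 4 pattern is enrichment via a stream-static join: each incoming event is matched against a slowly changing reference table. The sketch below uses PySpark Structured Streaming; the Kafka topic, broker address, reference table path, and column names are illustrative assumptions, and the spark-sql-kafka connector package is assumed to be available.

```python
# Stream enrichment via a stream-static join (illustrative sketch).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("enrichment-demo").getOrCreate()

# Streaming sources cannot infer schemas, which is one reason schema
# management matters in real-time pipelines: declare it explicitly.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
])

# Streaming side: raw JSON events from a Kafka topic.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "readings")
    .load()
)
events = (
    raw.select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Static side: a reference table of device metadata used for enrichment.
devices = spark.read.parquet("/data/devices")  # columns: device_id, location

# Left join keeps every event, attaching metadata where a match exists.
enriched = events.join(devices, on="device_id", how="left")

enriched.writeStream.format("console").start().awaitTermination()
```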
Day 5
• Monitoring and observability in streaming systems
• Security and access control fundamentals
• Performance tuning and optimization
• End-to-end pipeline design review
• Real-world use cases, including fraud detection and IoT processing
Open Training Courses require 5+ participants.
Testimonials (1)
Hands-on exercises. Class should have been 5 days, but the 3 days helped to clear up a lot of questions that I had from working with NiFi already.
James - BHG Financial
Course - Apache NiFi for Administrators
Related Courses
Administrator Training for Apache Hadoop
35 Hours
Target Audience:
This course is designed for IT professionals seeking solutions to store and process large-scale datasets within distributed system environments.
Course Objective:
To provide in-depth knowledge of Hadoop cluster administration.
Big Data Analytics with Google Colab and Apache Spark
14 Hours
This instructor-led live training in South Korea (online or onsite) targets intermediate-level data scientists and engineers who want to apply Google Colab and Apache Spark for big data processing and analytics.
By the conclusion of this training, participants will be able to:
- Establish a big data environment using Google Colab and Spark.
- Process and analyze large datasets efficiently with Apache Spark.
- Visualize big data in a collaborative environment.
- Integrate Apache Spark with cloud-based tools.
Big Data Analytics in Health
21 Hours
Big data analytics encompasses the examination of extensive and diverse datasets to uncover correlations, hidden patterns, and other valuable insights.
The healthcare sector generates vast quantities of complex, heterogeneous medical and clinical data. Leveraging big data analytics within this domain holds significant potential for deriving insights that enhance healthcare delivery. However, the sheer volume of these datasets presents substantial challenges for analysis and practical implementation in clinical settings.
In this instructor-led, live remote training, participants will learn how to conduct big data analytics in healthcare by engaging in a series of hands-on live laboratory exercises.
Upon completion of this training, participants will be able to:
- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Grasp the unique characteristics of medical data
- Apply big data techniques to manage medical data
- Examine big data systems and algorithms within the context of health applications
Audience
- Developers
- Data Scientists
Format of the Course
- A blend of lectures, discussions, exercises, and extensive hands-on practice.
Note
- To request customized training for this course, please contact us to arrange it.
Hadoop For Administrators
21 Hours
Apache Hadoop is the leading framework for processing Big Data across server clusters. Over three (optionally four) days, participants will explore the business value and use cases of Hadoop and its ecosystem, learn to plan for cluster deployment and scalability, and master the installation, maintenance, monitoring, troubleshooting, and optimization of Hadoop. Attendees will also gain practical experience with bulk data loading, become familiar with various Hadoop distributions, and practice installing and managing ecosystem tools. The course concludes with a discussion on securing clusters using Kerberos.
“...The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized.”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Hadoop administrators
Format
Lectures combined with hands-on labs, with an approximate balance of 60% lectures and 40% labs.
Hadoop for Developers (4 days)
28 Hours
Apache Hadoop stands as the leading framework for processing Big Data across server clusters. This course provides developers with an introduction to the key components of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, and HBase.
Advanced Hadoop for Developers
21 Hours
Apache Hadoop stands as one of the most widely adopted frameworks for processing Big Data across server clusters. This course provides an in-depth exploration of data management within HDFS, alongside advanced techniques in Pig, Hive, and HBase. These sophisticated programming strategies are designed to be particularly advantageous for seasoned Hadoop developers.
Audience: developers
Duration: three days
Format: lectures (50%) and hands-on labs (50%).
Hadoop Administration on MapR
28 Hours
Target Audience:
This course is designed to demystify big data and Hadoop technologies, demonstrating that these concepts are accessible and straightforward to grasp.
Hadoop and Spark for Administrators
35 Hours
This instructor-led, live training in South Korea (online or onsite) is designed for system administrators who wish to learn how to set up, deploy, and manage Hadoop clusters within their organizations.
By the end of this training, participants will be able to:
- Install and configure Apache Hadoop.
- Grasp the four core components of the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Leverage the Hadoop Distributed File System (HDFS) to scale a cluster across hundreds or thousands of nodes.
- Configure HDFS to serve as the storage engine for on-premise Spark deployments.
- Configure Spark to access alternative storage solutions like Amazon S3 and NoSQL databases such as Redis, Elasticsearch, Couchbase, Aerospike, and others.
- Perform essential administrative tasks, including provisioning, management, monitoring, and securing an Apache Hadoop cluster.
HBase for Developers
21 Hours
This course provides an introduction to HBase, a NoSQL database built on top of Hadoop. It is designed for developers who intend to build applications using HBase, as well as administrators responsible for managing HBase clusters.
The curriculum guides developers through HBase architecture, data modeling, and application development. It also covers the integration of MapReduce with HBase and addresses administrative topics related to performance optimization. The course is highly practical, featuring numerous laboratory exercises.
Duration: 3 days
Audience: Developers & Administrators
Apache NiFi for Administrators
21 Hours
Apache NiFi is an open-source platform for data integration and event processing that utilizes a flow-based architecture. It facilitates the automated, real-time routing, transformation, and mediation of data between diverse systems, featuring a web-based user interface and granular control capabilities.
This instructor-led live training, available either onsite or remotely, is designed for intermediate-level administrators and engineers who aim to deploy, manage, secure, and optimize NiFi dataflows within production environments.
Upon completing this training, participants will be capable of:
- Installing, configuring, and maintaining Apache NiFi clusters.
- Designing and managing dataflows originating from various sources and destinations.
- Implementing logic for flow automation, routing, and transformation.
- Optimizing performance, monitoring operations, and resolving issues.
Course Format
- Interactive lectures accompanied by discussions on real-world architecture.
- Practical labs focused on building, deploying, and managing dataflows.
- Scenario-based exercises conducted in a live-lab environment.
Course Customization Options
- For inquiries regarding customized training for this course, please reach out to us to arrange details.
Apache NiFi for Developers
7 Hours
In this instructor-led live training in South Korea, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components, and processors using Apache NiFi.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
PySpark and Machine Learning
21 Hours
This training offers a hands-on introduction to developing scalable data processing and Machine Learning workflows with PySpark. Participants will gain insights into how Apache Spark functions within contemporary Big Data ecosystems and learn to efficiently handle large datasets by applying distributed computing principles.
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led, live training in South Korea, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises.
By the end of this training, participants will be able to:
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real-world cases.
- Use different tools and techniques for big data analysis using PySpark.
Python, Spark, and Hadoop for Big Data
21 Hours
This instructor-led, live training in South Korea (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets.
By the end of this training, participants will be able to:
- Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
- Understand the features, core components, and architecture of Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for big data processing.
- Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
- Build collaborative filtering recommendation systems like those used by Netflix, YouTube, Amazon, Spotify, and Google.
- Use Apache Mahout to scale machine learning algorithms.
Stratio: Rocket and Intelligence Modules with PySpark
14 Hours
Stratio offers a data-centric platform that seamlessly integrates big data, artificial intelligence, and governance into a unified solution. Its Rocket and Intelligence modules empower organizations to perform rapid data exploration, transformation, and advanced analytics within enterprise settings.
This instructor-led live training, available both online and onsite, targets intermediate-level data professionals looking to effectively utilize Stratio's Rocket and Intelligence modules with PySpark. The curriculum emphasizes looping structures, user-defined functions (UDFs), and complex data logic.
Upon completion of this training, participants will be equipped to:
- Navigate and operate within the Stratio platform using the Rocket and Intelligence modules.
- Apply PySpark techniques for data ingestion, transformation, and analysis.
- Utilize loops and conditional logic to manage data workflows and execute feature engineering tasks.
- Develop and manage user-defined functions (UDFs) to enable reusable data operations within PySpark.
Course Format
- Engaging lectures and interactive discussions.
- Numerous exercises and hands-on practice sessions.
- Practical implementation exercises in a live laboratory environment.
Customization Options
- For tailored training requests, please contact us to arrange specific requirements.