Safe & Explainable Robotics: Verification, Safety Cases & Ethics Training Course
Safe & Explainable Robotics is a comprehensive training focused on the safety, verification, and ethical governance of robotic systems. The course bridges theory and practice by exploring safety case methodologies, hazard analysis, and explainable AI approaches that make robotic decision-making transparent and trustworthy. Participants will learn how to ensure compliance, verify behaviors, and document safety assurance in line with international standards.
This instructor-led, live training (online or onsite) is aimed at intermediate-level professionals who wish to apply verification, validation, and explainability principles to ensure the safe and ethical deployment of robotic systems.
By the end of this training, participants will be able to:
- Develop and document safety cases for robotic and autonomous systems.
- Apply verification and validation techniques in simulation environments.
- Understand explainable AI frameworks for robotics decision-making.
- Integrate safety and ethics principles into system design and operation.
- Communicate safety and transparency requirements to stakeholders.
Format of the Course
- Interactive lecture and discussion.
- Hands-on simulation and safety analysis exercises.
- Case studies from real-world robotics applications.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Course Outline
Introduction to Safety and Explainability in Robotics
- Overview of safety and transparency in robotic systems
- Regulatory and ethical context for robotics and AI
- Standards and frameworks: ISO 26262, ISO 10218, and ISO/IEC 42001
Risk and Hazard Analysis
- Identifying hazards in autonomous and semi-autonomous systems
- Performing Failure Mode and Effects Analysis (FMEA)
- Quantifying risk and mitigation through safety design
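To make the scoring concrete, here is a minimal sketch of FMEA Risk Priority Number (RPN) calculation of the kind practiced in this module; the failure modes, the 1-10 rating scales, and all values below are hypothetical illustrations, not taken from the course materials.

```python
# Minimal FMEA risk-scoring sketch. Each failure mode is rated 1-10 for
# Severity (S), Occurrence (O), and Detection (D); RPN = S * O * D.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number for one failure mode."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores must be in 1..10")
    return severity * occurrence * detection

# Hypothetical failure modes for an autonomous mobile robot.
failure_modes = [
    ("LiDAR dropout during navigation", 8, 3, 4),
    ("E-stop relay fails to open",      10, 2, 6),
    ("Wheel encoder drift",             4, 6, 3),
]

# Rank failure modes by RPN, highest risk first.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name}: RPN = {rpn(s, o, d)}")
```

The ranking drives mitigation priority: the highest-RPN mode is addressed first, then rescored after the safety design change.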
Verification and Validation Techniques
- Testing robotic behaviors in simulated environments
- Formal verification and test case design
- Data-driven validation and monitoring techniques
Safety Case Development
- Structure and content of a safety case
- Documenting compliance and traceability
- Using tools for evidence management and risk justification
Explainable AI for Robotics
- Making decision-making processes transparent
- Interpretability techniques for ML-based control systems
- Explaining robotic behaviors to users and regulators
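One simple interpretability technique in this space, perturbation-based sensitivity analysis, can be sketched in a few lines; the toy linear controller, its weights, and the feature names below are hypothetical stand-ins for a learned policy, not part of the course materials.

```python
# Perturbation-based sensitivity sketch: perturb each input slightly and
# measure how much the controller's output moves.

def controller(obs):
    """Toy speed command computed from three normalized sensor features."""
    w = [0.8, -0.5, 0.1]  # hypothetical learned weights
    return sum(wi * oi for wi, oi in zip(w, obs))

def sensitivities(f, obs, eps=1e-4):
    """Finite-difference attribution score for each input feature."""
    base = f(obs)
    scores = []
    for i in range(len(obs)):
        perturbed = list(obs)
        perturbed[i] += eps
        scores.append((f(perturbed) - base) / eps)
    return scores

features = ["obstacle_distance", "heading_error", "battery_level"]
obs = [0.9, 0.2, 0.7]
for name, s in zip(features, sensitivities(controller, obs)):
    print(f"{name}: {s:+.2f}")
```

For this linear toy the scores recover the weights exactly; for a real ML-based controller the same probing reveals which sensor inputs dominate a given decision.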
Ethical and Governance Considerations
- Ethical principles in robotics and autonomous systems
- Bias, accountability, and responsibility in AI-driven robotics
- Balancing innovation with public trust and regulation
Hands-On Workshop: Building a Safe and Explainable Robotics Scenario
- Designing a small robotic simulation in ROS 2 with Gazebo
- Applying verification and validation procedures
- Developing and presenting a safety case summary
Summary and Next Steps
Requirements
- Basic understanding of robotics systems and control architectures
- Familiarity with Python programming and simulation tools
- Knowledge of system engineering or safety processes
Audience
- System engineers working on robotics or autonomous systems
- Safety officers ensuring compliance with functional safety standards
- Technical managers overseeing robotics integration and deployment
Open Training Courses require 5+ participants.
Testimonials (2)
Supply of the materials (virtual machine) to get straight into the exercises, and the explanation of the ROS 2 core: why things work a certain way.
Arjan Bakema
Course - Autonomous Navigation & SLAM with ROS 2
Its knowledge and utilization of AI for robotics in the future.
Ryle - PHILIPPINE MILITARY ACADEMY
Course - Artificial Intelligence (AI) for Robotics
Related Courses
Artificial Intelligence (AI) for Robotics
21 Hours
Robotics powered by Artificial Intelligence (AI) integrates machine learning, control systems, and sensor fusion to build intelligent machines that can perceive, reason, and act autonomously. Utilizing contemporary tools such as ROS 2, TensorFlow, and OpenCV, engineers can now design robots that navigate, plan, and interact with real-world environments in an intelligent manner.
This instructor-led live training (available online or onsite) is designed for intermediate-level engineers who want to develop, train, and deploy AI-driven robotic systems using current open-source technologies and frameworks.
Upon completion of this training, participants will be able to:
- Utilize Python and ROS 2 to construct and simulate robotic behaviors.
- Implement Kalman and Particle Filters for localization and tracking tasks.
- Apply computer vision techniques via OpenCV for perception and object detection.
- Leverage TensorFlow for motion prediction and learning-based control.
- Integrate SLAM (Simultaneous Localization and Mapping) for autonomous navigation.
- Develop reinforcement learning models to enhance robotic decision-making.
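As a taste of the filtering objective above, a one-dimensional Kalman filter (constant-state model with Gaussian noise) can be sketched as follows; the noise variances and the measurement sequence are illustrative assumptions, not values from the course.

```python
# Minimal 1-D Kalman filter sketch: estimate a scalar state from noisy
# measurements under a constant-state model.

def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                # predict: uncertainty grows by process noise
        k = p / (p + r)       # Kalman gain: how much to trust the measurement
        x += k * (z - x)      # update: blend prediction and measurement
        p *= (1.0 - k)        # shrink uncertainty after the update
        estimates.append(x)
    return estimates

noisy = [5.2, 4.8, 5.1, 4.9, 5.0, 5.3, 4.7]  # noisy readings of a value near 5.0
est = kalman_1d(noisy)
# The estimate moves from the prior x0 = 0.0 toward the true value near 5.0.
```

The same predict/update structure generalizes to the multivariate filters used for robot localization and tracking.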
Course Format
- Interactive lectures and discussions.
- Practical implementation using ROS 2 and Python.
- Hands-on exercises involving simulated and real robotic environments.
Customization Options
- To request customized training for this course, please contact us to arrange the details.
AI and Robotics for Nuclear - Extended
120 Hours
This instructor-led, live training in South Korea (online or onsite) provides participants with the technologies, frameworks, and techniques necessary for programming diverse robotic systems used in nuclear technology and environmental systems.
The six-week course meets five days a week. Each day consists of four hours of instruction, including lectures, discussions, and hands-on robot development in a live lab. Participants will work on real-world projects applicable to their jobs to practice their acquired knowledge.
The hardware targets for this course are simulated in 3D via simulation software. The open-source framework ROS (Robot Operating System), along with C++ and Python, will be utilized for robot programming.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Extend a robot's ability to perform complex tasks through Deep Learning.
- Test and troubleshoot a robot in realistic scenarios.
AI and Robotics for Nuclear
80 Hours
In this instructor-led, live training in South Korea (online or onsite), participants will learn the technologies, frameworks, and techniques for programming various types of robots used in the field of nuclear technology and environmental systems.
The 4-week course is held 5 days a week. Each day is 4-hours long and consists of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete various real-world projects applicable to their work in order to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The code will then be loaded onto physical hardware (Arduino or other) for final deployment testing. The ROS (Robot Operating System) open-source framework, C++ and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Test and troubleshoot a robot in realistic scenarios.
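The PID objective above can be illustrated with a short discrete control loop; the gains, the time step, and the single-integrator plant model are hypothetical tuning choices for illustration, not values from the course.

```python
# Minimal discrete PID loop sketch driving a simple plant toward a setpoint.

def pid_step(error, state, kp=1.2, ki=0.4, kd=0.05, dt=0.1):
    """One PID update; `state` carries (integral, previous_error)."""
    integral, prev_error = state
    integral += error * dt                    # accumulate the integral term
    derivative = (error - prev_error) / dt    # finite-difference derivative
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

setpoint, position = 1.0, 0.0
state = (0.0, 0.0)
for _ in range(100):
    error = setpoint - position
    control, state = pid_step(error, state)
    position += control * 0.1                 # single-integrator plant, dt = 0.1
```

After the loop the position has settled close to the setpoint; tuning kp, ki, and kd trades off response speed, overshoot, and steady-state error.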
Autonomous Navigation & SLAM with ROS 2
21 Hours
ROS 2 (Robot Operating System 2) is an open-source framework designed to support the development of complex and scalable robotic applications.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers and developers who wish to implement autonomous navigation and SLAM (Simultaneous Localization and Mapping) using ROS 2.
By the end of this training, participants will be able to:
- Set up and configure ROS 2 for autonomous navigation applications.
- Implement SLAM algorithms for mapping and localization.
- Integrate sensors such as LiDAR and cameras with ROS 2.
- Simulate and test autonomous navigation in Gazebo.
- Deploy navigation stacks on physical robots.
Format of the Course
- Interactive lecture and discussion.
- Hands-on practice using ROS 2 tools and simulation environments.
- Live-lab implementation and testing on virtual or physical robots.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Developing Intelligent Bots with Azure
14 Hours
Azure Bot Service integrates the Microsoft Bot Framework with Azure Functions to deliver a robust platform for rapidly developing intelligent bots.
Through this instructor-led live training, participants will learn how to efficiently develop intelligent bots using Microsoft Azure.
By the conclusion of the training, participants will be able to:
- Grasp the fundamental concepts behind intelligent bots.
- Develop intelligent bots leveraging cloud-based applications.
- Acquire practical expertise in the Microsoft Bot Framework, Bot Builder SDK, and Azure Bot Service.
- Implement established bot design patterns in real-world scenarios.
- Create and deploy their first intelligent bot using Microsoft Azure.
Target Audience
This course is tailored for developers, hobbyists, engineers, and IT professionals interested in bot development.
Course Format
The training blends lectures and discussions with exercises, emphasizing hands-on practice.
Computer Vision for Robotics: Perception with OpenCV & Deep Learning
21 Hours
OpenCV serves as an open-source computer vision library that facilitates real-time image processing, while deep learning frameworks like TensorFlow offer the necessary tools for intelligent perception and decision-making in robotic systems.
Guided by an instructor, this live training (available online or on-site) targets intermediate robotics engineers, computer vision specialists, and machine learning professionals looking to leverage computer vision and deep learning techniques to enhance robotic perception and autonomy.
Upon completion of this training, participants will be capable of:
- Developing computer vision pipelines using OpenCV.
- Integrating deep learning models for object detection and recognition tasks.
- Leveraging vision-based data for robotic control and navigation.
- Blending classical vision algorithms with deep neural networks.
- Deploying computer vision systems on embedded and robotic platforms.
Course Format
- Interactive lectures and discussions.
- Practical exercises using OpenCV and TensorFlow.
- Live laboratory implementation on simulated or physical robotic systems.
Customization Options
- To arrange customized training for this course, please contact us for details.
Developing a Bot
14 Hours
A bot, or chatbot, acts as a digital assistant designed to automate user interactions across various messaging platforms, enabling tasks to be completed more efficiently without requiring direct human involvement.
In this instructor-led live training, participants will learn the fundamentals of bot development by building sample chatbots using specialized development tools and frameworks.
By the conclusion of this training, participants will be able to:
- Identify various applications and use cases for bots.
- Grasp the complete lifecycle of bot development.
- Explore the tools and platforms utilized in bot construction.
- Construct a sample chatbot for Facebook Messenger.
- Develop a sample chatbot using the Microsoft Bot Framework.
Audience
- Developers interested in creating their own bots
Course Format
- A blend of lectures, discussions, exercises, and extensive hands-on practice
Edge AI for Robots: TinyML, On-Device Inference & Optimization
21 Hours
Edge AI allows artificial intelligence models to operate directly on embedded or resource-limited devices, thereby minimizing latency and power usage while enhancing the autonomy and privacy of robotic systems.
This instructor-led, live training session (available online or onsite) is designed for intermediate-level embedded developers and robotics engineers who aim to deploy machine learning inference and optimization techniques directly onto robotic hardware using TinyML and edge AI frameworks.
Upon completing this training, participants will be capable of:
- Grasping the core principles of TinyML and edge AI in robotics.
- Converting and deploying AI models for on-device inference.
- Optimizing models for speed, compactness, and energy efficiency.
- Integrating edge AI systems into robotic control architectures.
- Evaluating performance and accuracy in real-world scenarios.
Course Format
- Interactive lectures and discussions.
- Practical hands-on exercises utilizing TinyML and edge AI toolchains.
- Practical applications on embedded and robotic hardware platforms.
Customization Options
- To request customized training for this course, please contact us to arrange the details.
Human-Centric Physical AI: Collaborative Robots and Beyond
14 Hours
This instructor-led, live training in South Korea (online or onsite) targets intermediate-level participants interested in exploring the role of collaborative robots (cobots) and other human-centric AI systems in modern workplaces.
Upon completion of this training, participants will be able to:
- Comprehend the principles of Human-Centric Physical AI and its practical applications.
- Examine how collaborative robots contribute to enhancing workplace productivity.
- Identify and resolve challenges associated with human-machine interactions.
- Develop workflows that optimize collaboration between humans and AI-driven systems.
- Foster a culture of innovation and adaptability in AI-integrated workplaces.
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control
21 Hours
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control is a practical, hands-on course designed to introduce participants to the design and implementation of intuitive interfaces for human–robot communication. This training integrates theoretical foundations, design principles, and programming practice to help learners build natural and responsive interaction systems using speech, gesture, and shared control techniques. Participants will learn how to integrate perception modules, develop multimodal input systems, and design robots that safely collaborate with humans.
This instructor-led, live training (available online or onsite) is aimed at beginner-level to intermediate-level participants who wish to design and implement human–robot interaction systems that enhance usability, safety, and user experience.
By the end of this training, participants will be able to:
- Understand the foundations and design principles of human–robot interaction.
- Develop voice-based control and response mechanisms for robots.
- Implement gesture recognition using computer vision techniques.
- Design collaborative control systems for safe and shared autonomy.
- Evaluate HRI systems based on usability, safety, and human factors.
Format of the Course
- Interactive lectures and demonstrations.
- Hands-on coding and design exercises.
- Practical experiments in simulation or real robotic environments.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins
28 Hours
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins is a practical course designed to bridge the gap between industrial automation and modern robotics frameworks. Participants will gain hands-on experience integrating ROS-based robotic systems with PLCs for synchronized operations, while also exploring digital twin environments to simulate, monitor, and optimize production processes. The curriculum emphasizes system interoperability, real-time control capabilities, and predictive analysis by utilizing digital replicas of physical systems.
This instructor-led live training is available in both online and onsite formats, catering to intermediate-level professionals seeking to develop practical skills in connecting ROS-controlled robots with PLC environments and implementing digital twins to enhance automation and manufacturing efficiency.
Upon completion of this training, participants will be equipped to:
- Comprehend the communication protocols linking ROS and PLC systems.
- Execute real-time data exchange between robotic units and industrial controllers.
- Create digital twins for monitoring, testing, and simulating processes.
- Seamlessly integrate sensors, actuators, and robotic manipulators into industrial workflows.
- Design and validate industrial automation systems using hybrid simulation environments.
Course Format
- Interactive lectures and detailed architecture walkthroughs.
- Hands-on exercises focused on ROS and PLC system integration.
- Implementation of simulation and digital twin projects.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Artificial Intelligence (AI) for Mechatronics
21 Hours
This instructor-led live training, held in South Korea (either online or onsite), is intended for engineers interested in applying artificial intelligence to mechatronic systems.
Upon completing this training, participants will be able to:
- Gain a comprehensive understanding of artificial intelligence, machine learning, and computational intelligence.
- Understand the principles of neural networks and various learning methodologies.
- Select the most effective AI approaches for solving real-world problems.
- Apply AI solutions within the field of mechatronic engineering.
Multi-Robot Systems and Swarm Intelligence
28 Hours
This advanced training course delves into the design, coordination, and control of robotic teams, drawing inspiration from biological swarm behaviors within the domain of Multi-Robot Systems and Swarm Intelligence. Participants will acquire the skills to model agent interactions, execute distributed decision-making processes, and optimize collaboration across multiple entities. By blending theoretical concepts with practical simulation exercises, the course prepares learners for real-world applications in logistics, defense, search and rescue operations, and autonomous exploration.
This instructor-led live training is available in both online and onsite formats, targeting advanced professionals who aim to design, simulate, and deploy multi-robot and swarm-based systems using open-source frameworks and algorithms.
Upon completion of this training, participants will be capable of:
- Grasping the core principles and dynamics of swarm intelligence and cooperative robotics.
- Developing communication and coordination strategies tailored for multi-robot systems.
- Executing distributed decision-making and consensus algorithms.
- Simulating collective behaviors, including formation control, flocking, and area coverage.
- Applying swarm-based techniques to solve real-world scenarios and optimization challenges.
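In its simplest form, the consensus objective above reduces to distributed averaging. The sketch below runs the standard linear consensus update on a hypothetical four-robot chain topology with made-up sensor values; neither the topology nor the values come from the course materials.

```python
# Minimal distributed-averaging consensus sketch on a fixed communication
# graph: each robot repeatedly nudges its value toward its neighbors'.

# Undirected neighbor lists: robot i can exchange values with neighbors[i].
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a simple chain
values = {0: 10.0, 1: 20.0, 2: 30.0, 3: 40.0}        # local sensor readings

epsilon = 0.25  # step size; must be below 1 / max_degree for convergence
for _ in range(200):
    updated = {}
    for i, x in values.items():
        updated[i] = x + epsilon * sum(values[j] - x for j in neighbors[i])
    values = updated

# All robots converge to the average of the initial values (25.0),
# using only local neighbor-to-neighbor communication.
```

Because the update matrix is doubly stochastic, the network average is preserved at every step, which is why the robots agree on exactly the mean of their initial readings.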
Course Format
- Advanced lectures featuring deep dives into algorithms.
- Practical coding and simulation exercises using ROS 2 and Gazebo.
- A collaborative project focused on applying swarm intelligence principles.
Course Customization Options
- To request customized training for this course, please contact us to arrange the details.
Smart Robots for Developers
84 Hours
A Smart Robot refers to an Artificial Intelligence (AI) system capable of learning from its environment and past experiences, thereby enhancing its capabilities based on acquired knowledge. These robots can collaborate with humans, working alongside them and learning from their behaviors. They are equipped not only for manual labor but also for complex cognitive tasks. Beyond physical hardware, Smart Robots can exist purely as software applications within a computer, operating without moving parts or direct physical interaction with the world.
In this instructor-led live training, participants will explore the various technologies, frameworks, and techniques required to program different types of mechanical Smart Robots, applying this knowledge to complete their own Smart Robot projects.
The course is structured into 4 sections, each spanning three days of lectures, discussions, and hands-on robot development within a live lab environment. Each section concludes with a practical hands-on project, allowing participants to practice and demonstrate their acquired skills.
The target hardware for this course is simulated in 3D using simulation software. The open-source ROS (Robot Operating System) framework, along with C++ and Python, will be utilized for robot programming.
By the end of this training, participants will be able to:
- Grasp the key concepts underlying robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Comprehend and implement the software components that form the foundation of Smart Robots.
- Build and operate a simulated mechanical Smart Robot capable of seeing, sensing, processing, grasping, navigating, and interacting with humans via voice.
- Enhance a Smart Robot's ability to perform complex tasks through Deep Learning.
- Test and troubleshoot a Smart Robot in realistic scenarios.
Audience
- Developers
- Engineers
Format of the course
- A blend of lectures, discussions, exercises, and extensive hands-on practice.
Note
- To customize any aspect of this course (e.g., programming language, robot model), please contact us to arrange the details.
Smart Robotics in Manufacturing: AI for Perception, Planning, and Control
21 Hours
Smart Robotics involves integrating artificial intelligence into robotic systems to enhance perception, decision-making capabilities, and autonomous control.
This instructor-led live training, available online or onsite, targets advanced-level robotics engineers, systems integrators, and automation leaders seeking to implement AI-driven perception, planning, and control within smart manufacturing settings.
Upon completion of this training, participants will be able to:
- Understand and apply AI techniques for robotic perception and sensor fusion.
- Develop motion planning algorithms for collaborative and industrial robots.
- Deploy learning-based control strategies for real-time decision making.
- Integrate intelligent robotic systems into smart factory workflows.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.