Bundle Adjustment: A key optimisation technique in computer vision that adjusts and refines the 3D coordinates of landmarks and the camera parameters. It seeks to minimise the reprojection error (the difference between the observed feature location and the projected feature location) to improve the precision of visual reconstruction.
Computer Vision: A field of artificial intelligence that enables computers to interpret and understand the visual world. It involves methods for acquiring, processing, analysing, and understanding digital images to produce numerical or symbolic information.
Dead Reckoning Data: Data derived from dead reckoning, a process of calculating the current position by taking a previously determined position and advancing it based on known or estimated speed and course over the elapsed time.
Edge Computing: A distributed computing paradigm that brings computation and data storage closer to the sources of data, in order to improve response times and save bandwidth.
Global Map Optimisation: The process of improving the accuracy of a global map (a map of a large area or the whole world) by reducing the accumulated localisation error in a system such as Simultaneous Localisation and Mapping (SLAM). It usually involves techniques such as loop closure and global pose graph optimisation.
Global Positioning System (GPS) Signal: The signals transmitted by GPS satellites, which carry the time-stamped information that GPS receivers on the ground need to determine their precise position by trilateration.
GPS-Degraded Environment: An environment in which GPS signals are present but unreliable or weak due to factors like multipath propagation, urban canyons, dense foliage, or electronic jamming. This can lead to inaccurate positioning.
GPS-Denied Environment: An environment in which GPS signals are unavailable or blocked, for example inside a building, underwater, in a cave, or under intentional signal jamming.
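The dead-reckoning calculation described above can be sketched in a few lines of Python; the function name and the flat-ground, constant-speed assumptions are illustrative only:

```python
import math

def dead_reckon(x, y, heading_deg, speed, dt):
    """Advance a known (x, y) position by an estimated speed and
    heading over elapsed time dt (planar motion assumed)."""
    heading = math.radians(heading_deg)
    return (x + speed * dt * math.cos(heading),
            y + speed * dt * math.sin(heading))

# Start at the origin, head due east (0 degrees) at 2 m/s for 10 s.
pos = dead_reckon(0.0, 0.0, 0.0, 2.0, 10.0)
print(pos)  # (20.0, 0.0)
```

Small errors in the estimated speed or heading compound at every update, which is exactly the accumulation behind the "Robot Drift" entry below.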
Human Pose Estimation (HPE): A technique in computer vision that predicts the pose or configuration of the human body, often represented as a set of key body joint positions. It can be done in 2D or 3D, and can function in real time, enabling the tracking of body movements.
Inertial Measurement Unit (IMU): A device that measures and reports a body's specific force (acceleration) and angular rate, from which orientation and velocity can be estimated, using a combination of accelerometers and gyroscopes, sometimes also magnetometers. Commonly used for navigation, stabilisation, and correction of GPS data.
Keyframe Selection: In computer vision, a process where certain frames are selected from a sequence of images based on certain criteria. Keyframes often represent significant changes in the scene or motion, and help to reduce computational load by focusing on these selected frames.
Key Points/Pairs: In computer vision, key points refer to distinctive locations in the image such as corners, edges, or blobs. These are used as a reference system to describe objects. Key pairs refer to corresponding key points matched between different images.
Light Detection and Ranging (LIDAR): A remote sensing method that uses light in the form of a pulsed laser to measure variable distances to surrounding surfaces. These light pulses, combined with other data recorded by the sensing system, generate precise, three-dimensional information about the shape and surface characteristics of the environment.
Object Occlusion: In computer vision, occlusion refers to the event in which an object, part of an object, or several objects are hidden from view. This can happen due to their position relative to the viewer or to other objects blocking the line of sight.
Odometry Sensor: A sensor used to estimate change in position over time (odometry). Common examples are wheel encoders, which measure wheel rotation, and inertial measurement units (IMUs), which measure linear acceleration and angular velocity.
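One common keyframe-selection criterion is to declare a new keyframe whenever too few of the reference keyframe's features remain visible. The sketch below uses hypothetical feature-ID sets per frame, and the 0.5 overlap threshold is illustrative:

```python
def select_keyframes(frame_features, min_overlap=0.5):
    """Pick keyframes whenever feature overlap with the last keyframe
    falls below a threshold (one common selection criterion)."""
    keyframes = [0]                  # the first frame is always a keyframe
    ref = frame_features[0]
    for i, feats in enumerate(frame_features[1:], start=1):
        overlap = len(ref & feats) / max(len(ref), 1)
        if overlap < min_overlap:
            keyframes.append(i)
            ref = feats              # this frame becomes the new reference
    return keyframes

# IDs of features observed in each frame (hypothetical data).
frames = [{1, 2, 3, 4}, {1, 2, 3, 5}, {5, 6, 7, 8}, {6, 7, 8, 9}]
print(select_keyframes(frames))  # [0, 2]
```

Frame 1 still shares three of four features with frame 0, so only frame 2, where the view has changed substantially, is promoted to a keyframe.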
Optimisation: A process or methodology of making something as functional or effective as possible. In computer science, optimisation often refers to choosing the best element from some set of available alternatives.
Relocalisation: The ability of a system to recognise a previously visited location and accurately determine its position within a pre-established map or model. This is a key capability in systems like SLAM (Simultaneous Localisation and Mapping).
Rigid Pose Estimation (RPE): The process of estimating the position (3D translation) and orientation (3D rotation) of a rigid object with respect to a certain coordinate system. The "rigid" part refers to the assumption that the object does not deform between different views.
Robot Drift: A common problem in robot navigation where small errors in movement estimation accumulate over time, causing the robot's perceived position to drift away from its true position.
Simultaneous Localisation and Mapping (SLAM): A computational problem in robotics and AI where a device needs to build or update a map of an unknown environment while simultaneously keeping track of its location within this environment.
Sensor Fusion Model: A technique where data from several different sensors are combined to compute something more than could be determined by any one sensor alone. An example is combining data from a camera and a LIDAR sensor to improve object detection in an autonomous vehicle.
Visual Simultaneous Localisation and Mapping (vSLAM): A variant of SLAM that uses visual data from cameras as the primary sensor to create a map of the environment while simultaneously tracking the camera's location in that environment.
- Initialisation: The first stage of vSLAM, where the initial camera pose (position and orientation) and the structure of the surrounding environment are estimated.
This usually involves estimating the relative motion of the camera between two frames and using it to triangulate the positions of the observed keypoints.
- Local Mapping: The process of creating a detailed map of the immediate surroundings, or of the part of the environment currently being observed by the robot. This map is updated continuously as the robot moves and observes new features.
- Loop Closure: The situation in which the robot returns to a place it has already visited. By recognising this, the robot can correct errors that have accumulated over time in its map and pose estimate. It often involves matching the current view with a previous one and adjusting the map for consistency.
- Relocalisation: The capability of the system to recover its pose (location and orientation) after being lost, usually due to tracking failure or being initialised in a previously mapped area. The system matches the current observations with the existing map to determine its location.
- Tracking: The process of locating the robot's pose in real time as it moves through the environment. It involves identifying and following keypoints from frame to frame to estimate the camera's motion. Tracking quality is crucial to the performance of a vSLAM system.
Path Planning | The process of determining a route or path for a robot to follow from its current location to a specified destination, taking into consideration various constraints such as obstacles and terrain.
Obstacle Avoidance | Techniques and algorithms that enable a robot to detect potential obstacles in its path and navigate around them to prevent collisions.
Inertial Navigation System (INS) | A navigation aid that uses a computer, motion sensors (accelerometers), and rotation sensors (gyroscopes) to continuously calculate, by dead reckoning, the position, orientation, and velocity of a moving object without the need for external references.
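As a minimal sketch of the path-planning idea above, the following runs breadth-first search over a small occupancy grid; production planners typically use A* with heuristics, motion costs, and kinematic constraints, so treat this as illustrative only:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid
    (1 = obstacle); returns a shortest obstacle-free path."""
    rows, cols = len(grid), len(grid[0])
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forces a detour around the right side
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 0))
print(path)  # seven cells from (0, 0) around the wall to (2, 0)
```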
Sensor Fusion | The process of integrating data from multiple sensors to produce more consistent, accurate, and useful information than that provided by any individual sensor.
Adaptive Navigation | Navigation methods that allow a robot to alter its path in real-time based on changes in the environment or sensor inputs, enhancing its ability to deal with unpredictable scenarios.
Visual Navigation | The use of visual data, processed through computer vision techniques, to guide a robot's movement in an environment.
Terrain Analysis | The evaluation of ground surfaces to determine their characteristics, such as texture, stability, and slope, and to identify the presence of obstacles, aiding in the planning of safe navigation paths.
Feature Detection and Tracking | Techniques in computer vision that identify and follow specific points of interest within visual data, often used in mapping and navigation.
Depth Perception | The ability to perceive the world in three dimensions (3D) and the distance of an object from the observer, crucial for navigating around objects and through environments.
3D Mapping | The process of creating a three-dimensional model of an environment, incorporating structures, objects, and terrain features, essential for robots to understand and interact with their surroundings.
Object Recognition and Classification | Computer vision tasks that involve identifying objects within visual data and categorizing them into predefined groups.
Semantic Segmentation | A computer vision process that involves dividing an image into segments or pixels that are grouped by category (e.g., roads, humans), enabling a robot to understand the context of its environment.
Visual SLAM (vSLAM) | A technique that allows a robot to construct or update a map of an unknown environment while simultaneously keeping track of its own location within that map, using only visual inputs.
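A minimal illustration of the sensor-fusion idea above is a complementary filter, which blends a drift-prone gyro-integrated angle with a noisy but drift-free accelerometer angle; the 0.98 weighting is a typical but illustrative choice:

```python
def fuse(gyro_angle, accel_angle, alpha=0.98):
    """Complementary filter: trust the gyro in the short term and the
    accelerometer in the long term (angles in degrees)."""
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# The gyro-integrated angle has drifted to 31.0 deg; the accelerometer,
# noisy but drift-free, reads 30.0 deg.
fused = fuse(31.0, 30.0)
print(round(fused, 2))  # 30.98
```

Applied at every timestep, the small accelerometer weighting continually pulls the estimate back towards the drift-free reference, which is more than either sensor achieves alone.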
Machine Learning and AI for Predictive Navigation | The use of artificial intelligence and machine learning algorithms to predict potential hazards and adjust navigation strategies accordingly, improving a robot's ability to navigate complex environments.
Behaviour-based Navigation | A navigation approach where a robot selects and switches between a set of predefined behaviours (e.g., follow wall, avoid obstacle) based on its current environmental context.
Human Detection Algorithms | Computational methods designed to identify human presence in digital images or sensor data, utilizing various techniques to differentiate humans from their surroundings.
Thermal Imaging | A technology that captures the infrared spectrum of light, translating it into visible images. It detects heat emitted by objects, making it useful for finding humans based on their body heat, particularly in low-visibility conditions.
Machine Learning (ML) | A branch of artificial intelligence that focuses on building systems that learn from data, enabling machines to improve their performance on a given task with experience.
Convolutional Neural Networks (CNNs) | A class of deep neural networks, most commonly applied to analysing visual imagery, known for their ability to automatically and adaptively learn spatial hierarchies of features from images.
Feature Recognition | The process by which specific attributes or patterns (features) are detected in the data, such as edges or shapes in images, often used to identify objects or entities within that data.
Optical Flow | A method used to estimate the motion between two consecutive frames caused by the movement of objects or the camera, useful in detecting and analysing movement.
Motion Sensors | Devices that detect moving objects, particularly people, often using technologies such as accelerometers, gyroscopes, or infrared sensors.
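Full optical-flow methods (e.g., Lucas-Kanade) estimate per-pixel motion vectors; as a much cruder motion cue in the same spirit, the sketch below simply flags pixels whose intensity changes between two greyscale frames (all values hypothetical):

```python
def motion_mask(prev, curr, threshold=20):
    """Flag pixels whose intensity changed by more than a threshold
    between two consecutive greyscale frames (a simple motion cue)."""
    return [[abs(a - b) > threshold for a, b in zip(rp, rc)]
            for rp, rc in zip(prev, curr)]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 90, 10],   # something bright has moved into the middle column
        [10, 95, 10]]
mask = motion_mask(prev, curr)
print(mask)  # only the changed column is flagged
```

Frame differencing like this cannot tell motion direction or speed apart, which is exactly what proper optical-flow estimation adds.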
Behavioural Analysis | The study and interpretation of human behaviours, especially in response to external stimuli or in specific environments, with the aim of predicting or understanding those behaviours.
Survivor Location Prediction | The use of algorithms and behavioural models to estimate the most likely locations of survivors in disaster scenarios, based on patterns of human behaviour in crises.
Signs of Life Detection | Techniques utilized by search and rescue systems to identify indications of human life, such as movement or heat signatures, especially in environments where survivors may be trapped or hidden.
Approach Strategies | Planned methods or actions taken by rescue robots to engage with or move towards survivors in a manner that is safe, effective, and cognizant of the survivors' potential physical and psychological states.
Simulation and Virtual Reality Training | The use of simulated environments and virtual reality systems to train rescue robots and their algorithms in recognizing human behaviours and navigating complex disaster sites.
Ethical and Psychological Considerations | Factors concerning the moral implications and psychological impacts of deploying robots for search and rescue missions, emphasizing the importance of sensitive interaction with survivors.
Wireless Communication Standards | Protocols and technologies that enable data transmission through the air without requiring direct connections, such as Wi-Fi, Bluetooth, and satellite communications.
Mesh Networks | A network topology where nodes connect directly, dynamically, and non-hierarchically to as many other nodes as possible and cooperate with one another to efficiently route data.
Data Transmission Security | Measures and protocols implemented to protect transmitted data from interception, unauthorized access, and tampering during its journey from source to destination.
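One small building block of data transmission security is an integrity check: the sender transmits a digest alongside the payload and the receiver recomputes it. The sketch below uses SHA-256; note that a bare hash only detects accidental modification when the digest itself is delivered safely, so real protocols use keyed MACs (e.g., HMAC) or authenticated encryption.

```python
import hashlib

def digest(payload: bytes) -> str:
    """SHA-256 digest used to detect modification of a payload in transit."""
    return hashlib.sha256(payload).hexdigest()

sent = b"survivor located at grid 4-B"
tag = digest(sent)                  # transmitted alongside the payload

received = b"survivor located at grid 4-B"
tampered = b"survivor located at grid 9-C"
print(digest(received) == tag)      # True: payload intact
print(digest(tampered) == tag)      # False: modification detected
```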
Remote Operation Protocols | Standardized methods and communication protocols designed to control robots or devices remotely, ensuring accurate execution of commands and real-time feedback.
Visual Odometry | The process of determining the position and orientation of a device by analysing the associated camera images.
Beacons and Repeaters | Devices used to extend the range of communication by repeating or amplifying signals, or to serve as reference points for navigation and localization.
Satellite Navigation Systems (e.g., GLONASS, Galileo, BeiDou) | Satellite-based systems that provide global or regional coverage for navigation and precise positioning.
Ultra-Wideband (UWB) Technology | A radio technology for short-range, high-bandwidth communications that transmits at very low energy levels over a large portion of the radio spectrum.
Encryption | The process of converting information or data into a code, especially to prevent unauthorized access.
Authentication and Authorization | The processes of verifying the identity of a user or device (authentication) and determining their rights or permissions (authorization).
Frequency Hopping | A method of transmitting radio signals by rapidly switching a carrier among many frequency channels, using a pseudorandom sequence known to both transmitter and receiver.
Hybrid Navigation Systems | Systems that utilise a combination of different navigation technologies and methods to provide accurate positioning and orientation in a variety of environments.
Privacy Concerns | The apprehensions related to the collection, storage, and use of personal information by robots, particularly regarding surveillance and data collection in public and private spaces.
Decision-Making Autonomy | The capability of robots to make independent decisions without human intervention, raising ethical questions about the alignment of robotic decisions with human values and ethics.
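The shared pseudorandom sequence behind the Frequency Hopping entry above can be sketched as follows; Python's random.Random is used purely for illustration (it is not cryptographically secure, and real systems such as Bluetooth derive their hop sequences from standardised parameters):

```python
import random

def hop_sequence(seed, channels, hops):
    """Derive a channel sequence from a shared seed, so that a
    transmitter and receiver with the same seed hop in lockstep."""
    rng = random.Random(seed)
    return [rng.randrange(channels) for _ in range(hops)]

# Both radios share seed 42 and 79 channels (hypothetical values).
tx = hop_sequence(seed=42, channels=79, hops=5)
rx = hop_sequence(seed=42, channels=79, hops=5)
print(tx == rx)  # True: the two radios stay in sync
```

An eavesdropper without the seed sees only brief bursts scattered across the band, which is what makes hopping resistant to narrowband jamming and casual interception.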
Liability and Accountability | Legal and ethical responsibilities associated with the actions or inactions of robots, particularly when those actions result in harm or damage.
Deployment and Accessibility | The equitable distribution and availability of rescue robot technologies across various regions and communities, ensuring that benefits are shared irrespective of geographical location or economic status.
Human-Robot Interaction Ethics | The study of moral principles governing the design and operation of robots in a way that respects human dignity, autonomy, and psychological well-being during interactions.
Psychological Safety | The aspect of ensuring that interactions with robots do not cause undue stress, trauma, or emotional harm to humans.
Empathetic Communication | The ability of robots to convey understanding and sensitivity towards human emotions through verbal and non-verbal cues, despite lacking genuine emotions.
Cultural Sensitivity | The consideration and respect for diverse cultural norms and practices in the design and deployment of robots, ensuring their acceptance and effectiveness across different societal contexts.
Transparency and Predictability | The clarity and reliability of robotic actions and communications, which are crucial for building trust and understanding between humans and robots.
Emotion Recognition Technologies | Advanced technologies enabling robots to detect and respond to human emotional states, enhancing the quality of human-robot interactions.
User-Centric Design | A design philosophy that places the needs, preferences, and values of the end-user at the forefront of product development, ensuring that technology serves and respects human interests.
Autonomy and Consent | The principles of respecting individual autonomy by seeking explicit consent before robots initiate interactions or assistance, upholding ethical standards of personal freedom.
Ethical AI Decision-Making | The incorporation of ethical considerations into the algorithms that govern AI and robotic decision-making, ensuring actions are morally defensible and aligned with societal values.