GPS-Denied UAV Navigation

Global navigation satellite systems have become so central to modern UAV operations that it is easy to forget they represent a single point of failure in the position estimation chain. Remove GPS — through urban canyon blockage, indoor operation, intentional jamming, or signal spoofing — and the majority of commercial autopilot systems degrade significantly, often to the point where sustained autonomous flight is no longer possible. For industrial UAV programs that need to operate reliably across the full range of environments their applications demand, GPS-independent navigation capability is not a niche feature but a fundamental operational requirement.

The environments most likely to degrade GPS quality are precisely the environments where some of the highest-value inspection tasks occur. Steel-frame industrial buildings, bridge undersides, tunnels, subsea cable inspection under metallic covers, and the congested signal environments near large steel structures like refinery vessels are all contexts where the GPS signals that do penetrate arrive via reflection and multipath, producing position errors too large for safe precision flight. Building a navigation stack that degrades gracefully — maintaining stable flight even when GPS quality is compromised — requires integrating alternative position estimation methods that are not affected by the same environmental factors that degrade GPS.

Visual Odometry

Visual odometry (VO) is the technique of estimating camera motion — and by extension, vehicle motion — by tracking features across successive image frames. A downward-facing or forward-facing camera captures video at 30 to 120 frames per second, and a feature-tracking algorithm identifies distinctive points in each frame — edges, corners, texture patterns — and measures how their image coordinates change between consecutive frames. The magnitude and direction of that apparent motion, combined with known camera focal length and distance to the feature surface, enables calculation of the camera's translational and rotational velocity.

Monocular visual odometry — using a single camera — can estimate relative motion with good accuracy over short time intervals but accumulates drift over extended trajectories because scale is unobservable from a single camera without additional constraints. Stereo visual odometry, using two cameras with a known baseline separation, resolves the scale ambiguity and provides lower-drift position estimates over longer distances. Downward-facing optical flow sensors, which are a simplified form of visual odometry tuned for the specific geometry of a camera looking straight down at a flat surface, are integrated in nearly all modern commercial autopilot systems and provide effective horizontal velocity estimation at altitudes below 30 to 40 meters.
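The geometry behind downward-facing optical flow reduces to the pinhole camera model: a feature that shifts by some number of pixels between frames corresponds to a ground displacement proportional to altitude and inversely proportional to focal length. A minimal sketch, assuming a flat surface, a known altitude, and a focal length expressed in pixels (the function name and numbers here are illustrative, not from any specific flight stack):

```python
# Sketch: horizontal velocity from downward-facing optical flow.
# Assumes a flat surface directly below the camera, a known altitude,
# and a camera focal length in pixel units (pinhole model).

def flow_to_velocity(dx_px, dy_px, altitude_m, focal_px, frame_dt_s):
    """Convert mean pixel displacement between frames to metric velocity.

    A feature moving dx pixels in the image corresponds to a ground
    displacement of dx * altitude / focal_length under the pinhole model;
    dividing by the inter-frame interval yields velocity.
    """
    vx = (dx_px * altitude_m / focal_px) / frame_dt_s
    vy = (dy_px * altitude_m / focal_px) / frame_dt_s
    return vx, vy

# Example: 4 px/frame displacement at 30 fps, 10 m altitude, 400 px focal length
vx, vy = flow_to_velocity(4.0, 0.0, 10.0, 400.0, 1.0 / 30.0)
```

The altitude term is why these sensors need a companion rangefinder and why their useful envelope tops out at a few tens of meters: pixel displacement per unit of ground speed shrinks as altitude grows, and the velocity estimate degrades with it.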

The primary limitation of visual odometry is sensitivity to environmental conditions that degrade feature quality: low light, featureless surfaces (water, snow, uniform floors), and motion blur from rapid movement. Industrial facilities present all three challenges at various times and locations. Effective GPS-denied navigation systems treat VO as one input in a sensor fusion architecture rather than a standalone positioning solution.

LiDAR-Based SLAM

Simultaneous Localization and Mapping (SLAM) using LiDAR point clouds is one of the most technically capable approaches to GPS-independent navigation, providing both accurate position estimation and a continuously updated 3D map of the environment. A rotating or scanning LiDAR sensor generates a dense point cloud of the surrounding environment at 10 to 20 scans per second. The SLAM algorithm compares each new scan to the accumulating map of previously observed geometry, computing the transformation that best aligns the new scan with the map and using that transformation to estimate vehicle motion.
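The scan-to-map alignment step described above can be illustrated with the classic SVD-based rigid registration (the Kabsch method), which is the inner step of ICP-style scan matching once point correspondences are known. This is a sketch in 2D with correspondences given; a full SLAM pipeline also has to search for the correspondences and maintain the map:

```python
import numpy as np

# Sketch of the inner alignment step of ICP-style scan matching:
# given point correspondences between a new scan and the map, recover
# the rigid transform (rotation R, translation t) that best aligns them.

def align_scan(scan_pts, map_pts):
    """Return R, t minimizing sum ||R @ scan_i + t - map_i||^2 (Kabsch)."""
    cs, cm = scan_pts.mean(axis=0), map_pts.mean(axis=0)
    H = (scan_pts - cs).T @ (map_pts - cm)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cm - R @ cs
    return R, t

# Synthetic check: transform a scan by a known R, t and recover them
rng = np.random.default_rng(7)
scan = rng.normal(size=(50, 2))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.5, -0.4])
map_pts = scan @ R_true.T + t_true
R_est, t_est = align_scan(scan, map_pts)
```

The recovered transform between consecutive scans is exactly the vehicle motion estimate the SLAM front end feeds to the state estimator.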

LiDAR SLAM performs extremely well in structured environments — buildings, tunnels, pipes, and other spaces with clear geometric features — where the algorithm can reliably align successive scans against distinctive geometry. The approach provides position accuracy in the 5 to 15 centimeter range in well-mapped indoor environments, which is sufficient for most close-range inspection tasks that require GPS-independent operation. Modern lightweight LiDAR sensors in the 100 to 300 gram weight class make onboard LiDAR SLAM feasible on platforms as small as 1.5 to 2.5 kg all-up-weight.

The computational demands of real-time 3D LiDAR SLAM are significant. Processing LiDAR point clouds at sensor update rates while simultaneously updating a 3D map requires dedicated onboard compute — typically a high-performance ARM processor running at 1.5 to 2.5 GHz with 4 to 8 GB of available RAM. This compute load draws a non-trivial fraction of the total UAV power budget, reducing flight endurance by 10 to 20% compared to GPS-only operation. As embedded computing efficiency improves, this overhead is decreasing — newer LiDAR SLAM implementations running on dedicated neural processing units achieve similar throughput at 40 to 60% lower power draw.

Sensor Fusion Architecture

No single navigation modality provides reliable position estimation across all environments an industrial UAV might encounter. GPS works well in open outdoor environments but degrades near structures and indoors. Visual odometry provides good short-term velocity estimates but drifts in featureless or low-light environments. LiDAR SLAM excels in structured spaces but requires sufficient surrounding geometry and carries a power and weight penalty. A robust GPS-denied navigation system integrates multiple modalities through a sensor fusion framework that weights each source by its estimated reliability in the current environment.

The standard architecture for multi-source state estimation in UAV autopilots is an extended or unscented Kalman filter that maintains a state vector encompassing position, velocity, orientation, and sensor bias parameters, and updates this state by fusing measurements from all available sensors. Each measurement source has an associated noise model that reflects its typical accuracy and the environmental conditions that affect its reliability. When GPS quality degrades — quantified by dilution of precision metrics and satellite signal strength — the filter automatically increases its weighting of inertial, visual, and LiDAR measurements to maintain position estimate quality.
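The reliability-weighted update can be shown on a deliberately stripped-down scalar Kalman filter: the GPS measurement noise variance is inflated as dilution of precision worsens, so a degraded fix pulls the state estimate less than a clean one. The quadratic scaling rule and constants below are illustrative assumptions, not taken from any particular autopilot:

```python
# Sketch of a reliability-weighted Kalman measurement update on a
# scalar state. The HDOP-based variance inflation rule is illustrative.

def kalman_update(x, P, z, base_var, hdop):
    """Fuse measurement z into state x (variance P).

    Measurement variance grows quadratically with HDOP, so the Kalman
    gain — and therefore the correction applied — shrinks as GPS
    quality degrades.
    """
    R = base_var * hdop ** 2      # inflate noise as GPS quality drops
    K = P / (P + R)               # Kalman gain
    x_new = x + K * (z - x)       # state correction toward measurement
    P_new = (1.0 - K) * P         # reduced uncertainty after the update
    return x_new, P_new

# A clean fix (HDOP 1) corrects the state far more than a poor one (HDOP 10)
x_clean, _ = kalman_update(0.0, 4.0, 10.0, 1.0, 1.0)
x_poor, _ = kalman_update(0.0, 4.0, 10.0, 1.0, 10.0)
```

A production EKF applies the same principle across a full state vector with per-sensor noise models; the scalar case just makes the weighting behavior easy to see.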

IMU pre-integration plays a crucial bridging role in this architecture. Between sensor measurement updates, the state estimator propagates the position estimate using inertial acceleration and angular rate data. IMU pre-integration formalizes this propagation in a form that can be accurately used as a probabilistic motion prior in the Kalman filter update step, providing smooth and consistent pose estimates even during the brief intervals between camera frames or LiDAR scan updates. High-grade tactical IMUs with temperature-compensated MEMS gyroscopes and accelerometers — rather than the consumer-grade MEMS sensors common in lower-cost platforms — significantly extend the period over which IMU-propagated estimates remain accurate before accumulating unacceptable drift.
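The propagation step between exteroceptive updates can be reduced to its simplest form: integrating world-frame, gravity-compensated acceleration samples forward into velocity and position. This is plain dead reckoning, not full pre-integration — a real implementation also tracks orientation and bias Jacobians — but it shows why IMU grade matters, since any acceleration error integrates twice into position drift:

```python
# Minimal sketch of inertial propagation between sensor updates:
# acceleration samples are assumed already rotated into the world frame
# and gravity-compensated. Full pre-integration additionally carries
# orientation and bias-correction terms; this shows only the kinematics.

def propagate(pos, vel, accel_samples, dt):
    """Dead-reckon position and velocity through a list of accel samples."""
    for a in accel_samples:
        pos = pos + vel * dt + 0.5 * a * dt * dt   # constant-accel step
        vel = vel + a * dt
    return pos, vel

# 1 m/s^2 constant acceleration for 1 s (100 samples at 100 Hz), from rest
pos, vel = propagate(0.0, 0.0, [1.0] * 100, 0.01)
```

Because position error grows with the square of propagation time, the quality of the IMU directly sets how long the vehicle can coast between camera frames or LiDAR scans before the next correction arrives.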

Terrain-Referenced Navigation

Terrain-referenced navigation (TRN) uses a database of known terrain or surface geometry to constrain position estimates in environments where that geometry is sufficiently distinctive and stable. A downward-facing LiDAR altimeter or radar altimeter continuously measures altitude above the surface below the vehicle. When combined with an onboard terrain elevation model, this altitude measurement constrains the vertical component of the position estimate independently of GPS or barometric altimeter data. Horizontal position can be similarly constrained by correlating measured surface features with a pre-loaded reference map — a technique analogous to the terrain contour matching (TERCOM) systems used in cruise missiles since the 1970s.
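The correlation idea behind TERCOM-style matching can be sketched in one dimension: slide a short measured terrain-height profile (absolute altitude minus radar-altimeter clearance) along the stored elevation model and pick the along-track offset with the smallest squared mismatch. Real TRN operates in 2D and feeds the result into the state estimator probabilistically; the terrain values below are invented for illustration:

```python
import numpy as np

# Sketch of 1-D terrain contour matching: brute-force search for the
# along-track offset that minimizes squared error between a measured
# profile and a stored elevation model. Illustrative data only.

def match_profile(terrain, measured):
    """Return the offset into `terrain` that best explains `measured`."""
    n = len(measured)
    errors = [np.sum((terrain[i:i + n] - measured) ** 2)
              for i in range(len(terrain) - n + 1)]
    return int(np.argmin(errors))

terrain = np.array([0.0, 1.0, 4.0, 9.0, 7.0, 2.0, 0.0, 3.0, 5.0, 1.0])
measured = terrain[3:7] + np.array([0.10, -0.10, 0.05, 0.00])  # noisy sensed profile
offset = match_profile(terrain, measured)
```

The approach only constrains position where the terrain is distinctive — over flat or repetitive surfaces the error curve has no sharp minimum, which is why TRN is fused with other modalities rather than used alone.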

For indoor industrial environments, terrain-referenced navigation transitions to building map-referenced navigation: a 3D geometric map of the facility, captured in a prior mapping mission, serves as the reference against which real-time sensor measurements are registered to maintain position estimates. This approach requires an initial mapping investment but provides navigation accuracy that is independent of external infrastructure, making it particularly valuable for operations in electromagnetically challenging environments like nuclear facilities, data centers, and steel-frame industrial structures.

Ultra-Wideband Positioning

Ultra-wideband (UWB) radio ranging provides an infrastructure-based positioning alternative for facilities where deploying a fixed anchor network is feasible. UWB anchors — small, low-power radio transceivers mounted at known positions within the facility — exchange ranging signals with a tag mounted on the drone. Time-of-flight measurement of the radio signals between multiple anchors and the tag enables trilateration of the tag position with accuracy in the 10 to 30 centimeter range, substantially better than what is achievable with GPS in complex indoor environments.
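The trilateration step can be sketched by linearizing the range equations: subtracting the first anchor's range equation from each of the others cancels the quadratic terms in the tag position, leaving a linear system solvable by least squares. The anchor layout and tag position below are made up for illustration:

```python
import numpy as np

# Sketch: 2-D trilateration of a UWB tag from time-of-flight ranges to
# anchors at known positions. Subtracting the first range equation from
# the rest linearizes the system; least squares absorbs extra anchors.

def trilaterate(anchors, ranges):
    """Solve for the tag (x, y) given anchor coordinates and ranges."""
    x0, y0 = anchors[0]
    r0 = ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# Four anchors at the corners of a 20 m bay; recover a known tag position
anchors = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0), (20.0, 20.0)]
tag = np.array([7.0, 12.0])
ranges = [np.hypot(tag[0] - x, tag[1] - y) for x, y in anchors]
est = trilaterate(anchors, ranges)
```

Three non-collinear anchors suffice for a 2D fix; adding a fourth (and measuring vertical baselines) over-determines the system, which is what gives the filter redundancy against a single noisy or occluded range.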

UWB positioning is particularly well-suited to applications where drones repeatedly return to the same facility, because the infrastructure investment in anchor installation is amortized across many missions. Warehouse inventory management, indoor construction monitoring, and facility inspection programs with defined recurring routes are all good candidates for UWB-augmented navigation. The main limitation is range — commercial UWB systems have reliable positioning coverage out to 50 to 100 meters from each anchor, requiring anchor density to scale with facility size.

Conclusion

GPS-denied navigation capability is transitioning from a specialized military and research domain to a commercial industrial requirement as drone inspection programs expand into the indoor and near-structure environments where the highest-value inspection targets often reside. The technical approaches described here — visual odometry, LiDAR SLAM, sensor fusion, terrain-referenced navigation, and UWB positioning — represent a toolkit rather than a single solution, and the appropriate combination depends on the specific environmental constraints of each application.

The commercial UAV platforms that will lead the next phase of industrial inspection adoption will be those that implement robust, adaptive navigation architectures capable of transitioning smoothly between GPS-supported and GPS-independent modes as the operational environment requires. This capability is not a future aspiration; it is actively deployed in advanced industrial programs today, and its performance and cost profile are improving rapidly with each generation of embedded compute hardware and navigation algorithm development.