**Land Seismic Background**

In many geographical regions that are active for land seismic exploration, the tree canopy is dense enough that RTK GPS cannot be used for the precise installation of survey stakes. The signals from the satellite segment of the GPS system cannot penetrate through to the receivers carried by the field surveyors.

Global Positioning System (GPS) is based on satellites providing range data to a receiver at the location to be surveyed. We can reliably position a point today in an open space with GPS. We can accurately survey if we use Differential GPS – DGPS (1m to 2m). We can very precisely survey if we use Real Time Kinematic GPS – RTK (5cm to 15cm).

If we try to position using GPS in areas where the signals are masked or disturbed, such as an urban setting, our positioning capability degrades significantly.

If we try to position under trees (canopy), GPS will fail because the signals from the satellites are masked (attenuated) by the foliage.

If we combine the few GPS ranges we get with a good quality inertial unit we can maintain accurate survey grade positioning in these areas of poor (stand alone) GPS performance.

What we can also do is align the inertial unit to an installed monument and then position throughout the day with the inertial unit alone. Alignment is maintained through the use of zero velocity updates (ZUPTs) while surveying under canopy. When convenient, the unit is tied back to a control point or survey monument and all data is re-processed to minimize errors.

This, although a very simplistic explanation, is how Zupt delivers very productive land survey positioning for seismic exploration.
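The ZUPT idea above can be sketched numerically. The snippet below is a minimal 1-D illustration, assuming a constant accelerometer bias as the only error source (all values are hypothetical, not from any real IMU):

```python
# Minimal 1-D sketch of a zero velocity update (ZUPT).
# Assumption: a constant accelerometer bias is the only error source.

def integrate(accels, dt, v0=0.0):
    """Dead-reckon velocity by integrating acceleration samples."""
    v = v0
    for a in accels:
        v += a * dt
    return v

dt = 0.01      # 100 Hz sample interval
bias = 0.05    # m/s^2 accelerometer bias (assumed)
n = 1000       # 10 s of data while the surveyor is standing still

# The unit is stationary, so the true acceleration is zero and every
# sample is pure bias.
samples = [bias] * n
v_est = integrate(samples, dt)

# At a known stop the true velocity is zero, so the whole velocity
# estimate is error. The ZUPT residual lets us solve for the bias and
# reset the velocity.
bias_est = v_est / (n * dt)
v_corrected = v_est - bias_est * n * dt
```

At a genuine stop the entire integrated velocity is error, so the residual both resets the velocity and reveals the bias; the real system does the equivalent in full 3-D inside a Kalman filter.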

**Marine Construction – A basic introduction**

Subsea precise positioning is achieved today through the use of Long Baseline (LBL) acoustic positioning systems (LBL provides relative accuracies of between 30cm and 3m depending on frequency). In a similar manner to GPS, ranges are measured to stations. In the case of LBL systems, multiple stations are deployed onto the seabed to provide the local coverage needed. These stations (transponders) are then calibrated, both relatively (location with respect to each other) and absolutely (Lat, Lon and Depth). Once calibrated, the system is ready to be used for precise positioning. These arrays are used for pipeline installation, subsea manifold installation and general measurements during offshore oil and gas field construction. Remotely operated vehicles (ROVs) are used today in place of divers for deep water construction support, and LBL systems are often used to position the ROV during these tasks.

Consider the example of a remotely operated vehicle (ROV) involved in the installation of subsea components (well-head, manifold, jumper, etc.). The ROV will be positioned with respect to an LBL array. To provide adequate coverage, positioning reliability and accuracy, a number of transponders (usually six or more) will be deployed and calibrated to form an LBL “array”. This “array” deployment and calibration will consume several “spread” days. The cost of this spread (vessel, ROV/personnel, survey equipment/personnel) will be in the region of $55K/day to $90K/day.

If we combine the measurements (observations) from just two transponders with a good inertial unit, a pressure transducer and a velocity log, we can provide the same positioning precision with much greater reliability. The spread time consumed to deploy and calibrate these two beacons will be on the order of 30% to 40% of the time taken to deploy and calibrate the complete LBL array described above. For each location, Zupt’s services and equipment will shave between $100K and $200K off the overall cost of the operation. These operations are ongoing throughout any deep water offshore field development.

“Our navigation systems contain Inertial Measurement Units which are defense articles and subject to the export controls of the US government. They may not be exported from the USA without the prior written approval of the US Department of State.”

The fastest way to go to jail (without passing go) is to flout the export controls associated with this technology.

The International Traffic in Arms Regulations (ITAR) and the Arms Export Control Act (AECA) are the governing laws, administered by the Directorate of Defense Trade Controls (DDTC) within the U.S. Department of State.

Simply put: “if you screw up, it will be you, not the company, that goes to jail”.

DDTC is responsible for all licensing issues if the commodity is controlled by State. To move a commodity under the more understanding Department of Commerce control, a “Commodity Jurisdiction” request has to be filed with the Department of State.

Most US-manufactured and “high end” international IMUs will fall under the control of the DDTC. Do not listen to the vendors when they say not to worry about this – talk to your own export attorney and get their advice.

For more information: Directorate of Defense Trade Controls

An inertial navigation system (INS) includes a computer to process navigation computations and an inertial sensor sub-assembly or inertial measurement unit (IMU) containing accelerometers and gyroscopes orthogonally mounted on three axes. The INS is initially provided with its position, and in some cases velocity, from another source (an alignment), and thereafter computes its own position and velocity by integrating information received from the accelerometers and gyros. The primary advantage of an INS is that it requires no external references in order to determine its position, orientation, or velocity once it has been aligned. All inertial navigation systems will exhibit drift, as even the best accelerometers and gyros have some noise in their measurements. Usually the drift in position or velocity of an inertial navigation system is constrained, or controlled, through the use of a measurement that can bound the drift. In most cases this external aiding computation is made within a Kalman filter.
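The double integration described above can be sketched in a few lines. This is an illustrative toy in one dimension, not any particular product's algorithm:

```python
# Toy 1-D dead-reckoning loop: an INS integrates acceleration into
# velocity, and velocity into position (values are illustrative).

def dead_reckon(accels, dt, v0=0.0, p0=0.0):
    v, p = v0, p0
    for a in accels:
        v += a * dt   # first integration: acceleration -> velocity
        p += v * dt   # second integration: velocity -> position
    return v, p

dt = 0.01
# 1 s of constant 1 m/s^2 acceleration from rest
v, p = dead_reckon([1.0] * 100, dt)
# v reaches 1 m/s; p is close to the continuous-time 0.5 m
```

Any bias in the acceleration samples is integrated twice, which is exactly why unaided INS position error grows so quickly and why aiding measurements are needed.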

An INS can detect a change in its geographic position (a move east or north, for example), a change in its velocity (speed and direction of movement), and a change in its orientation (rotation about any axis). It does this by measuring the linear and angular accelerations applied to the system.

Since it requires no external reference (after initialization), it is immune to jamming and deception and is most useful where other systems are impacted by their environment. One example of this would be GPS under thick forest canopies. Signals from the GPS satellites cannot easily penetrate the leaves and branches, so precise positioning in the woods is very difficult to achieve with just GPS.

Gyroscopes measure the angular rotation in the inertial reference frame. Starting from the original orientation of the system and integrating the measured angular rate, the system’s orientation is known at all times.

Accelerometers measure the linear acceleration in the inertial reference frame. By integrating the sensed acceleration twice over known time intervals, the displacement (movement) of the system is known at all times.

Because these measurements and calculations are made with respect to the “inertial reference frame”, and the end user is usually interested in where the car, ROV or airplane is, we translate the inertial reference frame through an “earth” reference frame to a “body” or “vehicle” reference frame, allowing us to navigate or position these vehicles in known geographical coordinates. All of these calculations within Zupt’s systems are contained within what we call our “navigation solution”.
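As a simplified illustration of this frame translation, consider removing gravity from a body-frame specific-force measurement in the 2-D (pitch-only) case. A real system uses the full 3-D direction cosine matrix or quaternion, and sign conventions vary; the values here are assumptions:

```python
import math

# Sketch: rotate a body-frame specific-force measurement into the local
# (navigation) frame, then remove gravity (2-D, pitch-only, for brevity).

G = 9.81  # assumed local gravity, m/s^2

def nav_accel(fx_b, fz_b, pitch_rad):
    """Rotate body-frame specific force to nav frame, then remove gravity."""
    c, s = math.cos(pitch_rad), math.sin(pitch_rad)
    fx_n = c * fx_b + s * fz_b    # horizontal component
    fz_n = -s * fx_b + c * fz_b   # vertical component (up positive)
    return fx_n, fz_n - G         # subtract gravity to get true acceleration

# Level and stationary: the accelerometer senses only the gravity
# reaction, so the computed navigation-frame acceleration is zero.
ax, az = nav_accel(0.0, G, 0.0)
```

The point of the transformation chain is visible even in this toy: until the measurement is expressed in the right frame, gravity cannot be separated from vehicle motion.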

The type of sensor we use to sense angular rotation varies with the application. We use fiber optic gyros (FOG), ring laser gyros (RLG) and many other types of gyroscope. The same is true for accelerometers: we use a variety of force rebalance, pendulum rebalance or MEMS accelerometers depending on the application and the quality required. The quality of an IMU does not depend on the type of sensor, but on the quality of the sensor. It is possible to buy very good FOGs; it is also possible to buy less capable FOGs. The same is true for RLGs, dynamically tuned gyros (DTG), and other rate sensors.

The industry has a loose definition of the grades of inertial navigation systems:

Strategic, Navigation, Tactical or Consumer

The table below provides some guidelines as to the capability and costs of these systems

| Inertial Sensor Grade | Price Range | Position Error (km/h) | Gyro Bias Error (°/h) | Accelerometer Bias Error (mg) |
| --- | --- | --- | --- | --- |
| Strategic | >$500K | <0.03 | 0.00001 | 0.001 |
| Navigation | $90K to >$130K | <4 | 0.015 | 0.03 |
| Tactical | $6K to $35K | 18 to 40 | 1 to 10 | 1.0 |
| Consumer | >$350 | >40 | 1000 | 20 |

IMUs are sometimes defined by quoting the gyro specification:

- Bias, usually stated in °/hr
- Short term bias stability, °/hr
- Noise (angle random walk), °/√hr
- Scale factor accuracy, ppm
- Bandwidth (50 Hz or 100 Hz), Hz

When a manufacturer describes an IMU as a 1°/hr IMU they are not referring to the capability of the IMU to derive “heading” to +/-1° or +/-1°/hr, they are referring to the bias or drift of the rate sensors within the IMU.
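A back-of-envelope sketch shows what a rate bias and angle random walk imply in practice; the 1°/hr bias and 0.1°/√hr ARW figures below are illustrative assumptions, not any vendor's specification:

```python
import math

# What a gyro specification means for accumulated heading error.

def bias_drift_deg(bias_deg_per_hr, hours):
    """Deterministic drift from a constant rate bias."""
    return bias_deg_per_hr * hours

def arw_sigma_deg(arw_deg_per_sqrt_hr, hours):
    """1-sigma random drift from angle random walk (grows as sqrt(t))."""
    return arw_deg_per_sqrt_hr * math.sqrt(hours)

drift = bias_drift_deg(1.0, 0.5)   # a 1 deg/hr bias drifts 0.5 deg in 30 min
sigma = arw_sigma_deg(0.1, 4.0)    # 0.1 deg/sqrt(hr) ARW gives 0.2 deg after 4 hr
```

The deterministic bias term grows linearly with time while the random-walk term grows only as the square root, which is why the bias figure dominates vendor grading.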

For more help understanding inertial navigation systems, or specifying the correct capability and price range of an IMU or INS for your application Contact Zupt

Zupt’s “navigation solution” consists of two main parts:

- Nonlinear solution integration – navigation propagation
- Kalman Filter processing

The nonlinear solution integration integrates the inertial sensor data and this code is executed as fast as possible. The configuration of the nonlinear solution varies from IMU to IMU, but propagates the rate and acceleration data.

The Kalman Filter has two sub-sections that run at different rates. The fast component of the Kalman filter implements the data accumulation and runs at the same rate as the nonlinear solution integration above. The slow part provides the corrections for the estimates and can be executed at a much lower rate. More specifically, the fast rate is the receiving rate of the measurement data from the selected inertial measurement unit and will be between 500Hz and 1,000Hz. The slow rate can be set to any rate, but will normally be between 100Hz (if used with a multibeam system) and, more commonly, 5Hz.
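The two-rate structure can be sketched schematically. The work functions below are placeholders, and the 500 Hz / 5 Hz rates follow the text:

```python
# Schematic two-rate loop: fast propagation every IMU sample, slow
# Kalman correction at a decimated rate (placeholder work functions).

FAST_HZ, SLOW_HZ = 500, 5
DECIMATION = FAST_HZ // SLOW_HZ  # 100 fast cycles per slow cycle

accumulated = []
corrections = 0

def propagate(sample):
    """Stand-in for the nonlinear navigation propagation."""
    accumulated.append(sample)

def correct():
    """Stand-in for the slow Kalman estimation step."""
    global corrections
    corrections += 1
    accumulated.clear()

for i in range(FAST_HZ):          # one second of data
    propagate(i)                  # runs at the IMU rate
    if (i + 1) % DECIMATION == 0:
        correct()                 # runs at the slow filter rate
```

Running the expensive estimation step at a decimated rate keeps the navigation propagation deterministic at the full sensor rate.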

The overall navigation solution integration and corrections estimation interconnection is shown in the figure below (note no GPS integration shown in the example below):

**High level overall navigation software interconnection**

In this figure we assume that the initial alignment is completed, and the following color convention is adopted. The light yellow box on the left is the inertial measurement unit (IMU) and other aiding sensors. The gray boxes pertain to the Kalman Filter processing and aiding residuals computation. The green boxes identify the usage of the estimated corrections and sensor states. The blue lines indicate the flow of the estimated navigation solution correction and the estimated inertial measurement unit state. Finally, the black lines indicate the flow of all other data. Specifically, the processing of the raw IMU data into the navigation solution starts at the upper left of the figure and finishes at the upper right.

The process sequence is as follows:

The IMU delta rates (∆θ) and delta velocities (∆V) are corrected for bias, scale factor, misalignment and non-orthogonality, and are integrated into the body-to-navigation frame attitude, velocity and position of the body with respect to the earth-centered earth-fixed (ECEF) frame. The IMU data correction is based on the Kalman Filter estimation. The integration is performed in the non-linear navigation solution propagator. The obtained attitude, velocity and position are also corrected by the Kalman Filter. The Filter is aided by i) velocity residuals derived at “no motion” times (determined by the motion detection logic for zero velocity updates) from the current navigation solution, and/or ii) position residuals, and/or iii) range residuals.

The nonlinear integration consists of the following three integration processes:

- Process 1: the attitude integration, using body to inertial frame rates (delta angles) measured by gyros, and navigation to inertial frame delta angles derived from the body velocity and the Earth rotation rate.
- Process 2: the velocity integration, based on the acceleration (delta velocity) measured by the accelerometers.
- Process 3: the position integration, based on the obtained velocity.
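The three processes can be sketched as a toy 2-D strapdown loop. A flat, non-rotating earth is assumed for brevity; a real mechanization includes earth rate, transport rate and gravity:

```python
import math

# Toy 2-D strapdown loop mirroring the three processes: attitude from
# gyro rate, velocity from rotated specific force, position from velocity.

def step(state, gyro_z, acc_x, acc_y, dt):
    h, vn, ve, pn, pe = state
    h += gyro_z * dt                                 # Process 1: attitude
    an = math.cos(h) * acc_x - math.sin(h) * acc_y   # rotate body -> nav
    ae = math.sin(h) * acc_x + math.cos(h) * acc_y
    vn += an * dt                                    # Process 2: velocity
    ve += ae * dt
    pn += vn * dt                                    # Process 3: position
    pe += ve * dt
    return (h, vn, ve, pn, pe)

state = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):                                 # 1 s at 100 Hz
    state = step(state, 0.0, 1.0, 0.0, 0.01)         # accelerate due north
```

Note the ordering: attitude must be updated first, because the specific force can only be rotated into the navigation frame once the current attitude is known.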

Motion detector logic senses constant-velocity motion and the specific case of zero-velocity motion. The motion detector is based on a weighted combination of multiple logic tests.
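A zero-velocity detector built as a weighted vote of simple tests might look like the following sketch. The specific tests, thresholds and weights here are illustrative assumptions, not Zupt's actual logic:

```python
# Sketch of a stationarity detector as a weighted combination of simple
# logic tests (hypothetical thresholds and weights).

def stationary_score(acc_mags, gyro_mags, g=9.81):
    tests = {
        "acc_near_g": abs(sum(acc_mags) / len(acc_mags) - g) < 0.05,
        "acc_variance": max(acc_mags) - min(acc_mags) < 0.1,
        "gyro_quiet": max(gyro_mags) < 0.01,
    }
    weights = {"acc_near_g": 0.3, "acc_variance": 0.3, "gyro_quiet": 0.4}
    return sum(weights[k] for k, ok in tests.items() if ok)

def is_stationary(acc_mags, gyro_mags, threshold=0.7):
    return stationary_score(acc_mags, gyro_mags) >= threshold

# A quiet window: accel magnitude hovering at g, gyros silent.
still = is_stationary([9.81, 9.80, 9.82], [0.001, 0.002, 0.001])
```

Weighting several weak tests rather than relying on any single one makes the detector robust to an individual test firing spuriously.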

The following key variables are employed in the description of the inertial navigation system (symbols follow standard strapdown notation):

- C_b^n is the body (b) to navigation (n) frame directional cosine matrix. This matrix describes the attitude of the antenna with respect to the local navigation frame.
- C_e^n is the earth (ECEF) to n-frame directional cosine matrix. It can be computed from Latitude and Longitude (the azimuth angle is free). This matrix can be considered part of the position.
- ω_ib^b is the body to inertial frame rotation rate expressed in the body frame.
- ω_ie^n is the earth frame to inertial frame rotation rate expressed in the navigation frame. This is the earth rotation rate.
- ω_nb^b is the body to navigation frame rotation rate expressed in the body frame.
- ω_ne^b is the navigation frame to earth frame rotation rate expressed in the body frame.
- f^b is the “specific acceleration” as measured by the accelerometers, which includes the total acceleration and the gravity acceleration.
- v^n is the velocity of the navigation frame with respect to the earth frame, expressed in the navigation frame.
- r^e is the position with respect to the earth (ECEF) frame, expressed in the ECEF frame.

A few comments on our implementation: the attitude and position (but not altitude) are represented via quaternions, which compared with the cosine matrix give a more efficient representation (4 rather than 9 elements) and simpler algebra. The attitude and position (but not altitude) integration is implemented via a high accuracy fourth-order Runge-Kutta quaternion integration. The navigation frame mechanization uses a wander azimuth implementation to avoid singularities and reduce the overall algorithm complexity. In the algorithm we avoid explicit computation of the cosine and sine functions; the corresponding elements of the cosine transformation matrices are used instead.
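A minimal quaternion attitude propagation is sketched below. For brevity it uses first-order integration with renormalization rather than the fourth-order Runge-Kutta of the real solution; the rate and step size are illustrative:

```python
import math

# Minimal quaternion attitude propagation: q_dot = 0.5 * q ⊗ (0, ω).
# (First-order Euler with renormalization; a production system would
# use fourth-order Runge-Kutta as described in the text.)

def quat_mult(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

def propagate(q, wx, wy, wz, dt):
    """Integrate one step of the body rate, then renormalize."""
    dq = quat_mult(q, (0.0, wx, wy, wz))
    q = tuple(qi + 0.5 * dqi * dt for qi, dqi in zip(q, dq))
    n = math.sqrt(sum(qi * qi for qi in q))
    return tuple(qi / n for qi in q)

q = (1.0, 0.0, 0.0, 0.0)       # identity attitude
rate = math.pi / 2             # 90 deg/s about z
for _ in range(1000):
    q = propagate(q, 0.0, 0.0, rate, 0.001)
# After 1 s the quaternion is close to a 90 deg z-rotation:
# roughly (cos 45°, 0, 0, sin 45°)
```

The 4-element quaternion needs only a multiply and a renormalize per step, which is the efficiency argument made above versus the 9-element cosine matrix.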

**Position and velocity integration long term equations**

**Navigation Kalman Filter**

The short term (linear) integration equations which approximate the non-linear dynamics are shown in the figure below, where:

- δψ^n is the attitude error in the navigation frame (3 by 1 vector)
- δv^n is the velocity error in the navigation frame (3 by 1 vector)
- δr^n is the position error in the navigation frame (3 by 1 vector)


**Position and Velocity integration short term equations used in the Kalman Filter**

- x_gyro is the vector of gyro states (12 by 1), where:
  - b_g is the gyro bias in the body frame (3 by 1 vector)
  - s_g is the gyro scale factor (3 by 1 vector)
  - m_g is the gyro rotational misalignment (3 by 1 vector)
  - n_g is the gyro non-orthogonality misalignment (3 by 1 vector)
- x_acc is the vector of accelerometer states (9 by 1), where:
  - b_a is the accelerometer bias in the body frame (3 by 1 vector)
  - s_a is the accelerometer scale factor (3 by 1 vector)
  - n_a is the accelerometer non-orthogonality misalignment (3 by 1 vector)
- x_corr is the lever arm correction per considered aiding sensor (depends on the configuration). This state also models the average speed-of-sound measurement correction.

Note that the accelerometer rotational misalignment is not included in the state vector since it is not observable. However, the difference between the gyro and accelerometer rotational misalignments is observable and might be included as an additional state. For our particular application the impact of estimating the latter on overall performance is negligible, and therefore this additional state is not considered herein.

**Kalman Filter Matrix State-Space Form**

The Kalman Filter matrix state-space representation has the following form:

ẋ = A·x + w, where w collects the process noise sources:

- *ARW* is the gyro Angle Random Walk.
- *VRW* is the accelerometer Velocity Random Walk.
- *XRWgyro* is the gyro state noise sources: a random walk process for the scale factor, misalignment and non-orthogonality, plus a first order Markov process and constant offset for the rate bias, scale factor, misalignment and non-orthogonality.
- *XRWacc* is the accelerometer state noise sources: a random walk process for the scale factor and non-orthogonality, plus a first order Markov process and constant offset for the acceleration bias, scale factor and non-orthogonality.
- *XRWcorr* is the measurement noise sources corresponding to the lever arm correction estimate and the speed of sound measurement: white noise for the clock bias and white noise for the clock drift.

The sensor noise characteristics above are reported by the corresponding vendors as part of the sensor specifications and are confirmed during system integration and testing.
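A first order Gauss-Markov process of the kind named above can be simulated in a few lines; the time constant and sigma below are made-up values for illustration:

```python
import math
import random

# Illustrative simulation of a first-order Gauss-Markov process, the
# model used for sensor bias states (tau and sigma are assumptions).

def gauss_markov(n, dt, tau, sigma, seed=0):
    """x_{k+1} = phi * x_k + w_k, with w_k scaled to keep steady-state
    standard deviation equal to sigma."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - phi * phi)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + q * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

# 10,000 s of a bias-like state with a 300 s time constant.
samples = gauss_markov(n=10000, dt=1.0, tau=300.0, sigma=0.01)
```

Unlike a pure random walk, the exponential pull toward zero keeps the variance bounded, which matches how real sensor biases wander but do not grow without limit.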

Note that the gyro measurements are related to the body frame through the cosine transformation (orthogonal) matrix between the gyro sensor frame and the body frame; a similar relation holds for the accelerometers. The other transformation matrices for the inertial sensors considered below are the nominal gyro body to attitude-determination b-frame and the nominal accelerometer body to attitude-determination b-frame matrices. The corresponding matrix blocks in the “A-matrix” above are built as follows, where the semi-major earth axis “a” is defined earlier. One block models the velocity error impact on the attitude error, and another describes the GPS receiver clock error dynamics. Further blocks describe the contribution of the gyro states to the attitude error and the contribution of the accelerometer states to the attitude error. Specifically:

The gyro scale factor contribution to the attitude error is modeled using the estimated gyro-measured inertial rate, that is, the rate averaged over the measured gyro delta time. The gyro non-orthogonality contribution to the attitude error is modeled in the same form, and the gyro misalignment contribution to the attitude error is modeled as the sum of the rotational and non-orthogonality misalignment terms.

Similar arguments hold for the accelerometers. With the estimated accelerometer-measured specific acceleration (the acceleration averaged over the measured accelerometer delta time), the accelerometer scale factor contribution and the accelerometer non-orthogonality contribution to the attitude error are modeled analogously.

**Measured Residuals and Measurement Matrix H**

The residuals are constructed from the aiding data, its associated time tag, and the corresponding time-tagged estimates of the solution. To accomplish this we use circular buffers for all the navigation estimates (position, velocity, attitude, angular rates). For a given time tag, accurate interpolation is performed to obtain the associated estimated values.

The residuals are subsequently corrected for the best known (prior knowledge + estimated correction) lever arms, which model the IMU as well as all aiding sensor positions and orientations with respect to the reference frame on the ROV or structure.

Regular time spaced aiding data is not required. The aiding data is used when it is available.

Since the residuals are constructed from the predicted measurement, a sanity check of the measured data is performed and outliers are rejected.
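The outlier rejection can be sketched as a simple sigma gate on the residual; production filters typically use a chi-square test against the full innovation covariance, so this scalar gate is only an illustration:

```python
# Sketch of residual-based outlier rejection ("sanity check"): accept an
# aiding measurement only if its residual is within `gate` sigmas of the
# prediction (hypothetical values).

def accept_measurement(measured, predicted, sigma, gate=3.0):
    """Reject aiding data whose residual exceeds the sigma gate."""
    residual = measured - predicted
    return abs(residual) <= gate * sigma, residual

ok, r = accept_measurement(measured=100.4, predicted=100.0, sigma=0.2)
bad, _ = accept_measurement(measured=103.0, predicted=100.0, sigma=0.2)
```

Because the filter already predicts each measurement, rejection is nearly free: a wild acoustic range simply never enters the update.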

The following aiding implementations are considered, where “matrix H” maps the filter state to the measured residuals, and the associated measurement noise vector is white.

**Zero Velocity Update (zupt)**

The aiding is implemented as corrections issued by the Kalman Filter during stationary instances sensed by the motion detection logic. The input Kalman Filter residual is the estimated velocity itself (since the true velocity at a stop is zero), and the associated measurement noise is white with the given sigma.

For the considered IMU the position correction is possible due to the following:

Depending on the motion time, around 50% to 75% of the position errors can be removed during zupt periods due to the cross-correlation of the position and velocity errors. More correction is possible when the travel time is short and the correlation is pronounced.

Moreover, the one-sigma position error increases as **t^(5/2)** as the travel time increases (from the triple integration of the random walk). Therefore, as the travel period increases, the ability to correct the position decreases due to 1) the reduction of the correlation coefficient and 2) the increase of the position error itself. The position residual error after each zupt (correction) is uncorrelated from one stop to the next, and therefore it increases like a random walk, that is, as the square root of the number of travel periods.
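The square-root growth across stops can be checked numerically. The snippet below draws independent per-leg residuals and confirms the RMS grows roughly as the square root of the number of legs (all numbers synthetic):

```python
import math
import random

# Monte Carlo check: independent per-leg position residuals accumulate
# like a random walk, so RMS error grows as sqrt(number of legs).

def rms_after_legs(n_legs, leg_sigma, trials=2000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        err = sum(rng.gauss(0.0, leg_sigma) for _ in range(n_legs))
        total += err * err
    return math.sqrt(total / trials)

r1 = rms_after_legs(1, 0.1)
r16 = rms_after_legs(16, 0.1)
# r16 / r1 is close to sqrt(16) = 4
```

This is the practical consequence for surveying: error between monument ties grows slowly, as the square root of the number of travel legs, rather than linearly.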

**USBL Position Aiding**

If DP is the residual of the measured position, then the associated measurement noise is white with the given sigma, and the residual is rotated through the time-matched navigation-to-ECEF frame cosine transformation matrix. Corrections to the knowledge of the body-to-USBL-transducer lever arm, and its sensitivity matrix, are also part of the measurement model.

**Depth (altitude) aiding **

This is a particular case of the position aiding in which only the vertical channel is considered.

**LBL Ranges Aiding **

If DR(k) is the residual of the measured range for the k-th acoustic beacon, then the associated measurement noise is white with the given sigma. The measurement matrix H1(k) is the current line-of-sight vector for the k-th beacon, which maps the position error into the k-th beacon range error. It is computed from the current k-th beacon position in ECEF, the current body position in ECEF, and the estimated range to the k-th beacon. Next, the corrections to the knowledge of the body-to-LBL-receiver lever arm, and its sensitivity matrix, are included. Note that each beacon’s position knowledge uncertainty is included in this estimated state.

The weight is set per beacon. We also introduce, for each beacon, a trigger which de-weights the beacon exponentially once the range falls below a specified value. This models the reduction of measurement accuracy as we approach the beacon, due to the expected absolute depth error in the beacon station coordinates.

The discussion of mapping the measured time intervals to the corresponding ranges is omitted herein. We only note that the measured average speed of sound used to scale the time interval is also corrected by one state of the Kalman Filter. Special care is also taken to construct an accurate time tag for the measurement.
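The range residual and the line-of-sight row of H for a single beacon can be sketched as follows. For clarity this uses a local-level frame and made-up coordinates, whereas the text works in ECEF:

```python
import math

# Sketch of the range residual and line-of-sight row of H for one
# acoustic beacon (local-level frame, hypothetical coordinates).

def range_residual(beacon, body, measured_range):
    dx = [b - p for b, p in zip(beacon, body)]
    est = math.sqrt(sum(d * d for d in dx))   # estimated range
    los = [d / est for d in dx]               # unit line-of-sight (H row)
    return measured_range - est, los

beacon = (100.0, 0.0, -1500.0)    # beacon on the seabed
body = (0.0, 0.0, -1480.0)        # current ROV position estimate
dr, h_row = range_residual(beacon, body, measured_range=102.5)
```

The line-of-sight unit vector is exactly the row of H for this beacon: it projects a position error onto the direction in which the range measurement is sensitive.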

**Kalman Filter Key Processing Steps**

The key steps of the Kalman Filter process are the following:

- Step 1, Covariance Matrix Propagation: Pp = Φ·P·Φᵀ + Q, where P is the state covariance, Φ is the transition matrix over the filter (fast) delta time DtKF, and Q is the propagated noise variance.
- Step 2, Gain Calculation: K = Pp·Hᵀ·(H·Pp·Hᵀ + R)⁻¹, where Pp is the propagated state covariance above, H is the measurement matrix (see previous subsection), and R is the measurement noise variance.
- Step 3: Covariance Matrix update and residual state computation.

The Kalman filter process is iterative: the state covariance matrix P computed in step #3 is used in step #1. The corresponding state residuals, computed in step #3, are utilized to correct the estimated attitude, velocity and position, and are accumulated to improve the knowledge of the sensor (gyro, accelerometer and aiding sensor) states.
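In the scalar case the three steps reduce to the classic predict/gain/update recursion sketched below; the real filter is the full matrix form with the complete state vector:

```python
# Scalar sketch of the three Kalman steps: propagation, gain, update
# (illustrative values; the production filter is the matrix form).

def kf_step(x, p, phi, q, z, h, r):
    # Step 1: covariance (and state) propagation
    x_p = phi * x
    p_p = phi * p * phi + q
    # Step 2: gain calculation
    k = p_p * h / (h * p_p * h + r)
    # Step 3: update with the measurement residual
    x_new = x_p + k * (z - h * x_p)
    p_new = (1.0 - k * h) * p_p
    return x_new, p_new

x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95]:   # noisy observations of a constant
    x, p = kf_step(x, p, phi=1.0, q=0.0, z=z, h=1.0, r=0.04)
# estimate converges toward ~1.0 with shrinking covariance
```

The shrinking covariance p is the same quantity the text later cites as the real-time quality indicator that least-squares methods do not directly provide.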

**Initialization (Alignment)**

The initial alignment is achieved by using a special Kalman filter developed specifically for the alignment process. This filter can be configured for static (no motion whatsoever) and dynamic (marine) alignment.

**GPS Loosely coupled configuration**

Conceptually, this aiding is the same as position aiding discussed above.

**GPS Tightly coupled configuration**

The filter measured residual vector is computed as the difference between the pseudo-ranges and pseudo-range rates measured by the GPS receiver and their estimates. Subsequently, the GPS clock bias and drift corrections are applied to the derived residuals. Therefore, the measurement “matrix H” has the format shown below.

Let DPR be the residual of the measured pseudo-range for the k-th GPS-receiver-tracked satellite (not to be confused with the antenna-tracked satellite), and DDR be the residual of the measured pseudo delta range (rate) for the same satellite. The corresponding measurement noises are white with the given sigmas.

Then the measurement matrix H1(k) is the current line-of-sight vector for the k-th GPS-receiver-tracked satellite, which maps the position error into the k-th satellite pseudo-range error. It is computed from the current k-th satellite position in ECEF, the current antenna (body) position in ECEF, and the estimated range to the k-th satellite.

**Off-line re-run for analysis and smoothing**

This capability allows re-running the estimator forwards and backwards using the stored raw data. This results in a position estimate with a much smaller covariance bound, and therefore more accurate estimates.

**Kalman Filter (KF) based fixed beacon position estimator**

We have developed a special Kalman Filter to estimate the beacon positions from the measured ranges and the DGPS position. The benefit of the KF implementation over a conventional Least Squares Estimator is that it provides, in real time, a quality indicator in the form of covariance bounds, which is not directly available from other methods. For example, a Least Squares approach produces the quality of the range fit, which does not directly translate to a position error.

**Data Logging**

The time tag unit (TTU) sends all raw data to our CF storage. We also need to log processed position, navigation, attitude and velocity data as the solution is running, and we have an additional data set of final survey data (data that is logged as a “fix” for specific survey operations). This is all managed by our SSTT software. Data is logged internally in two primary states:

- “everything processed” is logged in a *.DAT file
- survey data is logged in an “Oplog” or operational log file

For all field operations we only use the data logged to the Oplog file, as this is a practical subset of all data. If we ever need to go back to the processed solution, we have it in the *.DAT file. If we ever need to replay raw data (we do this quite often, for training etc.), we have it in the CF file.

**User Interface**

The user interface for C-PINS is our “SSTT” application. This allows for job configuration and data entry such as alignment position, geodesy, lever arms or offsets, etc.

All of the aiding sensors are configured through SSTT as well. This is a fairly busy piece of software, but once an operator has spent a couple of hours working with it, it is fairly self-explanatory. Many standard navigation and depth displays are available within SSTT.

An overall screen dump showing the main interfaces of SSTT is included below:

To learn more about our Navigation Software Contact Zupt