Surveillance Sensor Eyes Data Overload

Wednesday, August 17, 2011 @ 01:08 PM gHale

An autonomous multi-sensor motion-tracking and interrogation system can reduce the workload for analysts by automatically finding moving objects, then presenting high-resolution images of those objects with no human input.

Intelligence, surveillance and reconnaissance (ISR) assets in the field generate vast amounts of data that can overwhelm operators and severely limit an analyst's ability to generate intelligence reports. The system's multi-target tracking capability enables it to manage imagery collection without continuous monitoring.


“These tests display how a single imaging sensor can be used to provide imagery of multiple tracked objects,” said Dr. Brian Daniel, research physicist, Naval Research Laboratory (NRL) ISR Systems and Processing Section. That is a job that typically requires multiple sensors.

During flight tests in March, a wide-area persistent surveillance sensor (WAPSS) generated multiple real-time tracks and autonomously cross-cued a high-resolution, narrow field-of-view interrogation sensor over an airborne network.

Graphic shows the network sensing concept.
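
To make the cross-cueing data path concrete, here is a minimal sketch of what a track report sent from the WAPSS tracker to the cue manager might look like. The field names, units and JSON encoding are illustrative assumptions, not the NRL message format.

```python
# Hypothetical track report passed over the airborne network from the
# WAPSS tracker to the cue manager. All fields are illustrative.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TrackReport:
    track_id: int         # unique ID assigned by the WAPSS tracker
    latitude_deg: float   # geodetic latitude of the tracked object
    longitude_deg: float  # geodetic longitude of the tracked object
    altitude_m: float     # height above the WGS-84 ellipsoid
    speed_mps: float      # estimated ground speed
    heading_deg: float    # course over ground, degrees from north
    timestamp: float      # UNIX time of the source frame

    def to_wire(self) -> bytes:
        """Serialize the report for transmission over the network."""
        return json.dumps(asdict(self)).encode("utf-8")

# Example: a vehicle-sized track ready to be cross-cued to the
# interrogation sensor.
report = TrackReport(42, 38.7200, -77.1500, 20.0, 12.5, 270.0, time.time())
print(report.to_wire())
```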

“The demonstration was a complete success,” said Dr. Michael Duncan, ONR program manager. “Not only did the network sensing demonstration achieve simultaneous real-time tracking, sensor cross cueing and inspection of multiple vehicle-sized objects, but we also showed an ability to follow smaller human-sized objects under specialized conditions.”

The network sensing demonstration used sensors built under other ONR-sponsored programs. The interrogation sensor was the precision, jitter-stabilized EyePod developed under the Fusion, Exploitation, Algorithm, and Targeting High-Altitude Reconnaissance program.

EyePod is a dual-band visible-near infrared and long-wave infrared sensor mounted inside a nine-inch gimbal pod assembly designed for small unmanned autonomous vehicle platforms. The wide-area sensor was the mid-wave infrared nighttime WAPSS (N-WAPSS), whose 16-megapixel, large-format camera captures single frames at four hertz (cycles per second) and offers a step-stare capability with a one-hertz refresh rate.
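
Those two figures are consistent with a four-position step-stare cycle: a camera framing at four hertz that steps through four stare positions revisits each position once per second. The sketch below illustrates that timing; the four-position cycle is an inference for illustration, not a published detail of N-WAPSS.

```python
# Step-stare timing sketch consistent with the quoted figures: a 4 Hz
# framing camera cycled through four stare positions revisits each
# position at 1 Hz. The position count is an assumption.
FRAME_RATE_HZ = 4.0   # single-frame capture rate
STARE_POSITIONS = 4   # assumed positions in the step-stare cycle

refresh_hz = FRAME_RATE_HZ / STARE_POSITIONS
print(f"Per-position refresh rate: {refresh_hz:.1f} Hz")  # -> 1.0 Hz

def step_stare_cycle(num_frames: int):
    """Yield (frame_index, stare_position) pairs for the scan pattern."""
    for frame in range(num_frames):
        yield frame, frame % STARE_POSITIONS

for frame, position in step_stare_cycle(8):
    print(f"frame {frame} at t={frame / FRAME_RATE_HZ:.2f}s -> position {position}")
```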

Using precision geo-projection of the N-WAPSS imagery, the system was able to track all moving vehicle-sized objects in the field of view in real time. The tracks were converted to geodetic coordinates and sent via the airborne network to a cue manager system. The cue manager autonomously tasked EyePod to interrogate all selected tracks for target classification and identification.
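
A minimal sketch of the cue manager's role, assuming the behavior described above: geodetic tracks arrive over the network, are queued, and the interrogation sensor is tasked to each in turn. The FIFO policy and the task_eyepod callable are hypothetical stand-ins for the demonstration's actual tasking logic.

```python
# Hypothetical cue-manager sketch: queue incoming geodetic tracks and
# task the narrow field-of-view interrogation sensor to each one.
from collections import deque

class CueManager:
    """Queues incoming tracks and tasks the interrogation sensor."""

    def __init__(self, task_eyepod):
        self.pending = deque()          # FIFO queue of tracks awaiting inspection
        self.task_eyepod = task_eyepod  # callable that slews the sensor

    def on_track(self, lat_deg, lon_deg, alt_m):
        """Called for each geodetic track received over the network."""
        self.pending.append((lat_deg, lon_deg, alt_m))

    def service(self):
        """Cue the sensor to each queued track in turn."""
        while self.pending:
            lat, lon, alt = self.pending.popleft()
            # Classification and identification happen once the sensor
            # is on target.
            self.task_eyepod(lat, lon, alt)

# Example: cue EyePod (stubbed here as a print) to one vehicle-sized track.
manager = CueManager(task_eyepod=lambda lat, lon, alt:
                     print(f"EyePod cued to {lat:.4f}, {lon:.4f}, {alt:.0f} m"))
manager.on_track(38.7200, -77.1500, 20.0)
manager.service()
```

In the March flights, this detect-cue-inspect loop ran without operator intervention, closing the gap between wide-area detection and close-in identification.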
