3D Mapping Paper

Cross-platform lidar and SfM-MVS data fusion for dryland ecosystem monitoring

Authors: Tyson L. Swetnam1*, Temuulen T. Sankey2, Mitchell P. McClaran1, Mary Nichols3, Jason McVay2, Phillip Heilman3

1 School of Natural Resources and the Environment, University of Arizona, Tucson AZ

2 Informatics and Computing Program, Northern Arizona University, Flagstaff AZ

3 USDA Southwest Watershed Research Center, Agricultural Research Service, Tucson, AZ 85719

*Corresponding author

1064 East Lowell Street 

Tucson Arizona 85719

email: tswetnam@email.arizona.edu 

phone: 1 (520) 621-1052

Abstract

Dryland ecosystems exhibit long periods of senescence punctuated by rapid growth following seasonal precipitation events. Monitoring vegetation via remote sensing to capture new growth, as well as changes to structure from herbivory and disturbance, requires high spatial and temporal precision. Difficulties in collecting accurate information persist across sensor platforms and at different scales in space and time. In the present study we identify specific strengths and weaknesses of several modalities across terrestrial and aerial platforms, including sUAS and manned aircraft. Following the summer monsoon, in early October 2015 we collected (1) sUAS structure from motion multi-view stereo (SfM-MVS), (2) sUAS lidar, (3) terrestrial lidar, and (4) manned aircraft lidar over the Walnut Gulch Experimental Watershed (WGEW), a long-term United States Department of Agriculture, Agricultural Research Service (USDA-ARS) research site located east of the town of Tombstone, Arizona (31.74° N, 110.05° W). In March, May, August, and September 2016 we again collected sUAS SfM-MVS and terrestrial lidar over the Santa Rita Experimental Range (SRER), another long-term USDA-ARS site south of the city of Tucson, Arizona (31.80° N, 110.84° W). We found aerial lidar (both sUAS and manned aircraft) to be more precise at identifying bare ground elevation than terrestrial lidar or sUAS SfM-MVS, but less precise in measuring herbaceous vegetation than terrestrial lidar or sUAS SfM-MVS. The cost of collecting manned aircraft lidar and sUAS lidar also precluded them from high-frequency data collection, whereas sUAS SfM-MVS collection was not cost-limited. Despite the utility of the sUAS for monitoring vegetation phenology and structure, the overhead of computational processing became a limiting factor at progressively larger scales (both spatial and temporal).

Introduction

Keeping pace with technological innovation and analyzing the breadth of remotely sensed information now available for ecosystem monitoring far surpasses the skillset of any single traditional field ecologist. Monitoring of species and ecosystems has always been limited by what Levin (1992) termed the 'problem of pattern and scale'. Yet today, technology ever more quickly lowers these barriers. Ecologists are no longer limited to working in small monitoring plots and extrapolating those observations to estimate the characteristics of larger areas. They can measure entire ecosystems (and ecoregions) via manned aircraft equipped with lidar and hyperspectral sensors. At more local scales they can monitor hectare-sized areas with centimeter and even millimeter precision using small unmanned aerial systems (sUAS). Three-dimensional structure from motion (SfM) reconstruction of surface features and vegetation from passive visible-light cameras also promises to greatly enhance our ability to monitor ecosystems.

 

Over the last half century, remote sensing from aerial and space-based platforms has immeasurably changed the earth and environmental sciences. Conventional orthophotography and satellite imagery are, by their sensor design, two-dimensional (2-D) data measured across a wide range of spatial, spectral, and radiometric resolutions. More recently, these passive sensor technologies have begun to be supplemented, and to some extent surpassed at the scientific forefront, by laser-based active sensor technology, specifically light detection and ranging (lidar), which has become the dominant technology in the earth sciences for measuring the three-dimensional structure of vegetation and earth surface phenomena (Glennie et al. 2013, Harpold et al. 2015). Despite these sensor advances, the measurement and re-measurement of ecological and geophysical phenomena are still limited by the resolution of the sensors and their availability across time and space (Woodcock and Strahler 1987, Turner 1989, Turner et al. 1989, Levin 1992). In general, we are very good at measuring things at high spatial, spectral, radiometric, or temporal resolution, but typically no more than two of these dimensions at one time.

Dryland ecosystems, characterized as regions where evaporation and vegetation transpiration exceed precipitation, cover 41% of the Earth's terrestrial surface (Millennium Ecosystem Assessment 2005).

From 2D to 3D, and back again.

The dominant modes of Geographic Information Systems (GIS) (Estes and Star 1990, Coppock and Rhind 1991) are moving away from grid-based (so-called 'raster') and 2-D vector maps to also include 3-D point and mesh representations of volumetric space. Rasters cannot represent all of the information in a 3-D space at one time, nor do they accurately describe 3-D features such as vegetation or vertical relief like cliffs and river banks (Lague et al. 2013). Raster derivatives of point cloud data, sometimes described as 2.5-D data, include bare earth Digital Elevation Models (DEM) (Kraus and Pfeifer 2001, Sithole and Vosselman 2004), Digital Surface Models (DSM), which include vegetation and human structures on the surface (Zhang et al. 2003), Canopy Height Models (CHM) of vegetation height above ground level (St-Onge and Achaichia 2001, Lefsky et al. 2002, Popescu et al. 2002), and statistical metrics of the point cloud vertical profile (Reutebuch et al. 2005, McGaughey 2009), all of which consist of evenly spaced uniform grids. In any case, the availability of 3-D data is increasing, and its cost decreasing, at a rate faster than most organizations can incorporate into their land management planning and monitoring protocols.

Discriminating physical surfaces (e.g., bare ground, bathymetry, or infrastructure) from biological features (e.g., vegetation) in 3-D point clouds to produce accurate DEMs and DSMs involves numerous computational techniques with varying levels of complexity (Kraus and Pfeifer 2001, Zhang et al. 2003, Hutton and Brazier 2012). Classification of physical and biological features from point cloud data (Brodu and Lague 2012) requires numerous parameterizations which can be both computationally intensive and time consuming to design. Final surface model precision and accuracy further depend upon the user's intended application, and a single product may not suit all users; e.g., a finely textured surface appropriate for measuring soil properties may be prohibitively inefficient for modelling surface flooding and run-off from precipitation. SfM mapping from terrestrial platforms (Castillo et al. 2015) and from UAS (Rango et al. 2009, Harwin and Lucieer 2012, Anderson and Gaston 2013) suggests that in non-vegetated terrain SfM, if collected with rigorous planning and attention to detail, is as good as ALS or even TLS.
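As a concrete illustration of the simplest end of this complexity spectrum, the sketch below is a minimal grid-minimum ground filter in Python (numpy). It is a toy stand-in for the cited filters (e.g., the progressive morphological filter of Zhang et al. 2003), not the classification workflow used in this study, and the cell size and tolerance values are arbitrary assumptions.

```python
import numpy as np

def simple_ground_filter(points, cell=1.0, tol=0.15):
    """Provisional ground/non-ground split for an (N, 3) array of x, y, z.

    Points within `tol` meters of the lowest return in their `cell`-sized
    grid cell are labeled ground. A toy stand-in for the more sophisticated
    filters cited in the text (e.g., Zhang et al. 2003).
    """
    xy, z = points[:, :2], points[:, 2]
    # Index each point into a coarse grid cell
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]
    # Minimum elevation per occupied cell
    order = np.argsort(keys)
    sorted_keys, sorted_z = keys[order], z[order]
    first = np.r_[True, np.diff(sorted_keys) > 0]
    cell_min = np.minimum.reduceat(sorted_z, np.flatnonzero(first))
    # Map each point back to its cell minimum and threshold on height above it
    cell_index = np.cumsum(first) - 1
    is_ground_sorted = sorted_z - cell_min[cell_index] <= tol
    is_ground = np.empty_like(is_ground_sorted)
    is_ground[order] = is_ground_sorted
    return is_ground  # boolean mask: True = provisional ground

# Synthetic example: flat ground plus a 1.5 m "shrub" of elevated points
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 10, 500),
                          rng.uniform(0, 10, 500),
                          rng.normal(0.0, 0.02, 500)])
shrub = ground[:50].copy()
shrub[:, 2] += 1.5
mask = simple_ground_filter(np.vstack([ground, shrub]))
print(f"{mask.sum()} of {mask.size} points labeled ground")
```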

The critical threshold a new technology must reach in order to warrant replacing an older, established method or technology should be that its added benefit outweighs the associated costs of collecting the data and the effort required to process them into results. Limitations of the technology should also be considered, i.e. whether a loss in precision or accuracy, or a coarsening of spatial or temporal resolution relative to an existing technique, is acceptable given the benefit added. For example, if SfM collected from UAS is as accurate as ALS and its cost per hectare is significantly lower, then the benefit added by UAS-SfM's deeper temporal resolution outweighs the loss in total spatial coverage obtainable by manned ALS or manned SfM.
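To make this threshold concrete, the short sketch below compares the cost of covering a single small site with sUAS-SfM versus mobilizing a manned ALS flight, using placeholder mid-range figures drawn from the ranges in Table 1; the values are illustrative assumptions, not actual project costs.

```python
# Illustrative trade-off using the daily-rate and footprint ranges in Table 1
# (placeholder mid-range figures, not actual project costs).
site_ha = 500                     # roughly a single flux-tower fetch footprint

suas_sfm_daily_rate = 9_000.0     # US$, within the 3,000-15,000 range
suas_sfm_ha_per_day = 500.0       # within the 10^2-10^3 ha footprint in Table 1

manned_als_daily_rate = 45_000.0  # US$, within the 30,000-60,000 range
manned_als_ha_per_day = 50_000.0  # within the 10^3-10^6 ha footprint

for label, rate, coverage in [("sUAS-SfM", suas_sfm_daily_rate, suas_sfm_ha_per_day),
                              ("manned ALS", manned_als_daily_rate, manned_als_ha_per_day)]:
    # Whole days of mobilization are billed even for a small site
    cost_for_site = rate * max(1.0, site_ha / coverage)
    print(f"{label:>10}: ${cost_for_site:>9,.0f} for {site_ha} ha "
          f"(${cost_for_site / site_ha:,.2f} ha^-1)")

# For a small site the manned-ALS mobilization cost dominates, so repeated
# sUAS-SfM surveys become affordable; over watershed-to-regional extents the
# ranking reverses, which is the threshold discussed above.
```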

In the present study we compare four different 3-D remote sensing techniques: terrestrial laser scanning (TLS), UAS aerial laser scanning (UAS-ALS), manned aircraft aerial laser scanning (M-ALS), and UAS-SfM in two different vegetation cover types (shrubland and semi-arid grassland). We were interested in first determining the cross-platform precision, accuracy, and uncertainty and second, identifying applications for two new and less expensive technologies available for UAS: SfM and ALS, relative to two other well established but much more expensive systems: TLS and manned ALS. 

Lidar suffers from high start-up costs for equipment and training. Terrestrial Laser Scanning (TLS) units, differentially corrected Global Positioning Systems (dGPS), and Total Stations, which are required to create a precisely georeferenced scan, cost tens to hundreds of thousands of US dollars to purchase or rent and require specialized training to operate. Similarly, aerial laser scanning (ALS), which covers far larger areas at nominally lower resolution than TLS, has an even higher start-up investment cost for the aircraft, pilots and technicians, lidar equipment, dGPS, onboard inertial motion unit (IMU), and computational equipment, resulting in manned ALS collections being limited to single site visits or long intervals between revisits.

Table 1: Spatial and temporal resolution, point density, mapping characteristics, and costs of data collection for various 3-D mapping techniques.

Technique           | Spatial extent (ha) | Point density (points m^-3) | Precision | Bare ground | Ground beneath vegetation | Woody vegetation | Canopy penetration | Equipment (US$)      | Daily rate (US$) | Information cost (US$ ha^-1) | Information value (US$ point^-1 area^-1)
SfM                 | <10^0               | 10^7                        | <1 cm     | Yes         | No                        | Yes              | No                 | 1,000 - 5,000+       | 200 - 1,000+     | 200 - 400                    |
TLS                 | 10^1 - 10^2         | 10^4                        | <1 cm     | Yes         | No                        | Yes              | Yes                | 15,000 - 150,000+    | 3,000 - 5,000+   | 100 - 400                    |
sUAS ALS            | 10^1 - 10^3         | 10^2                        | <10 cm    | Yes         | Yes                       | Yes              | Yes                | 50,000 - 250,000+    | 5,000 - 10,000   | 25 - 50                      |
sUAS SfM            | 10^2 - 10^3         | 10^3                        | <10 cm    | Yes         | No                        | Yes              | No                 | 2,000 - 100,000+     | 3,000 - 15,000   | 5 - 25                       |
Manned aircraft ALS | 10^3 - 10^6         | 10^1                        | <20 cm    | Yes         | Yes                       | Yes              | Yes                | 250,000 - 1,000,000+ | 30,000 - 60,000  | 0.75 - 1.25                  |

*Fails to find bare earth surfaces whenever there is a significant vegetation component.

More recently developed 3-D remote sensing techniques include structure from motion (SfM) with multi-view stereo (MVS), or photogrammetric detection and ranging (phodar), which derive from traditional stereoscopy (i.e., parallax height measurement) and use rapid computational algorithms to produce dense 3-D point cloud data that are nearly equivalent to ALS and TLS in accuracy, precision, and point density (Westoby et al. 2012, Fonstad et al. 2013, Smith et al. 2015). Further, SfM-MVS can be conducted from small unmanned aerial systems (sUAS) (Smith et al. 2015), which increases the frequency of data acquisition, but only across a smaller footprint relative to traditional ALS. SfM-MVS also benefits from a much lower start-up cost (Dandois and Ellis 2013). SfM-MVS is limited in several ways in which lidar is not: (1) it is a passive sensor technology and depends upon illumination of its targets by another light source, typically the sun; and (2) its capacity to generate accurate bare earth models is increasingly limited in areas with dense vegetation cover (Westoby et al. 2012, Nouwakpo et al. 2015). Advances in sUAS equipped with various active and passive sensor technologies portend yet another scale change in the availability of remotely sensed data for natural resource managers and earth scientists (Rango et al. 2009, Harwin and Lucieer 2012, Anderson and Gaston 2013, Javernick et al. 2014). The potential for multi-temporal (daily to hourly) data at high spatial resolution (<1 cm pixel or voxel size) over intermediate extents (1 to 100 ha) makes sUAS highly desirable for ecological monitoring, as these temporal and spatial scales correspond well to eddy covariance flux tower fetch footprints (Beland et al. 2015) as well as hot spots of geophysical change following disturbance (Lague et al. 2013). With this veritable fire hose of data streaming in from the field, data scientists need to develop best practices for data management and analysis to ensure the benefits added by the technology are fully realized.

The temporal repeatability of SfM-MVS collections from sUAS suggests we can now observe structural characteristics of, and change in, ecosystems and landscapes at frequencies that far exceed those of manned ALS.

Establishing where the limitations of these new remote sensing technologies lie remains a critical task for managers and researchers interested in developing new monitoring protocols and testing scientific hypotheses. For example, terrestrial laser scanning (TLS) has precision and accuracy (<1 cm^3), point counts (10^6-10^9), and point densities (10^5 points m^-2) that far exceed other technologies, but it is hampered by the short heights above ground level from which it can be collected, and TLS scans often have occlusions where objects such as vegetation block the line of sight. Passive-sensor remote sensing, which includes SfM-MVS, is much less expensive than laser scanning, but has numerous dependencies which must be addressed prior to collecting high quality data, e.g. establishment of multiple ground control points (GCPs) and close attention to the light environment, sensor resolution, and angle of image collection; SfM also suffers from an apparent inability to penetrate dense vegetation in the way lidar can (Dandois and Ellis 2010, 2013, James and Robson 2014). What has not been clearly investigated is how cross-sensor-platform data fusion can overcome these limitations so that we can remotely sense our environment over time and space at higher resolutions than are possible with any one technology. In natural resource management this scaling limitation is of particular concern: when the area under management is so large that it is fiscally or physically impossible to collect enough field-sampled information to create a statistically robust characterization of ecosystem or landscape phenomena, remote sensing may be required to answer an important question.

Table 2: Nomenclature used in the text.

Unit/Acronym | Name                                     | Description
ALS          | Aerial Laser Scanning                    | lidar collected from a manned aircraft
AMERIFLUX    | AmeriFlux                                | network of eddy covariance flux tower sites
ARS          | Agricultural Research Service            |
CHM          | Canopy Height Model                      | height of vegetation above ground level, DSM minus DEM
cm           | centimeter                               | 1 centimeter, or 10 mm
CMVS         | Clustering Views for Multi-View Stereo   |
δ            | delta                                    | measurable change in quantity
DEM          | Digital Elevation Model                  | bare earth surface model
dGPS         | differential Global Positioning System   | satellite network which triangulates position on the Earth's surface, using a second base station for error correction
DSM          | Digital Surface Model                    | surface model including vegetation and structures
Est          | estimated                                | estimated or predicted quantity
GCP          | Ground Control Point                     | a target or surveyed point on the surface
GIS          | Geographic Information Systems           | digital cartography or maps produced in a computer
GNSS         | Global Navigation Satellite System       |
GPU          | Graphics Processing Unit                 |
IMU          | Inertial Motion Unit                     |
ha           | hectare                                  | 10,000 m^2
km           | kilometer                                | 1,000 meters
lidar        | Light Detection and Ranging              | laser measurements which return x,y,z position in space
m            | meter                                    | 100 cm
MAE          | Mean Absolute Error                      |
mm           | millimeter                               | 1 millimeter, or 1/10 cm
MSE          | Mean Square Error                        |
MVS          | Multi-View Stereo                        | a method for generating topography from SfM
NIR          | Near Infrared                            |
nm           | nanometer                                | 10^-9 meter, or one billionth of a meter
Obs          | observed                                 | observed quantity
phodar       | photogrammetric detection and ranging    | similar to SfM, but involving detailed orthophotographic and map projection techniques
PMVS         | Patch-based Multi-View Stereo            |
RMSE         | Root Mean Square Error                   |
RTK          | Real Time Kinematic                      |
SDE          | Standard Deviation of the Error          |
SfM          | Structure from Motion                    | stereoscopic reconstruction of three dimensional objects, without scale or georeferencing
σ            | sigma                                    | 1 standard deviation
sUAS         | small Unmanned Aerial System             | small aircraft equipped with sensors for measurement
TLS          | Terrestrial Laser Scanning               | ground-based laser measurements
USDA         | United States Department of Agriculture  |
USGS         | United States Geological Survey          |

Study Area

The Walnut Gulch Experimental Watershed (WGEW) is a long-term United States Department of Agriculture, Agricultural Research Service (USDA-ARS) research site located east of the town of Tombstone, Arizona (31.74° N, 110.05° W). The soils vary from high-carbonate soils in the western lower watershed to XXX in the eastern upper watershed (Keefer et al. 2008).

For this study we selected the fetch footprints of two eddy covariance flux towers (both part of the AMERIFLUX network): Lucky Hills Shrubland (US-Whs) and Kendall Grassland (US-Wkg). Lucky Hills is characterized as Chihuahuan desert scrub (Scott et al. 2006, Scott 2010); the dominant species include Larrea tridentata (creosote bush), Vachellia vernicosa (whitethorn acacia), Flourensia cernua (tarbush), Parthenium incanum (mariola), Rhus microphylla (littleleaf desert sumac), Condalia warnockii (Warnock's snakewood), and Ephedra spp. (Mormon tea). The Kendall Grassland site is characterized as semi-arid desert grassland (Scott 2010); dominant species include Eragrostis lehmanniana (Lehmann lovegrass), an invasive grass, and Prosopis spp. (mesquite). Native grasses include Hilaria belangeri (curly mesquite), Bouteloua eriopoda (black grama), Bouteloua hirsuta (hairy grama), and Aristida hamulosa (threeawn) (Skirvin et al. 2008); other species include Yucca baccata (banana yucca), Yucca elata (soaptree yucca), and Agave palmeri (Palmer's agave).

Methodology

The data are organized as a cross-comparison of four different techniques: TLS, sUAS-ALS, manned ALS, and SfM. The TLS data are treated as the 'observed' reference values based on their absolute positional accuracy, which is more than an order of magnitude finer than the next closest technology. Establishing absolute location on the Earth's geoid from GPS readings taken with different instruments introduces errors related to the positional accuracy of each GPS; here we relied on a real-time kinematic (RTK) system. To avoid adding additional levels of uncertainty to our comparisons, we aligned the individual point clouds using control points that are clearly identifiable in each dataset.

  1. Georeferencing and alignment of point clouds
    1. Ground control points (GCP) have been established at both locations from differential GPS and Total Station surveys from USGS Benchmarks (two near Kendall Grassland, XXX at Lucky Hills).
    2. Features that are clearly identifiable (and measured) in each of the data sets were used to align point clouds in CloudCompare (Girardeau-Montaut 2011) using the three-point-picking tool, e.g. the eddy-covariance flux towers and surrounding structures, water sampling flumes, fence posts, parked vehicles, telephone poles, road features, and small boulders. The nominal precision of each point cloud is relative to the precision of the technology used to collect it.
    3. Watershed boundaries and major infrastructure features for Walnut Gulch are available in a pre-existing GIS and remote sensing database (Heilman et al. 2008, Moran et al. 2008). 
    4. The absolute precision of the Riegl VZ-400 TLS unit is ±5 mm and its accuracy ±3 mm at 1 sigma (σ) at 100 m distance from the sensor under test conditions. The unit has a minimum range of 1.5 m and a maximum range of 160-350 m depending on target reflectivity (20%-90%) in high-speed mode, and has a beam divergence of 0.35 mrad, resulting in an increase in beam diameter of 35 mm for every additional 100 m distance from the sensor. 
    5. We use the TLS to establish 'virtual' GCP for alignment to the other three point cloud products from the sUAS-ALS, manned ALS, and SfM. 

      USGS Benchmark IDs  
      Kendall #1  
      Kendall #2  
      Lucky Hills # 1  
    6. are identified using a XXXX (Teki - does the eBee software use a 3-D Helmert transformation? (see Harwin and Lucieer 2012: seven parameter, three translation, three rotation, one scale))
  2. To estimate point-cloud-to-point-cloud, DEM, and DSM quality we calculated the precision, or standard deviation of the error (SDE), about the estimated surface and the residual errors, which include the measure of surface quality or root mean square error (RMSE), the measure of accuracy or mean error (ME), and the measure of absolute positional difference or mean absolute error (MAE); these metrics take their standard forms (written out after this list),

    where Esti is determined from the SfM, sUAS-ALS, and M-ALS elevations or point locations, and Obsi is the observed RTK-GPS elevation or point location derived from the Riegl TLS unit. 
  3. For characterizing the positional differences between each of the 3-D point clouds we used a robust cloud-to-cloud comparison tool: the multi-scale model to model cloud comparison (M3C2) (Lague et al. 2013). M3C2 compares point clouds directly without meshing or gridding, computes the local distance between the clouds along a normalized surface, and estimates the confidence interval based on cloud roughness and registration error.
  4. DEMs of difference (DoD1) (Lague et al. 2013, Pelletier and Orem 2014) were generated from each of the datasets to determine the precision and accuracy of bare ground.
  5. DSMs of difference (DoD2) were also generated to determine the precision and accuracy of vegetation height.
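The error metrics named in item 2 take their standard forms; written in the Est_i / Obs_i notation of that item (a reconstruction of the standard definitions, since the original equations are not preserved in this draft), for n paired elevations or point locations:

```latex
\mathrm{ME}   = \frac{1}{n}\sum_{i=1}^{n}\bigl(Est_i - Obs_i\bigr), \qquad
\mathrm{MAE}  = \frac{1}{n}\sum_{i=1}^{n}\bigl|Est_i - Obs_i\bigr|, \qquad
\mathrm{RMSE} = \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}\bigl(Est_i - Obs_i\bigr)^{2}}, \qquad
\mathrm{SDE}  = \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}\bigl(Est_i - Obs_i - \mathrm{ME}\bigr)^{2}}
```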

Propagation of Uncertainty

Sources of error in the final models include the propagation of errors from each technology (the measurement error) as well as the model error introduced when using interpolated or mean estimates.

  1. The error propagation for two or more measurements with different levels of uncertainty (δA, δB, and δC) is computed in quadrature; the result, R, has the uncertainty δR = (δA² + δB² + δC²)^(1/2). 
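As a minimal numerical illustration of this rule, the snippet below combines in quadrature the TLS uncertainty (6 mm at 100 m) and the manned-lidar open-terrain RMSEz (4.9 cm) quoted in this section; the pairing is only an example of the calculation, not a reported result of the study.

```python
import math

def combined_uncertainty(*components_m):
    """Root-sum-of-squares propagation: dR = (dA^2 + dB^2 + ...)^(1/2)."""
    return math.sqrt(sum(c * c for c in components_m))

tls_uncertainty = 0.006       # m, Riegl VZ-400 at 100 m (from the text below)
manned_als_rmse_z = 0.049     # m, Woolpert open-terrain RMSEz (from the text below)

print(f"Combined vertical uncertainty: "
      f"{combined_uncertainty(tls_uncertainty, manned_als_rmse_z) * 100:.1f} cm")
# ~4.9 cm: the manned-lidar term dominates, so a TLS-vs-ALS DEM of difference
# cannot resolve vertical changes much smaller than about 5 cm.
```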

The USGS Lidar Base Specification (Heidemann 2014) requires Quality Level 1 (QL1) data to have an absolute vertical error of <29.4 cm in vegetated terrain and <19.4 cm on non-vegetated surfaces.

The Woolpert lidar flown over WGEW had a tested vertical accuracy of 9.6 cm at the 95 percent confidence level, derived according to the NSSDA in open terrain as 4.9 cm (RMSEz) × 1.9600, tested against the TIN using independent checkpoints.

The TLS unit has an estimated uncertainty of 6 mm at 100 m distance from the sensor (Riegl documentation). 

 

Some of the error in the models subsequently generated from SfM may be due to rolling-shutter artifacts during flight (Ref.). 

 

Terrestrial lidar

We used a Riegl VZ-400 terrestrial laser scanner to scan the areas around the two study sites. Immediately prior to both of the sUAS flights and the TLS scanning on 10/8/2015, we erected six 2-m tall, 10-cm diameter cylindrical lidar targets atop existing Total Station control points to serve as control for the TLS scans. At Kendall Grassland the targets were erected atop two USGS benchmarked points (Mary and Michelle: can you fill out this paragraph with how the ground pins were established - also ensure that these statements are factually correct) and four Total Station and RTK-located pins across the drainage from the USGS benchmarks. RiSCAN PRO? At Lucky Hills the targets were erected upon (Mary: need information about the target locations over pins at Lucky Hills) XXXX

    1. Need details about the instrument precision and accuracy
    2. Average pulse per square meter (ppsm)

Manned Aerial lidar
Aerial lidar was collected by Woolpert Inc. over the entire WGEW three weeks prior to the first terrestrial laser scanning at Kendall. The lidar has a nominal point spacing of (proposed*0.35m), with a relative vertical accuracy of ≤8 cm RMSEz between adjacent swaths and a maximum of ±16 cm. The aerial lidar was surveyed using a Real-Time Kinematic GPS survey as well as a Rapid-Static GPS survey.

Small Unmanned Aerial 

  1. Fixed-wing sUAS and Structure from Motion
     The fixed-wing sUAS (eBee) is a small, electric platform with a single pusher propeller at the rear (SenseFly, Switzerland). It is launched by hand and belly-lands, with a maximum take-off weight of 750 g. It is operated with custom flight-planning software and a ground control station which provide built-in safety redundancies. The fixed-wing sUAS was designed to collect geometrically corrected aerial photography used for creating orthorectified photo-mosaics, as well as photogrammetrically produced synthetic 3-dimensional point clouds via structure from motion. The platform can be equipped with one of several sensors at a time, including visible, near infrared (NIR), multispectral, and thermal cameras. We tested the multispectral sensor with four bands: green (520-580 nm), red (630-690 nm), red edge (720-750 nm), and NIR (760-820 nm).

    The fixed-wing sUAS data were processed in eMotion and Postflight Terra 3D software (SenseFly, Switzerland), which orthorectifies and mosaics the image tiles. The Postflight Terra 3D software also generates 3-D point cloud data via structure from motion using tie points from individual image tiles taken from different angles. The fixed-wing sUAS flights were performed at an average flight altitude of 110 m with 80% and 70% vertical and horizontal overlap between image tiles. Given the flight altitude, the multispectral images had 15 cm resolution (see the ground sample distance sketch following the platform descriptions). The image tiles were successfully georeferenced and calibrated (100%) to generate the synthetic 3-D point cloud. 

2. We did not establish any ground control targets for the eBee at Walnut Gulch. Instead we relied on several local features (fence posts, the eddy covariance flux tower, road features) which were scanned by the TLS unit. 

3. Octocopter sUAS and Aerial lidar

The octocopter sUAS (Service-Drone, Germany) weighs 5.5 kg and was developed to carry an additional payload of up to 6.5 kg. The octocopter is controlled via a hand-held remote control transmitter and a ground control station with a navigation data link, which sends waypoint navigation information to the craft live from a laptop computer. The octocopter was custom-designed to carry an inertial navigation system (INS), a lidar scanner, and a hyperspectral sensor with a data storage unit on a 3-axis gimbal. The INS has an integrated survey-grade Global Navigation Satellite System (GNSS) and an inertial motion unit (IMU) that correct for errors associated with pitch, roll, and heading (0.05º, 0.05º and 0.5º RMS, respectively) (SBG Systems North America, Inc., Chicago, IL). The hyperspectral sensor is a pushbroom nano-sensor with 272 spectral bands spanning 400-1000 nm (Headwall Photonics Inc., Fitchburg, MA). The Velodyne HDL-32E lidar scanner can be operated at maximum flight altitudes of 80-100 m and produces 3-dimensional laser point cloud data with 32 laser beams per scan and ±2 cm accuracy over a 40º vertical and 360º horizontal field of view (Velodyne Acoustics, Inc., Morgan Hill, CA). The spot size of each laser beam is 0.03 m2 at 70 m flight altitude. The lidar point density per m2 varies with flight altitude and speed; the average point density observed in this study was 35 points/m2.
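The flight-altitude/resolution relationship reported for the fixed-wing SfM flights above can be sketched with the standard pinhole ground-sample-distance formula. The pixel pitch and focal length below are hypothetical values chosen only so the arithmetic reproduces roughly the 15 cm resolution reported at 110 m; the multiSPEC optics are not specified in the text.

```python
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """GSD = pixel pitch x altitude / focal length (simple pinhole model)."""
    return pixel_pitch_m * altitude_m / focal_length_m

# Hypothetical optics for illustration only; actual multiSPEC specifications
# are not stated in the text.
altitude = 110.0          # m, the average flight altitude reported above
pixel_pitch = 6.0e-6      # m (6 micrometers) -- assumed
focal_length = 4.4e-3     # m (4.4 mm)        -- assumed

gsd = ground_sample_distance(altitude, pixel_pitch, focal_length)
print(f"GSD at {altitude:.0f} m: {gsd * 100:.0f} cm per pixel")   # ~15 cm
```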

 

Platform        | Location          | Survey date | Flights/Scans | Footprint (ha) | Post-processing software                 | 3-D   | Instrument            | Georeference                            | Positional accuracy                         | Nominal point density (points m^-2) | Uncertainty | Precision | Accuracy
DSLR camera     | SRER Grassland    | 3/17/2016   | 6             |                | Agisoft Photoscan v1.3                   | SfM   | Sony a6000            | Ublox8 GPS                              |                                             |                                     |             |           |
DSLR camera     | SRER Woodland     | 6/2016      | 2             |                | Agisoft Photoscan v1.3                   | SfM   | Sony a6000            | Ublox8 GPS                              |                                             |                                     |             |           |
DJI Osmo        |                   |             |               |                | Agisoft Photoscan v1.3                   |       | Gimbaled x3 and x5    | No                                      |                                             |                                     |             |           |
TLS             | Lucky Hills       | ????        | ??            |                | Riegl software (name?)                   | lidar | Riegl VZ-400          | USGS benchmarks and Total Station pins  | <1 cm?                                      |                                     |             | 1 cm      | ???
TLS             | Kendall           | 9/23/2015   | 6             | ~40            | Riegl software (name?)                   | lidar | Riegl VZ-400          | USGS benchmarks and Total Station pins  | <1 cm?                                      |                                     |             |           |
TLS             | Kendall           | 10/8/2015   | 9             | ~60            | Riegl software (name?)                   | lidar | Riegl VZ-400          | USGS benchmarks and Total Station pins  | <1 cm?                                      |                                     |             |           |
eBee            | Lucky Hills       | 10/8/2015   | 1             | 49             | Pix4D (version?); Agisoft Photoscan v1.3 | SfM   | multiSPEC             | GPS type?                               | ~150 cm?                                    | 1.4?                                |             | 14 cm     |
eBee            | Kendall Grassland | 10/8/2015   | 1, 2          | 66             | Pix4D (version?); Agisoft Photoscan v1.3 | SfM   | Multispectral sensor  | GPS type?                               | ~150 cm?                                    | 1.4?                                |             | 15 cm     | 30 cm
Service Drone   | Lucky Hills       | 10/8/2016   | 1             |                |                                          | lidar |                       |                                         |                                             |                                     |             |           |
Service Drone   | Kendall Grassland | 10/8/2015   | 2             |                |                                          | lidar | Velodyne HDL-32E      | SBG Systems GNSS INS/IMU                | 0.05º, 0.05º, 0.5º RMS (pitch, roll, heading) | 32 laser beams/scan                 |             | 10 cm     | +/-2 cm
DJI Phantom 4   |                   |             |               |                | Agisoft Photoscan v1.3                   |       |                       |                                         |                                             |                                     |             |           |
DJI Phantom 4   |                   |             |               |                | Agisoft Photoscan v1.3                   |       |                       |                                         |                                             |                                     |             |           |
DJI Phantom 3   |                   |             |               |                | Agisoft Photoscan v1.3                   |       |                       |                                         |                                             |                                     |             |           |
DJI Phantom 3   |                   |             |               |                | Agisoft Photoscan v1.3                   |       |                       | DJI Phantom GPS (brand?)                |                                             | 30,000 - 50,000                     |             | 1 cm      |
SRER PAG 2011   |                   |             |               |                | ?                                        | lidar |                       | GPS type?                               |                                             |                                     |             |           |
Woolpert 2015   |                   | 9/?/2015    |               |                | ?                                        | lidar | Leica ALS70?          | GPS type?                               |                                             | ~14 ppsm                            |             | 15 cm     | ???

Dealing with misalignment amongst datasets

In addition to the uncertainty inherent to each remote sensing platform, we must deal with misalignment among point clouds referenced to different vertical datums and geoids, which can result in large differences in position. These differences are not 'errors' so much as differences in geo-location arising from projections that are established at global scale, and they become most apparent when data are loaded together for the first time. Because we are working at highly localized scales (sub-centimeter in the case of the TLS and terrestrial SfM), we align the clouds to one another using shared control features rather than relying on absolute georeferencing.
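The feature-based alignment described in the Methodology amounts to estimating a rigid-body rotation and translation from three or more control points shared between clouds. The sketch below (Python/numpy, an SVD-based least-squares fit) is analogous in spirit to CloudCompare's point-pair picking, not the exact routine used here.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t so that R @ src + t ~= dst.

    src, dst: (N, 3) arrays of matched control points (N >= 3), e.g. the
    flux tower base, flume corners, or fence posts identified in both clouds.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: rotate/translate three points and recover the transform.
src = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 5.0, 2.0]])
theta = np.deg2rad(12.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
dst = src @ R_true.T + np.array([100.0, -40.0, 1.5])
R, t = rigid_transform(src, dst)
residual = np.abs(R @ src.T + t[:, None] - dst.T).max()
print(f"max residual after alignment: {residual:.2e} m")
```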

Deformation of bare earth and vegetation within the SfM point clouds is also possible, arising from systematic flattening or distortion of the images, e.g. barrel or pincushion lens distortion that may not have been digitally corrected, or not completely corrected, by the software. GPS errors from the aerial lidar are a further source of positional difference.

 

Software

Structure from Motion

Open-source SfM-MVS workflows combine Bundler (Snavely 2008, Snavely et al. 2008), Clustering Views for Multi-View Stereo (CMVS) (Furukawa et al. 2010), and Patch-based Multi-View Stereo (PMVS2) (Furukawa and Ponce 2010, James and Robson 2012) in programs like VisualSfM (Wu XXXX) (http://ccwu.me/vsfm/).

SfM point clouds were created using Agisoft Photoscan (version 1.3) and OpenDroneMap (version xxx).

LiDAR

TLS

ALS

Analysis

CloudCompare

Microsoft Ice

QGIS

Matlab
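The DEM-of-difference (DoD) step described in the Methodology reduces, at its simplest, to gridding two aligned point clouds onto a common raster and subtracting them. The sketch below (Python with numpy and scipy) is a simplified stand-in for the CloudCompare/QGIS workflow listed above, with synthetic points in place of real data.

```python
import numpy as np
from scipy.stats import binned_statistic_2d

def grid_dem(points, bounds, cell=0.5, statistic="min"):
    """Grid an (N, 3) point cloud into a DEM-style raster with cell size `cell`.

    `statistic="min"` approximates a bare-earth surface from ground returns;
    `statistic="max"` gives a crude DSM. Empty cells come back as NaN.
    """
    (xmin, xmax), (ymin, ymax) = bounds
    nx = int(np.ceil((xmax - xmin) / cell))
    ny = int(np.ceil((ymax - ymin) / cell))
    dem, _, _, _ = binned_statistic_2d(points[:, 0], points[:, 1], points[:, 2],
                                       statistic=statistic, bins=[nx, ny],
                                       range=[[xmin, xmax], [ymin, ymax]])
    return dem  # (nx, ny) array; NaN where no points fell in the cell

# DEM of difference between two epochs (already-aligned clouds assumed).
bounds = ((0.0, 50.0), (0.0, 50.0))
rng = np.random.default_rng(1)
epoch1 = np.column_stack([rng.uniform(0, 50, 20000), rng.uniform(0, 50, 20000),
                          rng.normal(0.0, 0.03, 20000)])
epoch2 = epoch1 + np.array([0.0, 0.0, 0.02])       # pretend 2 cm of deposition
dod = grid_dem(epoch2, bounds) - grid_dem(epoch1, bounds)
print(f"mean elevation change: {np.nanmean(dod) * 100:.1f} cm")
```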

Results

  1. Precision and Accuracy
    1. TLS
    2. manned ALS
      From the vendor: "Tested 0.096 meters preliminary vertical accuracy at a 95 percent confidence level, derived according to NSSDA, in open terrain using 0.049 meters (RMSEz) x 1.96000. Tested against the TIN using independent checkpoints against all points"
    3. sUAS-ALS
    4. SfM
  2.  

  3. Table X: Cover to Cover estimate

  4. Table X: RMSE (in millimeters) for the four platforms. Errors are evaluated as horizontal (h, easting and northing) and vertical (z) components based on the GCPs established by the TLS. 

    Sensor                              | Points per square meter | Pulse size | Bare ground RMSEz | Bare ground RMSEh | Grasses (<1.0 m) RMSEz | Grasses (<1.0 m) RMSEh | Woody plants (<2.0 m) RMSEz | Woody plants (<2.0 m) RMSEh | Woody plants (>2.0 m) RMSEz | Woody plants (>2.0 m) RMSEh
    Riegl VZ-400                        |                         |            |                   |                   |                        |                        |                             |                             |                             |
    Leica ALS70-HP / Optech Gemini ALTM |                         |            |                   |                   |                        |                        |                             |                             |                             |
    Velodyne?                           |                         |            |                   |                   |                        |                        |                             |                             |                             |
    Sony a6000 SfM in Photoscan         |                         |            |                   |                   |                        |                        |                             |                             |                             |
    eBee SfM in Pix4D                   |                         |            |                   |                   |                        |                        |                             |                             |                             |

    Table X: RMSE (mm) for each technology cross-validated against each other technology; values for a technology compared to itself are from separate flights or scan locations.

            | TLS | UAS-ALS | M-ALS | SfM
    TLS     |     |         |       |
    UAS-ALS |     |         |       |
    M-ALS   |     |         |       |
    SfM     |     |         |       |

    Table X: MAE (mm) for each technology cross-validated against each other technology.

            | TLS | UAS-ALS | M-ALS | SfM
    TLS     |     |         |       |
    UAS-ALS |     |         |       |
    M-ALS   |     |         |       |
    SfM     |     |         |       |

    Table X: SDE (mm) for each technology cross-validated against each other technology.

            | TLS | UAS-ALS | M-ALS | SfM
    TLS     |     |         |       |
    UAS-ALS |     |         |       |
    M-ALS   |     |         |       |
    SfM     |     |         |       |
  5.  

Discussion

  1. The four techniques in order of decreasing point density: TLS, sUAS-ALS, sUAS-SfM, M-ALS.
  2. The spatial resolution and point density achieved with SfM-MVS are comparable to ALS and TLS; however, SfM-MVS is less precise, particularly over longer distances, than TLS (James and Robson 2012). 
  3. Cross-comparison of sensor precision and accuracy, both amongst platforms and within each platform over time (delta time), revealed important limitations:
    1. The TLS had the highest point density, precision, and accuracy, but was unable to penetrate dense herbaceous cover (grasses and forbs) to find the actual bare ground level beyond a 45 degree angle. The inability of TLS to detect ground limits its utility for measuring vegetation height in areas without a validated DEM.
    2. The M-ALS had the largest spatial coverage, as expected. However, its utility for measuring herbaceous plants and small shrubs was much lower than expected. This is likely because the data were classified into discrete returns from the waveform with too low a tolerance for detecting vegetation, resulting in a single return (last return coincident with the first) that was very close to the observed ground level even in areas with dense grass cover. The largest woody plants (leaf-on mesquites) were identified, but their diffuse outer branches were not, resulting in an overall underestimate of tree height. At Lucky Hills there was a surprisingly low number of returns from the woody shrubs.
    3. The UAS-ALS had moderate spatial coverage, but a spatial extent smaller than what was possible with the TLS unit. The UAS-ALS was able to penetrate through the grass at the Kendall site to identify a vertical profile characteristic of the TLS. 

The SfM had larger spatial coverage than the TLS or UAS-ALS, but smaller than the M-ALS. The SfM also produced a point cloud that was generally accurate at discriminating large woody vegetation from open or grass-covered surfaces. The SfM tended to under-predict the heights of woody vegetation in the same way as the M-ALS: it failed to capture the diffuse outer branches. The SfM also did not penetrate the surface of the dense grassy areas. The SfM was useful for producing DSMs, but poor for representing an accurate DEM in areas with high-density vegetation, in agreement with other studies. 

Comparing two or more scans to one another requires that each scan be (1) georeferenced using a standard coordinate system or (2) aligned using at least three GCPs that are shared between datasets. Because GPS introduces unresolvable uncertainty in the absolute location of the data, we chose to align each point cloud using defined features that were identifiable, and of known dimension, in each dataset.

Limitations

  1. Influence of Canopy Cover on bare earth surface model generation. 
    1. Nouwakpo et al. (2015) compared soil texture and elevation differences between SfM and TLS and stayed within 5 mm RMSE for bare earth patches, but results became significantly affected in areas of vegetation with >50% canopy cover. 
    2. Luscombe et al. (2014) compared ALS and TLS and found a similar under-estimate in height for taller vegetation and a further limitation in the ability of ALS to resolve ground surfaces beneath dense vegetation. 
  2. Influence of Canopy Cover on digital surface model generation.
    1. Kato et al. (2015) compared SfM and TLS for tree canopy structure and reported a strong correlation in canopy shape between the two technologies; the SfM was, however, unable to penetrate deeply into the canopy of individual trees to reveal their internal structure, in particular the tree bole.

Uncertainty

Conclusions

Acknowledgements

This material is based upon work supported by the U.S. Department of Agriculture, Agricultural Research Service, under Agreement No. 58-2022-5-13. Any opinions, findings, conclusion, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the view of the U.S. Department of Agriculture. 

 


References

    • Ackermann, F. 1999. Airborne laser scanning—present status and future expectations. In ISPRS, 1999, pp. 64-67.
    • Anderson, K., & Gaston, K. J. (2013). Lightweight unmanned aerial vehicles will revolutionize spatial ecology. Frontiers in Ecology and the Environment, 11(3), 138-146.

    • Baltsavias, E.P. 1999. Airborne laser scanning: existing systems and firms and other resources. In ISPRS, 1999, pp. 164-198.
    • Beland, M., Parker, G., Harding, D., Hopkinson, C., Chasmer, L., & Antonarakis, A. (2015). White Paper–On the Use of LiDAR Data at AmeriFlux Sites.
    • Brodu, N., & Lague, D. (2012). 3D terrestrial lidar data classification of complex natural scenes using a multi-scale dimensionality criterion: Applications in geomorphology. ISPRS Journal of Photogrammetry and Remote Sensing68, 121-134.
    • Castillo, C., James, M. R., Redel-Macías, M. D., Pérez, R., & Gómez, J. A. (2015). SF3M software: 3-D photo-reconstruction for non-expert users and its application to a gully network. SOIL, 1(2), 583-594.
    • Coppock, J. T., & Rhind, D. W. (1991). The history of GIS. Geographical information systems: Principles and applications1(1), 21-43.
    • Cunliffe, A. M., Brazier, R. E., & Anderson, K. (2016). Ultra-fine grain landscape-scale quantification of dryland vegetation structure with drone-acquired structure-from-motion photogrammetry. Remote Sensing of Environment183, 129-143.
    • Dandois, J. P., & Ellis, E. C. (2013). High spatial resolution three-dimensional mapping of vegetation spectral dynamics using computer vision. Remote Sensing of Environment136, 259-276.
    • Dandois, J. P., & Ellis, E. C. (2010). Remote sensing of vegetation structure using computer vision. Remote Sensing2(4), 1157-1176.
    • Eltner, A., Kaiser, A., Castillo, C., Rock, G., Neugirg, F., & Abellán, A. (2016). Image-based surface reconstruction in geomorphometry–merits, limits and developments. Earth Surf. Dynam., 4, 359–389, 2016 www.earth-surf-dynam.net/4/359/2016/ doi:10.5194/esurf-4-359-2016
    • Emmerich, W.E. 2003. Carbon dioxide fluxes in a semiarid environment with high carbonate soils. Agricultural and Forest Meteorology 116(1-2): 91-102, doi:http://dx.doi.org/10.1016/S0168-1923(02)00231-9
    • Estes, J., & Star, J. (1990). Geographic information systems. University of California. Santa Barbara-EEUU. 295p.
    • Fonstad, M. A., Dietrich, J. T., Courville, B. C., Jensen, J. L., and Carbonneau P. E. (2013)Topographic structure from motion: a new development in photogrammetric measurement, Earth Surf. Proc. Land., 38, 421–430, doi:10.1002/esp.3366, 
    • Furukawa, Y., & Ponce, J. (2010). Accurate, dense and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32, 1362-1376.
    • Furukawa Y, Curless B, Seitz SM, et al. (2010) Towards internet-scale multi-view stereo. In: Computer Vision and Pattern Recognition (CVPR), 13–18 June, San Francisco, CA, USA, pp. 1434–1441. IEEE Conference.
    • Girardeau-Montaut, D. (2011). CloudCompare-Open Source project.OpenSource Project.
    • Glennie, C. L., Carter, W. E., Shrestha, R. L., & Dietrich, W. E. (2013). Geodetic imaging with airborne LiDAR: the Earth's surface revealed. Reports on Progress in Physics76(8), 086801.
    • Harpold, A. A., Marshall, J. A., Lyon, S. W., Barnhart, T. B., Fisher, B., Donovan, M., ... & Kirchner, P. B. (2015). Laser vision: lidar as a transformative tool to advance critical zone science. Hydrology and Earth System Sciences Discussions12(1), 1017-1058.
    • Harwin, S., & Lucieer, A. (2012). Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery. Remote Sensing4(6), 1573-1599.
    • Heilman, P., Nichols, M. H., Goodrich, D. C., Miller, S. N., & Guertin, D. P. (2008). Geographic information systems database, Walnut Gulch Experimental Watershed, Arizona, United States. Water resources research,44(5).
    • Hutton, C., & Brazier, R. (2012). Quantifying riparian zone structure from airborne LiDAR: Vegetation filtering, anisotropic interpolation, and uncertainty propagation. Journal of Hydrology442, 36-45.
    • James, M. R., & Robson, S. (2014). Mitigating systematic error in topographic models derived from UAV and ground-based image networks.Earth Surface Processes and Landforms39(10), 1413-1420.
    • James MR and Robson S (2012) Straightforward reconstruction of 3D surfaces and topography with a camera: Accuracy and geoscience application. Journal of Geophysical Research: Earth Surface 117: F03017. doi: 10.1029/2011JF002289.
    • Javernick, L., Brasington, J., & Caruso, B. (2014). Modeling the topography of shallow braided rivers using Structure-from-Motion photogrammetry.Geomorphology213, 166-182.
    • Jenerette, G.D., R.L. Scott, and T.E. Huxman. 2008. Whole ecosystem metabolic pulses following precipitation events. Functional Ecology 22(5): 924-930, doi:http://dx.doi.org/10.1111/j.1365-2435.2008.01450.x
    • Kato, A., Obanawa, H., Hayakawa, Y., Watanabe, M., Yamaguchi, Y., & Enoki, T. (2015, July). Fusion between UAV-SFM and terrestrial laser scanner for field validation of satellite remote sensing. In Geoscience and Remote Sensing Symposium (IGARSS), 2015 IEEE International (pp. 2642-2645). IEEE.
    • Keefer, T. O., Moran, M. S., & Paige, G. B. (2008). Long-term meteorological and soil hydrology database, Walnut Gulch Experimental Watershed, Arizona, United States. Water Resources Research, 44(5).
    • Kraus, K., & Pfeifer, N. (2001). Advanced DTM generation from LIDAR data.International Archives Of Photogrammetry Remote Sensing And Spatial Information Sciences34(3/W4), 23-30.
    • Millennium Ecosystem Assessment, 2005. Drylands Systems. Chapter 22 in: Ecosystems and Human Wellbeing: Current State and Trends, Volume 1. Island Press.
    • Moran, M. S., Holifield Collins, C. D., Goodrich, D. C., Qi, J., Shannon, D. T., & Olsson, A. (2008). Long-term remote sensing database, Walnut Gulch Experimental Watershed, Arizona, United States. Water Resources Research, 44(5).
    • Moran, M.S., R.L. Scott, T.O. Keefer, W.E. Emmerich, M. Hernandez, G.S. Nearing, G. Paige, M.H. Cosh, and P.E. ONeill. 2009. Partitioning evapotranspiration in semiarid grassland and shrubland ecosystems using time series of soil surface temperature. Agricultural and Forest Meteorology 149(1): 59-72, doi:http://dx.doi.org/10.1016/j.agrformet.2008.07.004
    • Nouwakpo, S. K., Weltz, M. A., & McGwire, K. (2015). Assessing the performance of structure from motion photogrammetry and terrestrial LiDAR for reconstructing soil surface microtopography of naturally vegetated plots. Earth Surface Processes and Landforms.
    • Lague, D., Brodu, N., & Leroux, J. (2013). Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (NZ). ISPRS Journal of Photogrammetry and Remote Sensing82, 10-26.
    • Levin, S.A. (1992). The problem of pattern and scale in ecology: the Robert H. MacArthur award lecture. Ecology73(6), 1943-1967.
    • Lim, K., Treitz, P., Wulder, M., St-Onge, B., & Flood, M. (2003). LiDAR remote sensing of forest structure. Progress in physical geography27(1), 88-106.
    • Lefsky, M. A., Cohen, W. B., Parker, G. G., & Harding, D. J. (2002). Lidar Remote Sensing for Ecosystem Studies Lidar, an emerging remote sensing technology that directly measures the three-dimensional distribution of plant canopies, can accurately estimate vegetation structural attributes and should be of particular interest to forest, landscape, and global ecologists. BioScience52(1), 19-30.
    • Li, F.Q., W.P. Kustas, M.C. Anderson, J.H. Prueger, and R.L. Scott. 2008. Effect of remote sensing spatial resolution on interpreting tower-based flux observations. Remote Sensing of Environment 112(2): 337-349, doi:http://dx.doi.org/10.1016/j.rse.2006.11.032
    • Luscombe, D. J., Anderson, K., Gatis, N., Wetherelt, A., Grand-Clement, E., & Brazier, R. E. (2014). What does airborne LiDAR really measure in upland ecosystems? Ecohydrology.
    • McGaughey, R. J. (2009). FUSION/LDV: Software for LIDAR data analysis and visualization. US Department of Agriculture, Forest Service, Pacific Northwest Research Station: Seattle, WA, USA123(2).
    • Pelletier, J. D., & Orem, C. A. (2014). How do sediment yields from post-wildfire debris-laden flows depend on terrain slope, soil burn severity class, and drainage basin area? Insights from airborne-LiDAR change detection. Earth Surface Processes and Landforms39(13), 1822-1832.
    • Popescu, S. C., Wynne, R. H., & Nelson, R. F. (2002). Estimating plot-level tree heights with lidar: local filtering with a canopy-height based variable window size. Computers and Electronics in Agriculture37(1), 71-95.
    • Rango, A., Laliberte, A., Herrick, J. E., Winters, C., Havstad, K., Steele, C., & Browning, D. (2009). Unmanned aerial vehicle-based remote sensing for rangeland assessment, monitoring, and management. Journal of Applied Remote Sensing3(1), 033542-033542.
    • Reutebuch, S. E., Andersen, H. E., & McGaughey, R. J. (2005). Light detection and ranging (LIDAR): an emerging tool for multiple resource inventory. Journal of Forestry103(6), 286-292.
    • Scott, R.L. 2010. Using watershed water balance to evaluate the accuracy of eddy covariance evaporation measurements for three semiarid ecosystems. Agricultural and Forest Meteorology 150(2): 219-225, doi:http://dx.doi.org/10.1016/j.agrformet.2009.11.002
    • Scott, R.L., T.E. Huxman, W.L. Cable, and W.E. Emmerich. 2006. Partitioning of evapotranspiration and its relation to carbon dioxide exchange in a Chihuahuan Desert shrubland. Hydrological Processes 20(15): 3227-3243, doi:http://dx.doi.org/10.1002/hyp.6329
    • Sithole, G., & Vosselman, G. (2004). Experimental comparison of filter algorithms for bare-Earth extraction from airborne laser scanning point clouds. ISPRS journal of photogrammetry and remote sensing59(1), 85-101.

    • Skirvin, S., Kidwell, M., Biedenbender, S., Henley, J. P. King, D., Collins, C. H., Moran, S., and Weltz, M.: Vegetation data, Walnut Gulch Experimental Watershed, Arizona, United States, Water Resour. Res., 44, W05S08, doi:10.1029/2006WR005724, 2008.

    • Smith, M.W., Carrivick, J., & Quincey, D. (2015). Structure from Motion photogrammetry in physical geography. Progress in Physical Geography. ISSN 0309-1333. http://eprints.whiterose.ac.uk/92733
    • Snavely N (2008) Scene Reconstruction and Visualization from Internet Photo Collections. PhD thesis, University of Washington, USA.
    • Snavely N, Seitz SN and Szeliski R (2008) Modeling the world from Internet photo collections. International Journal of Computer Vision 80: 189–210.
    • St-Onge, B. A., & Achaichia, N. (2001). Measuring forest canopy height using a combination of lidar and aerial photography data. INTERNATIONAL ARCHIVES OF PHOTOGRAMMETRY REMOTE SENSING AND SPATIAL INFORMATION SCIENCES34(3/W4), 131-138.
    • Turner, M. G., O'Neill, R. V., Gardner, R. H., & Milne, B. T. (1989). Effects of changing spatial scale on the analysis of landscape pattern. Landscape ecology3(3-4), 153-162.
    • Turner, M. G. (1989). Landscape ecology: the effect of pattern on process.Annual review of ecology and systematics, 171-197.
    • Watts, C.J., R.L. Scott, J. Garatuza-Payan, J.C. Rodriguez, J. Prueger, W. Kustas, and M. Douglas. 2007. Changes in vegetation condition and surface fluxes during NAME 2004. Journal of Climate 20(9): 1810-1820, doi:http://dx.doi.org/10.1175/JCLI4088.1
    • Westoby, M. J., Brasington, J., Glasser, N. F., Hambrey, M. J., and Reynolds, J. M. (2012) "Structure-from-Motion" photogrammetry: A low-cost, effective tool for geoscience applications, Geomorphology, 179, 300–314, doi:10.1016/j.geomorph.2012.08.021
    • Woodcock, C. E., & Strahler, A. H. (1987). The factor of scale in remote sensing. Remote sensing of Environment21(3), 311-332.
    • Zhang, K., Chen, S. C., Whitman, D., Shyu, M. L., Yan, J., & Zhang, C. (2003). A progressive morphological filter for removing nonground measurements from airborne LIDAR data. Geoscience and Remote Sensing, IEEE Transactions on41(4), 872-882.