Introduction
The purpose of this lab is to evaluate the LiDAR datasets processed in the previous labs (labs 1-3) for
vertical, horizontal, and classification accuracy. For LiDAR data to
be used on a specific project, the data's accuracy must meet
the project's threshold. If the data does not meet the accuracy threshold
for a given project, it cannot be considered reliable and therefore
should not be used.
To begin the lab, the point cloud density was analyzed. A shapefile of the point cloud was created so that statistics on the point cloud could be calculated; the file was named Lake_Statistics.shp. Once this was done, the point density for the dataset as a whole could be examined. While this shapefile gave statistics for the entire dataset, it did not give statistics for specific areas. To obtain statistics for specific areas, the stamp tool was used, which extracts LAS statistics within a user-defined area.
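The per-cell density statistic behind this step can be sketched as a simple grid-binning of the point cloud. This is a minimal illustration, not LP360's implementation; the function name and the 1 m cell size are assumptions.

```python
# Hypothetical sketch of the per-cell point-density statistic.
# Cell size of 1.0 m is an assumed value, not one stated in the lab.
from collections import Counter

def point_density(points, cell_size=1.0):
    """Return per-cell point counts and the mean density for (x, y) returns."""
    cells = Counter()
    for x, y in points:
        cells[(int(x // cell_size), int(y // cell_size))] += 1
    mean_density = sum(cells.values()) / len(cells)  # points per occupied cell
    return cells, mean_density

# Illustrative points: three fall in two cells of the first row, one elsewhere.
points = [(0.2, 0.3), (0.8, 0.1), (1.5, 0.4), (2.2, 2.9)]
cells, mean_density = point_density(points)
```

Binning by integer cell index is also how a statistics layer can report density for a user-defined area: restrict the input points to that area first, then bin.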
Once the statistics were calculated, a point
density image was created to verify that there was uniform coverage of data
points throughout the dataset and to highlight areas of low coverage. This
image was created by filtering to first returns and limiting the scan angle (min: -13.5, max:
13.5). The surface method chosen was point
insertion, which determines whether there is a point within each pixel cell.
Once this was done, the file was exported as a TIFF file, Lake_Point_Density.tiff.
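The point-insertion method described above can be sketched as marking each raster cell that receives at least one qualifying point. The return-number and scan-angle filters follow the lab; the grid extents and point records are illustrative assumptions.

```python
# Sketch of a "point insertion" density image: mark each cell that
# contains at least one first return within the scan-angle limits.
# The +/-13.5 degree cutoff follows the lab; everything else is illustrative.

def density_image(points, ncols, nrows, cell_size=1.0,
                  min_angle=-13.5, max_angle=13.5):
    """points: (x, y, return_number, scan_angle) tuples -> binary grid."""
    grid = [[0] * ncols for _ in range(nrows)]
    for x, y, rn, angle in points:
        if rn == 1 and min_angle <= angle <= max_angle:
            col, row = int(x // cell_size), int(y // cell_size)
            if 0 <= col < ncols and 0 <= row < nrows:
                grid[row][col] = 1  # cell holds at least one qualifying point
    return grid

# One valid first return, one second return, one out-of-angle return.
pts = [(0.5, 0.5, 1, 3.0), (1.5, 0.5, 2, 3.0), (2.5, 1.5, 1, 20.0)]
img = density_image(pts, ncols=3, nrows=2)
```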
The next section of the lab was to
check whether 90 percent of the cells had at least one point per cell. This
was done by importing Lake_Point_Density.tiff
into ArcMap and then using ModelBuilder
to subtract the bands (band 1 - band 2 - band 3). The results of this
model were imported into a new field in Lake_Statistics.shp,
where a percentage was calculated. For this dataset, only 13 percent of the
data met the specification.
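The coverage check itself reduces to counting the share of cells whose value indicates at least one point. A minimal sketch, with an illustrative grid (the lab's actual dataset scored 13 percent):

```python
# Sketch of the 90 % coverage check on a binary density grid
# (1 = cell holds at least one point). Grid values are illustrative.

def coverage_percent(grid):
    cells = [v for row in grid for v in row]
    return 100.0 * sum(1 for v in cells if v >= 1) / len(cells)

grid = [[1, 0, 0, 0],
        [1, 0, 0, 0]]
pct = coverage_percent(grid)   # 25.0 for this toy grid
meets_spec = pct >= 90.0       # the lab's dataset failed this test
```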
Next, a relative accuracy assessment was performed
using a swath-to-swath analysis to check the vertical accuracy of the data. This
was done by creating a delta Z (DZ) image. When analyzing the DZ values within
the flight lines, there were some areas along flat surfaces that did not meet
the accuracy threshold. This was likely caused by elevated medians within
highways, street lights, and relatively steep slopes in some areas. Then, using
the same DZ image, polylines were created along flat areas within the
overlapping flight lines. Once polylines were drawn within all the flight
lines, the stream analysis tool was
used to check whether the polylines fell within the threshold. Once this task
was completed, the output was symbolized by quantities so that areas within and
outside the threshold could be identified.
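Conceptually, a DZ image differences the elevations that two overlapping flight lines report for the same cells and flags cells that exceed a tolerance. A minimal sketch; the 0.08 m threshold is an assumed spec value, not one stated in the lab.

```python
# Sketch of a swath-to-swath delta-Z (DZ) check over the overlap region.
# Threshold of 0.08 m is an illustrative assumption.

def dz_flags(swath_a, swath_b, threshold=0.08):
    """swath_a, swath_b: dicts mapping cell -> elevation (m)."""
    flags = {}
    for cell in swath_a.keys() & swath_b.keys():  # overlapping cells only
        dz = abs(swath_a[cell] - swath_b[cell])
        flags[cell] = dz > threshold              # True = fails the check
    return flags

a = {(0, 0): 100.02, (0, 1): 100.00, (1, 1): 99.95}
b = {(0, 0): 100.05, (0, 1): 100.20, (2, 2): 98.00}
flags = dz_flags(a, b)
```

Flagged cells would then be symbolized, much as the lab symbolizes the DZ output by quantities to separate in-tolerance and out-of-tolerance areas.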
Following the relative accuracy assessment,
an absolute accuracy assessment was performed, beginning with a non-vegetated vertical accuracy
assessment. To perform this task, a set of vertical checkpoints
for non-vegetated areas was imported into ArcMap, where their XY data was
displayed; this was necessary because the checkpoints were in an Excel spreadsheet.
Then a Vertical_GCP shapefile was created to verify the accuracy of the
dataset against the checkpoints using the LP360 QA/QC tool. The profile view within LP360 was used to verify that the
checkpoints were actually on non-vegetated land cover. The accuracy of
the points was then calculated using the Calculate
DZ command. Following the absolute vertical assessment, a horizontal
assessment was performed by importing a Horizontal
Checkpoint xlsx file, similarly to the vertical accuracy assessment. After
the points were displayed by XY, the measure
mode tool was used to place a mark on each of the control points at a ~2000
scale. Then the Calculate DZ tool was
again used to measure the difference between the dataset's X values and the
checkpoints. The results were then exported in a control report.
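The vertical checkpoint computation can be sketched as differencing each surveyed checkpoint elevation against the LiDAR surface and summarizing the errors. This is not LP360's implementation; the checkpoint values are illustrative, and the 1.96 × RMSEz factor is the usual NSSDA 95 % confidence scaling for non-vegetated vertical accuracy.

```python
# Sketch of a checkpoint-based vertical accuracy computation:
# dz = LiDAR surface elevation minus surveyed elevation at each checkpoint,
# summarized as RMSEz and NVA (1.96 * RMSEz). Values are illustrative.
import math

def vertical_accuracy(checkpoints, surface):
    """checkpoints: (x, y, z) survey points; surface: (x, y) -> LiDAR z."""
    dz = [surface[(x, y)] - z for x, y, z in checkpoints]
    rmse = math.sqrt(sum(d * d for d in dz) / len(dz))
    return rmse, 1.96 * rmse

cps = [(0, 0, 100.00), (10, 0, 101.00), (0, 10, 99.50)]
lidar = {(0, 0): 100.05, (10, 0): 100.95, (0, 10): 99.60}
rmse_z, nva = vertical_accuracy(cps, lidar)
```

The horizontal check follows the same pattern, differencing the marked positions against the checkpoint coordinates instead of elevations.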
After the vertical and horizontal accuracy
reports were created, a final QA/QC was performed on the dataset using a
filter that excluded ground and model key points. Areas within the LAS
point cloud that had data voids and/or misclassified points were identified and
labeled by creating a shapefile around these areas. Once the dataset was
scanned, any issues found were corrected using various classify tools within LP360's profile view.
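The filter in this final pass can be sketched as selecting points whose classification code is neither ground nor model key point. The codes 2 (ground) and 8 (model key point) are the standard ASPRS LAS class codes, assumed here rather than stated in the lab; the point records are illustrative.

```python
# Sketch of the final QA/QC filter: keep points classified as neither
# ground nor model key point, the classes the lab's filter excludes.
# ASPRS LAS class codes (assumed): 2 = ground, 8 = model key point.
GROUND, MODEL_KEY = 2, 8

def non_ground_points(points):
    """points: (x, y, z, class_code) tuples -> points left visible by the filter."""
    return [p for p in points if p[3] not in (GROUND, MODEL_KEY)]

pts = [(1, 1, 10.0, 2),   # ground: filtered out
       (2, 2, 12.5, 8),   # model key point: filtered out
       (3, 3, 15.0, 5),   # high vegetation: kept
       (4, 4, 9.8, 1)]    # unclassified: kept
suspects = non_ground_points(pts)  # candidates to review for misclassification
```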