Monday, December 17, 2018

Lab 8


Introduction
The purpose of this lab is to analyze corridor LiDAR point clouds and extract features. This lab also introduces reprojecting LAS files, practical skills in utilizing terrestrial laser scanning (TLS) for corridor mapping and assessment, and extracting building footprints for LOMA (Letter of Map Amendment) information.

Methods
Part 1: Projecting an Unprojected Point Cloud
The first section of this lab was to define the projection for a LAS dataset covering Algoma, Wisconsin. This was completed by creating a new toolbox in ArcMap that contained the Define LAS File Projection, Reproject LAS Files, Scale LAS Files, and Shift LAS Files tools. Using the Define LAS File Projection tool, the XY coordinate system of the Algoma LAS tiles was defined as NAD_1983_HARN_WISCRS_Kewaunee_County_Feet (WISCRS) and the Z coordinate system was defined as NAVD 88 (US feet). Once the projection was defined, the LAS files were reprojected using the same coordinate systems (Figure 1).
Figure 1. Projected LAS file in LP360
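Although the projection work above was handled with LP360's toolbox tools, a quick way to double-check the result is to read the LAS header outside of LP360. The following is a minimal QA sketch, assuming the laspy 2.x Python package and a placeholder tile name; it prints the point count, the header bounds (which should now be in WISCRS Kewaunee County survey feet), and whatever coordinate system was written to the header.

import laspy

# Placeholder file name; point this at one of the reprojected Algoma tiles.
with laspy.open("algoma_tile.las") as f:
    header = f.header
    print("Point count:", header.point_count)
    print("Min X/Y/Z:", header.mins)   # eastings/northings should now be in US survey feet
    print("Max X/Y/Z:", header.maxs)
    print("CRS recorded in header:", header.parse_crs())  # None if no CRS record was written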
Part 2: Transport and Transmission Corridor Asset Management
Once the LAS files were projected into the correct coordinate system, they were brought into LP360 for a corridor asset inventory. To aid in the inventory assessment, the boundary of the study area was exported as a KMZ file and brought into Google Earth, where different features could be cross-checked against high-resolution imagery. Within the dataset, features such as street poles and street signs were counted, and power lines were checked for potential issues due to vegetation from surrounding trees (Figure 2).

Figure 2. 3D viewer used to determine if there was encroachment from vegetation on power lines
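The KMZ export used for the Google Earth cross-check can also be scripted. This is a hedged sketch assuming an ArcMap/arcpy session in which the boundary is loaded as a layer named corridor_boundary (a placeholder name), using the standard Layer To KML conversion tool.

import arcpy

# Export the study-area boundary to KMZ so it can be opened in Google Earth.
# The layer name, output path, and output scale are placeholders.
arcpy.LayerToKML_conversion("corridor_boundary", r"C:\temp\corridor_boundary.kmz", 1)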
Part 3: Building Feature Extraction  
Once the dataset had been checked for features, a building footprint was extracted in LP360 from the Lab 4 dataset for Lake County, Illinois, using a Point Group Tracing and Squaring task. This task created two shapefiles: a building footprint and a squared building footprint. The building footprint shapefile traced the outline of the classified buildings, whereas the squared building footprint created squared edges for the classified buildings. The squared footprint shapefile was first conflated using the Summarize Z method with building points as the source points to determine building heights. It was then conflated using the Pure Drape method with ground points as the source points to drape the footprints onto the ground surface. Finally, to determine the minimum ground elevation of each building, and thereby identify buildings far enough above the floodplain to be candidates for FEMA's LOMA, the squared footprint was conflated using Summarize Z (Minimum Z) with ground points as the source points. Once the building footprints were conflated, the shapefiles were brought into ArcMap (Figure 3). Buildings between 800 and 810 feet above sea level were then selected to determine which buildings could be removed from the floodplain map if FEMA were to hypothetically change its policy.
Figure 3. Building footprint shapefile created in LP360 and displayed in ArcMap
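The final elevation-based selection can be reproduced with a short arcpy sketch. The field name MIN_Z is a hypothetical stand-in for whatever attribute the Summarize Z (Minimum Z) conflation wrote to the squared footprint shapefile; file names are placeholders as well.

import arcpy

# Select buildings whose minimum ground elevation is between 800 and 810 ft
# and save them as potential LOMA candidates.
arcpy.MakeFeatureLayer_management("building_footprint_square.shp", "footprints")
arcpy.SelectLayerByAttribute_management("footprints", "NEW_SELECTION",
                                        '"MIN_Z" >= 800 AND "MIN_Z" <= 810')
arcpy.CopyFeatures_management("footprints", "loma_candidates.shp")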

Results
Map created in ArcMap displaying the buildings that could be removed from the floodplain map.
Sources
Data for this lab was provided by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire. 

Lab 7


Introduction
The purpose of this lab was to utilize land cover from optical imagery to pinpoint the spatial distribution of tree species, use empirical models to extract various vegetation metrics for tree species in a given study area, and create a recommendation on the carbon sink potential of a forest.

Methods
Part 1: Canopy Height Model
The first section of this lab was to generate a canopy surface model and a ground model from a LAS dataset for Eau Claire County. A canopy surface model was created using vegetation points filtered to first returns only with a 3-foot cell size. Then a DTM was created for the ground surface. Once these models were created in LP360, they were brought into ArcMap. Using ModelBuilder, a model was created that subtracted the ground model from the canopy surface model, producing a canopy height raster named EC_veg_CH. This raster was then converted to a 32-bit signed integer raster named EC_veg_CH_c so that an attribute table could be built for it.
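The ModelBuilder model boils down to a single raster subtraction followed by an integer conversion. A rough arcpy equivalent, assuming placeholder names for the canopy surface and ground rasters exported from LP360, might look like this:

import arcpy
from arcpy.sa import Raster, Int

arcpy.CheckOutExtension("Spatial")

canopy_surface = Raster("EC_veg_first_returns")   # placeholder: first-return canopy surface, 3 ft cells
ground = Raster("EC_ground_dtm")                  # placeholder: bare-earth DTM

ec_veg_ch = canopy_surface - ground               # canopy height = canopy surface minus ground
ec_veg_ch.save("EC_veg_CH")

ec_veg_ch_c = Int(ec_veg_ch)                      # integer raster so an attribute table can be built
ec_veg_ch_c.save("EC_veg_CH_c")
arcpy.BuildRasterAttributeTable_management("EC_veg_CH_c", "Overwrite")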

Part 2: Above Ground Biomass Estimation
In the next section of the lab, the above ground biomass (AGB) for five different tree species was estimated. To complete this task, the EC_veg_CH_c raster and the wiscland2_L3 land cover layer were brought into ArcMap. The wiscland2_L3 layer was reclassified into five classes (hardwood, red maple, pine, oak, and aspen). Then, using this newly reclassified image and the values and equation below (Figure 1), a new model was created to calculate AGB for each of the aforementioned species (Figure 2).

Figure 1. Values and equation used to create model in ArcMap
Figure 2. Model used in ArcMap to calculate AGB
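Because the species-specific coefficients and the AGB equation live in Figure 1, the sketch below is only illustrative: it assumes a power-law form AGB = a * H^b with made-up (a, b) values for each class and applies it to the reclassified Wiscland2 raster and the canopy height raster.

import arcpy
from arcpy.sa import Raster, Con, Power

arcpy.CheckOutExtension("Spatial")

height = Raster("EC_veg_CH_c")
species = Raster("wiscland2_L3_reclass")   # placeholder: 1=hardwood, 2=red maple, 3=pine, 4=oak, 5=aspen

# Hypothetical (a, b) coefficients per class; the real values come from Figure 1.
coeffs = {1: (0.10, 2.0), 2: (0.12, 1.9), 3: (0.08, 2.1), 4: (0.11, 2.0), 5: (0.09, 1.8)}

agb = height * 0.0                         # start from a zero raster on the same grid
for cls, (a, b) in coeffs.items():
    agb = Con(species == cls, a * Power(height, b), agb)
agb.save("EC_AGB")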

Part 3: Calculation of Additional Metrics
For the final step of the lab, we were asked to calculate stem, branch, and foliage biomass for each of the tree species using the equation below, where a is the gain, b is the offset, and H is the derived tree height. The values for each of these variables were pulled from a set of articles provided by Dr. Wilson.
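The component-biomass equation itself appeared as an image in the original post and is not reproduced here, so the sketch below only assumes the linear form implied by the variable list (biomass = a * H + b) and uses entirely hypothetical gain/offset values; the real coefficients came from the articles provided by Dr. Wilson.

import numpy as np

heights = np.array([45.0, 62.0, 38.0])     # hypothetical derived tree heights

# Hypothetical (gain, offset) pairs for each biomass component.
components = {"stem": (0.9, -2.0), "branch": (0.3, -0.5), "foliage": (0.1, 0.2)}

for name, (a, b) in components.items():
    biomass = a * heights + b              # assumed linear form: a*H + b
    print(name, biomass)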
Results
Part 1: Canopy Height Model

Part 2: Above Ground Biomass Estimation

Part 3: Calculation of Additional Metrics

Sources
Data for this lab was provided by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire. 

Friday, December 14, 2018

Lab 6

Introduction
The purpose of this lab is to perform a QA/QC of a topobathy lidar point cloud, create and generate a conflate a breakline for a shoreline as well as configuring the enforcement of that breakline for topo-bathy deliverables.

Methods
The first process in this lab was to correct a topobathy LiDAR point cloud dataset for Delta County, Michigan. This dataset had many classification errors, both errors of commission and errors of omission. These areas were highlighted in LP360 by digitizing polygons around the dataset errors and labeling them as either omission or commission using the create feature class tool. For example, many areas in Lake Michigan had been wrongly classified as ground points due to LiDAR pulses reflecting off suspended sediments. There were also many inland water bodies that had been classified as ground points. Omission errors included large areas along the coastline that had been left unclassified.

Once these areas were identified, the errors along the coastline were corrected using manual classification. This was done to ensure that there would be enough ground points for the breakline polyline to conflate accurately. After this process was completed, a polyline feature class was created for the shoreline. A filter was then applied to the data displaying only ground and class 31 (reserved) points, and the LAS dataset was displayed as a TIN. Using the conflation manager, a new task was created using the drape method for the breaklines, with vertices created every 5 map units. The polyline was drawn tracing the shoreline over the ground points. Following the creation of the polyline, a polygon feature class was created using the same parameters; instead of tracing only the shoreline, the polygon traced the rest of the dataset boundary along with the shoreline.
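Setting up the shoreline polyline and the bounding polygon before digitizing can also be scripted. The sketch below is a hedged arcpy example with placeholder paths and names; Z values are enabled so the LP360 drape conflation has somewhere to write elevations, and the spatial reference is borrowed from the LAS dataset.

import arcpy

sr = arcpy.Describe("delta_county.lasd").spatialReference      # placeholder LAS dataset name

arcpy.CreateFeatureclass_management(r"C:\lab6", "shoreline_breakline.shp", "POLYLINE",
                                    has_z="ENABLED", spatial_reference=sr)
arcpy.CreateFeatureclass_management(r"C:\lab6", "hydro_polygon.shp", "POLYGON",
                                    has_z="ENABLED", spatial_reference=sr)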

Once the polyline and polygon were created, they were brought into LP360, where they were conflated over the LAS tiles (Figure 1).
Figure 1. Breaklines created in ArcMap used to create DTM in LP360
Once the LAS files and newly created breaklines were brought into LP360, the surface was exported as a DTM. The source points for the DTM were ground, road, and class 31 (reserved) points. The surface method was a TIN, with breaklines enforced using the newly created polyline and polygon. Using these parameters, a new DTM was created for the Delta County dataset.

Results
DTM created in LP360 from the breaklines created in ArcMap 
Sources
The materials for this lab were provided by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire.

Wednesday, December 5, 2018

Lab 5


Introduction
The purpose of this lab is to create breaklines, identify and fix topological errors in breaklines, conflate breaklines, inspect breakline elevation data, and configure surface features based upon breaklines.

Methods
In the first section of the lab, I was given a shapefile that outlined all the water bodies in the study area. Topological errors in this shapefile were found and corrected in ArcMap. This was done to ensure that there would not be errors in breakline enforcement between the breaklines and the LAS point cloud; areas with topological errors lead to undesirable results when hydro-flattening the water bodies in the study area.

Once the topology of the lines was corrected, the breaklines could be conflated. To begin, the LAS file was displayed as a TIN surface. Then a new conflation point cloud task was created using the topologically corrected shapefile from the first section of the lab. The ground and water points were set as the source points for the conflation. The conflation method used for ponds and lakes was Summarize Z; this was used to ensure that the Z values of the conflated breaklines would have a homogeneous elevation so that the water bodies could be hydro-flattened. For islands and rivers, the Drape conflation method was used. The conflation task was then run. Once the breaklines had been conflated, they were brought into ArcMap, where a new Z elevation field was created and the Z geometry was calculated, giving the elevation values for the breaklines. The breaklines were then used with Breakline Enforcement to hydro-flatten the water bodies. Once this was done, smooth contours were generated for the surface at an interval of 5 map units to verify that the hydro-flattened water bodies were in fact flat and did not have elevation errors. Then a digital terrain model of the LAS tiles was generated using the classified ground points.
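The step of adding a Z field and calculating elevations for the conflated breaklines was done interactively in ArcMap; one scripted alternative (assuming the 3D Analyst extension and a placeholder shapefile name) is the Add Z Information tool, which writes a mean Z value for each breakline feature.

import arcpy

arcpy.CheckOutExtension("3D")
arcpy.AddZInformation_3d("conflated_breaklines.shp", "Z_MEAN")   # adds a Z_MEAN field per feature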

Once this was completed, we were given a LAS dataset for the city of Eau Claire. This LAS dataset was imported into ArcMap, where new polygon and polyline features were created to produce conflated breaklines for the Chippewa River. In each of these features, fields were created for the water type, minimum Z, mean Z, and maximum Z values. Once the features were created, they were digitized over a TIN surface created from the LAS file. The polygon outlined the river, and the polyline was digitized down the center of the river so that a downstream constraint could be calculated. Once the breaklines were created, the River Flattening tool was used to hydro-flatten the Chippewa River.

Tuesday, November 27, 2018

Lab 4


Introduction
The purpose of this lab is to evaluate the LiDAR dataset processed in the previous labs (Labs 1-3) for vertical, horizontal, and classification accuracy. In order for LiDAR data to be used for a specific project, it is important that the data's accuracy meets the project's threshold. If the data does not meet the accuracy threshold for a given project, the data cannot be considered reliable and therefore should not be used.


Methods
To begin the lab, the point cloud density was analyzed. This was done by creating a shapefile of the point cloud from which statistics on the point cloud could be calculated; the file was named Lake_Statistics.shp. Once this was done, the point density for the dataset as a whole could be examined. While this shapefile gave statistics for the entire dataset, it did not give statistics for specific areas. To obtain statistics for specific areas, the stamp tool was used, which extracts LAS statistics within a user-defined area.

Once the statistics were calculated, a point density image was created to make sure that there was uniform coverage of data points throughout the dataset, as well as to highlight areas of low coverage. This image was created by filtering to first returns and a scan angle between -13.5 and 13.5 degrees. The surface method chosen was point insertion, which determines whether there is a point within each pixel cell. Once this was done, the file was exported as a TIFF file, Lake_Point_Density.tiff.
The next section of the lab was to check whether 90 percent of the cells had at least one point per cell. This was done by importing Lake_Point_Density.tiff into ArcMap and then using ModelBuilder to subtract the different bands (band 1 - band 2 - band 3). The results of this model were added to a new field in Lake_Statistics.shp, where a percentage was calculated. For this dataset, only 13 percent of the cells met the specification.
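An equivalent check can also be scripted directly against the density raster instead of the band-subtraction model. The sketch below assumes a single-band count raster (placeholder name) and simply computes the share of valid cells that contain at least one point.

import arcpy
import numpy as np

arr = arcpy.RasterToNumPyArray("Lake_Point_Density.tiff", nodata_to_value=-1)
valid = arr[arr >= 0]                              # drop NoData cells
pct_covered = (valid >= 1).mean() * 100.0          # share of cells with at least one point
print(round(pct_covered, 1), "percent of cells contain at least one first-return point")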


Next, a relative accuracy assessment was performed using a swath-to-swath analysis to check the vertical accuracy of the data. This was done by creating a delta Z (DZ) image. When analyzing the DZ values within the flight lines, there were some areas along flat surfaces that did not meet the accuracy threshold. This was likely caused by elevated medians within highways, street lights, and relatively steep slopes in some areas. Then, using the same DZ image, polylines were created along flat areas within the overlapping flight lines. Once polylines were drawn within all the flight lines, the stream analysis tool was used to check whether the polylines fit within the threshold. Once this task was completed, the output was symbolized by quantities so that areas within and outside the threshold could be identified.

Following the relative accuracy assessment, an absolute accuracy assessment was performed, beginning with a non-vegetated vertical accuracy assessment. To perform this task, a set of vertical checkpoints for non-vegetated areas was imported into ArcMap, where their XY data was displayed; this was necessary because the checkpoints were in an Excel spreadsheet. Then a Vertical_GCP shapefile was created to verify the accuracy of the dataset against the checkpoints using the LP360 QA/QC tool. The profile view within LP360 was used to verify whether any of the checkpoints actually fell on vegetated land cover. The accuracy of the points was then calculated using the Calculate DZ command. Following the absolute vertical assessment, a horizontal assessment was performed by importing a Horizontal Checkpoint xlsx file in the same way as the vertical checkpoints. After the points were displayed by XY, the measure mode tool was used to place a mark on each of the control points at roughly a 1:2,000 scale. The Calculate DZ tool was again used to measure the difference between the dataset's XY values and the checkpoints. The results were then exported in a control report.
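The vertical portion of the control report comes down to a few summary statistics. The sketch below shows the arithmetic with hypothetical checkpoint values: RMSEz is computed from the elevation differences, and the non-vegetated vertical accuracy at the 95 percent confidence level is 1.96 * RMSEz, following ASPRS guidance.

import numpy as np

checkpoint_z = np.array([600.12, 601.45, 598.90, 602.30])   # hypothetical surveyed elevations (ft)
lidar_z      = np.array([600.18, 601.39, 598.99, 602.21])   # hypothetical LiDAR surface elevations (ft)

dz = lidar_z - checkpoint_z
rmse_z = np.sqrt(np.mean(dz ** 2))
nva_95 = 1.96 * rmse_z                                       # NVA at the 95% confidence level
print("RMSEz =", round(rmse_z, 3), "ft; NVA (95%) =", round(nva_95, 3), "ft")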

After the vertical and horizontal accuracy reports were created, a final QA/QC was performed on the dataset using a filter that excluded ground and model key points. Areas within the LAS point cloud that had data voids and/or misclassified points were identified and labeled by creating a shapefile around these areas. Once the dataset had been scanned, any issues found were corrected using various classification tools within LP360's profile view.

Sunday, October 7, 2018

Lab 1

Introduction
The goal of this lab is to classify preprocessed LiDAR points into ground and water points. This is achieved by using automatic ground filtering algorithms, identifying low outlier points, performing manual cleanup, and classifying water points using breaklines.

Methods
To begin this lab, LAS tiles of Lake County, Illinois were brought into LP360 (64-bit). Then a NAIP image was also brought in to aid in classification. Once the data was loaded into LP360, it was displayed by classification, elevation, and return combination. When the data was displayed by classification, it was evident that the points had not yet been classified (Figure 1). When the data was displayed by return combination, the image was dominated by single returns, and the highest amount of multiple-return clustering was found in forested areas.
Figure 1. Dataset containing only unclassified points
When analyzing a LiDAR dataset, it is vital that the data's statistics be checked. For this lab, the LAS file statistics were extracted by converting the dataset into a shapefile so that the statistics could be viewed in ArcMap.
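The same kinds of statistics can also be pulled outside of LP360 and ArcMap. This is a minimal sketch, assuming the laspy and numpy packages and a placeholder tile name, that reports the total point count, the number of points per classification code, and a rough point density from the header bounds.

import laspy
import numpy as np

las = laspy.read("lake_county_tile.las")

class_counts = np.bincount(np.asarray(las.classification))
mins, maxs = las.header.mins, las.header.maxs
area = (maxs[0] - mins[0]) * (maxs[1] - mins[1])     # footprint area from the header bounds

print("Total points:", las.header.point_count)
print("Points per class code:", {c: int(n) for c, n in enumerate(class_counts) if n})
print("Approximate density (points per square unit):", las.header.point_count / area)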

Following the extraction of the LAS statistics, low outlier points were removed from the data. This was completed using the Low/Isolated Points Filter. 

Once this was completed, an Adaptive TIN Ground Filter was applied to the data. The Adaptive TIN Ground Filter selects the lowest point in each of the tiles and assigns it as a seed point. Once each of the tiles has a seed point, the filter generates a TIN from the seed points and densifies it through a series of iterations; for this lab, 8 iterations were used. If a point falls within the threshold parameters relative to the TIN surface, the point is classified as a ground point. Also, for this lab all flags were ignored.
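LP360's implementation is proprietary, but the general adaptive-TIN idea described above can be sketched in a few dozen lines. The function below is a simplified illustration only, not LP360's algorithm: it seeds with the lowest point per grid cell, builds a TIN from the current ground set, and on each iteration accepts any point lying within a vertical threshold of the TIN surface; real filters also apply angle thresholds and handle many edge cases.

import numpy as np
from scipy.spatial import Delaunay

def adaptive_tin_ground(points, cell=100.0, dz_thresh=0.5, iterations=8):
    """points: (N, 3) array of x, y, z; returns indices of points classified as ground."""
    xy, z = points[:, :2], points[:, 2]

    # Seed points: the lowest point in each grid cell.
    seeds = {}
    for i, key in enumerate(map(tuple, np.floor(xy / cell).astype(int))):
        if key not in seeds or z[i] < z[seeds[key]]:
            seeds[key] = i
    ground = set(seeds.values())

    for _ in range(iterations):
        g = np.array(sorted(ground))
        if g.size < 3:
            break
        tin = Delaunay(xy[g])
        simplex = tin.find_simplex(xy)          # which triangle each point falls in (-1 = outside)
        added = False
        for i in np.where(simplex >= 0)[0]:
            if i in ground:
                continue
            p0, p1, p2 = points[g[tin.simplices[simplex[i]]]]
            n = np.cross(p1 - p0, p2 - p0)      # normal of the triangle's plane
            if abs(n[2]) < 1e-9:
                continue
            z_tin = p0[2] - (n[0] * (xy[i, 0] - p0[0]) + n[1] * (xy[i, 1] - p0[1])) / n[2]
            if abs(z[i] - z_tin) <= dz_thresh:  # close enough to the surface: accept as ground
                ground.add(int(i))
                added = True
        if not added:
            break
    return np.array(sorted(ground))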

This was done in two passes. After the first pass was complete, the seed points for each of the tiles were checked to ensure that all the seed points were correctly classified as ground points. Once the seed points were verified, the Adaptive TIN Ground Filter was run again on the data.

The next section of the lab was to visually inspect the data and manually correct areas that were wrongly classified. This included both buildings that had been wrongly classified as ground points and ground points that had been left unclassified. This was done using the Ground Cleanup Filter, with both the profile and 3D viewers used to flag misclassified points.
Figure 2. Clean Up Missed Ground task used to fix misclassified/unclassified ground points.
Figure 3. Both 3D and Horizontal filters used to identify areas of misclassified points

The final section of the lab was to classify water. This was done by using the Classify By Feature task and importing a shapefile that encompassed the water in the dataset. Once this task was run, the water points in the dataset were classified.

Results 
Once the lab was completed, the dataset had its ground and water points classified.
Dataset with the classified ground and water points
Data Source
Data for this lab was provided by Dr. Cyril Wilson, University of Wisconsin-Eau Claire.