Development of drone-assisted highway mowing operations planning, monitoring, and verification capabilities

GEORGIA DOT RESEARCH PROJECT 20-09 Final Report
DEVELOPMENT OF DRONE-ASSISTED HIGHWAY MOWING OPERATIONS PLANNING, MONITORING, AND VERIFICATION CAPABILITIES
Office of Performance-based Management and Research 600 West Peachtree Street NW | Atlanta, GA 30308
May 2022

TECHNICAL REPORT DOCUMENTATION PAGE

1. Report No.:
   FHWA-GA-22-2009
2. Government Accession No.:
   N/A
3. Recipient's Catalog No.:
   N/A
4. Title and Subtitle:
   Development of Drone-Assisted Highway Mowing Operations Planning, Monitoring, and Verification Capabilities
5. Report Date:
   May 2022
6. Performing Organization Code:
   N/A
7. Author(s):
   Javier Irizarry, Ph.D., P.E.; Yong Kwon Cho, Ph.D.; Yosuke Yajima; Seongyong Kim; Steven Kangisser
8. Performing Organization Report No.:
   20-09
9. Performing Organization Name and Address:
   Georgia Institute of Technology
   225 North Ave NW
   Atlanta, GA 30332
10. Work Unit No.:
   N/A
11. Contract or Grant No.:
   PI# 0017429
12. Sponsoring Agency Name and Address:
   Georgia Department of Transportation (SPR)
   Office of Performance-based Management and Research
   600 West Peachtree NW
   Atlanta, GA 30308
13. Type of Report and Period Covered:
   Final Report (August 2020 to May 2022)
14. Sponsoring Agency Code:
   N/A

15. Supplementary Notes:

Prepared in cooperation with the US Department of Transportation, Federal Highway Administration.

16. Abstract:

This project focused on integrating unmanned aerial systems (UASs) and artificial intelligence (AI) to automate the planning, monitoring, and performance verification of highway maintenance tasks within the Georgia Department of Transportation (GDOT). The current assessment for verifying the performance of mowing contractors is conducted through labor-intensive, qualitative visual inspection. Recent advances in AI and drone technology provide an opportunity to automate the verification of mowing performance, improving efficiency and speeding up the inspection process. In this report, the research team focuses on three main tasks: (1) optimizing the workflow analysis for the GDOT mowing assessment, (2) developing a data-driven automated highway monitoring system, and (3) integrating a user-friendly interface for the GDOT Office of Maintenance. In the workflow analysis, the research team identifies the optimal integration of AI and UAS usage and provides a workflow diagram recommending the best use of the proposed system. In the data-driven highway monitoring system, the research team develops a machine learning-based grass mowing assessment framework using image data collected from drones. The user interface incorporates the proposed system and provides clear and useful information for GDOT. The proposed solutions could support GDOT's task of verifying the mowing performance of contractors and expand the application of AI and drone technology within GDOT.

17. Key Words:
   Unmanned Aerial Vehicles, Artificial Intelligence, Operational Requirements, Technical Requirements, Maintenance, Mowing, Workflow
18. Distribution Statement:
   No restrictions.
19. Security Classification (of this report):
   Unclassified
20. Security Classification (of this page):
   Unclassified
21. Number of Pages:
   88
22. Price:
   Free

Form DOT 1700.7 (8-69)

Reproduction of completed page authorized.

GDOT Research Project 20-09
Final Report
DEVELOPMENT OF DRONE-ASSISTED HIGHWAY MOWING OPERATIONS PLANNING, MONITORING, AND VERIFICATION CAPABILITIES

By

Javier Irizarry, Ph.D., P.E.
Professor of Building Construction

Yong Kwon Cho, Ph.D.
Professor of Civil and Environmental Engineering

Yosuke Yajima
Graduate Research Assistant

Seongyong Kim
Graduate Research Assistant

Steven Kangisser
Graduate Research Assistant
Georgia Tech Research Corporation
Contract with
Georgia Department of Transportation
In cooperation with
U.S. Department of Transportation
Federal Highway Administration
May 2022
The contents of this report reflect the views of the authors, who are responsible for the facts and the accuracy of the data presented herein. The contents do not necessarily reflect the official views or policies of the Georgia Department of Transportation or of the Federal Highway Administration. This report does not constitute a standard, specification, or regulation.

SI* (MODERN METRIC) CONVERSION FACTORS

APPROXIMATE CONVERSIONS TO SI UNITS

Symbol    When You Know               Multiply By               To Find                      Symbol
LENGTH
in        inches                      25.4                      millimeters                  mm
ft        feet                        0.305                     meters                       m
yd        yards                       0.914                     meters                       m
mi        miles                       1.61                      kilometers                   km
AREA
in2       square inches               645.2                     square millimeters           mm2
ft2       square feet                 0.093                     square meters                m2
yd2       square yards                0.836                     square meters                m2
ac        acres                       0.405                     hectares                     ha
mi2       square miles                2.59                      square kilometers            km2
VOLUME
fl oz     fluid ounces                29.57                     milliliters                  mL
gal       gallons                     3.785                     liters                       L
ft3       cubic feet                  0.028                     cubic meters                 m3
yd3       cubic yards                 0.765                     cubic meters                 m3
NOTE: volumes greater than 1000 L shall be shown in m3
MASS
oz        ounces                      28.35                     grams                        g
lb        pounds                      0.454                     kilograms                    kg
T         short tons (2000 lb)        0.907                     megagrams (or "metric ton")  Mg (or "t")
TEMPERATURE (exact degrees)
oF        Fahrenheit                  5(F-32)/9 or (F-32)/1.8   Celsius                      oC
ILLUMINATION
fc        foot-candles                10.76                     lux                          lx
fl        foot-Lamberts               3.426                     candela/m2                   cd/m2
FORCE and PRESSURE or STRESS
lbf       poundforce                  4.45                      newtons                      N
lbf/in2   poundforce per square inch  6.89                      kilopascals                  kPa

APPROXIMATE CONVERSIONS FROM SI UNITS

Symbol       When You Know               Multiply By   To Find                      Symbol
LENGTH
mm           millimeters                 0.039         inches                       in
m            meters                      3.28          feet                         ft
m            meters                      1.09          yards                        yd
km           kilometers                  0.621         miles                        mi
AREA
mm2          square millimeters          0.0016        square inches                in2
m2           square meters               10.764        square feet                  ft2
m2           square meters               1.195         square yards                 yd2
ha           hectares                    2.47          acres                        ac
km2          square kilometers           0.386         square miles                 mi2
VOLUME
mL           milliliters                 0.034         fluid ounces                 fl oz
L            liters                      0.264         gallons                      gal
m3           cubic meters                35.314        cubic feet                   ft3
m3           cubic meters                1.307         cubic yards                  yd3
MASS
g            grams                       0.035         ounces                       oz
kg           kilograms                   2.202         pounds                       lb
Mg (or "t")  megagrams (or "metric ton") 1.103         short tons (2000 lb)         T
TEMPERATURE (exact degrees)
oC           Celsius                     1.8C+32       Fahrenheit                   oF
ILLUMINATION
lx           lux                         0.0929        foot-candles                 fc
cd/m2        candela/m2                  0.2919        foot-Lamberts                fl
FORCE and PRESSURE or STRESS
N            newtons                     0.225         poundforce                   lbf
kPa          kilopascals                 0.145         poundforce per square inch   lbf/in2

*SI is the symbol for the International System of Units. Appropriate rounding should be made to comply with Section 4 of ASTM E380. (Revised March 2003)


TABLE OF CONTENTS
EXECUTIVE SUMMARY .......... 1
CHAPTER 1. INTRODUCTION .......... 3
  OVERVIEW .......... 3
  RESEARCH OBJECTIVES .......... 5
  RESEARCH METHODOLOGY .......... 5
    Activity 1: Mapping Current Workflow for Mowing-related Tasks .......... 6
    Activity 2a: Performing Initial Pilot Tests .......... 7
    Activity 2b: Developing Grass Condition Assessment Algorithm Using Image Data .......... 7
    Activity 3: Testing Algorithm-based Processing of Initial Pilot Test Data .......... 9
    Activity 4: Performing Multiple Pilot Tests .......... 10
    Activity 5: Analyzing Data and Developing Maintenance Contractor Performance Report Prototype .......... 10
    Activity 6: Integrating UAS into GDOT Workflow .......... 12
CHAPTER 2. LITERATURE REVIEW .......... 13
  TASK WORKFLOW .......... 13
  ENVIRONMENT AND INFRASTRUCTURE MONITORING IN GDOT .......... 13
  UAS AND IMAGE PROCESSING FOR INFRASTRUCTURE MONITORING .......... 14
  3D COMPUTER VISION AND AI .......... 15
CHAPTER 3. DRONE DATA COLLECTION .......... 17
  EXPERIMENT EQUIPMENT .......... 18
  SITE SELECTION .......... 19
  FLIGHT PARAMETERS AND DRONE SETTINGS .......... 23
CHAPTER 4. DATA PROCESSING .......... 24
  PHOTOGRAMMETRY SOFTWARE SELECTION .......... 24
  MEASUREMENT ACCURACY OF PHOTOGRAMMETRY SOFTWARE .......... 26
  DATA PREPARATION FOR AUTOMATIC POINT CLOUD REGISTRATION .......... 29
CHAPTER 5. AI-DRIVEN GRASS MOWING ASSESSMENT .......... 30
  DATA ANALYSIS AND ANNOTATIONS .......... 30
    Preparing Point Cloud Data .......... 30
    Preparing Image Data Using 2D-3D Co-labeling .......... 31
  MOWING QUALITY ASSESSMENT FRAMEWORK USING DEEP LEARNING MODELS .......... 33
    2D Image-based Framework for Detecting Mowing Quality .......... 33
    3D Point Cloud Processing Model .......... 35
CHAPTER 6. PROCESSING RESULTS (2D IMAGES AND 3D POINT CLOUD) .......... 39
  2D IMAGE-BASED FRAMEWORK .......... 39
    2D Deep Learning Model in Drone Image Level .......... 39
    Image-based Mowing Quality Detection .......... 40
  3D POINT CLOUD PROCESSING .......... 42
    3D Segmentation Results .......... 42
    Temporal Grass Height Difference Estimation Result .......... 44
CHAPTER 7. MOWING QUALITY DETECTION FRAMEWORK IMPLEMENTATION .......... 48
  SITE DATA COLLECTION .......... 48
  PHOTOGRAMMETRY PROCESSING WITH PIX4DMAPPER .......... 50
  USER INTERFACE DEVELOPMENT IN ArcGIS SOFTWARE .......... 56
    2D-3D Integration .......... 56
    ArcGIS User Interface .......... 57
CHAPTER 8. UAS INTEGRATED WORKFLOW DEFINITION RESULTS .......... 59
  EXISTING WORKFLOW .......... 59
  INTEGRATION OF UAS TECHNOLOGY .......... 63
  WORKFLOW VALIDATION .......... 63
CHAPTER 9. LESSONS LEARNED: RECOMMENDATIONS FOR GRASS MOWING ASSESSMENT GUIDELINES .......... 65
  FLIGHT PARAMETER OPTIMIZATION .......... 65
  3D SCENE CONSTRUCTION FROM DRONE IMAGES .......... 68
CHAPTER 10. CONCLUSIONS AND FUTURE RESEARCH .......... 70
APPENDIX A. INSTRUCTIONS FOR REMOVAL OF UNWANTED AREA .......... 73
ACKNOWLEDGEMENTS .......... 77
REFERENCES .......... 78

LIST OF FIGURES
Figure 1. Flowchart. Research work plan. .................................................................... 6 Figure 2. Illustration. 3D point cloud of the highway environment generated with
SfM. ........................................................................................................................ 9 Figure 3. Illustration. Filtering of mowed vs. unmowed areas using a
combination of geometry and color features. ....................................................... 10 Figure 4. Illustration. Example of the proposed contractor performance report
integrated into an ArcGIS software interface. ...................................................... 11 Figure 5. Flow diagram. Proposed data process pipeline. .......................................... 17 Figure 6. Photo. Tractor and mowing attachment used by mowing contractors. ....... 18 Figure 7. Photo. DJI Marvic Pro drone model............................................................ 19 Figure 8. Screen shot. Example of the UAV flight path setting in PIX4Dmapper. .... 19 Figure 9. Map. Overview of test sites and UAV home locations. .............................. 20 Figure 10. Photos. Test sites with grass measurement points at NCAT facilities. ..... 21 Figure 11. Photos. Examples of test sites at I-675 with grass measurement points. .. 21 Figure 12. Photos. Different highway environments at NCAT facilities. .................. 22 Figure 13. Photos. Different highway environments at I-675. ................................... 22 Figure 14. Image. Raw point cloud data generated from PIX4Dmapper. .................. 26 Figure 15. Photo. Examples of traffic cones used in this experiment......................... 27 Figure 16. Photos. Examples of static objects used in the experiments. .................... 28 Figure 17. Annotations. Examples of 3D annotation files.......................................... 31 Figure 18. Annotations. Additional examples of 3D annotations............................... 31 Figure 19. Images. 2D annotated image samples converted from the 3D
annotations. ........................................................................................................... 32 Figure 20. Flowchart. 2D image-based framework. ................................................... 33 Figure 21. Diagram. Modified U-Net models used for training the image
processing model. ................................................................................................. 34 Figure 22. Flowchart. 2D framework to detect mowing quality................................. 35 Figure 23. Images. 2D deep learning model results.................................................... 40 Figure 24. Images. 2D framework results on I-675 Location 1; prediction only
shows mowed areas, unmowed areas, and bared spots......................................... 41 Figure 25. Images. Visualization of 3D segmentation at NCAT Location 1.............. 43 Figure 26. Images. Visualization of 3D segmentation at I-675 Location 2. ............... 43 Figure 27. Images. Visualization of 3D segmentation at I-675 Location 1. ............... 44 Figure 28. Images. Visualization of 3D segmentation at NCAT Location 2.............. 44 Figure 29. Plot. Example of manual measurements. The red points indicate tall
grass and the green points represent short grass. .................................................. 46 Figure 30. Images. Original point cloud data from photogrammetry software. ......... 47 Figure 31. Images. Visualization of mowed/unmowed grass height; red color
represents an unmowed area and blue color represents a mowed area. ................ 47 Figure 32. Flowchart. Workflow for the mowing quality assessment data
processing. ............................................................................................................ 49 Figure 33. Screenshot. Example of setting group numbers in PIX4Dmapper,
where group1 is the target images and group2 is the reference images. .............. 51

Figure 34. Screenshot. Setting the image scale to image size in PIX4Dmapper. ... 52 Figure 35. Screenshot. Point Cloud tab settings for point cloud generation............... 53 Figure 36. Screenshot. 3D Textured Mesh tab settings for point cloud generation.... 54 Figure 37. Screenshot. Advanced tab settings for point cloud generation.................. 55 Figure 38. Photos. Comparison between 2D and 3D data. ......................................... 57 Figure 39. Flowchart. Model builder sample sequential processing of
geography-related algorithms. .............................................................................. 58 Figure 40. Screenshot. ArcGIS interface implementing the proposed framework..... 58 Figure 41. Diagram. UAS integrated workflow for mowing performance
verification. ........................................................................................................... 62 Figure 42. Image. Orthophoto generation failure at the boundary. ............................ 66 Figure 43. Image. Orthophoto alignment failure. ....................................................... 67 Figure 44. Images. Failure example of automatic point cloud registration due to a
large difference between reference and target models.......................................... 69 Figure 45. Image. Drastic change in target model compared to the reference
model..................................................................................................................... 69 Figure 46. Screenshot. Example of CloudCompare user interface. The first step is
to open point cloud data by navigating the "File" tab and selecting "Open." ...... 74 Figure 47. Screenshot. CloudCompare dialog box to upload data from a file.
Select the "Apply all" button to visualize the point cloud data. ........................... 74 Figure 48. Image. Example of point cloud data visualization when "Apply all" is
selected in CloudCompare. ................................................................................... 75 Figure 49. Screenshot. Segment an object using the "Segment" option under the
"Edit" tab in CloudCompare. ................................................................................ 75 Figure 50. Screenshot. Select the region of interest.................................................... 76

LIST OF TABLES Table 1. Data collection time frame............................................................................ 22 Table 2. Sample of photogrammetry software available. ........................................... 24 Table 3. Settings used to measure the accuracy of SfM-generated models at
NCAT highway test site and Georgia Tech test site using small objects. ............ 28 Table 4. Comparison between our semantic segmentation model with the original
PointNet++. ........................................................................................................... 36 Table 5. Performance on 2D deep learning model. .................................................... 40 Table 6. 2D framework evaluation. ............................................................................ 41 Table 7. Result of 3D segmentations. ......................................................................... 42 Table 8. Typical grass height before grass is mowed. ................................................ 45 Table 9. Overall estimated grass height from data analysis........................................ 46 Table 10. Interview participants and date. .................................................................. 61 Table 11. Result of grass height accuracy among camera angle combinations.......... 67 Table 12. Result of missing object in point cloud data............................................... 68 Table 13. Result of flipping errors. ............................................................................. 68

LIST OF ACRONYMS

2D            Two dimensions / Two-dimensional
3D            Three dimensions / Three-dimensional
AI            Artificial intelligence
CNN           Convolutional neural network
CRS           Coordinate reference system
DL            Deep learning
FAA           Federal Aviation Administration
FLANN         Fast Library for Approximate Nearest Neighbors
GDOT          Georgia Department of Transportation
GIS           Geographic information system
GPS           Global positioning system
GSD           Ground sampling distance
GT            Georgia Institute of Technology
Georgia Tech  Georgia Institute of Technology
IMU           Inertial measurement unit
LiDAR         Light detection and ranging system
ML            Machine learning
MP            Million pixels
NCAT          National Center for Asphalt Technology
RMSE          Root mean square error
RTK           Real-time kinematics
SfM           Structure from motion
UAS           Unmanned aerial system
UAV           Unmanned aerial vehicle
UTM           Universal Transverse Mercator


EXECUTIVE SUMMARY
In August 2020, a team from the Georgia Institute of Technology began a research project to develop an automated system for verifying the performance of mowing contractors using unmanned aerial systems (UASs) and artificial intelligence (AI) in Georgia Department of Transportation (GDOT) operations. The research tasks include optimizing the workflow analysis for the GDOT mowing assessment, developing a data-driven automated highway monitoring system, and integrating a user-friendly interface for GDOT. This project uses a UAS equipped with GPS, inertial measurement units (IMUs), and a camera sensor to collect image data of highway vegetation areas. In the workflow analysis, the research team identifies the optimal integration of AI and unmanned aerial vehicle (UAV) usage and provides a workflow diagram recommending the best use of the proposed system. In the data-driven highway monitoring system, the research team develops a machine learning-based grass mowing assessment framework using image data collected from drones. The user interface incorporates the proposed system and provides clear and useful information for the GDOT Office of Maintenance. The proposed solutions could support GDOT's task of verifying the mowing performance of contractors and expand the application of AI and drone technology within GDOT.
The project lasted for two years, and the research team conducted three main tasks in the study, including optimized workflow analysis, automated grass assessment software development, and user interface development. The results from these three main tasks provide comprehensive performance verification of mowing contractors for GDOT and allow them to optimally integrate the performance verification report into their mowing verification system. In the evaluation of the proposed software, a total of four locations were used for field tests, including a real highway area and a highway area near a pond. During the experiment, the drone was deployed to collect images at different elevations and camera angles; the flights were performed in automated mode with flight path planning applications. The collected image data were processed with photogrammetry software that converts image data into 3D point cloud data. The 3D data are then used as inputs to the AI-driven grass mowing assessment software to identify mowed/unmowed areas and provide details of mowing contractor performance. The output of these results is provided via the user interface for GDOT within a geographic information system (GIS). The recommendations for using the proposed software are included to address UAS operations planning and execution, data collection equipment and data management, and user interface personnel requirements. The presented results could contribute to GDOT's plans for assessing the performance of mowing contractors.

Keywords: Unmanned Aerial Vehicles, Artificial Intelligence, User Interface, Operational Requirements, Technical Requirements, Inspections, Maintenance, Highway Mowing, Workflow Optimization

CHAPTER 1. INTRODUCTION
OVERVIEW

With recent innovation in the use of unmanned aerial systems (UASs) and artificial intelligence (AI) technology, automated data collection using UASs and the application of machine learning (ML) have been increasingly considered for infrastructure monitoring and site mapping applications in the United States. The UASs are typically equipped with a camera sensor that takes photos while flying around the target sites. AI and ML apply a data-driven approach to predict conditions and objects from the collected data.
Current planning, monitoring, and evaluation of highway maintenance tasks face several challenges, including measuring the performance of grass mowing contractors. Mowing grass in highway landscaped areas is especially challenging due to limited access for measuring the condition of the grass, as well as potentially inconsistent visual inspections that rely on the inspectors' judgment. The Georgia Department of Transportation (GDOT) normally contracts with grass mowing companies, and a GDOT inspector verifies the performance of the mowing contractors for payment purposes and workload planning. In the current manual process, verifying the performance of grass mowing contractors is complicated and labor-intensive because of the large inspection areas and the potential for biased evaluation under visual inspection.
In this document, the research team proposes AI-driven approaches to automate the current manual performance-verification process and to verify the performance of mowing contractors using numerical evaluation. The team uses a drone equipped with a camera sensor to automatically collect image data and determines the actual work performed using AI and ML techniques. The team also develops a standard quality and performance report and a user interface from the analyzed results, allowing GDOT to monitor grass mowing work with greater accuracy and reliability than visual inspection by personnel alone. The use of drones can also allow GDOT to monitor progress in challenging areas where inspectors cannot easily verify performance, such as steep slopes and high-traffic areas.
In the first part, the proposed research focuses on processing the collected aerial images with photogrammetry software to reconstruct 3D point clouds from the images. In the next step, computer vision and machine learning are applied to detect grass regions in the 3D point clouds generated with the structure from motion (SfM) method. In the last step, the temporal grass height estimation computes the grass height difference and estimates the area of mowed and unmowed grass. The user interface displays all information about the quality of grass mowing by showing the location of mowed and unmowed regions and the area of bare spots and rooster tails identified using image-processing techniques. The proposed research also develops a workflow diagram through interviews with GDOT personnel. The workflow diagram integrates the data collection, image processing, and grass height estimation methods to enhance the planning and verification of highway mowing work by contractors. The development of the workflow analysis and diagram can lead to expanded applications of drone technology within GDOT.
In the following sections, the research objectives and the methods of workflow analysis, data collection, data processing, grass area detection, temporal grass height detection, user interface development, and verification of grass mowing are discussed in detail.

RESEARCH OBJECTIVES

The main goal of this project is to develop and test a drone-assisted workflow and a Geographic Information System (GIS)-based visual user interface software tool to be employed in planning, monitoring, and verifying the performance of maintenance tasks performed by mowing contractors. The proposed method uses images collected by drones that are processed by machine learning and image processing algorithms developed by the research team to achieve the progress monitoring goals of autonomous maintenance operations. The objectives of this study are as follows: (1) determine a drone-assisted workflow analysis and the technological feasibility of utilizing AI and UASs in the operations of the GDOT Office of Maintenance; (2) develop an algorithm to achieve a high level of automation in the assessment of maintenance contractor tasks using the UAS and image data as inputs; (3) evaluate test cases using the proposed workflow analysis and algorithm in highway scenes; and (4) provide a user-friendly GIS-based interface for visualizing performance results.
RESEARCH METHODOLOGY

The research activities involved deep collaboration with GDOT personnel throughout the project. Figure 1 presents a flowchart of the research work plan and is followed by descriptions of the related activities:
Activity 1: Mapping current workflow for mowing-related tasks.
Activity 2a: Performing initial pilot tests.
Activity 2b: Developing an image processing method.
Activity 3: Testing algorithm-based processing of pilot test data.
Activity 4: Performing multiple pilot tests.
Activity 5: Analyzing data and developing a maintenance contractor performance report prototype.
Activity 6: Integrating UAS into the GDOT workflow.

Figure 1. Flowchart. Research work plan.

Activity 1: Mapping Current Workflow for Mowing-related Tasks

The initial research task was to identify and map the current workflow for pre- and post-activities of right-of-way mowing. This was necessary to understand the data collection, analysis, and reporting practices currently employed to accomplish the required tasks. All contacts with GDOT state-level users for the first part of this activity were made through online Microsoft Teams meetings. The second part of this activity was the collection and analysis of current practices from GDOT district-level personnel, who have a more detailed and practical understanding of current workflow practices. To validate findings, cognitive walkthrough interviews were conducted with the same users in order to evaluate user practices and interactions by analyzing the mental processes required to perform the task. This method is ideal for determining the sequence of events in accomplishing a goal. It is also helpful in identifying problems with a workflow and the causes of those issues (Lewis and Wharton 1997).


Activity 2a: Performing Initial Pilot Tests

During this activity, the research team developed a data collection plan and a schedule of field tests for optimizing the parameters of the drone settings and the photogrammetry software. During the data collection phase, a drone equipped with a camera sensor was used to collect data at a total of four different highway areas at different altitudes and camera angles to evaluate the effect on image quality and on the performance of the photogrammetry software. Data collection was performed once per month to keep track of grass conditions on different days and under different weather conditions. Testing was performed in compliance with current Federal Aviation Administration (FAA) regulations applicable to the operation of unmanned aerial systems. The details of the test period and overall test instructions are described in Chapter 3, Drone Data Collection. The collected image data are primarily used in the development of the image processing and grass condition assessment algorithm presented in Chapter 5, AI-Driven Grass Mowing Assessment. The analysis of the different drone and camera sensor parameters is also described in that chapter. The safety of UAS operations was managed by having a certified Part 107 pilot1 perform the flights. Locations for possible test sites were coordinated with GDOT personnel to collect data from realistic environments as much as possible.
1 To fly a drone under the FAA's Small UAS Rule (Part 107), a pilot must obtain a Remote Pilot Certificate from the FAA. See https://www.faa.gov/uas/commercial_operators/become_a_drone_pilot/ for requirements.

Activity 2b: Developing Grass Condition Assessment Algorithm Using Image Data

This activity focused on developing a grass condition assessment framework using image data collected by the drone. The photogrammetry software transforms images into two-dimensional (2D) orthomosaic images and three-dimensional (3D) point cloud data that represent a highway environment and can be used to measure the condition of grass in highway areas. The photogrammetry algorithm works by calculating the 3D geometry and the location where each image is taken so that the images can be properly merged into a 3D point cloud. An example of the output of the photogrammetry process is shown in figure 2. The accuracy of the 3D reconstruction can be improved by utilizing the global positioning system (GPS) data from unmanned aerial systems (UASs). The outputs are then used as inputs to the 2D and 3D deep learning models to predict the area of grass and the condition of the grass. The output of the 3D deep learning models is then passed to the grass height estimation code, which computes the temporal difference and estimates the grass condition for measuring the performance of the mowing contractors. The output of the 2D deep learning model is used as a secondary method to measure the condition of the grass, such as bare spots, unmowed grass, and mowed grass areas. The details of the final prototype are described in Chapter 7, Mowing Quality Detection Framework Implementation.

Figure 2. Illustration. 3D point cloud of the highway environment generated with SfM.
Activity 3: Testing Algorithm-based Processing of Initial Pilot Test Data

In this activity, the research team processed the 3D point cloud acquired in Activity 2b to identify the grass regions and to detect whether each region is mowed or unmowed. In the first step, a convolutional neural network (CNN) takes a processed point cloud as input and predicts the grass area in the highway scene. In the second step, a temporal detection algorithm uses a reference model to compute the grass height difference and estimate the condition of mowed and unmowed grass. To improve the computation time, the grass regions are separated into multiple patches, and each patch is classified as mowed or unmowed grass. An additional 2D CNN is also used to detect grass conditions in the highway scene, such as eroded grass areas and rooster tails. The CNN models are trained with various highway scenes to predict objects found in those scenes, such as roads, trees, grass, bare spots, and rooster tails. Figure 3 shows an example of detecting mowed/unmowed regions in the highway scene.

Figure 3. Illustration. Filtering of mowed vs. unmowed areas using a combination of geometry and color features.
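The report does not include the temporal detection code itself; the following is a minimal sketch, under stated assumptions, of how a per-patch height difference between two co-registered point clouds (a reference flight before mowing and a target flight after mowing) could be computed and labeled as mowed or unmowed. The 2 m patch size and 5 cm decision threshold are illustrative assumptions, not values prescribed by the report.

```python
import numpy as np

def patch_height_difference(reference_xyz, target_xyz, patch_size=2.0, threshold_m=0.05):
    """Classify grass patches as mowed/unmowed from two co-registered point clouds.

    reference_xyz, target_xyz: (N, 3) arrays of grass points (x, y, z) in the same
    georegistered (e.g., UTM) frame; reference = earlier flight, target = later flight.
    patch_size: edge length of the square ground patches, in meters (assumed value).
    threshold_m: height-drop threshold used to call a patch "mowed" (assumed value).
    """
    # Common 2D grid origin over the area covered by both clouds.
    mins = np.minimum(reference_xyz[:, :2].min(axis=0), target_xyz[:, :2].min(axis=0))

    def patch_index(xyz):
        return np.floor((xyz[:, :2] - mins) / patch_size).astype(int)

    def median_height_per_patch(xyz):
        heights = {}
        for key, z in zip(map(tuple, patch_index(xyz)), xyz[:, 2]):
            heights.setdefault(key, []).append(z)
        return {key: float(np.median(zs)) for key, zs in heights.items()}

    ref_heights = median_height_per_patch(reference_xyz)
    tgt_heights = median_height_per_patch(target_xyz)

    results = {}
    for key in set(ref_heights) & set(tgt_heights):
        drop = ref_heights[key] - tgt_heights[key]   # positive drop means the grass got shorter
        results[key] = {"height_drop_m": drop, "mowed": drop >= threshold_m}
    return results
```

Using the median height per patch (rather than the mean) is one simple way to reduce sensitivity to stray points from the photogrammetric reconstruction.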
Activity 4: Performing Multiple Pilot Tests

This activity focused on collecting additional field test data and evaluating the proposed algorithm. In this phase, the research team refined the algorithm under development and validated the accuracy of mowed/unmowed area detection using evaluation metrics. Additional datasets were used to capture different grass heights and provide results for different grass conditions.
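The report does not reproduce the evaluation code; as a hedged illustration of the kind of pixel-wise metrics commonly used to validate detection masks (intersection over union, precision, and recall), a minimal sketch is shown below. The function name and the use of boolean NumPy masks are assumptions for illustration only.

```python
import numpy as np

def mask_metrics(predicted, ground_truth):
    """Pixel-wise IoU, precision, and recall for a binary (e.g., mowed-area) mask.

    predicted, ground_truth: boolean arrays of the same shape.
    """
    predicted = np.asarray(predicted, dtype=bool)
    ground_truth = np.asarray(ground_truth, dtype=bool)

    tp = np.logical_and(predicted, ground_truth).sum()    # true positives
    fp = np.logical_and(predicted, ~ground_truth).sum()   # false positives
    fn = np.logical_and(~predicted, ground_truth).sum()   # false negatives
    union = np.logical_or(predicted, ground_truth).sum()

    return {
        "iou": tp / union if union else 1.0,
        "precision": tp / (tp + fp) if (tp + fp) else 1.0,
        "recall": tp / (tp + fn) if (tp + fn) else 1.0,
    }
```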
Activity 5: Analyzing Data and Developing Maintenance Contractor Performance Report Prototype

In this activity, the researchers developed a performance evaluation method to meet the needs of Objective 3 (test case of drone-assisted GDOT mowing contractor task planning and performance monitoring). In particular, this activity included data summarization from processed point clouds, performance metrics for the mowing contractor, and a user interface to visualize the results. ArcGIS, a well-known GIS software package, was used to implement the proposed framework and to deploy results containing information about the geometry and analysis. The functions and toolbox in ArcGIS were developed by the research team, and the output can be exported into a geodatabase compatible with other GIS software. Figure 4 shows an example of the proposed results visualization overlaid on an aerial image. The visualization interface is based on ArcGIS software, mapping the processed geodata with coordinate reference systems based on the GPS data from the unmanned aerial systems. The visualization contains information about which regions correspond to the grassy areas next to the highway, the percentage of mowed area, and whether the mowed area meets the desired quality grade. The user can examine different stretches of the highway of interest and visualize the mowed vs. unmowed regions. The overall contractor performance report is compiled based on the total unmowed area compared to the mowed area, which can be used to measure progress toward task completion.
Figure 4. Illustration. Example of the proposed contractor performance report integrated into an ArcGIS software interface.
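The ArcGIS toolbox built by the research team is not reproduced in this report. Purely as an illustration of the same idea, producing a geodatabase-style output that GIS software (including ArcGIS) can read, the sketch below writes per-patch results in the form produced by the height-difference sketch shown earlier to a GeoPackage with geopandas. The layer name, attribute fields, and the EPSG:32616 (UTM zone 16N, covering Georgia) coordinate reference system are assumptions.

```python
import geopandas as gpd
from shapely.geometry import box

def export_patches_to_gis(patch_results, grid_origin, patch_size, out_path="mowing_report.gpkg"):
    """Write per-patch mowing results to a GeoPackage readable by GIS software.

    patch_results: {(ix, iy): {"height_drop_m": float, "mowed": bool}, ...}
    grid_origin: (min_x, min_y) of the patch grid in UTM coordinates (same origin
    used when forming the patches).
    patch_size: patch edge length in meters.
    """
    records, geometries = [], []
    for (ix, iy), res in patch_results.items():
        x0 = grid_origin[0] + ix * patch_size
        y0 = grid_origin[1] + iy * patch_size
        geometries.append(box(x0, y0, x0 + patch_size, y0 + patch_size))
        records.append({"height_drop_m": res["height_drop_m"], "mowed": res["mowed"]})

    gdf = gpd.GeoDataFrame(records, geometry=geometries, crs="EPSG:32616")  # assumed CRS
    gdf.to_file(out_path, layer="mowing_patches", driver="GPKG")
    return gdf
```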

Activity 6: Integrating UAS into GDOT Workflow

The final phase of the research project determined appropriate points for introducing UAS technology into the current workflow. This evaluation was conducted collaboratively by the entire project team. As part of this activity, higher-level tasks were decomposed to provide details of the general tasks identified by the project team. Descriptions were developed to clearly capture the activity required to complete each task. At this point, a new round of cognitive walkthroughs was conducted with the seven GDOT district offices. The cognitive walkthrough technique was the same as in the earlier set of interviews, as was the domain of the experts being interviewed.

CHAPTER 2. LITERATURE REVIEW
This chapter presents a comprehensive review of drone and AI applications in infrastructure monitoring and the civil engineering domain. The details of available computer vision, image processing, and AI techniques are also discussed. Prior research conducted by the research team with GDOT (Irizarry and Johnson 2019) describes drone applications and GDOT projects.
TASK WORKFLOW

All transportation management tasks must ensure the safety of DOT personnel, contractors, and the public. Human factors aspects of workflow are central to safe human-unmanned aerial vehicle (UAV) teaming (Alami et al. 2006; Hoffman and Breazeal 2007; Lasota and Shah 2015). Proper consideration of human factors in workflow design is integral to obtaining acceptance by intended users. Human performance has been demonstrated to be highly dependent on workload. A key factor is to maintain human situational awareness. Transportation personnel must be provided with the ability to overcome technology failure; therefore, workflow design must consider possible human errors and prevent failures from over-trust in automation and the subsequent degradation of situational awareness and direct control of the system (Kaber and Endsley 2004).
ENVIRONMENT AND INFRASTRUCTURE MONITORING IN GDOT

Several drone applications have been applied to GDOT research and operations (Irizarry and Johnson 2019). The prior study with GDOT investigated drone technology in the civil and environmental engineering field for airport and railway inspections, bridge inspections, and road construction data collection. The study also provided guidelines for operating drones in compliance with FAA regulations.
UAS AND IMAGE PROCESSING FOR INFRASTRUCTURE MONITORING

Inspection and infrastructure monitoring have been a primary focus of the civil and environmental engineering community. Most inspection techniques are based on land-based visual and subjective observation performed by trained on-site personnel. With the recent development of mobile robots and the ability to capture a wide range of visual data, various methods have been explored to automate data collection and analyze the collected data for civil infrastructure monitoring applications. In particular, recent innovations in computer vision have advanced the performance and quality of infrastructure monitoring in the domain of civil and environmental engineering. Civil engineers utilize a UAV for remote automated data acquisition and apply computer vision techniques to process and analyze conditions of the environment and buildings. For instance, De Castro et al. (2018), Lopez-Granados et al. (2016), and Pap et al. (2019) applied data-driven image processing algorithms to identify agricultural products and trees using a UAV. In terms of building inspection and post-disaster site mapping, Bhatnagar et al. (2020) and Pi et al. (2021) both applied computer vision and machine learning to images collected by UAVs to measure the condition of infrastructure damage. Kim et al. (2019), in work sponsored by GDOT, assessed construction points for grade control and reference in 3D utilizing images from a UAV and laser-scanned point clouds from a mobile robot.
Additionally, (Kim and Irizarry 2020) utilized a UAS to identify the use cases of industry practitioners to integrate UAS applications into construction and infrastructure
14

management tasks. The existing literature of infrastructure and environmental monitoring applications primarily focuses on agricultural or building inspection cases; however, monitoring grass in the highway environment is not explored. In this report, we will explore and demonstrate the ability of UAV and computer vision techniques to analyze the use case of vegetation management on highway areas.
3D COMPUTER VISION AND AI

Compared to 2D image and UAV applications, 3D computer vision using point cloud data has benefits for understanding the height of vegetation and object information in 3D space. In this literature review, the primary focus is on types of semantic segmentation in the computer science field and applications of 3D computer vision in the civil and environmental engineering field. In terms of post-disaster site and highway monitoring applications, Yajima et al. (2021b) applied 3D computer vision to infrastructure monitoring at post-disaster sites to analyze the conditions of damaged objects. During the performance of this project, image processing and 3D computer vision have been applied to highway vegetation control (Kahoush et al. 2021; Yajima et al. 2021a).
In terms of 3D computer vision and object semantic segmentation methods, diverse methods exist to classify and segment 3D objects in point clouds. (Milioto et al. 2019) converts 3D points into 2D images and use a 2D deep learning model to detect objects. Another approach converts the 3D point clouds into a voxel grid and applies 3D deep learning models (Maturana and Scherer 2015). However, these methods rely on transforming original data into low-resolution data, which can lose some meaningful information. To overcome this issue, the research group in the current study uses a 3D
15

computer vision algorithm that directly uses original 3D point clouds and outputs the classification of objects. PointNet++ (Qi etal. 2017) applies a deep learning model directly to the input of raw point cloud data and processes the semantic segmentations. This model can learn the 3D contexts in point cloud data and perform better than the existing methods discussed in this literature review.
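PointNet++ itself is not re-implemented in this report. As a small illustration of one of its building blocks, the sketch below shows farthest point sampling, the step used by PointNet++-style set abstraction to pick well-spread seed points from a raw point cloud. It is a plain NumPy illustration, not the authors' code.

```python
import numpy as np

def farthest_point_sampling(points, num_samples):
    """Select num_samples indices from an (N, 3) point cloud so that the chosen
    points are spread as far apart as possible (greedy farthest-point heuristic)."""
    n = points.shape[0]
    selected = np.zeros(num_samples, dtype=int)
    distances = np.full(n, np.inf)
    selected[0] = np.random.randint(n)            # arbitrary starting point
    for i in range(1, num_samples):
        diff = points - points[selected[i - 1]]
        distances = np.minimum(distances, np.einsum("ij,ij->i", diff, diff))
        selected[i] = int(np.argmax(distances))   # farthest from everything chosen so far
    return selected
```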

CHAPTER 3. DRONE DATA COLLECTION

This chapter presents the data collection and preprocessing pipeline, focusing on the image data collection, the flight parameter settings for the drone, and the 2D and 3D annotation file generation for the machine learning model (see figure 5). This data preparation is necessary to acquire a high-quality dataset that helps the proposed method detect grass conditions. Drone datasets were collected between October 2020 and November 2021 at several test areas and highway environments. The collected image data are then preprocessed to prepare annotation files for the machine learning model to classify object classes (e.g., grass, highway roads, trees, etc.). The 3D point cloud data are also prepared for measuring the grass height difference using the photogrammetry software. Annotation files for the 3D data are also prepared to train a machine learning model that classifies objects in point cloud data. The software and the experimental setup are described in the following sections. An example of a tractor with a mower is shown in figure 6.
Figure 5. Flow diagram. Proposed data process pipeline.

Figure 6. Photo. Tractor and mowing attachment used by mowing contractors.
EXPERIMENT EQUIPMENT

The team alternately uses the DJI Mavic Pro and DJI Mavic 2 Pro drone models, each of which contains a 4K camera and a GPS sensor (figure 7). These drones are controlled by the DJI Go 4 and PIX4Dmapper (see figure 8) mobile applications, which are freely available. A drone pilot can readily set up a flight path and select the boundary of the scanning area with different flight parameters, such as altitude, camera angle, and flight speed. Additionally, ground truths are manually measured to validate the proposed model. Traffic cones are installed at several representative locations within the highway scene, and the grass heights at these points are measured through manual tape measurements.

Figure 7. Photo. DJI Mavic Pro drone model.
Figure 8. Screen shot. Example of the UAV flight path setting in PIX4Dmapper.

SITE SELECTION

The drone data were collected at a test facility and in a highway environment: the NCAT test track and Interstate 675 (I-675) in Georgia (see figure 9, figure 10, and figure 11). The project team worked with GDOT Districts 1 and 7 to identify test locations that are close to Atlanta, safe for both access and operation, provide large areas for mowing (both flat and sloped), and include treed portions, signage, and guardrails. Two locations along I-675 south of I-285 were selected, as they best met the criteria and provided safe parking space. These recommended test sites were provided by GDOT personnel, who made the arrangements with the mowing contractors. Additionally, the team selected two highway scenes from the National Center for Asphalt Technology (NCAT) test track in Alabama, as it also provided the required environment with safe and reliable access.
Figure 9. Map. Overview of test sites and UAV home locations.

Figure 10. Photos. Test sites with grass measurement points at NCAT facilities.
Figure 11. Photos. Examples of test sites at I-675 with grass measurement points.

The team performed the highway field tests several times to collect data in various situations, including mowed or unmowed grass, different grass textures, and the presence of highway objects, such as guardrails and sign posts (figure 12 and figure 13). Table 1 shows the dates on which data were collected.

Figure 12. Photos. Different highway environments at NCAT facilities.

Figure 13. Photos. Different highway environments at I-675.

Table 1. Data collection time frame.

Location           Collection Dates
I-675 Location 1   May 14, 2021; May 25, 2021; July 22, 2021; Sep 9, 2021; Nov 7, 2021
I-675 Location 2   May 25, 2021; July 22, 2021; Sep 9, 2021
NCAT Location 1    Oct 26, 2020; Nov 16, 2020; Feb 24, 2021
NCAT Location 2    Oct 26, 2020; Nov 16, 2020; Feb 24, 2021


FLIGHT PARAMETERS AND DRONE SETTINGS

Various flight parameters, such as camera angle, flight speed, flight altitude, and overlapping image ratio, were selected for each drone flight. Since some of these parameters directly affect the performance of the proposed framework, they were selected carefully. Generally, as flight altitude decreases, the overlapping ratio is higher and the highway environment appears in closer detail in the drone images, which improves mowing feature detection. However, at the same time, it causes long flight times; thus, the team adjusted the flight path and parameters according to the site situation. Besides these parameters, one parameter that can be selected for a specific purpose in the framework is the camera angle. The team compared various camera angles, including 45, 60, and 90 degrees, and their combinations, to validate the optimal performance for generating 3D scenes and evaluating mowing quality. The recommended combinations of flight angles are either 90 degrees alone or a combination of 45- and 60-degree camera angles. The details of evaluating the best camera angles are described in Chapter 9, Lessons Learned: Recommendations for Grass Mowing Assessment Guidelines.
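As a quick flight-planning aid, the ground sampling distance (GSD) can be estimated from the camera and flight parameters before a flight using the standard photogrammetric relation. The sketch below is only an illustration; the sensor width, focal length, and image width in the example are nominal small-drone camera values assumed for demonstration and should be replaced with the actual camera specification.

```python
def ground_sampling_distance(altitude_m, focal_length_mm, sensor_width_mm, image_width_px):
    """Approximate nadir-image GSD in meters per pixel:
    GSD = (sensor_width * altitude) / (focal_length * image_width)."""
    return (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)

# Example with illustrative (assumed) camera values: 6.3 mm sensor width,
# 4.7 mm focal length, 4000-pixel image width, and a 25 m flight altitude.
gsd_m = ground_sampling_distance(25.0, 4.7, 6.3, 4000)
print(f"GSD ~ {gsd_m * 100:.2f} cm/pixel ({gsd_m * 39.37:.2f} inch/pixel)")
```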

CHAPTER 4. DATA PROCESSING
This chapter describes the photogrammetry software that generates 3D data using the image data collected from the various field test sites. The photogrammetry software also provides camera calibration information and orthophotos that are a part of the inputs for the grass height estimation algorithm. The overall data process is described in detail with various examples in the following sections.

PHOTOGRAMMETRY SOFTWARE SELECTION

Various photogrammetry software tools are available for processing image data into graphical representations, including 3D models and orthomosaic maps, that are inputs to the proposed grass height estimation algorithm. Table 2 provides two samples of photogrammetry software with descriptions of output format, processing mode, cost, and reference websites. Among the available photogrammetry software, PIX4Dmapper and Agisoft Metashape were selected for feature comparison.

Table 2. Sample of photogrammetry software available.

Application: PIX4Dmapper
  Output: 3D Point Cloud, 3D Mesh Object, Camera Params, Orthophoto
  Processing Mode: Local
  Cost (per licensed user): $1,990 one-time (Educational Version); $4,990 one-time (Professional Version)
  Product Website: https://www.pix4d.com/pricing?tab=products&plans

Application: Agisoft Metashape
  Output: 3D Point Cloud, Camera Params
  Processing Mode: Local
  Cost (per licensed user): $179 one-time (Standard Edition); $3,499 one-time (Professional Edition)
  Product Website: https://www.agisoft.com/buy/online-store/

PIX4Dmapper was used as the primary photogrammetry software tool for this research project because it provides the outputs of 3D models, camera parameters, and orthomosaic photos necessary for the grass height estimation algorithm. Additionally, the PIX4Dmapper user interface allows the generation of a 3D point cloud with georegistration of the GPS coordinates to improve the quality of the 3D models. PIX4Dmapper also enables users to combine multiple images collected at different times and dates to reconstruct 3D models from images, allowing the user to automatically align two 3D point cloud data sets. PIX4Dmapper is also compatible with DJI products, and the photogrammetry software provides a smartphone application to plan and collect image data. PIX4Dmapper also has a "floating license" feature that allows for the activation/deactivation of a license tied to a certain computer at any time in case a new computer is required. This provides flexibility to users since the license can be migrated from one user to another. In addition, PIX4D is already used by GDOT.
The test environment includes straight and curved road sections, vegetation, a pond, trees, and other objects. The flight parameters of the UAV consist of a flight elevation between 20 m (approx. 65 ft) and 30 m (approx. 98 ft) above the ground, a ground sampling distance (GSD) of 0.2 to 0.28 inch/pixel, and a camera angle of 45 to 90 degrees. Regarding the UAV flight process, the UAV data are collected using the PIX4Dcapture application, and the UAV collects image data within 10 to 20 minutes of flight time. A trained pilot is in command of the flight and able to take control of the automated flight, if needed. Detailed descriptions of the use of PIX4Dmapper and processing the data are provided in Irizarry and Johnson (2019) and summarized in this report. A sample of a point cloud generated by the photogrammetry software is shown in figure 14.
25

Figure 14. Image. Raw point cloud data generated from PIX4Dmapper.
MEASUREMENT ACCURACY OF PHOTOGRAMMETRY SOFTWARE

In this section, the accuracy of the 3D reconstruction by the photogrammetry software is evaluated by comparing ground truth measurements of static objects on the test sites with the 3D data generated by the PIX4Dmapper software. This experiment was carried out to verify the measurement accuracy of the 3D reconstruction using images and GPS coordinate information captured by the drone. The photogrammetry software applies the structure from motion (SfM) method, which includes a georegistration process. The georegistration process converts the 3D point cloud into a georegistered point cloud in Universal Transverse Mercator (UTM) coordinates using the GPS data from the images. The outcome of the georegistration provides the correct scale of objects in highway areas that can be used as inputs to the image and point cloud processing framework. The measurement accuracy
26

of the SfM was evaluated using the heights of multiple physical traffic cones and small boxes (see figure 15 and figure 16 for examples of the objects used). The measurement accuracy was also evaluated using multiple camera angles and flight parameters (see table 3). In this evaluation, the UAV flight heights are between 20 m (~65 ft) and 30 m (~98 ft), and the camera angle is 90 degrees. Overall, the average height estimation error is 1.11 cm, which is within GDOT's stated 5 cm tolerance for mowing operations. The average of the measured traffic cone heights was 44.6 cm, with a standard deviation of 2.75 mm. The ground truth traffic cones are all 50 cm tall.
Figure 15. Photo. Examples of traffic cones used in this experiment.

Figure 16. Photos. Examples of static objects used in the experiments.

Table 3. Settings used to measure the accuracy of SfM-generated models at NCAT highway test site and Georgia Tech test site using small objects.

Drone Settings            NCAT Test 1   NCAT Test 2   Georgia Tech 1   Georgia Tech 2
Overlap (front%-side%)    80-70         80-70         80-70            80-70
Camera Resolution (MP)    12.35         12.35         12.35            12.35
GSD (inch/pixel)          0.28          0.2           0.66             0.99
Flight Elevation (ft)     100           50            20               30
Camera Angle (degrees)    90            45            90               90

In addition to the highway test site, measurement analysis was also conducted using multiple boxes and other small objects at different drone altitudes. The average accuracy was measured as 3.5 cm. This result shows that the SfM method can provide reasonable object height estimation.
DATA PREPARATION FOR AUTOMATIC POINT CLOUD REGISTRATION

In this step, the automatic point cloud registration feature of PIX4Dmapper was used to align two point clouds into the same coordinate frame. Traditionally, aligning two point clouds has been done by manually selecting common points between the two clouds. The benefits of automatic point cloud registration include minimizing manual registration errors and reducing the time needed to register point cloud data manually. Manually registering two point clouds is subject to errors from photogrammetry sources, such as a warped point cloud, in addition to the manual registration errors themselves. PIX4Dmapper includes camera optimization and 3D point cloud reconstruction steps. The automatic point cloud registration eliminates the warped point cloud because the process first optimizes the camera parameters with all the images to obtain uniform camera parameters used in the 3D reconstruction phase. The process for obtaining the registered point cloud data is described in "Field-Test-Based Guideline Development for the Integration of Unmanned Aerial Systems (UASs) in GDOT Operations," Report No. FHWA-GA-19-1609 (Irizarry and Johnson 2019). After running the photogrammetry software, two dense point clouds are generated for the grass height difference estimation process.
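The automatic registration described above runs inside PIX4Dmapper, and its implementation is not exposed. Purely as an illustration of the underlying idea of aligning a target cloud to a reference cloud, the sketch below runs point-to-point ICP with the Open3D library; it is not the PIX4Dmapper process used in the report's workflow. The voxel size, distance threshold, and file names are assumptions.

```python
import numpy as np
import open3d as o3d

def align_target_to_reference(reference_path="reference.ply", target_path="target.ply",
                              voxel_size=0.05, max_corr_dist=0.2):
    """Align a target point cloud to a reference point cloud with point-to-point ICP
    (illustrative only; the report relies on PIX4Dmapper's built-in registration)."""
    reference = o3d.io.read_point_cloud(reference_path)
    target = o3d.io.read_point_cloud(target_path)

    # Downsample both clouds to speed up the iterative alignment.
    ref_down = reference.voxel_down_sample(voxel_size)
    tgt_down = target.voxel_down_sample(voxel_size)

    result = o3d.pipelines.registration.registration_icp(
        tgt_down, ref_down, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    target.transform(result.transformation)   # apply the estimated rigid transform
    return target, result.transformation
```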

CHAPTER 5. AI-DRIVEN GRASS MOWING ASSESSMENT
DATA ANALYSIS AND ANNOTATIONS

This section describes the procedure for preparing the annotation files used in the machine learning algorithm. The description of these steps is for information purposes and is not meant to be performed by GDOT when implementing the UAS-assisted workflow. In machine learning, data preparation is necessary to train a model to predict objects of interest from both 3D point clouds and 2D images. For the 3D annotation, the annotation files are prepared using CloudCompare. CloudCompare has features to visualize the point cloud file and segment point clouds into small objects that can be used to annotate the point cloud data. The 2D annotation is processed using a 2D-3D co-labeling tool that converts 3D annotations into 2D annotations using camera parameters and software developed by the research team. These object classes are used to train both 2D and 3D machine learning models and are also used in the evaluation section to confirm the performance of the machine learning models. The next section discusses additional details of the 3D annotation and 2D-3D co-labeling software tools, again for information purposes only.
Preparing Point Cloud Data
Two different versions of annotations were developed: one for classifying highway objects and another for detecting mowing quality. The object classification dataset targets seven classes of highway objects (clutter, building, grass, road, guardrail, tree, and sign), and the mowing quality detection dataset includes four classes (mowed area, unmowed area, bare spots, and clutter). Sample annotation files are shown in figure 17 and figure 18. The first annotation is used for distinguishing the highway assets and clipping the grass area of interest, and the second annotation is for granularly detecting mowing quality.
Figure 17. Annotations. Examples of 3D annotation files.
Figure 18. Annotations. Additional examples of 3D annotations.
Preparing Image Data Using 2D-3D Co-labeling
To avoid duplication of the annotation work, a 3D-to-2D co-labeling approach was employed. It utilizes the 3D annotation to generate 2D annotations for training the deep learning network by projecting the 3D coordinates of the annotated points into the 2D raw drone images. This approach takes advantage of the fact that the images acquired by the UAV were mostly overlapping, so semantic annotations from one image could be copied to other images that overlap with it, thus saving manual labeling time.

The 3D semantic labels were assigned back to the original 2D drone images by utilizing the camera projection equation. Since the camera's intrinsic and extrinsic parameters for each image were known after the structure-from-motion step, each 3D point can be associated with a 2D pixel. Thus, the 3D point labels can be assigned to the corresponding 2D pixels in the original images. Finally, for empty pixels that do not have a matching 3D point, the labels were assigned by searching for the nearest neighbor pixel that has an existing label using the Fast Library for Approximate Nearest Neighbors (FLANN). The advantage of this annotation approach is that the point cloud for each scene only has to be manually annotated once, and the labels for that scene can be propagated to hundreds of images automatically. Figure 19 shows the semantically annotated drone image from the 3D annotation.
Figure 19. Images. 2D annotated image samples converted from the 3D annotations.
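To illustrate the label transfer described above, the following sketch projects labeled 3D points into a single drone image and fills unlabeled pixels from their nearest labeled neighbor. It assumes a standard pinhole camera model with intrinsic matrix K and world-to-camera rotation R and translation t (variable names are assumptions), and it uses SciPy's cKDTree as a stand-in for the FLANN library; it is not the research team's exact implementation.

```python
# Sketch: project 3D annotated points into a drone image and fill unlabeled pixels.
import numpy as np
from scipy.spatial import cKDTree

def project_labels(points, labels, K, R, t, image_shape):
    """points: (N,3) world coordinates; labels: (N,) integer classes; K: 3x3 intrinsics;
    R, t: world-to-camera rotation/translation; image_shape: (H, W)."""
    H, W = image_shape
    cam = (R @ points.T + t.reshape(3, 1)).T           # world -> camera frame
    in_front = cam[:, 2] > 0                            # keep points in front of the camera
    uv = (K @ cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]                         # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    label_img = np.full((H, W), -1, dtype=np.int32)     # -1 marks unlabeled pixels
    label_img[v[valid], u[valid]] = labels[in_front][valid]

    # Fill empty pixels from the nearest labeled pixel (stand-in for the FLANN search).
    yx_labeled = np.argwhere(label_img >= 0)
    yx_empty = np.argwhere(label_img < 0)
    if len(yx_labeled) and len(yx_empty):
        _, idx = cKDTree(yx_labeled).query(yx_empty)
        label_img[yx_empty[:, 0], yx_empty[:, 1]] = \
            label_img[yx_labeled[idx][:, 0], yx_labeled[idx][:, 1]]
    return label_img
```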

MOWING QUALITY ASSESSMENT FRAMEWORK USING DEEP LEARNING MODELS
2D Image-based Framework for Detecting Mowing Quality
This section describes how the overall framework utilizes 2D image processing algorithms to distinguish various grass conditions, such as mowed, unmowed, and bare spot (see figure 20).
Figure 20. Flowchart. 2D image-based framework.
The U-Net, a CNN-based classification model with an encoder-decoder architecture, was applied (figure 21). The encoder part of the model consists of a series of alternating convolution and pooling operations, which allows the model to progressively increase the number of feature channels while downsampling the spatial resolution of the feature maps. The decoder part of the model then upsamples the feature maps and semantically projects the features learned by the encoder onto the pixel space to obtain classification results. Overall, this structure of contracting and expansive operations enables the model to capture the localized segmentation of the input image and propagate the feature information to successive layers with higher resolutions.
Figure 21. Diagram. Modified U-Net models used for training the image processing model.
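The following is a minimal encoder-decoder (U-Net style) sketch in TensorFlow/Keras, assuming 256 x 256 pixel RGB input patches and four output classes; the filter counts and depth are illustrative assumptions and do not reproduce the exact network trained in this project.

```python
# Sketch of a small U-Net style segmentation model with sparse categorical cross-entropy.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions, as in the standard U-Net contracting/expanding blocks.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3), num_classes=4):
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder: downsample spatially while increasing feature channels.
    c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 128);    p3 = layers.MaxPooling2D()(c3)
    b  = conv_block(p3, 256)     # bottleneck
    # Decoder: upsample and concatenate encoder features (skip connections).
    u3 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(b)
    c4 = conv_block(layers.concatenate([u3, c3]), 128)
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.concatenate([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c5)
    c6 = conv_block(layers.concatenate([u1, c1]), 32)
    # Per-pixel class scores; sparse categorical cross-entropy expects integer labels.
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c6)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_unet()
model.summary()
```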
To train the proposed U-Net, a customized dataset of over 700 aerial images was used, along with the corresponding labeled segmentation images. A total of four classes were employed: mowed, unmowed, bare spot, and miscellaneous. To improve the performance of the model, images of varied resolutions were used; for example, prior to being resized to the deep learning input size, each original image was split into multiple images for higher resolution. Finally, the dataset was divided into training, validation, and testing sets. The validation dataset was used to tune the hyperparameters of the model to maximize accuracy and minimize the sparse categorical cross-entropy loss function. The validation loss was monitored to ensure no overfitting or underfitting during the training process.
As the next step of the classifier training, the trained model was extended from the single-image level to cover the entire highway scene (figure 22). First, an orthophoto that covers the whole scene is generated; it is then split into small patches similar in size to the training images. Each patch is inferred through the trained model, indicating the mowing quality at each spot (i.e., whether the grass is well mowed or not), and the patches are unified into one scene with the original coordinate system. Note that the boundaries of the split patches, which might cause inconsistent predictions, are smoothed using a nearest neighbor algorithm.
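A sketch of the orthophoto tiling and stitching step is shown below, assuming a Keras-style trained model object and a square patch size; the boundary-smoothing step is omitted for brevity, and variable names are illustrative.

```python
# Sketch: split an orthophoto into fixed-size patches, classify each patch, and
# stitch the predictions back into a single label map in the original extent.
import numpy as np

def predict_orthophoto(ortho, model, patch=256):
    """ortho: (H, W, 3) float array scaled to [0, 1]; returns an (H, W) class map."""
    H, W, _ = ortho.shape
    # Pad so the orthophoto divides evenly into patches.
    pad_h, pad_w = (-H) % patch, (-W) % patch
    padded = np.pad(ortho, ((0, pad_h), (0, pad_w), (0, 0)), mode="edge")
    out = np.zeros(padded.shape[:2], dtype=np.int32)
    for y in range(0, padded.shape[0], patch):
        for x in range(0, padded.shape[1], patch):
            tile = padded[y:y + patch, x:x + patch]
            probs = model.predict(tile[None, ...], verbose=0)[0]   # (patch, patch, C)
            out[y:y + patch, x:x + patch] = probs.argmax(axis=-1)
    return out[:H, :W]   # crop back to the original orthophoto extent
```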
Figure 22. Flowchart. 2D framework to detect mowing quality.
3D Point Cloud Processing Model
3D Semantic Segmentation Model
To detect objects of interest, such as grass, the developed method uses a deep learning model to classify and segment objects from point cloud data. PointNet++ (Qi et al. 2017) is a neural network that directly processes the input highway data into predicted objects relevant to highway scenes, such as grass, road, and tree. The highway dataset consists of point cloud data generated from SfM, and each point feature includes position (x, y, z location), color, and position normalized to the corresponding highway scene (from 0 to 1). The input size is increased to 8,192 points from the original 4,096 points to cover the large scale of the highway scenes (see table 4). The deep learning models are trained and tested with 20 highway scenes collected on different dates and at different locations. Seventeen highway scenes with 15,781,672 points (about 80 percent of the overall data) are used in the training set, and 4 highway scenes with a total of 2,513,059 points (about 20 percent of the overall data) are used in the validation set. The target classes include grass, road, trees, and clutter objects. The clutter class is solely used to assign uncategorized objects, and the performance of this class is not considered in the results section.

Table 4. Key differences between our semantic segmentation model and the original PointNet++.

Method       Grid Size (m)   Input Points   Scale of Point Clouds   Environment
PointNet++   1 × 1           4,096          ~5 m × 5 m              Indoor Building
Our model    25 × 25         8,192          ~1000 m × 1000 m        Highway Roads
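The following sketch illustrates how a fixed-size input block with the nine per-point features listed above (XYZ, color, and scene-normalized XYZ) could be sampled from a highway point cloud; the windowing strategy and variable names are assumptions, not the exact data loader used with the PointNet++ model.

```python
# Sketch: assemble one 8,192-point input block with 9 features per point.
import numpy as np

def sample_block(xyz, rgb, num_points=8192, grid=25.0, rng=None):
    """xyz: (N,3) coordinates; rgb: (N,3) colors in [0,1]; returns (num_points, 9)."""
    rng = rng or np.random.default_rng()
    # Pick a random grid-sized window within the scene footprint.
    center = xyz[rng.integers(len(xyz)), :2]
    half = grid / 2.0
    mask = (np.abs(xyz[:, 0] - center[0]) <= half) & \
           (np.abs(xyz[:, 1] - center[1]) <= half)
    idx = np.where(mask)[0]
    # Sample with replacement if the window holds fewer than num_points points.
    choice = rng.choice(idx, num_points, replace=len(idx) < num_points)
    pts, col = xyz[choice], rgb[choice]
    # Normalize positions to [0, 1] relative to the whole scene extent.
    norm = (pts - xyz.min(axis=0)) / (xyz.max(axis=0) - xyz.min(axis=0))
    return np.hstack([pts, col, norm]).astype(np.float32)
```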

Temporal Height Estimation
In this section, temporal change detection in 3D point cloud data is performed to identify physical changes in the highway environment that occur over time and are relevant for highway maintenance, such as the growth of weeds, ground erosion, and collapsed traffic signs. Change measurements are computed for specific classes of objects using the segmented point cloud model from Chapter 4. Data Processing. This study focuses on grass height difference estimation for monitoring highway mowing operations. First, a reference model is generated to store the initial baseline grass heights. Multiple physical markers in the form of traffic cones are placed at the test site, and the grass height at each marker is manually tape measured as ground truth for evaluating the grass height estimation performance. Next, point cloud registration is performed between the new point clouds and the reference point cloud. Once the models are registered, the height difference estimation method uses the baseline height of the nearest neighbor marker from the reference model to estimate the grass height of the segmented point cloud. The difference in Z-value between the new point cloud and the reference point cloud is then added to the baseline height.
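The height-difference logic described above can be sketched as follows, assuming the target and reference grass point clouds are already registered in the same coordinate frame; variable names and the use of SciPy's cKDTree for the nearest-neighbor searches are assumptions.

```python
# Sketch: estimate grass height per target point from the reference cloud and markers.
import numpy as np
from scipy.spatial import cKDTree

def estimate_grass_height(target_xyz, reference_xyz, marker_xy, marker_height_cm):
    """target_xyz, reference_xyz: (N,3)/(M,3) grass points in the same frame;
    marker_xy: (K,2) cone positions; marker_height_cm: (K,) tape-measured baselines."""
    # Z difference relative to the nearest reference surface point (XY search).
    ref_tree = cKDTree(reference_xyz[:, :2])
    _, ref_idx = ref_tree.query(target_xyz[:, :2])
    dz_cm = (target_xyz[:, 2] - reference_xyz[ref_idx, 2]) * 100.0
    # Baseline height taken from the nearest ground-truth marker.
    marker_tree = cKDTree(marker_xy)
    _, m_idx = marker_tree.query(target_xyz[:, :2])
    return marker_height_cm[m_idx] + dz_cm    # estimated grass height per point (cm)
```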
Automatic Point Cloud Registration Using PIX4Dmapper
This section describes the use of the existing automatic point cloud registration process available in the PIX4D software, which minimizes the misalignment of two point clouds compared to manual point cloud registration. When measuring the difference in grass height between two point clouds, it is necessary to register the point cloud data so that the coordinate system is consistent. Traditionally, point cloud registration is performed by manually selecting points between two point clouds. However, the manual registration process may not accurately align two point clouds due to noise and warping in the point cloud data generated by the photogrammetry software. To overcome this issue, the developed method relies on automatic point cloud registration, which minimizes the errors of aligning two point clouds. In the first step, the images for both the reference and target models are combined into one dataset. In PIX4Dmapper, the initial processing option estimates the camera parameters using the combined images. This process unifies the camera parameters across images and provides a single set of camera parameters that can be used to generate point cloud data. After the camera parameters are obtained, the point cloud and mesh option generates 3D point clouds for the target and reference models. Because this process applies the initial processing to the combined images and then generates the two point clouds, the errors of point cloud registration and the warping effect can be minimized, if not eliminated.
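For readers without access to PIX4Dmapper, the sketch below shows a generic iterative closest point (ICP) refinement between two roughly aligned clouds using the open-source Open3D library. This is an illustrative alternative only; it is not the PIX4Dmapper-based procedure used in this project, and the file names and distance threshold are assumptions.

```python
# Sketch: generic ICP refinement between two roughly aligned point clouds (Open3D).
import numpy as np
import open3d as o3d

reference = o3d.io.read_point_cloud("reference_model.ply")   # hypothetical file names
target = o3d.io.read_point_cloud("target_model.ply")
threshold = 0.5   # maximum correspondence distance in meters (assumed)
result = o3d.pipelines.registration.registration_icp(
    target, reference, threshold, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
target.transform(result.transformation)   # move the target cloud onto the reference
o3d.io.write_point_cloud("target_registered.ply", target)
```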

CHAPTER 6. PROCESSING RESULTS (2D IMAGES AND 3D POINT CLOUD)
2D IMAGE-BASED FRAMEWORK
This section describes the evaluation of the proposed 2D framework for distinguishing various grass conditions, such as mowed, unmowed, and bare spot, for assessing mowing quality. The evaluation takes place in two stages. First, the team validates the deep learning model, which significantly affects the performance of the overall framework; then the mowing quality map of the whole highway scene is evaluated against manually generated ground truth data.
2D Deep Learning Model in Drone Image Level
The 2D deep learning network to distinguish various grass conditions was trained using four datasets of I-675 Location 1, which were collected on May 25, July 22, July 30, and November 11, 2021. These datasets consisted of pairs of: (1) a raw drone image and (2) the corresponding semantic annotation. Figure 23 shows example results of the model; the color legend is as follows: green, mowed area; blue, unmowed area; and red, bare spots. To quantitatively validate the trained model, the metrics of accuracy, precision, recall, and F-1 score, which are dominant criteria in 2D image evaluation, are used. Based on these metrics, the predicted results are evaluated at the pixel level (see table 5).

Figure 23. Images. 2D deep learning model results (original image, prediction, and ground truth).

Table 5. Performance of the 2D deep learning model on I-675 Location 1.

Class     Accuracy   Precision   Recall   F-1
Mowed     0.989      0.945       0.959    0.946
Unmowed   0.994      0.888       0.908    0.895
Bare      0.995      0.473       0.497    0.468
Misc      0.985      0.766       0.988    0.863
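The pixel-level metrics reported in table 5 can be computed per class as in the following sketch, where pred and gt are integer class maps of the same shape; this is a generic formulation, not the project's evaluation script.

```python
# Sketch: per-class accuracy, precision, recall, and F-1 from label images.
import numpy as np

def per_class_metrics(pred, gt, class_id):
    tp = np.sum((pred == class_id) & (gt == class_id))
    fp = np.sum((pred == class_id) & (gt != class_id))
    fn = np.sum((pred != class_id) & (gt == class_id))
    tn = np.sum((pred != class_id) & (gt != class_id))
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```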

Image-based Mowing Quality Detection
As the results from a single drone image do not necessarily indicate the performance on the entire highway scene, which is inferred from the orthophoto, the final 2D framework result is evaluated separately (see figure 24). Unlike in 2D Deep Learning Model in Drone Image Level above, a dataset that includes various grass conditions was selected for testing, and the framework is evaluated based on average values across all the classes (see table 6).

Figure 24. Images. 2D framework results on I-675 Location 1; the prediction only shows mowed areas, unmowed areas, and bare spots.

Table 6. 2D framework evaluation on the I-675 Location 1.

Data collection date   Accuracy   Precision   Recall   F-1
May 14, 2021           0.872      0.876       0.876    0.811
July 22, 2021          0.826      0.768       0.730    0.729
July 30, 2021          0.891      0.815       0.874    0.823
Nov 11, 2021           0.912      0.411       0.401    0.405

Compared to the single image-level evaluation, the final result is slightly lower because of the low resolution, incompleteness, and lost texture information of the orthophoto. While creating an orthophoto, some texture and color information is lost and smoothed to stitch all drone images consistently, thus lowering the performance of the trained deep learning model. However, this can be readily complemented by integrating the 2D framework results with the corresponding 3D framework.

3D POINT CLOUD PROCESSING
In this section, the evaluation of 3D segmentation and temporal grass height estimation is discussed with numerical results. The results of the 3D point cloud analysis are used to estimate the mowed/unmowed grass areas in the ArcGIS user interface.

3D Segmentation Results
A total of four highway scenes were used to validate model performance. These test sets consist of NCAT Location 1, NCAT Location 2, I-675 Location 1, and I-675 Location 2, and they are separate from the training dataset. The 3D CNN model uses a training batch size of 20, a learning rate of 0.001, and feature vectors of location, color, and normalized point location. Two deep learning models were trained with different object classes to determine the optimal model: one model classifies only grass, road, and tree objects, and another model additionally classifies bare spots, guardrails, and traffic signs, as shown in table 7. Accuracy metrics are used to measure the performance of the CNN models. Based on the results, the CNN with four object classes shows a more robust performance than the CNN with seven object classes.

Table 7. Result of 3D segmentation (Accuracy in %).

Model              Grass   Road   Tree   Bare Spot   Sign   Guardrail   Other   Avg Point Accuracy
4 Object Classes   94.2    91.0   97.8   N/A         N/A    N/A         15.9    93.9
7 Object Classes   86.4    85.2   84.1   0.0         4.3    1.3         50.9    83.7


In figure 25, figure 26, figure 27, and figure 28, the visualization of 3D segmentation results is shown. The accuracy of seven class objects is low due to an imbalanced dataset and because these objects are relatively small compared to large objects such as grass, road, and trees. Therefore, the 3D segmentation model is only trained with four object classes to predict grass area. The detected grass areas are then used in the temporal height detections.
Figure 25. Images. Visualization of 3D segmentation at NCAT Location 1.
Figure 26. Images. Visualization of 3D segmentation at I-675 Location 2.

Figure 27. Images. Visualization of 3D segmentation at I-675 Location 1.
Figure 28. Images. Visualization of 3D segmentation at NCAT Location 2.
Temporal Grass Height Difference Estimation Result
The grass height difference estimation using the 3D point cloud is evaluated using the areas near I-675. A total of 20 sample measurements were recorded to confirm the performance of the grass height estimation code, as described in Chapter 4. Data Processing. The predicted grass height is evaluated based on the agreement between the estimated and ground-truth grass heights. According to GDOT's mowing contractors, the typical grass height before mowing depends on the season and location (e.g., spring, summer, and winter). Based on the interviews, the grass near the target area typically grows more in spring and summer than in late fall, and there is no mowing activity in winter. The mowing contractor typically mows the grass to less than 6 inches, per GDOT requirements. Therefore, the expected grass height difference before and after mowing is around 6 inches, obtained by subtracting the GDOT requirement (6 inches) from the minimum grass height before mowing in the fall (about 1 ft; see table 8). The seasonal typical grass heights before mowing are shown in table 8.

Table 8. Seasonal typical grass height before the grass is mowed.

Types of Highway     Spring (ft)   Summer (ft)   Fall (ft)
Interstate Highway   3             3–4           1–1.5
Secondary Roads      3             3–3.5         1–2

To evaluate the grass height estimation, ground truth measurements of the grass were collected. In the test sets, a total of 20 sets of point clouds were collected on various dates and times. The estimated grass height is computed by adding the reference height of the grass to the difference between the reference and target point cloud data. An example of the difference between point cloud data is provided in figure 29. As shown in table 9, the overall average grass height estimation error is 3.21 inches (8.16 cm), which is within the tolerance of the GDOT standard (6 inches).


Figure 29. Plot. Example of manual measurements. The red points indicate tall grass and the green points represent short grass.

Table 9. Overall estimated grass height from data analysis.

Location                         Average Errors1 (cm)
I-675 Location 1 (tall grass)    9.79
I-675 Location 1 (short grass)   6.52
Overall                          8.16
1 Average errors between manual measurements and estimated grass height.

Visualization of the grass height estimation is shown in figure 30 and figure 31. Two different types of grass height are measured at different times: one dataset (i.e., the reference) is collected immediately after the grass is mowed, and the other (i.e., the target) is collected when the grass is tall. The red points represent grass that is considered to be in an unmowed region, and the green or blue points represent the mowed grass area. The output of these visualizations is used in the ArcGIS user interface.


Figure 30. Images. Original point cloud data from photogrammetry software.
Figure 31. Images. Visualization of mowed/unmowed grass height; red color represents an unmowed area and blue color represents a mowed area.

CHAPTER 7. MOWING QUALITY DETECTION FRAMEWORK IMPLEMENTATION
The proposed framework includes site data collection, in addition to algorithm processing executed through software. Figure 32 shows the overall mowing quality assessment framework, consisting of (1) field data collection, (2) photogrammetry, and (3) the ArcGIS user interface.
SITE DATA COLLECTION
Visualization of the mowing contractor performance evaluation performed by the 2D and 3D data analysis requires the collection of image data to create a reference model, which is compared to data collected as mowing work is performed at the intervals determined by GDOT personnel. The models created from data collected for inspection purposes are target models. The reference model is made from site data collected immediately after mowing has been performed in a manner that satisfies all of GDOT's mowing requirements. This reference model is used as a baseline to measure the mowing performance of the target models. The reference model is created once for each area to be evaluated; it only requires updating if there are changes in geometry, topography, or vegetation type in the modeled area. Image data for the target models are collected as frequently as the performance of mowing work is to be evaluated. The photogrammetry process using the PIX4Dmapper program is reviewed in the next section.

Figure 32. Flowchart. Workflow for the mowing quality assessment data processing.

PHOTOGRAMMETRY PROCESSING WITH PIX4DMAPPER
This section describes how to use the photogrammetry software and set up the options for processing. The basic options and instructions, such as generating the orthomosaic images, are omitted in this section; they can be reviewed in "Field-test-based guideline development for integration of unmanned aerial systems (UASs) in GDOT operations, No. FHWA-GA-19-1609" (Irizarry and Johnson 2019), which includes step-by-step instructions for basic photogrammetry software use. As described in Chapter 4. Data Processing, the reference models and the target models are generated by combining all images. PIX4Dmapper has options to set group1 and group2, as shown in figure 33. The reference images and the target images are separated by these groups.

Figure 33. Screenshot. Example of setting group numbers in PIX4Dmapper, where group1 is the target images and group2 is the reference images.
In the photogrammetry software settings, a custom keypoints image scale of one-half the image size is used for the initial processing (see figure 34). Because grass areas are difficult to reconstruct, this setting helps increase the number of matched keypoints by reducing the image size, as recommended on the PIX4Dmapper website.2 The other parameters are kept at their default settings. For more technical details of the automatic point cloud registration, refer to Chapter 4. Data Processing.
2 https://support.pix4d.com/hc/en-us/articles/202560159-How-to-improve-the-outputs-of-dense-vegetation-areas-using-Pix4Dmapper.

Figure 34. Screenshot. Setting the image scale to image size in PIX4Dmapper.
Under the Point Cloud and Mesh processing option, the point cloud types selected under the Point Cloud tab are the LAS and PLY formats (see figure 35). Additionally, the Merge Tiles into One File option is selected to combine the point cloud data into each target and reference model. The option selected under the 3D Textured Mesh tab is Medium Resolution. The Advanced tab setting shows selected options for group1 and group2; these options must be checked so that the point cloud results output a reference and target point cloud data. These settings are shown in figure 35, figure 36, and figure 37.

Figure 35. Screenshot. Point Cloud tab settings for point cloud generation.

Figure 36. Screenshot. 3D Textured Mesh tab settings for point cloud generation.

Figure 37. Screenshot. Advanced tab settings for point cloud generation.
Once the photogrammetry software completes the 3D reconstruction and orthophoto image generation, the output files include a reference and a target point cloud in the respective PLY and LAS files, an initial folder that contains the camera matrix files, and orthophoto images. The initial folder is generated automatically by the PIX4Dmapper software. These files are inputs to the user interface software, which is described in the next section.

USER INTERFACE DEVELOPMENT IN ARCGIS SOFTWARE
This section describes the procedure for developing a user interface that readily implements the proposed framework in the GIS software, ArcGIS. ArcGIS, developed and supported by ESRI, is representative geoprocessing software that can process, manage, and visualize databases with geographical reference. ArcGIS's current use by GDOT personnel facilitates the potential adoption of the proposed framework for geoprocessing and visualization. The proposed framework is implemented in ArcGIS in the form of a plug-in, thereby enabling GDOT personnel to check the mowing quality assessment results in the ArcGIS platform and to manage the mowing quality and highway environment in a consistent and compatible way.
2D-3D Integration
The 2D and 3D frameworks were developed in parallel and were integrated into one framework in which they complement each other. Point cloud data and image data have their own properties (figure 38). Point cloud data have advantages in representing volumetric and vertical objects, thus showing good performance in detecting highway assets and areas, whether grassy or not, and distinguishing whether the grass is tall or short. However, due to their low resolution, these data complicate the analysis in areas where the grass is blurred. On the other hand, image data with high resolution have strengths in texture and color information, thus enabling the model to robustly detect bare spots and rooster-tail areas given sufficient training data. Taking these characteristics into account, the results of the two models were combined by weighting each in the areas of its own strength.
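A simple way to express this weighted combination is sketched below, assuming both frameworks output per-class probability rasters resampled to the same grid; the class order and weight values are illustrative assumptions.

```python
# Sketch: fuse 2D and 3D per-class probability rasters with per-class weights.
import numpy as np

def fuse_predictions(prob_2d, prob_3d, w_2d, w_3d):
    """prob_2d, prob_3d: (H, W, C) probability rasters on the same grid;
    w_2d, w_3d: (C,) per-class weights favoring each framework's strengths."""
    combined = prob_2d * w_2d + prob_3d * w_3d
    return combined.argmax(axis=-1)          # fused (H, W) class map

# Example: favor the 3D model for mowed/unmowed grass, the 2D model for bare spots.
w_2d = np.array([0.3, 0.3, 0.8, 0.5])        # mowed, unmowed, bare spot, misc (assumed)
w_3d = 1.0 - w_2d
```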

Figure 38. Photos. Comparison between 2D and 3D data. Taken on 11/07/2021.
ArcGIS User Interface
The ArcGIS built-in framework comprises: (1) the toolbox and (2) the model builder; these are used to plug the developed algorithms into the ArcGIS environment (figure 39). Through these procedures, an ArcGIS function was created (with a .tbx extension) that implements the mowing quality detection algorithms in the ArcGIS software, internally processes the algorithms (including the deep learning parts), and returns only the final output of the detected mowing quality in the target areas of interest (figure 40). The inputs to the function are an orthophoto, a point cloud, and the log files of the camera matrix of the target areas of interest, which are generated as described in Chapter 4. Data Processing by performing photogrammetry through PIX4Dmapper. The output is a georegistered theme map as raster data, indicating the mowing quality in color. In addition, information on the spot measurements and a link to the corresponding drone image are provided for visually checking the site.
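The sketch below outlines how such a script tool could look inside the toolbox, using the arcpy package that ships with ArcGIS; the run_mowing_assessment() helper is a hypothetical placeholder for the project's 2D/3D processing code, and the parameter order is an assumption.

```python
# Sketch of an ArcGIS script tool: read tool parameters, run the (hypothetical)
# mowing quality detection, and save the classified raster output.
import arcpy
import numpy as np

def run_mowing_assessment(ortho_path, cloud_path, camera_log_path):
    """Hypothetical stand-in for the 2D/3D deep learning processing pipeline."""
    class_map = np.zeros((100, 100), dtype=np.uint8)      # placeholder class raster
    return class_map, arcpy.Point(0.0, 0.0), 1.0           # array, lower-left, cell size

ortho_path = arcpy.GetParameterAsText(0)       # orthophoto from PIX4Dmapper
cloud_path = arcpy.GetParameterAsText(1)       # registered point cloud (LAS)
camera_log_path = arcpy.GetParameterAsText(2)  # camera matrix log files
output_raster = arcpy.GetParameterAsText(3)    # georegistered mowing quality theme map

arcpy.AddMessage("Running mowing quality assessment...")
class_map, lower_left, cell_size = run_mowing_assessment(
    ortho_path, cloud_path, camera_log_path)

# Convert the NumPy class map to a georeferenced raster and save it.
raster = arcpy.NumPyArrayToRaster(class_map, lower_left, cell_size, cell_size)
raster.save(output_raster)
arcpy.AddMessage("Saved mowing quality raster to " + output_raster)
```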

Figure 39. Flowchart. Model builder sample sequential processing of geography-related algorithms.
Figure 40. Screenshot. ArcGIS interface implementing the proposed framework.

CHAPTER 8. UAS INTEGRATED WORKFLOW DEFINITION RESULTS
EXISTING WORKFLOW
This section presents an examination of the current workflow employed by GDOT for the inspection of contractor performance. The initial research task was identifying and mapping the current workflow for pre- and post-activities for right-of-way mowing. This was undertaken to understand the data collection, analysis, and reporting practices currently employed to accomplish the required tasks. Due to limitations on personal contact implemented to reduce the spread of coronavirus disease 2019 (Covid-19), GDOT's current workflow had to be analyzed utilizing only remote techniques. This made the use of methodologies such as focus groups unrealistic. For this portion of the study, all contacts with state DOT users were conducted through online Microsoft Teams meetings. Interviews began with collecting a general overview of current practices through meetings with three GDOT state office managers. This afforded the researchers a high-level view of right-of-way maintenance practices.
The second part of this task was the collection and analysis of current practices from GDOT district-level personnel. These officials have a more detailed and practical understanding of current workflow practices. For this step, 21 district-level professionals, consultants, and materials laboratory personnel were interviewed. The positions and dates of these interviews are outlined in table 10. After these interviews, the current workflow was mapped using an operational sequence diagram. These diagrams document a sequence of control movements and/or information-collecting activities executed to accomplish a task (Kirwan and Ainsworth 1992). Very similar to detailed flow diagrams, they permit limited

branching, looping, and conditional activities. In this case, these diagrams are employed to illustrate operations associated with the current workflow. Operations are categorized as behavioral elements and assigned specific symbols. Categorization is achieved by applying both output and methodology criteria using five common symbols. These symbols define information flow and behavior and are like those used for process charts (Kirwan and Ainsworth 1992). Once the workflow was diagrammed, it was validated by distributing the draft workflow diagram to the same professionals from the initial interviews, who were asked to mark corrections and other modifications. A very limited number of modifications were identified at this stage. To improve upon this validation, cognitive walkthrough interviews were conducted with the same users. Again, these interviews were conducted online using the Microsoft Teams platform due to contact health concerns related to Covid-19. Cognitive walkthroughs are used to evaluate user practices and interactions by analyzing the mental processes required to perform the task. The method is ideal for determining the sequence of events in accomplishing a goal and is also helpful in identifying problems with a workflow and the causes of these issues (Lewis and Wharton 1997). The cognitive walkthroughs produced many corrections and clarifications to the initial workflow diagram, which were mapped as before. The result was a much higher-fidelity representation of the workflow in use. The current workflow is incorporated within figure 41 as the non-shaded blocks within the operational sequence diagram.

Table 10. Interview participants and dates.

Position                          District     Date
Maintenance Contracts Assistant   1            March 4, 2021
Assistant Area Engineer           1            March 4, 2021
Permits Inspection Engineer       1            March 4, 2021
Area Engineer                     2            March 3, 2021
Assets Manager                    3            March 5, 2021
Area Engineer                     3            March 5, 2021
District Contracts Manager        3            March 5, 2021
District Maintenance Manager      4            March 5, 2021
Manager, Construction             4            February 24, 2021
District Maintenance Manager      5            February 24, 2021
District Construction Engineer    5            February 25, 2021
District Maintenance Engineer     6            February 25, 2021
District Maintenance Manager      7            February 17, 2021
District Area Engineer            7            February 17, 2021
Assistant Materials Engineer      OMAT         April 13, 2021
State Pavement Engineer           OMAT         April 13, 2021
Maintenance Design Manager        OMAT         April 13, 2021
Maintenance Liaison               State        March 30, 2021
State Maintenance Officer         State        March 30, 2021
District Consultant               Consultant   March 30, 2021
Consultant                        Consultant   March 30, 2021


Figure 41. Diagram. UAS integrated workflow for mowing performance verification.

INTEGRATION OF UAS TECHNOLOGY
The next phase of the research project was to determine appropriate points for introducing unmanned aerial system technology into the current workflow. Criteria for utilization of this technology included improvements to data accuracy, reliability, collection time, cost, detail, and safety considerations. This evaluation was conducted collaboratively by the entire project team. Higher-level UAS tasks were then inserted into the workflow diagram. At this point, key higher-level UAS tasks were decomposed to identify new workflow components at a greater level of fidelity. Task decomposition methodologies help identify greater detail related to tasks of interest compared to the level that experts typically offer, allowing task descriptions to become more detailed and informative (Kirwan and Ainsworth 1992). The first step in task decomposition was to outline the general tasks identified by the project team. Descriptions were developed in adequate detail to clearly understand the activity required to complete the task. Tasks were divided into several subtasks to facilitate understanding. Next, each task element was decomposed, categorized, and labeled. At this point, the decomposed tasks were documented, and the team was requested to add any missing details about the subtasks. The result was a compilation of highly detailed task descriptions for UAS-related modifications to the current GDOT workflow for right-of-way mowing operations. Figure 32 represents the decomposition of the mowing quality assessment data processing task.
WORKFLOW VALIDATION
A new round of cognitive walkthroughs was conducted at this point with the seven GDOT district offices. This was to validate the ability of GDOT district personnel to implement the UAS workflow, as well as to confirm the effectiveness of the data the UAS would collect in fulfilling the needs of GDOT in right-of-way maintenance. The cognitive walkthrough technique was the same as in the earlier set, as were the domain experts being interviewed. The research team again relied on these walkthroughs to validate the expected effectiveness and feasibility of implementing the proposed set of UAS-assisted workflows. The UAS tasks presented during this set of cognitive walkthroughs are illustrated as the shaded blocks in figure 41. The GDOT personnel included in this step provided several useful comments on the advantages they saw in the data being collected using UAS, as compared to the limitations of the current practices. The GDOT participants did not request any changes to the workflow as proposed.

CHAPTER 9. LESSONS LEARNED: RECOMMENDATIONS FOR GRASS MOWING ASSESSMENT GUIDELINES
This chapter discusses recommendations for the grass mowing assessment using the user interface in the ArcGIS software, based on lessons learned from tasks performed in the research project.
FLIGHT PARAMETER OPTIMIZATION
Drone planning and flight parameters depend highly on the field schedule and on flight areas limited by FAA regulations and field obstacles, which can directly affect the quality of the generated 3D models and orthophotos. In this section, several process failure scenarios and their causes are analyzed so that they can be avoided during implementation of the framework developed in the research project.
1. Orthophoto generation failure primarily occurs at the boundary of the target area due to the shortage of overlapping images. To prevent the area of interest from being affected, it is recommended that the flight path covers a sufficient area beyond the intended boundaries. This extended coverage will enable the orthophoto of the target grass area of interest to be generated correctly. An example of generation failure at the boundaries is shown in figure 42.

Figure 42. Image. Orthophoto generation failure at the boundary.
2. Alignment failure occurs when orthophotos generated over the same area are not aligned correctly, so the mowing quality detected in each dataset cannot be compared at the same locations. This error is due to inaccurate GPS coordinates in each drone image when generating orthophotos (i.e., while using the PIX4D software). Manual registration or the adoption of a more accurate GPS sensor, such as one capable of real-time kinematics (RTK), can solve this type of error. An example of alignment failure is shown in figure 43.

Figure 43. Image. Orthophoto alignment failure.
3. Variation in the point cloud generated from structure from motion can produce point cloud data of varying quality and noise. Optimal flight camera angles were evaluated by computing the heights of static objects on the highway, the number of missing objects, and inverse (flipped) point cloud errors. The inverse point cloud sometimes occurs due to poor quality of the images taken by the drone (i.e., blur or shadow in the images). Overall, the best combination was found to be a camera angle of 90 degrees or a combination of images at 45- and 60-degree camera angles (see table 11, table 12, and table 13).

Table 11. Result of grass height accuracy among camera angle combinations.

Test Sites (RMSE1 in cm)   45 degrees   60 degrees   90 degrees   45+60 degrees   45+90 degrees   60+90 degrees
I-675 Location 1           5            1            4            6               3               2
I-675 Location 2           1            4            6            5               2               3
Overall                    6            5            10           11              5               5
1 Root mean square error.

Table 12. Result of missing objects in point cloud data.

Test Sites (RMSE in cm)   45 degrees   60 degrees   90 degrees   45+60 degrees   45+90 degrees   60+90 degrees
I-675 Location 1          5            1            4            6               3               2
I-675 Location 2          1            4            6            5               2               3
Overall                   6            5            10           11              5               5

Table 13. Result of flipping errors.

Test Sites (Flipping Errors)   45 degrees   60 degrees   90 degrees     45+60 degrees   45+90 degrees   60+90 degrees
I-675 Location 1               Yes (90%)    Yes (60%)    No (0%)        No (10%)        Yes (50%)       Yes (60%)
I-675 Location 2               No (10%)     No (0%)      (incomplete)   No (0%)         No (10%)        No (0%)
Overall                        Yes          Yes          No (0%)        No (5%)         Yes             Yes

3D SCENE CONSTRUCTION FROM DRONE IMAGES
4. Registration failure causes significant errors when estimating the grass height, as the proposed model assumes that the reference scene and the target scene are registered correctly.
5. Scene completeness failure causes errors in automatic point cloud registration if there are significant differences between the reference and target models. To improve the correctness of the automatic point cloud registration, verify that the reference and target models are not significantly different. Examples are shown in figure 44 and figure 45.
Figure 44. Images. Failure example of automatic point cloud registration due to a large difference between reference and target models.
Figure 45. Image. Drastic change in target model compared to the reference model.

CHAPTER 10. CONCLUSIONS AND FUTURE RESEARCH
The project results presented in this report show the potential of a performance evaluation and monitoring application for grass mowing contractors. The developed algorithms show successful detection of mowed/unmowed grass areas by using temporal point cloud information and image processing techniques applied to images collected by unmanned aerial systems. In the first step of the process, the point cloud data are generated using photogrammetry software. The point cloud data are processed with a deep learning model to obtain the grass region. In the next step, the grass height estimation algorithm estimates the vegetation condition by measuring the grass height difference compared to the mowed condition. Additionally, the 2D framework based on image processing techniques shows good performance at distinguishing color and texture information, supplementing the results of the 3D framework by further contributing to the detection of mowed areas and bare spots.
Cognitive walkthrough testing at the GDOT district level has indicated that the UAS technology developed by the research team can be successfully integrated into the current GDOT workflow.
Future research on the mowing quality detection framework developed in this study should consider the following:
Conducting data collection, analysis, and reporting with GDOT district-level personnel performing all tasks, including drone flights. Time on task and error measurements should be taken and used to determine if changes to the revised workflow are advisable.

Determining the skill set and training required to implement the UAS-based performance evaluation method at the district level.
Gathering more highway data to evaluate different types of highway environments and road conditions. Greater diversity in road and vegetation conditions can improve the data-driven approaches, since both the image processing and point cloud processing grass detection algorithms rely on deep learning models that learn patterns from the collected dataset.
Testing more advanced camera sensors and RTK GPS to improve the quality of the point cloud models generated by the photogrammetry software. In particular, RTK GPS performs better than the traditional GPS sensor attached to the drone. Future research should consider testing RTK GPS to minimize errors from the photogrammetry software and improve the performance of the grass height estimation algorithm.
Automating the noise removal from the point cloud generated with structure from motion. This can be achieved by introducing machine learning-based abnormality detection and removing unnecessary noise that can potentially cause inaccurate grass measurements in the proposed algorithm.
Using advanced CNN architecture to improve the accuracy of detecting grass areas. Although the research used a state-of-the-art deep learning model, the rapid advancement of machine learning algorithms means that new deep learning models are being introduced that perform better than the current methods used in this research.

Upgrading automatic point cloud registration with machine learning-based point cloud registration to match landmarks in two point cloud datasets. The current method relies on the performance of the photogrammetry software, and using machine learning approaches may improve performance in terms of point cloud registration.
Regarding the 2D image-based framework, directly exploiting high-resolution drone images, instead of an orthophoto with lost texture information, which may allow for better model performance.

APPENDIX A. INSTRUCTIONS FOR REMOVAL OF UNWANTED AREA
The input data from PIX4Dmapper may contain point cloud noise that can lower accuracy and increase processing time in the deep learning model. The procedure for removing noise and preparing input data using CloudCompare is described in this appendix. Note that this appendix does not cover how to prepare annotation files or train the deep learning models; the implemented models are already trained using the highway scene dataset, and the user interface connected to ArcGIS already contains a pre-trained deep learning model that classifies the target objects proposed in this study. This appendix summarizes a step-by-step process for removing unwanted areas from the point cloud produced from UAS images after photogrammetry. In the first step, the raw point cloud data processed with the PIX4Dmapper program are visualized using the CloudCompare program (see figure 46 and figure 47). The user clicks on the File tab and then the Open option to select the input point cloud file. Next, the user clicks on the "Apply all" button to visualize the point cloud data (see figure 48). The CloudCompare program has an option to segment an area of interest (see figure 49), and the user can select the region of interest (see figure 50). Once the region of interest is selected, the user clicks the Segment icon to isolate that area. The user can then save the file and load it into the ArcGIS user interface. This allows the user to process a point cloud with undesired points removed.
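As a scripted alternative to the CloudCompare steps above, the sketch below crops a LAS point cloud to a rectangular region of interest using the open-source laspy library; the file names and bounds are assumptions, and the result can then be loaded into the ArcGIS user interface in the same way.

```python
# Sketch: crop a LAS point cloud to a rectangular region of interest with laspy.
import laspy

las = laspy.read("pix4d_output.las")                   # hypothetical input file
xmin, xmax, ymin, ymax = 100.0, 400.0, 200.0, 500.0    # region of interest (assumed)
mask = (las.x >= xmin) & (las.x <= xmax) & (las.y >= ymin) & (las.y <= ymax)

cropped = laspy.LasData(las.header)                    # reuse the original header
cropped.points = las.points[mask]                      # keep only points inside the box
cropped.write("cropped_area.las")                      # load this file into the UI
```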

Figure 46. Screenshot. Example of CloudCompare user interface. The first step is to open point cloud data by navigating the "File" tab and selecting "Open."
Figure 47. Screenshot. CloudCompare dialog box to upload data from a file. Select the "Apply all" button to visualize the point cloud data.

Figure 48. Image. Example of point cloud data visualization when "Apply all" is selected in CloudCompare.
Figure 49. Screenshot. Segment an object using the "Segment" option under the "Edit" tab in CloudCompare.

Figure 50. Screenshot. Select the region of interest.

ACKNOWLEDGEMENTS
The research team acknowledges the Georgia Department of Transportation for its support of this research project, and thanks the following individuals for their direct involvement and contributions to the project:
Members of the GDOT Technical Advisory Committee and Supriya Kamatkar.
Yosuke Yajima, Jingdao Chen, Seongyong Kim, and Steven Kangisser, graduate students at Georgia Institute of Technology.
Phillip Taylor of Pro Cutters Lawnscapes Inc.

REFERENCES
Alami, R., Chatila, R., Clodic, A., Fleury, S., Herrb, M., Montreuil, V., and Sisbot, E. A. (2006). "Towards human-aware cognitive robots." The fifth international cognitive robotics workshop (the AAAI-06 workshop on cognitive robotics).
Bhatnagar, S., Gill, L., and Ghosh, B. (2020). "Drone image segmentation using machine and deep learning for mapping raised bog vegetation communities." Remote Sensing, Multidisciplinary Digital Publishing Institute, 12(16), 2602.
De Castro, A. I., Torres-Sánchez, J., Peña, J. M., Jiménez-Brenes, F. M., Csillik, O., and López-Granados, F. (2018). "An Automatic Random Forest-OBIA Algorithm for Early Weed Mapping between and within Crop Rows Using UAV Imagery." Remote Sensing, 10(2).
Hoffman, G., and Breazeal, C. (2007). "Effects of anticipatory action on human-robot teamwork efficiency, fluency, and perception of team." Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, 1–8.
Irizarry, J., and Johnson, E. N. (2019). Field-test-based guideline development for the integration of unmanned aerial systems (UASs) in GDOT operations. No. FHWA-GA-19-1609. Georgia Department of Transportation, Office of Performance-Based Management & Research.
Kaber, D. B., and Endsley, M. R. (2004). "The effects of level of automation and adaptive automation on human performance, situation awareness and workload in a dynamic control task." Theoretical Issues in Ergonomics Science, Taylor & Francis, 5(2), 113–153.
Kahoush, M., Yajima, Y., Kim, S., Chen, J., Park, J., Kangisser, S., Irizarry, J., and Cho, Y. K. (2021). "Analysis of Flight Parameters on UAV Semantic Segmentation Performance for Highway Infrastructure Monitoring." ASCE 2021 International Conference on Computing in Civil Engineering (i3CE).
Kim, K., Chen, J., and Cho, Y. K. (2019). "Evaluation of machine learning algorithms for worker's motion recognition using motion sensors." Computing in Civil Engineering 2019: Data, Sensing, and Analytics, American Society of Civil Engineers, Reston, VA, 51–58.
Kim, S., and Irizarry, J. (2020). "Knowledge-Based Considerations for Developing UAS Operational Procedures on Infrastructure and Construction Task Environments." Construction Research Congress 2020: Computer Applications, American Society of Civil Engineers, Reston, VA, 268–277.
Kirwan, B., and Ainsworth, L. K. (1992). A guide to task analysis: the task analysis working group. CRC press.

Lasota, P. A., and Shah, J. A. (2015). "Analyzing the effects of human-aware motion planning on close-proximity human-robot collaboration." Human Factors, Sage Publications, Los Angeles, CA, 57(1), 21–33.
Lewis, C., and Wharton, C. (1997). "Cognitive Walkthroughs." Handbook of Human-Computer Interaction, Elsevier, 717–732.
López-Granados, F., Torres-Sánchez, J., de Castro, A. I., Serrano-Pérez, A., Mesas-Carrascosa, F. J., and Peña, J. M. (2016). "Object-based early monitoring of a grass weed in a grass crop using high resolution UAV imagery." Agronomy for Sustainable Development, 36, 1–12.
Maturana, D., and Scherer, S. (2015). "Voxnet: A 3d convolutional neural network for real-time object recognition." 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 922–928.
Milioto, A., Vizzo, I., Behley, J., and Stachniss, C. (2019). "Rangenet++: Fast and accurate lidar semantic segmentation." 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 4213–4220.
Pap, M., Kiraly, S., and Moljak, S. (2019). "Investigating the Usability of UAV Obtained Multispectral Imagery in Tree Species Segmentation." The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-2/W18, 159–165.
Pi, Y., Nath, N., and Behzadan, A. (2021). "Detection and Semantic Segmentation of Disaster Damage in UAV Footage." Journal of Computing in Civil Engineering, 35, 4020063.
Qi, C. R., Yi, L., Su, H., and Guibas, L. J. (2017). "Pointnet++: Deep hierarchical feature learning on point sets in a metric space." Advances in neural information processing systems, 30.
Yajima, Y., Kahoush, M., Kim, S., Chen, J., Park, J., Kangisser, S., Irizarry, J., and Cho, Y. K. (2021a). "AI-Driven 3D Point Cloud-Based Highway Infrastructure Monitoring System Using UAV." ASCE 2021 International Conference on Computing in Civil Engineering (i3CE).
Yajima, Y., Kim, S., Chen, J., and Cho, Y. K. (2021b). "Fast Online Incremental Segmentation of 3D Point Clouds from Disaster Sites." Proceedings of the 38th International Symposium on Automation and Robotics in Construction (ISARC), Dubai, UAE, 341–348.

Locations