GEORGIA DOT RESEARCH PROJECT 20-10 Final Report
DEVELOPMENT OF DRONE-ASSISTED PAVEMENT PROFILE MAPPING: NEAR-SURFACE VOID DETECTION APPLICATION
Office of Performance-based Management and Research
600 West Peachtree Street NW | Atlanta, GA 30308 May 2022
TECHNICAL REPORT DOCUMENTATION PAGE

1. Report No.: FHWA-GA-2010
2. Government Accession No.: N/A
3. Recipient's Catalog No.: N/A
4. Title and Subtitle: Development of Drone-Assisted Pavement Profile Mapping: Near-Surface Void Detection Application
5. Report Date: May 2022
6. Performing Organization Code: N/A
7. Author(s): Javier Irizarry (PI), Ph.D., P.E.; Tarek Rakha, Ph.D.
8. Performing Organization Report No.: 20-10
9. Performing Organization Name and Address: Georgia Tech Research Corporation, 225 North Ave NW, Atlanta, GA 30332
10. Work Unit No.: N/A
11. Contract or Grant No.: PI# 0017431
12. Sponsoring Agency Name and Address: Georgia Department of Transportation (SPR), Office of Performance-Based Management and Research, 600 West Peachtree St. NW, Atlanta, GA 30308
13. Type of Report and Period Covered: Final Report (August 2020 to May 2022)
14. Sponsoring Agency Code: N/A
15. Supplementary Notes: Prepared in cooperation with the U.S. Department of Transportation, Federal Highway Administration.
16. Abstract: Monitoring and performance-verification activities for roadway maintenance are labor-intensive and prone to errors. The adoption of unmanned aerial system (UAS) technologies provides an opportunity to improve upon these tasks. This study had the goals of: (1) developing a drone-assisted workflow for pavement maintenance-related tasks, (2) developing and validating algorithms for the assessment of pavement maintenance-related tasks, and (3) performing a test case of a drone-assisted GDOT pavement maintenance task-planning and performance-monitoring process, resulting in a prototype report for work performance assessment. After performing interviews with GDOT personnel to understand the process of inspecting pavements pre- and post-repair, a workflow model of the current process was developed. A proposed UAS-integrated workflow was developed afterward and validated with GDOT personnel. Field tests were conducted in which images of pavements were collected at various locations, focusing on pre-repair scenarios as well as locations suspected of having subsurface voids present. The collected images were used in the development of algorithms aimed at identifying anomalies in the pavement that could be indicative of the presence of subsurface voids. In the last phase of the project, a prototype reporting method leveraging the ArcGIS platform was developed to provide GDOT personnel with a way to visualize the results of the analysis of images collected with UAS. Results showed that RGB (red, green, and blue) and infrared images have the potential to reveal anomalies in pavements that could not be detected with other inspection methods. Future work to improve the benefits of UAS applications in pavement maintenance could include the collection of additional images to train the algorithms and improve the accuracy of results. An evaluation of the implementation of the proposed UAS-integrated workflow in the field and with GDOT personnel can answer questions on the practical implications of UAS use for pavement maintenance tasks.
17. Key Words: Unmanned Aerial Vehicle (UAV), Unmanned Aerial System (UAS), Operational Requirements, Technical Requirements, Maintenance, Pavement, Voids, Workflow
18. Distribution Statement: No restriction.
19. Security Classification (of this report): Unclassified
20. Security Classification (of this page): Unclassified
21. No. of Pages: 76
22. Price: Free

Form DOT 1700.7 (8-69)          Reproduction of completed page authorized
GDOT Research Project 20-10
Final Report
DEVELOPMENT OF DRONE-ASSISTED PAVEMENT PROFILE MAPPING: NEAR-SURFACE VOID DETECTION APPLICATION
By
Javier Irizarry, Ph.D., P.E. Professor of Building Construction1
Tarek Rakha, Ph.D. Assistant Professor of Architecture1
Steven Kangisser Graduate Student1
Kaiwen Chen, Ph.D. Assistant Professor of Civil Engineering2
Yasser El Masri Graduate Research Assistant1
Hala Alfalih Graduate Research Assistant1
Samantha Morton Graduate Research Assistant1
1 Georgia Institute of Technology 2 University of Alabama
Georgia Tech Research Corporation
Contract with Georgia Department of Transportation
In cooperation with U.S. Department of Transportation Federal Highway Administration
May 2022
The contents of this report reflect the views of the authors, who are responsible for the facts and the accuracy of the data presented herein. The contents do not necessarily reflect the official views or policies of the Georgia Department of Transportation or of the Federal Highway Administration. This report does not constitute a standard, specification, or regulation.
SI* (MODERN METRIC) CONVERSION FACTORS

APPROXIMATE CONVERSIONS TO SI UNITS
Symbol | When You Know | Multiply By | To Find | Symbol
LENGTH
in | inches | 25.4 | millimeters | mm
ft | feet | 0.305 | meters | m
yd | yards | 0.914 | meters | m
mi | miles | 1.61 | kilometers | km
AREA
in2 | square inches | 645.2 | square millimeters | mm2
ft2 | square feet | 0.093 | square meters | m2
yd2 | square yard | 0.836 | square meters | m2
ac | acres | 0.405 | hectares | ha
mi2 | square miles | 2.59 | square kilometers | km2
VOLUME
fl oz | fluid ounces | 29.57 | milliliters | mL
gal | gallons | 3.785 | liters | L
ft3 | cubic feet | 0.028 | cubic meters | m3
yd3 | cubic yards | 0.765 | cubic meters | m3
NOTE: volumes greater than 1000 L shall be shown in m3
MASS
oz | ounces | 28.35 | grams | g
lb | pounds | 0.454 | kilograms | kg
T | short tons (2000 lb) | 0.907 | megagrams (or "metric ton") | Mg (or "t")
TEMPERATURE (exact degrees)
°F | Fahrenheit | 5(F-32)/9 or (F-32)/1.8 | Celsius | °C
ILLUMINATION
fc | foot-candles | 10.76 | lux | lx
fl | foot-Lamberts | 3.426 | candela/m2 | cd/m2
FORCE and PRESSURE or STRESS
lbf | poundforce | 4.45 | newtons | N
lbf/in2 | poundforce per square inch | 6.89 | kilopascals | kPa

APPROXIMATE CONVERSIONS FROM SI UNITS
Symbol | When You Know | Multiply By | To Find | Symbol
LENGTH
mm | millimeters | 0.039 | inches | in
m | meters | 3.28 | feet | ft
m | meters | 1.09 | yards | yd
km | kilometers | 0.621 | miles | mi
AREA
mm2 | square millimeters | 0.0016 | square inches | in2
m2 | square meters | 10.764 | square feet | ft2
m2 | square meters | 1.195 | square yards | yd2
ha | hectares | 2.47 | acres | ac
km2 | square kilometers | 0.386 | square miles | mi2
VOLUME
mL | milliliters | 0.034 | fluid ounces | fl oz
L | liters | 0.264 | gallons | gal
m3 | cubic meters | 35.314 | cubic feet | ft3
m3 | cubic meters | 1.307 | cubic yards | yd3
MASS
g | grams | 0.035 | ounces | oz
kg | kilograms | 2.202 | pounds | lb
Mg (or "t") | megagrams (or "metric ton") | 1.103 | short tons (2000 lb) | T
TEMPERATURE (exact degrees)
°C | Celsius | 1.8C+32 | Fahrenheit | °F
ILLUMINATION
lx | lux | 0.0929 | foot-candles | fc
cd/m2 | candela/m2 | 0.2919 | foot-Lamberts | fl
FORCE and PRESSURE or STRESS
N | newtons | 0.225 | poundforce | lbf
kPa | kilopascals | 0.145 | poundforce per square inch | lbf/in2

*SI is the symbol for the International System of Units. Appropriate rounding should be made to comply with Section 4 of ASTM E380. (Revised March 2003)
TABLE OF CONTENTS
EXECUTIVE SUMMARY
CHAPTER 1. INTRODUCTION
  PROBLEM STATEMENT
  GOALS AND OBJECTIVES
    Objective 1. Drone-assisted Workflow Definition and Initial Data Collection
    Objective 2. Development of Algorithm and Validation
    Objective 3. Test Case, Task Planning, and Performance Monitoring
CHAPTER 2. BACKGROUND
CHAPTER 3. METHODOLOGY
  ACTIVITY 1: MAPPING OF CURRENT WORKFLOW FOR PAVEMENT MAINTENANCE-RELATED TASKS, SPECIFICALLY, PROFILE MAPPING
  ACTIVITY 2A: DRONE-ASSISTED WORKFLOW DESIGN
  ACTIVITY 2B: INITIAL VISUAL DATA COLLECTION
  ACTIVITY 3: DEVELOPMENT OF IMAGE PROCESSING AND CLASSIFICATION ALGORITHMS
  ACTIVITY 4: DATA COLLECTION FOR VALIDATION OF ANALYSIS ALGORITHMS
  ACTIVITY 5: DATA ANALYSIS AND DEVELOPMENT OF ARCGIS CONNECTION FOR RESULTS VISUALIZATION REPORT PROTOTYPE
CHAPTER 4. FIELD DATA COLLECTION
  TEST SITE SELECTION
  FIELD DATA COLLECTION RESULTS
    Henry County
    Walker County
    Wilkinson County (McIntyre and Irwinton)
    National Center for Asphalt Technology
CHAPTER 5. DATA PREPROCESSING: DUAL IMAGE UNDISTORTION
  PHOTOGRAMMETRY PROCESSING
    Photogrammetry Software Selection
    Photogrammetry Process
    Processing with RTK
CHAPTER 6. DATA ANALYSIS
  ALGORITHM FOR RGB IMAGE ANALYSIS
    Segmentation of Crack Pixels by Deep Learning
    Performance Evaluation
    Computing Processing Environment Requirements
  ALGORITHM FOR THERMAL INFRARED IMAGE ANALYSIS
    Segmentation of Crack Pixels by Sobel Edge
    Fusion of RGB and IR Imagery Information
CHAPTER 7. VISUALIZATION AND DOCUMENTATION IN ARCGIS
  ARCGIS WORK PLATFORM
    Processing Workflow in the ArcGIS Platform
  BENEFITS AND LIMITATIONS
CHAPTER 8. CONCLUSIONS AND FUTURE WORK
ACKNOWLEDGMENTS
REFERENCES
LIST OF FIGURES
Figure 1. Flowchart. Framework of the research methodology and required steps.
Figure 2. Flowchart. Void detection workflow.
Figure 3. Map. Site locations for data collection.
Figure 4. Photos. Sample aerial images of data collection sites.
Figure 5. Map. Henry County data collection site location.
Figure 6. Photo. Mavic 2 Enterprise Advanced UAS used in data collection.
Figure 7. Photos. Images of data collection activity in Henry County.
Figure 8. Map. LaFayette, Georgia, data collection site location.
Figure 9. Photo. Mavic 2 Enterprise Dual UAS used in data collection.
Figure 10. Photos. Sample aerial images of data collection site in Walker County.
Figure 11. Map. Wilkinson County data collection site locations.
Figure 12. Map. NCAT test track data collection site location.
Figure 13. Photo. UAS units used in data collection at the NCAT Test Track.
Figure 14. Photos. Sample images of data collection activity at the NCAT Test Track.
Figure 15. Photos. Damage experienced during flight incident.
Figure 16. Images. Distortion in RGB and IR image sets from different dual-camera drones.
Figure 17. Images. Image undistortion by camera calibration parameters process.
Figure 18. Images. Examples of photogrammetry processing results.
Figure 19. Images. Comparison of IR photogrammetry processing with/without RTK sensor.
Figure 20. Diagram. Architecture of U-Net neural network for the pixel-wise segmentation of pavement crack.
Figure 21. Charts. Loss, IoU, and accuracy of the trained set and validation set for U-Net.
Figure 22. Images. Examples of crack segmentation results using the trained U-Net model.
Figure 23. Images. Identification of cracks in infrared images by the Sobel edge detector.
Figure 24. Images. IR and RGB alignment by ORB-ASIFT matching.
Figure 25. Images. Differentiation of true cracks and fixed crack patches by fusing IR and RGB predictions.
Figure 26. Images. Detecting potential thermal voids by fused RGB and IR imagery data.
Figure 27. Screenshot. ArcGIS image geoprocessing interface.
Figure 28. Flowchart. Processing workflow in ArcGIS platform.
Figure 29. Images. Examples of prediction results for RGB and IR orthomosaics in the ArcGIS platform.
LIST OF TABLES
Table 1. Interview participant positions and dates.
Table 2. Site locations for data collection.
Table 3. Mavic 2 Enterprise Advanced UAS specifications.
Table 4. Mavic 2 Enterprise Dual UAS specifications.
Table 5. Mavic 2 Pro and Matrice 200 UAS specifications.
Table 6. Sample of photogrammetry software available.
Table 7. GPS accuracy with/without RTK sensor.
Table 8. Performance of U-Net model for pixel-level crack segmentation.
LIST OF ACRONYMS

AI      Artificial intelligence
ASIFT   Affine scale-invariant feature transform
CNN     Convolutional neural network
CV      Computer vision
FAA     Federal Aviation Administration
FCN     Fully convolutional network
GDOT    Georgia Department of Transportation
GIS     Geographic information system
GPS     Global positioning system
GT      Georgia Institute of Technology
HPC     High-performance computing
IMU     Inertial measurement unit
IR      Infrared
ML      Machine learning
NCAT    National Center for Asphalt Technology
NDT     Nondestructive testing
OMAT    Office of Materials and Testing
OMR     Office of Materials and Research
ORB     Oriented FAST and Rotated BRIEF
OSD     Operational sequence diagram
R-CNN   Region-based CNN
RGB     Red, green, and blue
RTK     Real-time kinematic
UAS     Unmanned aerial system
UAV     Unmanned aerial vehicle
EXECUTIVE SUMMARY
A team of researchers from the Georgia Institute of Technology initiated a research project with the Georgia Department of Transportation (GDOT) in August 2020. The goal was to explore a drone-assisted workflow, employing unmanned aerial systems (UASs) and artificial intelligence (AI), for planning road pavement maintenance work as well as for post-maintenance work-quality assessment. Identifying near-surface voids was the primary aim. The research tasks consisted of a workflow analysis for GDOT pavement assessment, the development of a data-driven process to analyze images with the goals of identifying pavement problem areas requiring maintenance and assessing the quality of repairs, and the integration of a user-friendly interface for GDOT. The project used a UAS equipped with a global positioning system (GPS), inertial measurement units (IMUs), and camera sensors in the visible and infrared (IR) spectra to collect image data on pavement areas. In the workflow analysis, the research team identified the optimal integration points for AI and UAV use and provided a workflow diagram recommending the best use of the proposed system. In the visual data-driven pavement analysis process, the research team developed a machine learning-based method to detect anomalies that could be indicative of the presence of subsurface voids. The user interface incorporates the proposed system and provides clear and useful information for the GDOT Maintenance Department. The proposed solutions could support GDOT's task of verifying the performance of pavement contractors and expand the application of AI and drone technology within GDOT.
The project was performed over a two-year period, with research tasks including the definition of a drone-assisted workflow for identification of roadway pavement issues, the development of AI-driven analysis of drone-collected images from test sites, and the development of a method to visualize the results of the image analysis leveraging GDOT's existing GIS infrastructure and systems. In the evaluation of the proposed method, a total of four locations were used for field tests, including local roadways in Georgia and a test facility for various roadway material types in Alabama. Some of the roadways were recommended by GDOT personnel as locations suspected of subsurface issues. All the test sites included asphalt pavement roadways. During experiments, the drones were deployed to collect images at different elevations with visual and IR sensors. Flights were performed in both manual mode and automated mode with flight-path planning applications. The collected image data were processed with photogrammetry software that converts image data into 3D point cloud data. Tests with the 3D data were performed with the AI-driven image processing algorithm developed. The output of these results is provided via the user interface for GDOT within a geographic information system (GIS). Recommendations for the use of the proposed software are included to address UAS operations planning and execution, data collection equipment and data management, and user interface personnel requirements. The presented results could contribute to GDOT's plans for assessing the performance of pavement contractors.
Keywords: Unmanned Aerial Vehicles, Artificial Intelligence, User Interface, Operational Requirements, Technical Requirements, Inspections, Maintenance, Pavement, Workflow Optimization
CHAPTER 1. INTRODUCTION
The planning, monitoring, and performance verification of roadway maintenance tasks present several challenges. One example of such tasks is the identification of near-surface voids on roadway pavements. The Georgia Department of Transportation's (GDOT's) roadway maintenance planning requires that roadway areas scheduled for maintenance be provided to contractors before work can be performed. In addition, contractor performance of roadway maintenance must be evaluated for quality control by GDOT personnel. This labor-intensive and manual process sometimes misses areas not properly repaired by contractors, which may result in an inaccurate assessment of work performance and quality. With the recent adoption of unmanned aerial systems (UASs), or drones, by GDOT, the challenges GDOT faces in the identification of near-surface voids on roadway pavements could be overcome. By deploying drones to autonomously collect images in the infrared (IR) spectrum, in addition to the visible spectrum, the proposed research aims at developing a process to identify near-surface voids on roadway pavements. From this process, a pavement issue survey and quality report that GDOT could use to address pavement maintenance needs can be developed. This method can potentially provide more accurate information than field personnel inspections alone. The proposed research leverages drone technology and algorithms that rely on computer vision (CV), image processing techniques, and artificial intelligence (AI) to process the images of roadway pavement and identify areas with potential near-surface voids. The use of drones to collect images also addresses challenges with the current methods of task verification, such as collecting data over long distances as well as in dangerous traffic conditions. The successful development
of the envisioned workflow can lead to expanded applications of drone technology within GDOT.
PROBLEM STATEMENT
The use of drones by various departments of transportation (DOTs) has increased in recent years. Since 2008, many states have studied the application of drones for various tasks. Georgia DOT has researched the use of drones since 2014, proposing several uses of the technology. Irizarry and Johnson (2014) identified several applications of drone technology at GDOT, such as airport and railway inspections, bridge inspections, and road construction data collection, and tested them in the field (Irizarry et al. 2019). The current research employs proven methods such as UAS-based photogrammetry and image processing, which have been used to assess the condition of existing roadways (Zhang and Elaksher 2012; Hart and Gharaibeh 2011; Ruzgien et al.). The methods investigated showed low error rates between the products derived from the processed images and actual project data. An algorithm was developed to identify pavement cracks from images obtained with a UAS and to integrate the results with a geographic information system (GIS) to illustrate the identified cracks in Google Maps (Ersoz et al. 2017). UAS-based photogrammetry and thermography workflows have also been effectively developed to identify building envelope anomalies in real time and build three-dimensional (3D) representations of existing infrastructure (Rakha and Gorodetsky 2018). Rakha et al. (2018) explored the autonomous detection of irregularities in the built environment using CV techniques. The developed frameworks are designed to be combined with other nondestructive testing (NDT) techniques to detect cracks, abnormalities, or vulnerabilities in infrastructure assets (El Masri and Rakha 2020), such
as roadways (Saarenketo and Scullion 2000). This sets the groundwork for routine monitoring and evaluation of functioning roadway conditions. Such investigations have demonstrated the necessary proof-of-concept framework for the advanced development of UAS-based NDT workflows specific to roadway surface inspection as a means of diagnosing problems through anomaly detection. The research team leveraged existing methods, as well as their own, to achieve the goals of the project. This contributes to achieving operational implementation of drone-assisted assessment of roadway pavement issues leveraging image/signal processing, feature classification, change detection, and AI methods (Tsai et al. 2017, Tsai and Chatterjee 2017, Tsai et al. 2014, Kaul et al. 2010, Tsai et al. 2010, and Wu and Tsai 2006).
GOALS AND OBJECTIVES
The main objective of this research was to explore a drone-assisted workflow to be employed in planning road pavement maintenance work as well as in post-maintenance work-quality assessment. Identifying near-surface voids is the primary aim. The process would provide GDOT personnel with a report indicating locations of areas needing repair and, after repair, reports on the quality of work performed by GDOT maintenance contractors, considering the presence of previously observed anomalies. The workflow would use images (visual and IR) collected by drones and processed by algorithms developed at the Georgia Institute of Technology (GT) to identify problem areas and determine whether needed pavement repairs have been performed. An overview of the research methodology is provided in figure 1, and the objectives are described next.
Figure 1. Flowchart. Framework of the research methodology and required steps.
Objective 1. Drone-assisted Workflow Definition and Initial Data Collection
To accomplish this objective, a workflow analysis was performed to map current practices in the identification of road pavement problem areas and evaluation of maintenance contractor performance. In addition, data collection and reporting needs were identified. A drone-assisted workflow was developed and validated with GDOT personnel. Visual data in the visible and infrared spectrum were collected at various locations selected in consultation with GDOT maintenance personnel. Visual data (images) were used for the development and/or calibration of the image processing algorithms developed.
Objective 2. Development of Algorithm and Validation
An algorithm to analyze images with the goal of identifying pavement problem areas requiring maintenance as well as assessing the quality of repairs was developed and
validated. Image processing and machine learning methods available to the research team were employed. Additional data were collected for this task.

Objective 3. Test Case, Task Planning, and Performance Monitoring
The developed workflow and algorithm were implemented in test environments to assess the performance of the method in identifying problem areas. The method was also evaluated for data collection feasibility, processing time, identification of selected features, and accuracy of identification. The deliverable following data collection, algorithm development, and data analysis was a prototype report in the ArcGIS platform used by GDOT, which visualizes the image analysis results.
CHAPTER 2. BACKGROUND
As the nation's infrastructure grows, there will be an ever-increasing need to extend the service life and usability of highway infrastructure. State highway maintenance personnel are charged with monitoring road conditions, assessing levels of damage, and analyzing maintenance needs. Road conditions are generally considered poor when issues such as structural integrity, skid resistance, or surface defects are evident (Schnebele et al.). Investment in highway infrastructure maintenance is a driver of economic growth (U.S. Congress 2021). Inspection, repair, and post-repair inspection tasks are necessary for the prevention of pavement failure and the related repairs (Shaghlil and Khalafallah). UAVs, or drones, have the potential to facilitate infrastructure inspections, including of roadways, which is the area of interest in this research project.
A UAV is an aircraft that operates without a pilot aboard. Drones are commonly equipped with cameras capable of capturing high-resolution images and recording videos used to identify structural damage in infrastructure assets, among other uses (Eschmann et al.). A UAS incorporates the drone along with its related components, including command and control as well as navigation personnel (i.e., pilots). Together, these allow the operation of a UAV (Millian).
Benefits of UAVs include flexibility and low cost of implementation. Zakeri et al. point out:
"...the advantages of UAV systems are their low cost, fast speed, high maneuverability, and safety when collecting images. UAVs have already replaced satellites and manned vehicles in many applications where aerial images are needed. Moreover, they overcome the low flexibility and prohibitive cost involved in acquiring aerial imagery."
The major advantages of UAVs are their capability to cover large areas effectively and the high-resolution imagery they can acquire. A single UAV can cover a large area while identifying details at a 1 cm (about 0.39 inch) scale or smaller. Therefore, UAVs can be utilized to identify problem areas with high accuracy in infrastructure assets (Millian). Cost savings, capabilities, and flexibility are all drivers of the increased use of drones by DOTs in recent years, including GDOT, which has researched the use of drones since 2014 (Irizarry and Johnson 2014).
An important aspect of UAS implementation is the consideration of human factors. Transportation management tasks must always ensure the safety of DOT personnel, contractors, and the public. Human factors considerations of work processes, or "workflow," are central to safe human-UAV teaming (Alami et al. 2006, Hoffman and Breazeal 2007, Lasota and Shah 2015). Proper consideration of human factors in workflow design is integral to obtaining acceptance by the intended users. Human performance has been demonstrated to be highly dependent on workload (Parasuraman et al. 2018; Young et al. 2011). A key factor is maintaining human situational awareness; transportation personnel must be provided with the ability to overcome technology failure. Therefore, workflow design must consider possible human errors and prevent failures from relying heavily on
automation and subsequent degradation of situational awareness and direct control of the system (Kaber and Endsley 2004).
CHAPTER 3. METHODOLOGY
Research activities for this study involve several types of interaction with GDOT personnel to define tasks to be included in a drone-assisted workflow and the required field tests. The research encompasses five main activities:
1. Mapping of the current workflow for pavement maintenance-related tasks, specifically profile mapping.
2. (a) Drone-assisted workflow design, and (b) initial visual data collection with the selected sensors (red, green, blue [RGB]; and IR).
3. Development of image processing and classification algorithms, and testing with the data set from the initial data collection.
4. Data collection for validation of the analysis algorithms.
5. Data analysis and development of an ArcGIS connection for the results visualization report prototype.
ACTIVITY 1: MAPPING OF CURRENT WORKFLOW FOR PAVEMENT MAINTENANCE-RELATED TASKS, SPECIFICALLY, PROFILE MAPPING
The initial research task was the investigation and mapping of GDOT's current workflow for identification and repair analysis of near-surface voids and other pavement conditions. This task was undertaken to understand the data collection, analysis, and reporting practices currently employed to accomplish the required tasks. Due to limitations on personal contact implemented to reduce the spread of coronavirus disease 2019 (COVID-19), the current workflow had to be analyzed utilizing only remote techniques. This made the use of methodologies such as focus groups unfeasible. For this portion of the study, all contacts
with state DOT users were made through online Microsoft Teams meetings. Data gathering began with a general overview of current practices, collected through interviews with three GDOT state office managers. This afforded the researchers a high-level view of pavement repair practices.
The second part of this task was collection and analysis of current practices from GDOT district-level personnel, including the Office of Materials and Testing (OMAT). These officials have a more detailed and practical understanding of current workflow practices. District-level professionals, consultants, and materials laboratory personnel were interviewed. The positions and the dates of these interviews are outlined in table 1.
Table 1. Interview participant positions and dates.

Participant Position | District | Date
Maintenance Contracts Assistant | 1 | March 4, 2021
Assistant Area Engineer | 1 | March 4, 2021
Permits Inspection Engineer | 1 | March 4, 2021
Area Engineer | 2 | March 3, 2021
Assets Manager | 3 | March 5, 2021
Area Engineer | 3 | March 5, 2021
District Contracts Manager | 3 | March 5, 2021
District Maintenance Manager | 4 | March 5, 2021
Manager, Construction | 4 | February 24, 2021
District Maintenance Manager | 5 | February 24, 2021
District Construction Engineer | 5 | February 25, 2021
District Maintenance Engineer | 6 | February 25, 2021
District Maintenance Manager | 7 | February 17, 2021
District Area Engineer | 7 | February 17, 2021
Assistant Materials Engineer | OMAT | April 13, 2021
State Pavement Engineer | OMAT | April 13, 2021
Maintenance Design Manager | OMAT | April 13, 2021
Maintenance Liaison | State | March 30, 2021
State Maintenance Officer | State | March 30, 2021
District Consultant | Consultant | March 30, 2021
Consultant | Consultant | March 30, 2021
At the conclusion of these interviews, the current workflow was mapped using an operational sequence diagram (OSD). These diagrams document a sequence of control movements and/or information-collecting activities, executed to accomplish a task (Kirwan and Ainsworth 1992). Similar to detailed flow diagrams, OSDs permit limited branching, looping, and conditional activities. In this case, these diagrams are employed to illustrate operations associated with the current workflow. Operations are categorized as behavioral elements and assigned specific symbols. Categorization is achieved by applying both output and methodology criteria using five common symbols. These symbols define information flow and behavior and are similar to those used for process charts (Kirwan and Ainsworth 1992).
Once the workflow was diagrammed, it was validated by distributing the draft workflow diagram to the same professionals who took part in the initial interviews. Participants were asked to mark corrections and other modifications. A limited number of modifications were identified at this stage. To improve upon this validation, cognitive walkthrough interviews were conducted with the same users. Again, these interviews were conducted online using the Microsoft Teams platform due to health concerns related to COVID-19. Cognitive walkthroughs are a method used to evaluate user practices and interactions by analyzing the mental processes required to perform a task. The method is ideal for determining the sequence of events that take place in accomplishing a goal and is also useful in identifying problems with a workflow and the causes of those issues (Lewis and Wharton 1997).
The cognitive walkthroughs produced an extensive number of corrections and clarifications to the initial workflow diagram mapped previously. The result was a much higher-fidelity representation of the current workflows in use. The current
workflows are incorporated in figure 2 as the nonshaded blocks within the operational sequence diagram.
ACTIVITY 2A: DRONE-ASSISTED WORKFLOW DESIGN
In this activity, the research team analyzed the tasks performed by GDOT personnel to determine the appropriate UAS platform and related sensors to be integrated into the tasks. Appropriate points for the introduction of UAS technology into the current workflow were determined. Criteria for utilization of these technologies included improvements to data accuracy, reliability, collection time, cost, detail, and safety considerations. This evaluation was conducted collaboratively by the entire project team. Higher-level UAS tasks were then inserted into the workflow diagram.
Key higher-level UAS tasks were decomposed to identify new workflow components at a greater level of fidelity. Task decomposition methodologies helped identify greater detail related to the tasks of interest than experts typically offer, allowing task descriptions to become more detailed and informative (Kirwan and Ainsworth 1992).
The first step in task decomposition was to outline the general tasks identified by the project team. Descriptions were developed in adequate detail to clearly understand the activity required to complete the task. Tasks were divided into several subtasks to facilitate understanding. Next, each task element was decomposed, categorized, and labeled. At this point, the decomposed tasks were documented. The team was asked to add any missing detail about the subtask. The result was a compilation of highly detailed task descriptions for UAS-related modifications to the current GDOT workflow for pavement evaluation. Figure 2 represents the decomposition of the pavement void data processing task.
Following this step, a new round of cognitive walkthroughs was conducted with the seven GDOT district offices. This was to validate the ability of GDOT district personnel to implement the UAS workflow, as well as to confirm the effectiveness of the data collected by the UAS in meeting the needs of GDOT in pavement analysis. The cognitive walkthrough included the same domain experts who were interviewed in the previous step. The research team again relied on these walkthroughs to validate the expected effectiveness and feasibility of implementing the proposed set of UAS-assisted workflows. The UAS tasks presented during this set of cognitive walkthroughs are illustrated as the shaded blocks in figure 2. GDOT personnel included in this step provided several useful comments on advantages they saw in the data being collected using UAS, as compared to the limitations of the current methods. The GDOT participants did not request any changes.
Figure 2. Flowchart. Void detection workflow.
ACTIVITY 2B: INITIAL VISUAL DATA COLLECTION
During this task of the study, the team developed a schedule for the field data collection to be performed throughout the research period. Field data collection was performed according to the experimental design developed in the first activity. Data collection was performed in compliance with current Federal Aviation Administration (FAA) regulations applicable to the operation of UAS. Site locations are described in chapter 4, Field Data Collection.
ACTIVITY 3: DEVELOPMENT OF IMAGE PROCESSING AND CLASSIFICATION ALGORITHMS
The team developed algorithms aimed at autonomously detecting items such as voids, cracks, and other anomalies, to guide the detection process and increase robustness. Detection of objects in images involves extracting useful object features. The approach was based on deep learning, in which a machine learning (ML) model, specifically a deep convolutional neural network (CNN) feature descriptor, is trained with many example images of the object of interest together with some negative (non-object) images. In this project, a method taking advantage of such feature extraction approaches was developed. Based on the RGB and IR camera specifications, the team developed two neural networks, one for IR and one for RGB inputs.
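To make this concrete, the following minimal sketch shows the general shape of such a pixel-wise segmentation network in PyTorch. It is illustrative only: the layer sizes, channel counts, tile size, and training step are simplified assumptions, not the project's actual network (the U-Net used is detailed in chapter 6).

    # Minimal encoder-decoder CNN for pixel-wise crack segmentation,
    # in the spirit of the U-Net approach described in chapter 6.
    # All sizes here are illustrative assumptions.
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: downsample while extracting crack-like features.
            self.enc = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # Decoder: upsample back to the input resolution.
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 2, stride=2),
            )

        def forward(self, x):
            # Output is a per-pixel logit; sigmoid(logit) > 0.5 marks "crack".
            return self.dec(self.enc(x))

    model = TinySegNet()
    rgb_batch = torch.rand(4, 3, 256, 256)   # four 256x256 RGB tiles
    mask_logits = model(rgb_batch)           # shape: (4, 1, 256, 256)
    loss = nn.BCEWithLogitsLoss()(mask_logits, torch.zeros_like(mask_logits))
    print(mask_logits.shape, loss.item())

An IR-input variant of the same idea would change only the number of input channels to match the thermal imagery.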
ACTIVITY 4: DATA COLLECTION FOR VALIDATION OF ANALYSIS ALGORITHMS
This activity involved additional field data collection following the workflow for the collection of images to be processed by the developed analysis method. The research team
arranged to have GDOT personnel involved in the tasks benefiting from UAS integration participate in data collection activities. As with the initial data collection activities, flights were performed in compliance with current FAA regulations applicable to the operation of UASs.

ACTIVITY 5: DATA ANALYSIS AND DEVELOPMENT OF ARCGIS CONNECTION FOR RESULTS VISUALIZATION REPORT PROTOTYPE
In this activity, a UAS-based application framework for the collection of data, its analysis, and the identification of anomalies, such as cracks and voids, was developed. A connection with the ArcGIS platform was developed to provide GDOT with a prototype of a report that visualizes the results of the image-processing algorithms by showing anomalies on a georeferenced map. Additional data for validation of the prototype were collected.
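As a simple illustration of the kind of ArcGIS connection involved, the sketch below uploads a georeferenced detection overlay to an ArcGIS portal using the ArcGIS API for Python. The portal URL, credentials, item metadata, and file name are placeholder assumptions, not the project's actual configuration, which is described in chapter 7.

    # Minimal sketch of pushing a georeferenced analysis result to ArcGIS
    # for map-based review. Portal URL, login, and file names are placeholders.
    from arcgis.gis import GIS

    gis = GIS("https://www.arcgis.com", "username", "password")  # placeholder login

    # A GeoTIFF orthomosaic exported from photogrammetry carries its own
    # georeferencing, so ArcGIS can place the overlay on the map automatically.
    item = gis.content.add(
        {
            "title": "Pavement anomaly overlay - Henry County",
            "type": "Image",
            "tags": "UAS, pavement, voids",
        },
        data="anomaly_orthomosaic.tif",  # placeholder path to the detection overlay
    )
    print("Uploaded item id:", item.id)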
CHAPTER 4. FIELD DATA COLLECTION
TEST SITE SELECTION
Sites in three counties and one out-of-state site were identified for data collection. Locations are listed in table 2. Data collection sites were recommended by GDOT personnel based on the potential presence of desired pavement anomalies, such as voids, for identification. Table 2 includes information on these sites, including name, location, date visited, number of flights performed, and amount of data collected. Figure 3 illustrates the location of the sites where data were collected. Figure 4 provides examples of aerial images collected at various sites. The markers present in the aerial photos illustrate the locations of the images captured.
Table 2. Site locations for data collection.

Site Name | Location | Date Visited | No. of Flights | Images Captured
Henry County | 1522 S Zack Hinton Pkwy, McDonough, GA | May 6, 2021 | 3 | 414
Henry County | 1522 S Zack Hinton Pkwy, McDonough, GA | May 24, 2021 | 3 | 349
Walker County | Main Street, SR-1 Business Route, LaFayette, GA | May 17, 2021 | 12 | 542
Wilkinson County 1 | GA 29, McIntyre, GA | May 20, 2021 | 4 | 288
Wilkinson County 2 | GA 29, Irwinton, GA | May 20, 2021 | 1 | 92
National Center for Asphalt Technology (NCAT) | 1600 Lee Road 151, Opelika, AL 36804 | Oct. 26, 2020; Dec. 2, 2020; Dec. 11, 2020 | 35 | 1,020
Figure 3. Map. Site locations for data collection.
Figure 4. Photos. Sample aerial images of data collection sites.

FIELD DATA COLLECTION RESULTS

Henry County
Field data collection at the Henry County location was performed on May 6 and 24, 2021. The site was located on South Zack Hinton Pkwy in McDonough, Georgia, approximately 35 miles from the Georgia Tech campus. The data collection environment is illustrated in figure 5.
Figure 5. Map. Henry County data collection site location.

A GDOT representative was present at the data collection site. A Mavic 2 Enterprise Advanced UAS was used to collect the images (see figure 6). Specifications for the UAS unit used in the data collection are shown in table 3. Three flights were performed, and a total of 414 images were collected. Flights were performed in manual control mode, as automated flight mode was not readily available on the UAS unit used in the data collection session. Figure 7 illustrates the data collection session.
Figure 6. Photo. Mavic 2 Enterprise Advanced UAS used in data collection.
Table 3. Mavic 2 Enterprise Advanced UAS specifications.

Mavic 2 Enterprise Advanced Specifications

Aircraft
  Takeoff Weight (without accessories): 909 g
  Max Speed: 50 kph (P-mode, without wind)
  Max Service Ceiling Above Sea Level: 6000 m
  Max Flight Time: 31 min (measured while flying at 25 kph in windless conditions); 28 min (RTK module attached)
  Operating Temperature: -10°C to 40°C
  Operating Frequency: 2.400-2.483 GHz; 5.725-5.850 GHz
  GNSS: GPS+GLONASS
  Internal Storage: 24 GB

M2EA Thermal Camera
  Sensor: Uncooled VOx microbolometer
  Focal Length: approx. 9 mm (35 mm format equivalent: approx. 38 mm)
  Sensor Resolution: 640×512 @ 30 Hz
  Accuracy of Thermal Temperature Measurement: ±2°C or ±2%, whichever is greater
  Scene Range: -40°C to 150°C (High Gain); -40°C to 550°C (Low Gain)
  Digital Zoom: 16×
  Pixel Pitch: 12 μm
  Spectral Band: 8-14 μm
  Photo Format: R-JPEG
  Video Format: MP4
  Metering Method: Spot Meter, Area Measurement

M2EA Visual Camera
  Sensor: 1/2" CMOS; Effective Pixels: 48 M
  Lens: FOV: 84°; 35 mm format equivalent: 24 mm; Aperture: f/2.8; Focus: 1 m to infinity
  ISO Range: Video: 100-12800 (auto); Photos: 100-1600 (auto)
  Digital Zoom: 32×
  Max Image Size: 8000×6000
  Video Resolution: 3840×2160 @ 30 fps; 1920×1080 @ 30 fps
  Photo Format: JPEG
  Video Format: MP4

Sensing System: Omnidirectional Obstacle Sensing
  Forward: Precision Measurement Range: 0.5-20 m; Detectable Range: 20-40 m; Effective Sensing Speed: 14 m/s; FOV: Horizontal 40°, Vertical 70°
  Backward: Precision Measurement Range: 0.5-16 m; Detectable Range: 16-32 m; Effective Sensing Speed: 12 m/s; FOV: Horizontal 60°, Vertical 77°
  Upward: Precision Measurement Range: 0.1-8 m
  Downward: Precision Measurement Range: 0.5-11 m; Detectable Range: 11-22 m
  Sides: Precision Measurement Range: 0.5-10 m; Effective Sensing Speed: 8 m/s; FOV: Horizontal 80°, Vertical 65°

Remote Controller
  Operating Frequency: 2.400-2.483 GHz; 5.725-5.850 GHz
  Max Transmission Distance (unobstructed, free of interference): FCC: 10000 m
  Storage: ROM 16 GB + microSD extensible storage
  Operating Temperature: -20°C to 40°C

Intelligent Flight Battery
  Capacity: 3850 mAh
  Battery Type: LiPo

RTK Module
  Dimensions: 69 mm × 69 mm × 59 mm
  Connections: Micro USB port
  RTK Positioning Precision in RTK FIX: 1 cm + 1 ppm (horizontal); 1.5 cm + 1 ppm (vertical)

SD Cards
  Supported SD Cards: microSD with capacity of up to 128 GB; a UHS-I Speed Grade 3 rating microSD card is required.
Figure 7. Photos. Images of data collection activity in Henry County.

Walker County
The field test at Walker County was conducted on May 17, 2021. The site is located on Main Street, also known as the SR-1 Business Route, in LaFayette, Georgia, approximately 97 miles from the Georgia Tech campus (see figure 8). Sample images collected can be seen in figure 10. A representative from the GT research team collected data at the site. A Mavic 2 Enterprise Dual UAS (see figure 9) was used to collect the images. Specifications for the UAS unit used in data collection are shown in table 4. Twelve flights were carried out, and a total of 542 images were collected.
Figure 8. Map. LaFayette, Georgia, data collection site location.
Figure 9. Photo. Mavic 2 Enterprise Dual UAS used in data collection.
Table 4. Mavic 2 Enterprise Dual UAS specifications.

Mavic 2 Enterprise Dual Specifications

Aircraft
  Takeoff Weight (without accessories): Dual Edition: 899 g
  Max Speed (near sea level, no wind): 72 kph (S-mode); 50 kph (P-mode)
  Max Service Ceiling Above Sea Level: 6000 m
  Max Flight Time (no wind): 31 min (at a consistent speed of 25 kph)
  Max Hovering Time (no wind): 29 min
  Operating Temperature Range: -10°C to 40°C
  GNSS: GPS+GLONASS
  Internal Storage: 24 GB

M2ED Thermal Camera
  Sensor: Uncooled VOx microbolometer
  Lens: HFOV: 57°; Aperture: f/1.1
  Sensor Resolution: 160×120
  Pixel Pitch: 12 μm
  Spectral Band: 8-14 μm
  Image Size: 640×480 (4:3); 640×360 (16:9)
  Photo Format: JPEG

M2ED Visual Camera
  Sensor: 1/2.3" CMOS; Effective Pixels: 12 M
  Lens: FOV: approx. 85°; 35 mm format equivalent: 24 mm; Aperture: f/2.8; Focus: 0.5 m to infinity
  ISO Range: Video: 100-12800 (auto); Photo: 100-1600 (auto)
  Max Image Size: 4056×3040 (4:3); 4056×2280 (16:9)
  Photo Format: JPEG

Sensing System: Omnidirectional Obstacle Sensing
  Forward: Precision Measurement Range: 0.5-20 m; Detectable Range: 20-40 m; Effective Sensing Speed: 14 m/s; FOV: Horizontal 40°, Vertical 70°
  Backward: Precision Measurement Range: 0.5-16 m; Detectable Range: 16-32 m; Effective Sensing Speed: 12 m/s; FOV: Horizontal 60°, Vertical 77°
  Upward: Precision Measurement Range: 0.1-8 m
  Downward: Precision Measurement Range: 0.5-11 m; Detectable Range: 11-22 m
  Sides: Precision Measurement Range: 0.5-10 m; Effective Sensing Speed: 8 m/s; FOV: Horizontal 80°, Vertical 65°

Remote Controller
  Operating Frequency: 2.400-2.483 GHz; 5.725-5.850 GHz
  Max Transmission Distance (unobstructed, free of interference): FCC: 10000 m
  Operating Temperature Range: 0°C to 40°C
  Battery: 3950 mAh
  Charging Time: 2 hours 15 min

Battery
  Charging Temperature: 5°C to 40°C
  Operating Temperature Range: -10°C to 40°C
  Charging Time: 90 min

SD Cards
  Supported SD Cards: microSD with capacity of up to 128 GB; a UHS-I Speed Grade 3 rating microSD card is required.
Figure 10. Photos. Sample aerial images of data collection site in Walker County.
Wilkinson County (McIntyre and Irwinton)
The field tests in Wilkinson County were conducted on May 20, 2021. The sites are located in McIntyre, Georgia, and Irwinton, Georgia, approximately 120 miles from the Georgia Tech campus (see figure 11). A representative from the GT research team was present at both tests. A Mavic 2 Enterprise Advanced UAS was used to collect the images. Five flights were carried out: four at McIntyre, with 288 images collected, and one at Irwinton, with 92 images collected.
Figure 11. Map. Wilkinson County data collection site locations.

National Center for Asphalt Technology
The field tests at the NCAT Test Track facility were conducted on October 26, 2020; December 2, 2020; and December 11, 2020. The site is located in Opelika, Alabama, approximately 115 miles from the Georgia Tech campus (see figure 12). The team collected images of various segments of the track with known anomalies (types of damage). Two types of UAS were used to collect thermal and RGB images. The images were used to develop the computer vision workflow for identifying anomalies with the UAS-based images.
Figure 12. Map. NCAT test track data collection site location.

Three members from the GT research team were present at the facility for the tests. A Mavic 2 Enterprise Dual UAS, a Mavic 2 Pro UAS, and a Matrice 200 UAS (see figure 13) were used to collect the images. Specifications for the Mavic 2 Pro and the Matrice 200 UAS are shown in table 5. The total number of flights carried out at the NCAT facility was 35, and a total of 1,020 images were collected. Sample images from data collection are shown in figure 14.
(a) Mavic 2 Pro UAS
(b) Matrice 200 UAS

Figure 13. Photo. UAS units used in data collection at the NCAT Test Track.
Table 5. Mavic 2 Pro and Matrice 200 UAS specifications.

Matrice 200 UAS Specifications
  Max Takeoff Weight: 6.14 kg
  Max Payload (2 TB50): approx. 2.34 kg (with two standard batteries)
  Max Payload (2 TB55): approx. 1.61 kg (with two standard batteries)
  Hovering Accuracy (P-mode with GPS): Vertical: ±1.64 ft (±0.5 m) or ±0.33 ft (±0.1 m, Downward Vision System enabled); Horizontal: ±4.92 ft (±1.5 m) or ±0.98 ft (±0.3 m, Downward Vision System enabled)
  Max Speed: A-mode: 51.4 mph (82.8 kph)
  Max Service Ceiling Above Sea Level: 1.86 mi (3000 m)
  Max Wind Resistance: 39.4 ft/s (12 m/s)
  Max Flight Time (no payload, with TB55): 38 min
  Operating Temperature: -4°F to 113°F (-20°C to 45°C)
  Forward Vision System: Obstacle Sensing Range: 2.3-98.4 ft (0.7-30 m); FOV: Horizontal 60°, Vertical 54°; Operating Environment: surfaces with clear patterns and adequate lighting (>15 lux)
  Downward Vision System: Velocity Range: <32.8 ft/s (10 m/s) at the height of 6.56 ft (2 m); Altitude Range: <32.8 ft (10 m); Operating Range: <32.8 ft (10 m); Operating Environment: surfaces with clear patterns and adequate lighting (>15 lux)
  Ultrasonic Sensor: Operating Range: 0.33-16.4 ft (10-500 cm); Operating Environment: non-absorbing material, rigid surface (thick indoor carpeting will reduce performance)
  Upward Infrared Sensor: Obstacle Sensing Range: 0-16.4 ft (0-5 m); FOV: ±5°; Operating Environment: large-sized object with diffuse reflecting surface or high reflective rate (>10%)
  Remote Controller: Model: GL6D10A; Operating Frequency: 2.400-2.483 GHz, 5.725-5.850 GHz; Max Transmitting Distance (unobstructed, free of interference): 2.4 GHz: 4.3 mi (7 km, FCC), 2.2 mi (3.5 km, CE), 2.5 mi (4 km, SRRC); 5.8 GHz: 4.3 mi (7 km, FCC), 1.2 mi (2 km, CE), 3.1 mi (5 km, SRRC); Video Output Ports: USB, HDMI; Power Supply: built-in battery; Charging: DJI charger; Dual User Capability: host-and-slave connection; Mobile Device Holder: tablet or smartphone; Max Mobile Device Width: 170 mm; Operating Temperature: -4°F to 104°F (-20°C to 40°C); Battery: 6000 mAh 2S LiPo

Mavic 2 Pro Specifications
  Camera Sensor: 1" CMOS; Effective Pixels: 20 million
  Lens: FOV: about 77°; 35 mm format equivalent: 28 mm; Aperture: f/2.8-f/11; Shooting Range: 1 m to infinity
  ISO Range: Video: 100-6400; Photo: 100-3200 (auto), 100-12800 (manual)
  Still Image Size: 5472×3648
  Video Resolution: 4K: 3840×2160 24/25/30p; 2.7K: 2688×1512 24/25/30/48/50/60p; FHD: 1920×1080 24/25/30/48/50/60/120p
  Photo Format: JPEG / DNG (RAW)
  Video Format: MP4 / MOV (MPEG-4 AVC/H.264, HEVC/H.265)
  Takeoff Weight: 907 g
  Max Speed (near sea level, no wind): 72 kph (S-mode)
  Maximum Takeoff Altitude: 6000 m
  Max Flight Time (no wind): 31 minutes (at a consistent 25 kph)
  Max Flight Distance (no wind): 18 km (at a consistent 50 kph)
  Operating Temperature Range: -10°C to 40°C
  GNSS: GPS+GLONASS
  Internal Storage: 8 GB
  Sensing System: Omnidirectional Obstacle Sensing
    Forward: Precision Measurement Range: 0.5-20 m; Detectable Range: 20-40 m; Effective Sensing Speed: 14 m/s; FOV: Horizontal 40°, Vertical 70°
    Backward: Precision Measurement Range: 0.5-16 m; Detectable Range: 16-32 m; Effective Sensing Speed: 12 m/s; FOV: Horizontal 60°, Vertical 77°
    Upward: Precision Measurement Range: 0.1-8 m
    Downward: Precision Measurement Range: 0.5-11 m; Detectable Range: 11-22 m
    Sides: Precision Measurement Range: 0.5-10 m; Effective Sensing Speed: 8 m/s; FOV: Horizontal 80°, Vertical 65°
  Remote Controller: Operating Frequency: 2.400-2.483 GHz; 5.725-5.850 GHz; Max Transmission Distance (unobstructed, free of interference): FCC: 10000 m; Operating Temperature Range: 0°C to 40°C; Battery: 3950 mAh
  Intelligent Flight Battery: Capacity: 3850 mAh; Voltage: 15.4 V; Charging Temperature Range: 5°C to 40°C
  Supported SD Cards: microSD with capacity of up to 128 GB; a UHS-I Speed Grade 3 rating microSD card is required.
Figure 14. Photos. Sample images of data collection activity at the NCAT Test Track.
UAS Incident Report
Two flights were successfully conducted on December 2, 2020, with the Matrice 200 UAS unit. During an attempt to land, the drone collided with a post and experienced a hard landing. This resulted in a damaged rotary blade, landing gear connector, and two landing gears. There were no injuries to participants in the flights, which included only research team members. The area was secured, and there were no nonflight participants present. The data collection activity continued with the Mavic 2 Pro unit. Images of the damage experienced are shown in figure 15. This incident provided an opportunity to learn about the potential for equipment issues during data collection.
Figure 15. Photos. Damage experienced during flight incident.
CHAPTER 5. DATA PREPROCESSING: DUAL IMAGE UNDISTORTION

Preprocessing of the paired digital and infrared images collected by UAVs is necessary to correct distortion issues before aligning and fusing the two types of imagery data to detect anomalies that can be attributed to the presence of voids in pavements. Detailed procedures are presented with examples to illustrate the preprocessing method and its performance. Dual-camera drones can collect a pair of infrared and visible-spectrum digital images simultaneously. However, different lenses are used for capturing the infrared and digital images. Some distortion may occur in the images due to the curvilinear bending of light through the lenses. Figure 16 includes sample RGB and IR images in which road marking lines or pavement edges appear curved, although they are straight in the field. This distortion inconsistency causes difficulties in the alignment and fusion of the IR and RGB spectral imagery data, which is important for detecting potential voids in pavement surfaces.
Figure 16. Images. Distortion in RGB and IR image sets from different dual-camera drones.
A solution to the distortion issue was developed through camera calibration using the structure-from-motion (SfM) algorithm, which yields adjusted camera calibration parameters. The undistortion itself can then be performed automatically with functions available in the OpenCV library. Figure 17 shows an example of digital image undistortion using the calibrated camera parameters. After undistortion, the paired infrared and digital images can be successfully aligned and overlaid with each other by image-matching techniques.
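As an illustration of this step, the minimal sketch below applies OpenCV's calibration-based undistortion to a single image. The intrinsic matrix, distortion coefficients, and file names are placeholder values standing in for the parameters estimated by the SfM calibration, not the project's actual numbers.

    import cv2
    import numpy as np

    # Placeholder intrinsics and distortion coefficients (k1, k2, p1, p2, k3)
    # standing in for the values estimated by the SfM calibration step.
    K = np.array([[3666.7, 0.0, 2736.0],
                  [0.0, 3666.7, 1824.0],
                  [0.0, 0.0, 1.0]])
    dist = np.array([-0.27, 0.11, 0.0, 0.0, -0.02])

    img = cv2.imread("rgb_frame.jpg")
    h, w = img.shape[:2]

    # Refine the camera matrix so the undistorted image keeps only valid pixels.
    new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0)
    undistorted = cv2.undistort(img, K, dist, None, new_K)

    # Crop to the valid region reported by OpenCV and save the result.
    x, y, rw, rh = roi
    cv2.imwrite("rgb_frame_undistorted.jpg", undistorted[y:y + rh, x:x + rw])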
Figure 17. Images. Image undistortion using calibrated camera parameters.
PHOTOGRAMMETRY PROCESSING

Photogrammetry Software Selection

Several software applications can be used to enable the UAS-assisted tasks tested in this research. Visual data collected with UAS platforms can be processed into graphical representations such as 3D models and orthomosaic maps, which, in turn, allow for the inspection, surveying, mapping, and monitoring of infrastructure, among other tasks. Some available applications include: PIX4Dmapper, DroneDeploy, Agisoft Metashape, Autodesk ReCap, and PhotoModeler UAS. Table 6 provides information on each application referenced.
Table 6. Sample of photogrammetry software available.

Application: PIX4Dmapper
  Measurement Features: Polylines, distance, surface, volume
  Processing Mode: Cloud & local
  Cost (per licensed user): $1,900 Educational; $3,500/year Professional
  Product Website: https://pix4d.com/wp-content/uploads/2017/10/Pix4DmapperV4.0-Feature-List_NEW_version-m.pdf

Application: DroneDeploy
  Measurement Features: Volume, crop health, roof
  Processing Mode: Local
  Cost (per licensed user): $399 per user/month
  Product Website: https://prismic-io.s3.amazonaws.com/dronedeploywww%2Fd25e2331-b928-471d-9c6b-fbbc4f7e456b_dronedeploypricing-comparison.pdf

Application: Agisoft Metashape
  Measurement Features: Coordinate, distance, area, volume
  Processing Mode: Local
  Cost (per licensed user): $179 Standard Edition (one computer)
  Product Website: http://www.agisoft.com/pdf/photoscan_presentation.pdf

Application: Autodesk ReCap
  Measurement Features: Ortho distances, pipe diameters, angles, snap to objects
  Processing Mode: Cloud
  Cost (per licensed user): $300 annually
  Product Website: https://www.autodesk.com/products/recap/overview

Application: PhotoModeler UAS
  Measurement Features: Volume, terrain contour
  Processing Mode: Local
  Cost (per licensed user): $3,995 permanent license; $2,075 annually; $199 monthly
  Product Website: http://www.photomodeler.com/products/UAS/default.html
Agisoft Metashape and PIX4Dmapper were selected as the primary data processing software tools to be tested for this research. These two applications are highly compatible with the drone units employed on the project. Metashape processes data locally, whereas PIX4Dmapper allows both local and cloud processing. Metashape can automatically transform coordinate systems between global and local systems; PIX4Dmapper can automatically separate a dense point cloud into five groups: ground, road surfaces, buildings, high vegetation, and human-made objects. Both applications support the measurement of distance, area, and volume. Finally, both Metashape and PIX4Dmapper offer a "floating license" feature that allows a license to be deactivated on one computer and activated on another at any time in case a new computer is required, providing flexibility since the license can also be migrated from one user to another.
Photogrammetry Process

The photogrammetry processing workflow includes: (1) calculating camera calibration parameters, (2) aligning photos and generating a sparse cloud, (3) building a dense cloud, and (4) building the orthomosaic. Detailed instructions for using the photogrammetry software are described in Irizarry et al. (2019). The outputs (figure 18) after photogrammetry processing mainly include the 3D dense point cloud and 2D orthomosaic images containing the global positioning system (GPS) world file, which can be used for automatic georegistration in a GIS map.
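These four steps can also be scripted. The sketch below outlines them using Agisoft Metashape's Python scripting interface as one hedged example of automating the workflow; method names follow the 1.x API and vary across versions, and the file paths are placeholders rather than the project's actual data.

    import Metashape

    doc = Metashape.Document()
    chunk = doc.addChunk()
    chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG"])  # UAV image set (placeholder paths)

    # Steps (1)-(2): match features, solve camera calibration, build the sparse cloud.
    chunk.matchPhotos(downscale=1, generic_preselection=True, reference_preselection=True)
    chunk.alignCameras()

    # Step (3): depth maps and dense point cloud.
    chunk.buildDepthMaps(downscale=2)
    chunk.buildDenseCloud()

    # Step (4): elevation surface, orthomosaic, and georeferenced raster export.
    chunk.buildDem(source_data=Metashape.DenseCloudData)
    chunk.buildOrthomosaic(surface_data=Metashape.ElevationData)
    chunk.exportRaster("orthomosaic.tif")
    doc.save("pavement_project.psx")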
Figure 18. Images. Examples of photogrammetry processing results.
Processing with RTK
The processing of IR images using photogrammetry software is more challenging because of their small resolution, around 500 × 400 pixels. The meter-level horizontal and vertical GPS errors (table 7) interfere with detecting and matching imagery feature keypoints within sparse image sets. This project tested a drone equipped with a real-time kinematic (RTK) sensor, which improved the horizontal and vertical GPS accuracies to centimeter-level errors (table 7). With the RTK sensor, the photogrammetry processing of IR images is more stable and more often successful than without it. An example is presented in figure 19.
Table 7. GPS accuracy with/without RTK sensor.

GPS Accuracy | Horizontal Error | Vertical Error
Without RTK  | 5 m              | 10 m
With RTK     | 0.015 m          | 0.030 m
Figure 19. Images. Comparison of IR photogrammetry processing with/without RTK sensor.
CHAPTER 6. DATA ANALYSIS
ALGORITHM FOR RGB IMAGE ANALYSIS

The following sections introduce the technical concepts behind the image processing required to analyze images collected by drone. They provide GDOT personnel with an understanding of the background process that results in the visualization of anomalies detected in pavement surfaces. The type of anomaly found through the processing of the images was cracks in the pavement.
Segmentation of Crack Pixels by Deep Learning
The segmentation of crack pixels within each patch classified as a "crack" supports the measurement and assessment of crack-related anomalies in the pavement surface. Semantic segmentation is a pixel-level classification task that aims at clustering pixels that belong to the same category; it provides the most detailed information about the target objects in a given image. Existing approaches for semantic segmentation mostly utilize deep convolutional neural networks such as Mask R-CNN (region-based CNN) and fully convolutional networks (FCN). Among them, U-Net is a symmetric neural network built upon FCNs that is specialized in extracting high-resolution features combined with their spatial information, which leads to fast and precise segmentation of objects with limited training images. The approach was first put forward by Ronneberger et al. (2015) for biomedical image segmentation tasks, and U-Net networks have recently been proven effective and accurate in concrete crack detection (Cheng et al. 2018, Liu and El-Gohary 2019). Figure 20 presents the architecture of the U-Net neural network for pixel-wise pavement crack segmentation.
Figure 20. Diagram. Architecture of U-Net neural network for the pixel-wise segmentation of pavement crack.
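As a concrete, hedged illustration of the architecture in figure 20, the sketch below builds a compact U-Net in Keras: a contracting path that downsamples while doubling filters, a bottleneck, and an expanding path whose transposed convolutions are concatenated with the matching skip connections. The input size and filter counts are illustrative, not the exact configuration trained in this project.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def conv_block(x, filters):
        # Two 3x3 convolutions, as in a standard U-Net stage
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    def build_unet(input_shape=(256, 256, 3)):
        inputs = layers.Input(input_shape)
        skips, x = [], inputs
        for f in (64, 128, 256, 512):                  # contracting path
            x = conv_block(x, f)
            skips.append(x)                            # keep high-resolution features
            x = layers.MaxPooling2D(2)(x)
        x = conv_block(x, 1024)                        # bottleneck
        for f, skip in zip((512, 256, 128, 64), reversed(skips)):  # expanding path
            x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
            x = layers.Concatenate()([x, skip])        # skip connection
            x = conv_block(x, f)
        # One sigmoid channel: per-pixel crack probability in [0, 1]
        outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
        return Model(inputs, outputs)

    model = build_unet()
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])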
Performance Evaluation

For the training and validation of the U-Net crack-pixel segmentation model, 614 and 175 sets of images and masks were used, respectively. Three metrics were used to evaluate the training and validation process: loss, intersection over union (IoU), and accuracy. The IoU is the ratio of the intersection area of the predicted crack pixels and the ground-truth crack pixels to their union area. Figure 21 shows the change in loss, IoU, and accuracy values throughout the training and evaluation process, which indicates good convergence of the neural network. The trained U-Net model reached an accuracy of more than 0.96 with an IoU of about 0.6 for both the training and validation datasets.
Figure 21. Charts. Loss, IoU, and accuracy of the trained set and validation set for U-Net.
The trained U-Net crack segmentation model was tested with the testing dataset to evaluate model quality. The U-Net model predicts a value between 0 and 1 for each pixel; when the predicted value is equal to or greater than the threshold, the pixel is considered a crack, otherwise a noncrack. The precision, recall, and F1-score for the crack and noncrack pixel predictions in the testing dataset can then be calculated. As shown in table 8, 96.00 percent of the pixels predicted as cracks in the testing images were true crack pixels (precision), and 95.22 percent of the true crack pixels were detected (recall). The precision, recall, and F1-score were above 97 percent for the training and validation datasets and around 96 percent for the testing dataset, indicating high performance with slight overfitting.
Table 8. Performance of U-Net model for pixel-level crack segmentation.
Metric        | Training dataset | Validation dataset | Testing dataset
Precision (%) | 97.47            | 97.28              | 96.00
Recall (%)    | 97.95            | 97.23              | 95.22
F1-score (%)  | 97.70            | 97.26              | 95.61
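For reference, the metrics in table 8 can be computed from a thresholded probability map and its ground-truth mask as in the short sketch below; the function and variable names are illustrative, not taken from the project code.

    import numpy as np

    def pixel_metrics(prob, truth, threshold=0.5):
        """Precision, recall, F1-score, and IoU for a crack-probability map."""
        pred = prob >= threshold          # threshold the per-pixel predictions
        truth = truth.astype(bool)
        tp = np.sum(pred & truth)         # correctly predicted crack pixels
        fp = np.sum(pred & ~truth)        # noncrack pixels flagged as crack
        fn = np.sum(~pred & truth)        # crack pixels that were missed
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        iou = tp / (tp + fp + fn)         # intersection over union of crack pixels
        return precision, recall, f1, iou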
Figure 22 shows an example of the detection results on UAV-captured high-definition images using the proposed method. Most of the major crack anomalies could be detected. Moreover, the number and percentage of crack pixels within each UAV image can also be calculated, which is useful for estimating the crack coverage area; this is done by setting up a measuring scale with reference to the field of view of each image, as sketched below.
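Once a pixel scale is fixed, the coverage estimate reduces to simple arithmetic. A minimal sketch, assuming a hypothetical ground sampling distance derived from the camera's field of view and flight altitude, and a placeholder mask file:

    import numpy as np

    mask = np.load("crack_mask.npy").astype(bool)   # binary U-Net output (placeholder file)

    gsd = 0.005                                     # assumed ground sampling distance, m/pixel
    crack_pixels = int(mask.sum())
    crack_percent = 100.0 * crack_pixels / mask.size
    crack_area_m2 = crack_pixels * gsd ** 2         # each pixel covers gsd x gsd meters

    print(f"{crack_pixels} crack pixels ({crack_percent:.2f}% of image), "
          f"~{crack_area_m2:.2f} m^2 of cracking")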
Figure 22. Images. Examples of crack segmentation results using the trained U-Net model.
Computing Processing Environment Requirements
The training of a deep neural network model with hundreds to thousands of neurons is computationally demanding. The research team explored several options for computing environments, and the high-performance computing (HPC) resources at Georgia Tech were used during the later stages of the research. By assigning a specific number of computing nodes to the central processing unit (CPU) cores, HPC supported the customized computing needs of complex neural network training and fast data processing (Kenneally 2018). Given the volume of training data and the neural network complexity, the research utilized 1 node with a total of 32 CPU cores for 24 hours to train the U-Net model for the semantic segmentation of crack pixels. The need for such computing resources revealed a potential challenge to implementing the developed method. Once models are trained, the computing needs for processing are not as demanding; however, if models are to be trained to identify additional anomalies, HPC resources would be needed.
ALGORITHM FOR THERMAL INFRARED IMAGE ANALYSIS
Segmentation of Crack Pixels by Sobel Edge
Pavement cracks and other leakage issues mostly show clear boundaries in a thermal image. To accelerate processing, the analysis of infrared images uses a classic image-processing method, the Sobel edge detector, to identify crack pixels. The Sobel operator estimates local differences in pixel intensity, flagging areas of extreme intensity change as object edges. As shown in figure 23, all crack-shaped anomalies in the infrared image were successfully detected using the Sobel edge method.
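A minimal OpenCV version of this idea is sketched below: compute the horizontal and vertical Sobel gradients of the IR frame, take the gradient magnitude, and keep the highest-gradient pixels as crack candidates. The file names and the percentile threshold are illustrative choices, not the project's tuned values.

    import cv2
    import numpy as np

    ir = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder IR image
    ir = cv2.GaussianBlur(ir, (3, 3), 0)                    # suppress sensor noise first

    gx = cv2.Sobel(ir, cv2.CV_64F, 1, 0, ksize=3)           # horizontal intensity gradient
    gy = cv2.Sobel(ir, cv2.CV_64F, 0, 1, ksize=3)           # vertical intensity gradient
    magnitude = cv2.magnitude(gx, gy)

    # Keep pixels with extreme intensity change as edge/crack candidates.
    edges = (magnitude > np.percentile(magnitude, 95)).astype(np.uint8) * 255
    cv2.imwrite("ir_crack_candidates.png", edges)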
Figure 23. Images. Identification of cracks in infrared images by the Sobel edge detector.
The "cracks" detected by the Sobel edge detector show different temperature patterns in an infrared image, which can help classify them into various categories. By referring to the corresponding RGB image, detected cracks with a higher temperature (bright yellow) in the infrared image are patches used to fix crack issues, while those with a lower temperature (dark purple) are more likely to be true cracks containing water leakage or air voids. Therefore, it is useful to fuse the detection results and information from both IR and RGB images to improve the analysis of cracking issues on a pavement surface, which could provide evidence of conditions that could lead to the development of subsurface voids in the pavements inspected.
Fusion of RGB and IR Imagery Information

The fusion of RGB and IR images can improve the analysis of detected anomalies. Figure 24 shows that the scope and angle of each set of IR and RGB images are quite different. This project applied a computer vision matching algorithm called "ORB-ASIFT" to automate the transformation and alignment of the two types of images. ORB (Oriented FAST and Rotated BRIEF) is an image feature detector specializing in rotation invariance, achieved by computing the intensity-weighted centroid of a patch with its located corner at the center. ASIFT (affine scale-invariant feature transform) is another detector, specialized in scale and tilt invariance. The ORB-ASIFT feature detector therefore integrates invariance to rotation, scale, and tilt. Figure 24 shows an example of alignment by the ORB-ASIFT matching algorithm. The computed matching and transformation matrix can be reused for future image sets, since the camera parameters usually remain constant when flying a drone at a constant altitude and angle.
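ASIFT is not part of core OpenCV, so the hedged sketch below illustrates only the ORB half of such a matcher: detect ORB features in both frames, match them, robustly estimate a homography with RANSAC, and warp the IR frame into the RGB frame's coordinates. File names are placeholders, and the estimated matrix can be cached and reused across image sets, as noted above.

    import cv2
    import numpy as np

    rgb = cv2.imread("rgb_frame.jpg", cv2.IMREAD_GRAYSCALE)
    ir = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(ir, None)
    kp2, des2 = orb.detectAndCompute(rgb, None)

    # Hamming distance suits ORB's binary descriptors; keep the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects bad matches while estimating the IR-to-RGB transformation.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the IR frame into the RGB frame's coordinate system for fusion.
    aligned_ir = cv2.warpPerspective(ir, H, (rgb.shape[1], rgb.shape[0]))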
Figure 24. Images. IR and RGB alignment by ORB-ASIFT matching.

After the alignment of each set of IR and RGB images, the prediction results from the IR and RGB images can be fused to detect cracks with potential risks of water or air voids. Figure 25 shows an example of fusing the IR and RGB prediction results, produced by their respective algorithms, to differentiate true cracks from fixed crack patches.
Figure 25. Images. Differentiation of true cracks and fixed crack patches by fusing IR and RGB predictions.
Additionally, the fusion of IR and RGB images can be used to display cracks with their temperatures visualized in distinct colors. The image analysis steps can be simplified as: (1) segment crack pixels in RGB images with the trained U-Net model; (2) dilate the crack pixels and multiply them with the IR temperature data; and (3) threshold the temperature to categorize the detected cracks. Figure 26 shows examples of the fusion results. Since these images were taken in winter, cracking regions with a lower temperature are more likely to contain voids of air or water near the pavement surface.
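A minimal sketch of these three steps, assuming the IR temperature raster has already been aligned to the RGB frame, with illustrative file names and a median-based threshold standing in for the project's tuned value:

    import cv2
    import numpy as np

    # Step (1): binary crack mask from the trained U-Net model (placeholder file).
    crack_mask = cv2.imread("crack_mask.png", cv2.IMREAD_GRAYSCALE) > 0
    temperature = np.load("ir_temperature.npy")   # per-pixel temps aligned to the RGB frame

    # Step (2): dilate the thin crack pixels, then sample the IR temperatures under them.
    dilated = cv2.dilate(crack_mask.astype(np.uint8), np.ones((5, 5), np.uint8)) > 0
    crack_temps = np.where(dilated, temperature, np.nan)

    # Step (3): threshold temperature to categorize the detected cracks.
    threshold = np.nanmedian(crack_temps)
    likely_voids = dilated & (temperature < threshold)     # colder: possible water/air voids
    likely_patches = dilated & (temperature >= threshold)  # warmer: likely repaired patches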
Figure 26. Images. Detecting potential thermal voids by fused RGB and IR imagery data.
CHAPTER 7. VISUALIZATION AND DOCUMENTATION IN ARCGIS

ARCGIS WORK PLATFORM

The analysis of IR and RGB images can be executed within the ArcGIS platform through a user-friendly geoprocessing interface. As shown in figure 27, a raster image in the GIS map can be geoprocessed by the previously developed deep learning or image-processing algorithms within the GIS platform. By selecting the "Input Raster," defining the "Output Path," and clicking "Run" in the user interface, the RGB or IR images are processed together with the embedded geospatial information. After processing, the prediction results are automatically georegistered to the basemap for localization and visualization.
Figure 27. Screenshot. ArcGIS image geoprocessing interface.
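A hedged sketch of how such a geoprocessing tool could be wired with arcpy is shown below. The two script-tool parameters mirror the "Input Raster" and "Output Path" fields in figure 27; a simple gradient filter stands in for the trained U-Net/Sobel analysis, and a single-band raster is assumed.

    import arcpy
    import numpy as np

    # Script-tool parameters, wired to the "Input Raster" and "Output Path" fields.
    in_raster = arcpy.GetParameterAsText(0)
    out_path = arcpy.GetParameterAsText(1)

    array = arcpy.RasterToNumPyArray(in_raster).astype(float)

    # Stand-in for the trained analysis models: flag high-gradient pixels.
    gy, gx = np.gradient(array)
    magnitude = np.hypot(gx, gy)
    prediction = (magnitude > np.percentile(magnitude, 95)).astype(np.uint8)

    # Re-attach the input's georeferencing so the result georegisters on the basemap.
    src = arcpy.Raster(in_raster)
    origin = arcpy.Point(src.extent.XMin, src.extent.YMin)
    out = arcpy.NumPyArrayToRaster(prediction, origin, src.meanCellWidth, src.meanCellHeight)
    out.save(out_path)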
Processing Workflow in the ArcGIS Platform

ArcGIS can provide a user-friendly interface for the processing of UAV-captured images. The entire post-processing workflow is presented in figure 28. After sets of RGB and IR images are captured, they are imported into photogrammetry software (e.g., Metashape, PIX4D) to generate RGB and IR orthomosaic panorama images and their georeferencing world files. With the world file, each orthomosaic panorama image is assigned geo-coordinates so it can be directly imported and georegistered into the world map. Then, the georegistered RGB orthomosaic image can be analyzed to detect cracks by the trained U-Net model using an ArcGIS Python geoprocessing tool. Similarly, the georegistered IR orthomosaic image can be analyzed by the Sobel edge algorithm within the same tool. Finally, the prediction results from the registered RGB and IR orthomosaic images can be fused to differentiate crack voids from fixed crack patches. Figure 29 shows an example of the prediction results using this workflow.
Figure 28. Flowchart. Processing workflow in ArcGIS platform.
Figure 29. Images. Examples of prediction results for RGB and IR orthomosaics in the ArcGIS platform.
BENEFITS AND LIMITATIONS

Executing the image analysis within the ArcGIS platform has multiple benefits. First, the detected pavement anomalies can be geolocated and measured for condition rating. Second, the embedded geodatabase provides a data management platform to document the multi-sourced imagery data and prediction data, and the powerful visualization functions in ArcGIS can be used to annotate and display different anomalies. Moreover, the geodatabase can also support the documentation of time-series inspection data so that road conditions can be compared before and after maintenance as an evaluation of contractor performance. Lastly, using similar workflows, the ArcGIS platform can support the image-based detection of many other pavement anomalies, such as potholes and delamination, by employing the image-analysis Python code embedded in the platform.
However, the current ArcGIS platform has some limitations. The major constraint is that the U-Net model was trained to analyze high-resolution UAV-captured RGB images; its performance is significantly reduced when processing orthomosaic panorama RGB images. In the future, a deep learning model specialized in processing orthomosaic RGB images could be trained to improve prediction accuracy. Additionally, the detection of near-surface voids focused on exploring voids beneath visible cracking regions, and the data on invisible near-surface voids needed to validate the assumptions were lacking. With the development of nondestructive testing (NDT) inspection technologies, including those based on unmanned aerial vehicle (UAV) applications, the detection of invisible near-surface anomalies could become more accurate and efficient.
CHAPTER 8. CONCLUSIONS AND FUTURE WORK
In this study, the researchers developed a workflow that leverages the use of unmanned aerial systems, or drones, to identify anomalies that could indicate the presence of near-surface voids in roadway pavements. Images of areas of interest were obtained by flying drone data-collection missions and later processed using computing algorithms. This process can produce a pavement issue survey and report that GDOT can employ to address pavement maintenance needs with more accuracy.

The research leveraged algorithms that rely on computer vision, image-processing techniques, and artificial intelligence. These methods process RGB and IR images of roadway pavement and present the results in the ArcGIS platform that GDOT currently uses for various asset management applications. Fusing the RGB data with IR images, as tested during the research, provided the added benefit of differentiating between anomaly types such as cracks and patches. This indicates that, with additional testing image sets, the system could improve the capability of directly identifying other types of anomalies, including subsurface voids.

The UAS-integrated workflow was validated through cognitive walkthroughs at the district level. Results indicated that the UAS procedure outlined in this report can be successfully integrated into the existing GDOT pavement inspection and maintenance workflows. One of the main advantages of the drone-assisted workflow is that entire cross sections of roadway can be assessed instead of smaller portions, as with current methods.
The pavement profiling research that was conducted faces three limitations worth noting: data collection, near-surface void detection, and technical equipment handling. The research was limited to five test sites and 58 flights, collecting just over 2,700 images; thus, the test sites were not representative of all geographical locations in the state. Another limitation was the narrow range of times of year and times of day: most of the images were collected during the winter and early summer months, as accessible locations were not available at other times that aligned with the research schedule. By collecting more images across a greater sample of locations and conditions, the algorithms can adjust to a larger sample set and, in turn, produce more robust and accurate outputs. Operating machine learning algorithms to detect near-surface voids requires a significant number of IR and RGB training image sets containing near-surface void patterns; however, such image data are limited. Future attempts could introduce sensors like ground-penetrating radar (GPR) to help scan pavement sections and capture more spots for data collection. Finally, implementing the developed workflow requires the skills of (1) an experienced drone pilot to safely fly the equipment over areas of interest following existing FAA regulations and GDOT guidelines for the use of UAS in the field, and (2) a skilled ArcGIS operator to perform the data analysis tasks.
Future work related to the use of UAS for pavement inspections could include:
Testing of the entire data-collection and analysis-workflow process with GDOT district personnel, who would perform all the required tasks. Human factors as well as technology-related issues could be examined to determine whether adjustments to the process are needed.

Collection of additional images from a wider range of geographic locations, environmental conditions, and times of day during varied times of the year. This expanded dataset can improve the performance of the algorithms trained to identify pavement anomalies and even expand the range of anomaly types identified.

Development of controlled field experiments to recreate scenarios for which anomaly identification is desired, which would benefit GDOT applications of enhanced identification methods.

In summary, the successful implementation of the developed workflow can lead to expanded applications of drone technology within GDOT in the future and automate both the identification of anomalies and the verification of remediation performance.
ACKNOWLEDGMENTS

The research team acknowledges the Georgia Department of Transportation for its support of the research project, and thanks the following individuals for their direct involvement and contributions to the project:
1. Members of the GDOT Technical Advisory Committee and Supriya Kamatkar
2. Yasser El Masri, Graduate Research Assistant, Georgia Institute of Technology
3. Steven Kangisser, Graduate Research Assistant, Georgia Institute of Technology
4. Hala Alfalih, Graduate Research Assistant, Georgia Institute of Technology
5. Samantha Morton, Graduate Research Assistant, Georgia Institute of Technology
6. Personnel from the Maintenance group at GDOT
7. Industry partner: pavement inspection contractors
REFERENCES
Alami, R., Chatila, R., Clodic, A., Fleury, S., Herrb, M., Montreuil, V., and Sisbot, E.A. (2006). "Towards Human-aware Cognitive Robots." Proceedings of the Fifth International Cognitive Robotics Workshop (the AAAI-06 Workshop on Cognitive Robotics), pp. 39–46, Association for the Advancement of Artificial Intelligence, Palo Alto, CA.

Cheng, J., Xiong, W., Chen, W., Gu, Y., and Li, Y. (2018). "Pixel-level Crack Detection using U-Net." TENCON 2018 – 2018 IEEE Region 10 Conference, pp. 0462–0466.

El Masri, Y. and Rakha, T. (2020). "A Scoping Review of Non-destructive Testing (NDT) Techniques in Building Performance Diagnostic Inspections." Construction and Building Materials, 265, 120542.

Ersoz, A.B., Pekcan, O., and Teke, T. (2017). "Crack Identification for Rigid Pavements Using Unmanned Aerial Vehicles." IOP Conference Series: Materials Science and Engineering, 236(1), p. 12101.

Eschmann, C., Kuo, C.M., Kuo, C.H., and Boller, C. (2012). "Unmanned Aircraft Systems for Remote Building Inspection and Monitoring." Proceedings of the 6th European Workshop on Structural Health Monitoring, Dresden, Germany.

Hart, W.S. and Gharaibeh, N.G. (2011). "Use of Micro Unmanned Aerial Vehicles in Roadside Condition Surveys." First Congress of Transportation and Development Institute (TDI), March 13–16, Chicago, IL.

Hoffman, G. and Breazeal, C. (2007). "Effects of Anticipatory Action on Human–Robot Teamwork Efficiency, Fluency, and Perception of the Team." Proceedings of the ACM/IEEE International Conference on Human–Robot Interaction, March, pp. 1–8.

Huston, D., Hu, J.Q., Maser, K., Weedon, W., and Adam, C. (2000). "GIMA Ground Penetrating Radar System for Monitoring Concrete Bridge Decks." Journal of Applied Geophysics, 43(2–4), pp. 139–146.
Irizarry, J. and Johnson, E. (2014). "Feasibility Study to Determine the Economic and Operational Benefits of Utilizing Unmanned Aerial Vehicles (UAVs)." Final Report, Research Project 12-38, Georgia Department of Transportation, Atlanta, GA.
Irizarry, J., Johnson, E., Kim, S., Lee, K., and Paes, D. (2019). "Field Test Based Guidelines Development for the Integration of Unmanned Aerial Systems (UASs) in GDOT Operations." Technical Report FHWA-GA-19-1609, Georgia Department of Transportation, Atlanta, GA.
Kaber, D. and Endsley, M. (2004). "The Effects of Level of Automation and Adaptive Automation on Human Performance, Situation Awareness and Workload in a Dynamic Control Task." Theoretical Issues in Ergonomics Science, 5(2), pp. 113–153.

Kaul, V., Tsai, Y.J., and Mersereau, R.M. (2010). "Quantitative Performance Evaluation Algorithms for Pavement Distress Segmentation." Transportation Research Record: Journal of the Transportation Research Board, 2153(1), pp. 106–113.

Kenneally, J. (2018). The Technology Stacks of High Performance Computing and Big Data Computing: What They Can Learn from Each Other. European Technology Platform for High Performance Computing (ETP4HPC), Big Data Value Association, The Netherlands.

Kirwan, B. and Ainsworth, L.K. (eds.). (1992). A Guide to Task Analysis: The Task Analysis Working Group. CRC Press, London.

Lasota, P.A. and Shah, J.A. (2015). "Analyzing the Effects of Human-Aware Motion Planning on Close-Proximity Human–Robot Collaboration." Human Factors: The Journal of the Human Factors and Ergonomics Society, 57(1), pp. 21–33.

Lewis, C. and Wharton, C. (1997). "Cognitive Walkthroughs." Handbook of Human–Computer Interaction, 2nd edition, Elsevier Science, Amsterdam, pp. 717–732.

Liu, P.C.-Y. and El-Gohary, N. (2019). "Automatic Annotation of Web Images for Domain-Specific Crack Classification." Advances in Informatics and Computing in Civil and Construction Engineering, I. Mutis and T. Hartmann, eds., pp. 553–560, Springer International Publishing, Cham.

Parasuraman, R., Sheridan, T.B., and Wickens, C.D. (2008). "Situation Awareness, Mental Workload, and Trust in Automation: Viable, Empirically Supported Cognitive Engineering Constructs." Journal of Cognitive Engineering and Decision Making, 2(2), pp. 140–160.

Rakha, T. and Gorodetsky, A. (2018). "Review of Unmanned Aerial System (UAS) Applications in the Built Environment: Towards Automated Building Inspection Procedures Using Drones." Automation in Construction, 93, pp. 252–264.
Rakha, T., Liberty, A., Gorodetsky, A., Kakillioglu, B., and Velipasalar, S. (2018). "Heat Mapping Drones: An Autonomous Computer-Vision-Based Procedure for Building Envelope Inspection Using Unmanned Aerial Systems (UAS)." Technology|Architecture + Design, 2(1).
Rodriguez-Millian, J. D. (2019). "Towards the Application of UAS for Road Maintenance at the Norvik Port."
Ronneberger, O., Fischer, P., and Brox, T. (2015). "U-Net: Convolutional Networks for Biomedical Image Segmentation." International Conference on Medical Image Computing and Computer-Assisted Intervention, arXiv:1505.04597 [cs], Springer, pp. 234–241.
Saarenketo, T. and Scullion, T. (2000). "Road Evaluation with Ground Penetrating Radar." Journal of Applied Geophysics, 43(2–4), pp. 119–139.

Shaghlil, N. and Khalafallah, A. (2018). "Automating Highway Infrastructure Maintenance Using Unmanned Aerial Vehicles." Construction Research Congress, pp. 24.

Schnebele, E., Tanyu, B.F., Cervone, G., and Waters, A.N. (2015). "Review of Remote Sensing Methodologies for Pavement Management and Assessment." European Transport Research Review, 7(2), pp. 1–19.

Tsai, Y.J., Hu, Z., and Alberti, C. (2010). "Detection of Roadway Sign Condition Changes using Multi-Scale Sign Image Matching (M-SIM)." Photogrammetric Engineering & Remote Sensing, 76(4), pp. 391–405.

Tsai, Y.-C.J., Chatterjee, A., and Jiang, C. (2017, August). "Challenges and Lessons from the Successful Implementation of Automated Road Condition Surveys on a Large Highway System." 2017 25th European Signal Processing Conference (EUSIPCO), pp. 2031–2035.

Tsai, Y.-C.J. and Chatterjee, A. (2017). "Comprehensive, Quantitative Crack Detection Algorithm Performance Evaluation System." Journal of Computing in Civil Engineering, 31(5).

Tsai, Y.-C.J., Jiang, C., and Huang, Y. (2014). "Multiscale Crack Fundamental Element Model for Real-World Pavement Crack Classification." Journal of Computing in Civil Engineering, 28(4).

U.S. Congress (2021). "H.R. 3684 – Infrastructure Investment and Jobs Act." Passed November 15, 2021.

Wu, J. and Tsai, Y.J. (2006). "Enhanced Roadway Geometry Data Collection Using an Effective Video Log Image-Processing Algorithm." Transportation Research Record: Journal of the Transportation Research Board, 1972(1), pp. 133–140.

Young, M.S., Birrell, S.A., and Stanton, N.A. (2011). "Safe Driving in a Green World: A Review of Driver Performance Benchmarks and Technologies to Support 'Smart' Driving." Applied Ergonomics, 42(4), pp. 533–539.

Zakeri, H., Nejad, F.M., and Fahimifar, A. (2016). "Rahbin: A Quadcopter Unmanned Aerial Vehicle Based on a Systematic Image Processing Approach Toward an Automated Asphalt Pavement Inspection." Automation in Construction, 72, pp. 211–235.

Zhang, C. and Elaksher, A. (2012). "An Unmanned Aerial Vehicle-Based Imaging System for 3D Measurement of Unpaved Road Surface Distresses." Computer-Aided Civil and Infrastructure Engineering, 27(2), pp. 118–129.