Information services in social networked transportation: governance and ITS

GEORGIA DOT RESEARCH PROJECT 12-26 FINAL REPORT
INFORMATION SERVICES IN SOCIAL NETWORKED TRANSPORTATION: GOVERNANCE AND ITS
OFFICE OF RESEARCH


GDOT Research Project RP12-26
Final Report
INFORMATION SERVICES IN SOCIAL NETWORKED TRANSPORTATION: GOVERNANCE AND ITS
By Dr. Hans Klein, Associate Professor, School of Public Policy, Georgia Institute of Technology
Dr. Kari E. Watkins, Assistant Professor, School of Civil and Environmental Engineering, Georgia Institute of Technology
Contract with Georgia Department of Transportation
In cooperation with U.S. Department of Transportation, Federal Highway Administration
June 2014
The contents of this report reflect the views of the author(s) who is (are) responsible for the facts and the accuracy of the data presented herein. The contents do not necessarily reflect the official views or policies of the Georgia Department of Transportation or the Federal Highway Administration. This report does not constitute a standard, specification, or regulation.

1. Report No.: FHWA-GA-14-1226
2. Government Accession No.:
3. Recipient's Catalog No.:
4. Title and Subtitle: Information Services in Social Networked Transportation: Governance and ITS
5. Report Date: June 2014
6. Performing Organization Code:
7. Author(s): Dr. Hans Klein (P.I.), Dr. Kari E. Watkins, PE (P.I.), James Wong, Landon Reed, Victor Wanningen, Bingling Zhang
8. Performing Organization Report No.:
9. Performing Organization Name and Address: Georgia Institute of Technology, School of Civil and Environmental Engineering, School of Public Policy
10. Work Unit No.:
11. Contract or Grant No.: GDOT Research Project No. 0010766 (RP 12-26; UTC Sub-Project)
12. Sponsoring Agency Name and Address: Georgia Department of Transportation, Office of Materials & Research, 15 Kennedy Drive, Forest Park, GA 30297-2534
13. Type of Report and Period Covered: Final; May 2012-June 2014
14. Sponsoring Agency Code:
15. Supplementary Notes: Prepared in cooperation with the U.S. Department of Transportation, Federal Highway Administration.
16. Abstract:
This research seeks to understand the functions and benefits of social networked transportation (SNT), the processes that make SNT possible, and the institutional innovations needed to facilitate those processes. First, the research examines the design of procedures for standards-setting, using real-time transit data standards from both public and private organizations as a set of case studies. Second, it identifies and analyzes an emerging data network in transportation, linking traffic management centers (TMCs) and third-party data providers, via a web-based survey of TMC managers. Third, it pursues an understanding of foundational principles of and strategies for social networking, drawing lessons from successful social networks in the IT sector (i.e., the Internet) and from emergent social networks in other sectors (i.e., energy). Finally, a graduate-level course and an application developer conference called Transportation Camp South are discussed as methods to move into the future. It is expected that the results of this research will interest a wide audience, from transportation researchers to field practitioners.
17. Key Words: Probe-based data; third-party data; traffic management centers; highways; transit; standard setting
18. Distribution Statement:
19. Security Classification (of this report): Unclassified
20. Security Classification (of this page): Unclassified
21. Number of Pages: 209
22. Price:
Form DOT 1700.7 (8-69)


TABLE OF CONTENTS
Executive Summary .......... 11
Chapter 1: Background .......... 15
Chapter 2: Course on Social Networked Transportation .......... 18
Chapter 3: Transit Information Standard Development .......... 20
    Introduction .......... 20
        Scope .......... 23
    Background .......... 26
        Real-time Transit Information .......... 26
        The Need for ITS Data Standards .......... 30
    Literature Review .......... 37
        Standards Development Theory .......... 37
    Real-Time Transit Standards Development .......... 60
        Methodology .......... 60
        Case Studies .......... 63
        Comparison of Standards and Standards Development Processes .......... 83
    Recommendations .......... 87
        Moving Ahead for Innovation in the 21st Century .......... 87
        Predictions for Continued Trends .......... 88
        Federal Policy Recommendations .......... 90
    Conclusions .......... 92
        Key Findings .......... 93
        Future Work .......... 94
Chapter 4: Traffic Management Centers and Third-Party Data .......... 97
    Literature Review .......... 97
        Traffic Engineering .......... 97
        Traffic Management Centers .......... 100
        Changes in Traffic Sensing Technology .......... 104
        Networked Government .......... 105
        Public Private Partnerships .......... 106
        Risk Assessment .......... 107
    Methodology .......... 107
    Survey Results and Analysis .......... 109
        Existing Risk .......... 111
        Real-Time Third-Party Data .......... 113
        Hypothetical Use .......... 114
    Discussion and Conclusions .......... 119
Chapter 5: Intelligent Systems in the Transportation and Energy Sectors .......... 121
    Introduction .......... 121
    Conceptual Frame of Reference .......... 123
        Governance Framework .......... 123
        Layered Model of Internet Connectivity .......... 124
        Conceptual Heuristic for the Comparative Analysis .......... 128
    Results: Comparative Analysis .......... 129
        Characteristics of Operational Domain .......... 129
        Institutions and Prescriptions .......... 133
        Network Applications Areas .......... 135
    Conclusions .......... 142
        Link Layer: Need for Wireless .......... 142
        Latency .......... 143
        Uncertainty, Information, and Value .......... 144
        Locus of Innovation .......... 145
        Governance .......... 146
Chapter 6: Conclusions and Recommendations for Further Research .......... 147
References .......... 150
Appendix A: Materials from Course on Social Networked Transportation .......... A-1
Appendix B: Additional Questions and Responses from TMC Survey .......... B-1

LIST OF TABLES
Table 1: Agency responses to question on underutilized AVL functions .......... 28
Table 2: Comparison of key program interests for ITS in 2000 and 2013 .......... 31
Table 3: Importance of open standards requirements to different stakeholders .......... 57
Table 4: Openness index scores for real-time transit passenger information .......... 84
Table 5: Average adoption rate for real-time standards .......... 87

LIST OF FIGURES
Figure 1: Growth of transit agencies with open data by passenger miles served .......... 21
Figure 2: Diversity of technology and equipment vendors for AVL systems .......... 30
Figure 3: Growth of open source lines of code from 1995 to 2006 .......... 59
Figure 4: Adoption of GTFS by U.S. transit agencies .......... 65
Figure 5: Number of documented changes for GTFS vs. GTFS-realtime .......... 69
Figure 6: Diagram of TCIP Model Architecture .......... 73
Figure 7: Diagram of conceptual hierarchy for TCIP building blocks .......... 74
Figure 8: Participants by sector in TCIP Passenger Information Technical Working Group .......... 75
Figure 9: Adoption of real-time data standards .......... 86
Figure 10: Reasons given by transit agencies for not providing public arrival times .......... 90
Figure 11: Issues agencies have with adoption of open standards for real-time data .......... 95
Figure 12: Relationships among traffic speed, flow, and density .......... 98
Figure 14: Primary and secondary functions of traffic management centers .......... 110
Figure 15: Types of end point equipment used for traffic monitoring .......... 111
Figure 16: Risks and potential impacts on the traffic management system .......... 113
Figure 17: Reasons traffic management centers choose not to use third-party data .......... 115
Figure 18: Information traffic management centers would consider purchasing from a third-party vendor .......... 116
Figure 19: TMC required understanding and transparency of third-party data .......... 118

Executive Summary
Over the past twenty years, the transportation sector has experienced an information technology (IT) revolution as the national program in Intelligent Transportation Systems (ITS) planned and launched a wide variety of IT-based systems. Today, the transportation sector is poised for a second IT-driven revolution even more far-reaching than the first.
In this project, we call the second IT revolution "social networked transportation" (SNT). SNT realizes the functionality of social networks, already well known in the IT sector, in the transportation sector as well. Social networked transportation leverages pre-existing IT investments to realize new services and functions that significantly enhance mobility. Based on the experiences of other sectors in the economy where social networking is well underway, social networked transportation is predicted to require less investment than even traditional ITS while generating similarly enormous benefits.
This research combines research in social networking and research in transportation to achieve useful insights into social networked transportation. The project analyzes information flows and institutions in surface transportation in order to promote new information services. It attempts to illuminate the evolving role of state DOTs as transportation becomes more information intensive.
Traditionally, transportation is understood as the physical displacement of people, goods, and vehicles, with information technology used to model the system or to optimize it. Now, however, information is the essence of the system. In the social networked paradigm, transportation is reconceptualized as an information ecosystem in an institutional landscape.
Transportation consists of information services, i.e. data that are generated, exchanged, combined, processed, packaged, and distributed among institutions in

government, industry and the consumer market to perform a variety of functions. Information services provide both qualitative improvements to transportation (new services) and quantitative improvements (better performance of traditional transportation functions).
This research seeks to understand the functions and the benefits of SNT, the processes that make SNT possible, and the institutional innovations needed to facilitate those processes. The research focus is three-fold. First, it examines the design of procedures for standards-setting. As the transportation sector fully integrates with information technology, transit agencies face decisions that expose them to new technologies, relationships, and risks. With the rise in transit-related web and mobile applications, a set of competing real-time transit data standards from both public and private organizations has emerged. Case studies and interviews were conducted with members of standard-setting organizations of the three real-time transit data standards: the General Transit Feed Specification Realtime (GTFS-realtime), the Service Interface for Real Time Information (SIRI), and Transit Communications Interface Profiles (TCIP). This analysis produced an assessment of federal policy on standards development as well as current and future trends in this sector, both technical and institutional. The results will inform federal transit policy and future action in standards-setting and intelligent transportation systems (ITS) requirements, identifying the potential catalysts that will increase the effectiveness of federal and agency-level programs.
Secondly, the research identifies and analyzes emerging data networks in transportation, focusing on traffic management centers and their attitudes towards third-party data. Existing traffic monitoring systems in the United States are largely infrastructure-based networks with end-point devices to report spot traffic speed, density, and counts to centralized traffic management centers (TMCs). Emerging probe technologies, such as GPS-enabled devices, offer new traffic monitoring methods for

TMCs. These technologies involve a fundamental change in TMCs' organizational structure; however, public agencies are unlikely to have the in-house expertise to deploy and manage large-scale mobile data aggregation. In these situations, agencies will likely require participation by third-parties as technology partners. The objective of this research is to assess the readiness of TMCs and their managing agencies to adopt third-party, probe-based data for traffic monitoring and the associated organizational changes. This research does not explore the technological options for probe-based data, but rather assumes the existence of a third-party data product that can provide real-time speed data for roadway segments based on mobile device data. The major findings from the web-based survey of TMC managers are that agencies are already exposed to third-party risks; that the industry is likely to have a large transition period in which agencies build confidence in the adequacy and permanence of the third-party data; and that third-parties will need to increase transparency and openness around the technology to build trust amongst agencies.
Thirdly, this research pursues understanding foundational principles of and strategies for social networking, taking lessons from successful social networks in the IT sector (i.e. the Internet), and lessons from emergent social networks in other sectors (i.e. energy). A comparative analysis was done identifying the similarities and differences between the application of the Internet to the transportation sector (intelligent transportation systems) and the energy sector (smart grid systems). A conceptual framework was developed to compare the two sectors. Researchers conclude that the transportation sector may present greater challenges to network adoption and may also be more radically transformed by networks. Where the Internet is appropriate, the costly uncertainty that characterizes the transportation sector means that innovation could happen rapidly. However, where its latency characteristics render the Internet inappropriate, it remains to be seen whether the transportation sector can successfully

develop totally new protocols and media. As new players enter the transportation sector, effective control of operations could be affected.
Ultimately, the research discusses lessons for the future and offers strategic directions and promising approaches. A graduate level course was developed to teach students about the ITS system and its relationship with the Internet. Additionally, funding from this project was used to host the second annual Transportation Camp South where transportation professionals discussed the future of the industry. It is expected that the results of this research will interest a wide audience, from transportation researchers to field practitioners.

Chapter 1: Background
Social networks join previously autonomous people and devices into connected networks that enable previously scattered data and intelligence to unite in an information ecosystem rich with functionality. Understanding how social networks work and how they are achieved can help in creating systems of great value.
Social networking has already taken off in the IT sector. The Internet is the most notable example. The Internet is merely a software protocol that runs on top of a preexisting infrastructure of computers and data networks. As it developed, the Internet required minimal investment in new hardware, networks, or skills, yet it unleashed staggeringly large amounts of additional value from those assets. Other social networks have unleashed comparable gains. Google's search engine, for example, connects users with pre-existing content and thereby creates vast value. Likewise, Craigslist.com facilitates transactions between pre-existing sellers and buyers, and in so doing has fundamentally changed consumer markets.
Realizing the benefits of social networks and social networked transportation requires understanding how they are created. Perhaps the most important insight here is that the creation of social networks involves at least as much institutional innovation as technological innovation. For most such networks, the underlying infrastructure already exists, but its utility can only be unleashed through social processes that are institutional in nature.
Three essential elements of social networks were identified and traced back to institutional innovation. The three core elements of social networks are technical standard-setting, network interconnection and application development.
Technical standards are data formats and communication protocols. A community that develops such standards creates the preconditions for wide communication. Standardized data mean that the content of communication can be understood by

everyone in the community, and standardized protocols mean that the content can be accessed throughout the community. An information-rich environment is possible when data can be widely shared and understood.
The main challenge to standardization is institutional in nature, not technical. It is difficult for a community to achieve widespread agreement among its numerous members. Often technical standards are only achieved after the creation of appropriate institutions, institutions that make collective dialogue and agreement possible. Understanding technical standard-setting processes and the institutions that embody them is crucial for creating social networks.
The second core element in social networking is network interconnection. Standards make it possible to connect, but ultimately members of the community have to decide to seize the opportunity to connect. As with technical standard-setting, the challenge to interconnecting is institutional, not technical.
The decision to interconnect is made in the context of organizational policies, plans, and agreements. Interconnection may be impossible without changes to attitudes, rules, and even property rights. Again, institutional innovation is required. Understanding interconnection decisions requires awareness of historical, cultural, and organizational factors, factors which are often summed up as "That is just the way we do things around here." Practical insights into interconnection involve the study of incentives, organizational change strategies, and cost and benefit calculations.
The third core element in social networking is application development. Once data are standardized and interconnected, it still remains to develop applications to convert that data into useful information. Application development is more of a technological task, but often the core challenge is institutional in nature. Good applications route around institutional barriers, unite collaborators, and create incentives for participation. Systems designed for one context are likely to yield insights for developers

working in other contexts. It seems likely that transportation could learn from other sectors, such as the energy sector where smart grids are being realized through the application of social networking strategies.
The importance of institutions in social networks can be summed up in one word: governance. Social networks emerge where governance institutions emerge that allow for collective decisions on standards, interconnection, and application development. The benefits of social networking have been demonstrated in the IT sector. Researchers are only now beginning to understand the dynamics of their creation and the strategies for achieving them. The application of these insights to surface transportation remains to be done, as does the development of practical strategies for achieving social networked transportation.
This report proceeds as follows. Chapter 2 discusses a course developed in Social Networked Transportation which focused on the topic of transportation and the Internet. Chapter 3 focuses on the development of technical standards in transit information and communication by institutions. Chapter 4 discusses the readiness of traffic management centers to use third-party data, which would require the cooperation of the public and private sectors and a restructuring of the institutions. Chapter 5 focuses on the lessons in social networking that the transportation sector can learn from the energy sector. Finally, Chapter 6 discusses the future trends, visions, and goals for the transportation sector.

Chapter 2: Course on Social Networked Transportation
Information technology is revolutionizing transportation. In the public sector, the United States program in intelligent transportation systems (ITS) has developed and deployed specialized systems for transportation. In the private sector (and increasingly in the public sector), new products and services are being offered on the Internet. The Georgia Institute of Technology offered a graduate-level course in the fall 2013 semester on ITS and the Internet. This course was taught by Dr. Kari Watkins from the Civil and Environmental Engineering Department and Dr. Hans Klein from the Public Policy Department. The course examined IT technology and policy in transportation, including Internet technology and the institutions and application areas of transportation. Topics included public-private partnerships in transit, traffic management, vehicle-to-vehicle networks, standards setting, and the role of the insurance industry.
The course began with an introduction to the topic of transportation and the Internet and discussed hot topics in each field. Students contributed brief discussions of transportation-related applications or websites, facilitated by Dr. Watkins and Dr. Klein, in which they presented how the application worked at the simplest level, the back-end data required for the application to work, the user interface, and the broader implications of the application. The faculty then discussed the legal framework of transportation planning so students could understand the background and current institutions involved in transportation. The students also read articles regarding Web 2.0 and Google Fiber.
The course then moved to a discussion of Intelligent Transportation Systems (ITS). It began with a background of the ITS program in the US. Students were assigned sections of the current focus of the ITS program to research and make short presentations in class. The faculty led a discussion regarding the future of ITS, connected and autonomous vehicles, and the associated liability. Students learned about dedicated short range communication for the USDOT connected vehicles

program. Students wrote one of two term papers about the layered model of communications and applied this model to vehicle-to-vehicle communications; this paper was written in TRB format.
The course also had several guest speakers. Landon Reed presented on transit standards and standard development, mainly GTFS, SIRI, and TCIP. James Wong presented on networked government as it relates to traffic management centers and third-party data. Victor Wanningen presented a comparative analysis of the energy sector and the transportation sector in terms of intelligent systems.
Finally, the course discussed the Internet as a platform for data exchange, focusing specifically on open data and transportation applications. Students learned about social media applications to ITS and crowdsourcing as well as utilizing smart phones as in-vehicle platforms. Students wrote the second of the two term papers on a comparison of a public sector application to its private sector counterpart, also in TRB format. Students made lectern-style presentations to the class to end the semester.
Overall, the course was highly rated by students, who came from various backgrounds, such as Civil Engineering, City Planning, and Public Policy. The course was taught in a seminar style, and the faculty members were engaging and helped promote healthy discussion. The syllabus and other related course materials are available in Appendix A. Original versions of these materials are available for use by academics and practitioners by contacting Dr. Watkins or Dr. Klein.

Chapter 3: Transit Information Standard Development
(Landon Reed and Dr. Kari Watkins)
Introduction
Passenger information for public transit, particularly in the form of real-time arrival predictions, has experienced a surge of growth in the past decade. While the first passenger information systems existed even in the early 1990s (1), the increasing diffusion of mobile smart devices has enabled new generations of applications that allow users to access real-time information with increasing ease and reliability. The benefits of providing this information, especially via mobile applications, are well documented. Such benefits include significant reductions in perceived and actual wait times (2), improvements in customer satisfaction (3), and increases in transit usage (4).
Smartphone market penetration, however, does not fully account for this growth in real-time information delivery. The market success of the standard format for schedule data known as the General Transit Feed Specification (GTFS), originally developed through a partnership between Google and Portland's TriMet, has led to an unprecedented adoption rate by transit agencies, as shown by total unlinked passenger trips for agencies with GTFS in Figure 1. These agencies have committed to producing and maintaining their schedule data in standardized comma-separated values (CSV) tables to display their systems on Google Transit's trip planner and, increasingly, to open these data to other third-party application developers.

Figure 1: Growth of transit agencies with open data by passenger miles served (49)
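To make the GTFS format concrete, the sketch below reads one of the standard comma-separated values tables in a feed (stop_times.txt) and lists the scheduled departures at a single stop. It is a minimal illustration in Python, not part of any agency's tooling; the feed directory name and the stop ID are hypothetical placeholders, while the file and field names (trip_id, stop_id, departure_time) are part of the published GTFS specification.

    import csv
    from pathlib import Path

    FEED_DIR = Path("agency_gtfs")  # hypothetical directory holding an unzipped GTFS feed

    # stop_times.txt is one of the standard GTFS CSV tables; each row ties a trip to a stop.
    with open(FEED_DIR / "stop_times.txt", newline="", encoding="utf-8-sig") as f:
        departures = [
            (row["trip_id"], row["stop_id"], row["departure_time"])
            for row in csv.DictReader(f)
        ]

    # List the first few scheduled departures at one (hypothetical) stop, in time order.
    stop_of_interest = "1234"
    at_stop = sorted((t for t in departures if t[1] == stop_of_interest), key=lambda t: t[2])
    for trip_id, stop_id, dep in at_stop[:5]:
        print(f"trip {trip_id} departs stop {stop_id} at {dep}")

Because every GTFS-publishing agency uses the same tables and field names, a script like this works against any feed, which is precisely the network effect that standardization is meant to deliver.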
While GTFS has emerged as a de facto industry standard1 for static schedule information, there has yet to be a similar case for real-time passenger information, or the current location of a transit vehicle and its consequent schedule deviance. Although the menu of real-time data standards is almost identical in composition to the list of options available for schedule data standards, a predominant alternative has not yet risen to the top. This may be due in part to one or more of the following reasons: (1) the market for real-time information is not mature enough to warrant widespread adoption, (2) the available data standards do not meet the technical needs of agencies, or (3) the effects of lock-in and switching costs keep agencies fixed in contracts with vendors providing proprietary solutions.

1 Some may call attention to the difference between the use of the word "standard" to describe what actually is a specification (for a good description of this difference, albeit in the printing and publishing industry, see http://www.npes.org/pdf/Standards-V-Specs.pdf). While this is a valid semantic concern, the difference between standard and specification lies on a continuum. Specifications that have been widely adopted and are openly maintained begin to move into the realm of standards. For this reason, the words may be interchanged throughout this document. This is not to detract from the respectable and painstaking work of accredited standards bodies, but rather just a side effect of the ever-changing landscape of adoption and usage of standards and specifications.
Nonetheless, the market for standards that do exist for real-time transit passenger information in the United States is at a stage where the tipping point for adoption seems likely to occur over the next decade. The open standards for delivering real-time passenger information are (1) the General Transit Feed Specification for realtime (GTFS-realtime), the real-time counterpart of GTFS; (2) Transit Communication Interface Profiles (TCIP), the Federal Transit Administration (FTA) and the American Public Transportation Association's (APTA) decades-old project that includes specifications for all manner of technology systems in the transit industry; and (3) the Service Interface for Real-time Information (SIRI), a passenger information standard developed by the European Committee for Standardization (CEN), which has seen adoption in whole or part by a few agencies in the US. There are a bevy of other standards for delivering real-time information, but these are on the whole closed standards--generally controlled by proprietary interests without open forums for comments or appeals. Examples of other standards or specifications include the NextBus XML application programming interface (API), web services provided by many different automatic vehicle location (AVL) or ITS vendors (Trapeze, Clever Devices, Orbital, etc.), the OneBusAway API2, and many custom implementations (such as TriMet's web services API).
2 The OneBusAway API is not fully closed; but for the purposes of this research, it is not considered here. The primary reason for its exclusion is that most of the discussion and work surrounding the API has been related to a particular implementation of the standard. As the project grows into other regions (New York City, Tampa, Atlanta, etc.), there may be a cause to consider it under future research. Another reason for its exclusion here is that the author contributes directly to The OneBusAway Project and wishes to avoid conflicts of interest.
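To illustrate how a third-party developer consumes one of these open real-time formats, the sketch below parses a GTFS-realtime feed using the publicly available gtfs-realtime-bindings package for Python. The feed URL is a hypothetical placeholder; real agency endpoints differ and may require registration or API keys. The message and field names (FeedMessage, vehicle, trip_update, stop_time_update) come from the GTFS-realtime specification itself.

    import urllib.request

    from google.transit import gtfs_realtime_pb2  # pip install gtfs-realtime-bindings

    FEED_URL = "https://example.com/gtfs-realtime/updates.pb"  # hypothetical endpoint

    # GTFS-realtime feeds are Protocol Buffer messages; FeedMessage is the top-level container.
    feed = gtfs_realtime_pb2.FeedMessage()
    with urllib.request.urlopen(FEED_URL) as response:
        feed.ParseFromString(response.read())

    for entity in feed.entity:
        if entity.HasField("vehicle"):
            vp = entity.vehicle  # a VehiclePosition: where a vehicle is right now
            print(f"vehicle {vp.vehicle.id} on trip {vp.trip.trip_id}: "
                  f"{vp.position.latitude:.5f}, {vp.position.longitude:.5f}")
        elif entity.HasField("trip_update"):
            tu = entity.trip_update  # a TripUpdate: predicted deviations from the schedule
            for stu in tu.stop_time_update:
                print(f"trip {tu.trip.trip_id} at stop {stu.stop_id}: delay {stu.arrival.delay} s")

The same few lines can, in principle, consume any agency's GTFS-realtime feed, whereas the proprietary vendor APIs listed above each require custom integration work.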

There are likely a number of reasons why a real-time transit passenger information standard has not yet reached a tipping point. This research aims to understand the theory on standards development processes and organizations in an attempt to better understand standards development for real-time transit passenger information and why widespread standardization has not occurred. It will examine other cases of competing standards and how these processes were structured. Importantly, it will reflect on standards theory and the role of policy in promoting successful standards.
Scope
The literature review and case studies that follow represent an analysis of standards development with a particular and well-defined scope. The analysis will focus strictly on those standards development processes for real-time passenger information in the United States.
Real-time Passenger Information Transit Data Standards
The scope of this work is limited in order to produce results that are relevant for a
particular subset of industry data standards and those organizations that develop those standards. The standards under examination in this research are those that convey passenger information in a real-time context. Such information includes data reported about transit vehicles pertinent to the vehicle locations, schedule adherence/deviance, service disruptions or changes, or even network congestion levels. These data may be used to convey information about transit service that aids travelers in decision making about their journeys.
It is worth noting that certain standards considered here, especially TCIP, contain standards for an entirely different set of information exchanges for the transit industry. GTFS-realtime, on the other hand, was designed and designated strictly for the

conveyance of real-time passenger information. As such, a strict "apples to apples" review is not possible unless only the real-time passenger information components of TCIP are considered. While the author recognizes that the real-time component of the standard does not exist in isolation, for the sake of simplicity, it will be compared strictly in this real-time passenger information context.
Another important consideration is that TCIP and SIRI were both developed for intra-agency interoperability, whereas GTFS-realtime was developed as a model for external data consumption by third parties. Although on the surface these models exhibit fundamental differences, the primary goal here is to consider how standards influence the ability of transit passengers to consume real-time information. The passenger information components of TCIP, SIRI, and GTFS-realtime all intend to serve this purpose, whether the ultimate vehicle be an agency-operated website or variable message signs, Google Transit, or any number of other web or mobile interfaces. Each of these data standards has the capability to deliver this information; this research will consider how the development of the data standard has hindered or helped toward this end.
Process-oriented Analysis
This research effort seeks to understand the evolution, history, and future of the
standards development processes of the major real-time passenger information data standards in the United States. By understanding these processes as well as the economic, political, and technical dimensions of these standards, the purpose of this work is to recommend a path forward for the industry in standards adoption and future standards development work, especially as it pertains to real-time passenger information. Rather than a substantive analysis of the content, format, and structure of the data standards, this research effort seeks to understand the formal approaches

taken by standards development organizations (SDOs) and the approaches' resultant successes and failures.
United States Focus
While advanced traveler information systems (ATIS) have been deployed for both
transit and traffic systems across the world, this research focuses strictly on the United States context. Social and political organization varies from country to country, as does the makeup of SDOs and their relationships with governmental entities. Because of the complexity of such relationships in different contexts, this research will only consider real-time passenger information standards that have been implemented and used in the United States, particularly for those agencies that are members of the American Public Transit Association (APTA).
SIRI, which was developed through CEN, represents the convergence of a few European real-time information standards, most notably the UK's Real-Time Interest Group (RTIG) and Germany's Verband Deutscher Verkehrsunternehmen (VDV). It also draws on the basic conceptual framework put forth by France's TransModel, also a CEN European Standard. While the SIRI data standard was developed through a European SDO with solely European partners, a number of US agencies and real-time information vendors have implemented the standard, bringing it into the pool of other US data standards and into this analysis.
Open Standards
As mentioned above, this research will consider only open standards for real-time transit passenger information. Any recommendations for policy or process are unlikely to impact a closed standard. Therefore, in order to pursue productive work, closed and proprietary specifications are wholly excluded from the case studies and from consideration as a possible filler for the real-time transit passenger information standards void. The permanence of proprietary specifications relies on the perpetuity of the firm that holds licensing, intellectual property rights, and general control of the standard. As such, a realistic, long-term solution will not include closed or proprietary specifications. The literature review that follows considers further the subtleties of open standards and will aid the reader in understanding this concept.
Background
The purpose and utility of real-time transit information has changed over time. Transit agencies originally installed systems that provided information on vehicle location for operational reasons--to assist with crucial functions such as dispatching. Today, these systems integrate with other technology subsystems such as automatic passenger counters (APCs), influencing the way in which an agency assesses its operations and even communicates with its customers, improving both the quality of service and the customer experience. This section will explore both the technical and historical basis of the technologies that provide this information and how some of these changes have occurred.
Real-time Transit Information
Real-time transit information provides agencies, operators, and customers with
information about the current transit operations--whether it be a single transit vehicle, a route, or an entire fleet.
Automatic vehicle location (AVL) refers to technology systems, primarily for buses, that determine the location of a transit vehicle or fleet of vehicles in operation. According to TCRP Synthesis 73, an AVL system is defined as:

"the central software used by dispatchers for operations management that periodically receives real-time updates on fleet vehicle locations. In most modern AVL systems, this involves an onboard computer with an integrated Global Positioning System receiver and mobile data communications capability" (5).
One of the primary technologies for early AVL systems installed in the 1970s and 1980s was the wayside signpost beacon system, which relies on a set of signposts installed at key locations on the transit system (sometimes coinciding with features of service like timepoints) and beacons that emit signals, usually microwaves, to indicate their presence when they approach a signpost. This technology, still used for transit signal priority, is increasingly being replaced by GPS-based systems, wherein each transit vehicle is equipped with a GPS receiver and a radio-based mobile communications system.
Transit agencies rely on real-time transit information for a host of operational capabilities and improvements, beyond the information provided specifically for passengers. Updates on the location and status of vehicles can be integrated with a menu of other on- and off-board technology subsystems to provide functionalities such as onboard next stop announcements, automatic data input for headsigns, advanced communication with farebox systems to provide enhanced data on payments, stop-by-stop boardings and alightings, schedule adherence for real-time predictions when linked with schedule data (provided through a number of different interfaces), improved transit signal priority (TSP) operation, and more (5). This abbreviated list provides a snapshot of the usefulness of real-time information updates on the location and status of transit vehicles in operation.
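As a simple illustration of how schedule adherence feeds real-time arrival predictions, the sketch below compares an AVL-reported departure against the scheduled departure and carries the resulting deviation forward to a downstream stop. The timestamps are invented for illustration only, and production prediction engines use more sophisticated running-time and dwell-time models than this naive propagation.

    from datetime import datetime

    # Hypothetical schedule entry and AVL observation for the same trip at the same timepoint.
    scheduled_departure = datetime(2014, 6, 1, 8, 15, 0)   # from the static schedule data
    observed_departure  = datetime(2014, 6, 1, 8, 18, 30)  # reported by the AVL system

    # Schedule deviation: positive means the vehicle is running late.
    deviation = observed_departure - scheduled_departure

    # Naive prediction: apply the current deviation to a later stop's scheduled arrival.
    scheduled_arrival_downstream = datetime(2014, 6, 1, 8, 40, 0)
    predicted_arrival = scheduled_arrival_downstream + deviation

    print(f"Running {deviation.total_seconds() / 60:.1f} min late; "
          f"predicted arrival at downstream stop: {predicted_arrival:%H:%M}")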
Though the menu of options for AVL systems is extensive, the reality of many implementations is that few transit agencies utilize many or all of these capabilities. In a survey conducted by Miller, et al., for TCRP Synthesis 73 (5), the researchers asked

transit agencies which aspects of the agency's bus AVL system are not fully utilized. The responses for this question are shown in Table 1. While the highest percentage of agencies had not fully utilized TSP (at 43.8%), the second highest response was Next Arrival Predictions at 34.4% of transit agencies (5). Over a third of agencies either are not providing or have not fully utilized arrival predictions for their transit systems. The low utilization of TSP can partly be explained by the high capital costs of installing wayside infrastructure and the coordination costs of working with other agencies to calibrate and manage traffic signals. Yet the low utilization of Next Arrival Predictions is not as easily explained by infrastructure costs.

Table 1: Agency responses to question on underutilized AVL functions (5)

Technology                                                      %
Transit Signal Priority (TSP)                                   43.8
Next Arrival Predictions                                        34.4
Scheduling and Dispatch Software for Paratransit Operations     31.3
Automatic Passenger Counters (APC)                              28.1
Next Stop Announcements                                         21.9
AVL Software for Fixed-Route Operations                         18.8
Other                                                           0.0

While arrival predictions can be delivered with costly wayside digital signage, information delivery via websites, automated telephone systems, or mobile applications offers a low-cost alternative to this infrastructure. One possible explanation for this high response is that when the researchers administered the survey in 2008, these low-cost technologies were less available. This theory can be discredited by survey responses indicating that the earliest cases of agencies delivering next arrival predictions by signs or websites were between 1998 and 2000, at rates of 9.4% and 3.1%, respectively. Indeed, these low-cost methods were available, but this researcher posits that no standard in the realm of real-time transit passenger information had, and perhaps still has not, achieved sufficient dominance to make these low-cost alternatives to wayside signage economically viable. In the absence of reliable standards, market inefficiencies keep the costs of Next Arrival Predictions too high.
Beyond the underlying technologies and uses, the number of vendors involved in installing and developing these systems for agencies adds an entirely separate layer of complexity. Figure 2 shows the various vendors involved in equipment supply or technology integration mentioned in responses from 31 agencies to a 2008 survey question conducted for TCRP Synthesis 73 (5). The wide distribution of responses (note: these responses were not mutually exclusive, i.e., some agencies mentioned multiple vendors/suppliers) suggests that there are a number of large vendors with multiple contracts across different agencies as well as many cases where smaller vendors may create custom solutions for individual agencies or, at most, small market segments. There are many technology providers for AVL systems and, based on recent evidence, few of these vendors use anything besides proprietary, closed standards for disseminating real-time passenger information within agencies or to third parties.

Figure 2 Diversity of technology and equipment vendors for AVL systems (5)
The Need for ITS Data Standards
ITS Architecture / Standards: Final Rule
Intelligent transportation systems (ITS) became a part of the federal agenda in the early 1990s with the passing of the Intermodal Surface Transportation Efficiency Act (ISTEA) of 1991. ITS represent the efforts to integrate information technology into transportation infrastructure at any number of entry points, for example, private vehicles or public infrastructure like roadways. Table 2 shows the key activities of the ITS Joint Program Office of the USDOT in 2000 (6) and in 2013 (7).

Table 2 Comparison of key program interests for ITS in 2000 and 2013 (6, 7)

Date accessed: January 16, 2000
Question: What are the key elements of the ITS metropolitan approach?
Answer (extract):
- Traffic signal control
- Freeway management
- Transit management
- Incident management
- Electronic toll collection
- Electronic fare payment
- Railroad crossings
- Emergency response
- Regional multi-modal traveler information

Date accessed: September 3, 2013
Question: What are the current key activities of the Federal ITS Program?
Answer (extract):
- Vehicle to Vehicle (V2V) Communications for Safety
- Vehicle to Infrastructure (V2I) Communications for Safety
- Real-Time Data Capture and Management
- Dynamic Mobility Applications
- Road Weather Management
- Applications for the Environment
- Human Factors
- Mode-Specific Research
- Exploratory Research
- Cross-Cutting Activities

A comparison of the major activities across the years indicates not necessarily a distinct shift in priorities, but rather a shift in the way the organization addresses these priorities, towards more complex and interactive systems. However, the disappearance of any explicit reference to "transit" may indicate a shift in priority to traffic and autos, especially with the ever-growing interest in vehicle-to-vehicle (V2V) communications and autonomous vehicles. Nevertheless, this may just as well be explained by the contemporary emphasis on multimodal applications rather than treating modes as discrete, unrelated subjects.
In the Transportation Equity Act for the 21st Century, enacted in 1998, legislators included additional rules for ITS projects that were to be funded by the Highway Trust Fund. These rules specified that any major ITS project must "...conform to the national

architecture, applicable standards or provisional standards..." (8). This provision extends to any ITS projects funded out of the Mass Transit Account and, therefore, includes most projects that may impact the regional coordination of local ITS operations. It should be clarified that conformance to the "national architecture" in practice requires conformance to a regional ITS architecture, which is based on the National ITS Architecture, a much more expansive system than any region is ever likely to implement (9).
In response to questions posed during the legislation's comment period, the Federal Transit Administration (FTA) modified the final policy to alleviate concerns regarding "the premature use of required standards and interoperability tests..." Specifically, the FTA relieved agencies of the need to use any standard that is not yet "mature" and has not been formally adopted by the USDOT. At the time of the modification's writing, the only required standards were those related to commercial vehicle operations (CVO) (10). According to a report published in 2010, no other ITS standard had been formally adopted by the USDOT, so it holds that agencies are not formally required to utilize any standard. Nevertheless, the report notes that policy still encourages the use of those standards developed by recognized standards development organizations (SDOs), such as the American Public Transit Association (APTA) (11).
Branscomb and Keller (1996) offer an early summary of the challenges facing ITS standardization and, perhaps, partial explanation for why no standard has been formally adopted by the USDOT. In Converging Infrastructures: Intelligent Transportation and the National Information Infrastructure, they write:
"ITS standardization issues are complex relative to those in the traditional telecommunications environment because they span a broader array of technologies and systems. At the same time, however, the environment for standardization is

relatively weak. Telecom standards evolved with a common platform and a stable-- indeed regulated--competitive environment; ITS will consist of heterogeneous systems and a relatively independent set of players. In addition, many of the technologies for which standards will be most needed are nascent or immature at this time" (12).
Many of the same challenges exist nearly two decades later. Technologies and systems remain diverse and complex. Most of the policy efforts tied to standardization have been limited to light incentives, certainly not mandates. And, barring a few examples, standards in the transit industry still seem nascent and/or immature, a fact which is supported by the above mention of USDOT's hesitancy to formally adopt any ITS standard.
Despite this apparent stagnancy, a couple of things have changed dramatically. First, web and mobile platforms for personal information delivery have exploded, despite the survey responses from TCRP Synthesis 73. The personal computer and, more recently, the smartphone have enabled transit agencies--and anyone with an Internet connection--to communicate efficiently with larger and larger audiences. A separate, yet certainly related, occurrence is the emergence of the open data movement. The democratization of information and datasets has created an ever-broadening market of users and implementers who inject a distinct set of values, such as transparency, openness, and sharing, into these standardization processes. In order for standards to succeed in this new marketplace, the bodies that maintain these standards may need to demonstrate a renewed commitment to these ideals--both in how the standard is developed and maintained and in how new stakeholders might interact with the standard.
Open Data and Standardization
Executive Order (EO) 13642 issued by President Obama on May 9, 2013, has broad-reaching impacts for open data and data standards in the United States (13).

Proponents of open data, discussed in more depth in Chapter 4, affirm that government should provide its data freely and openly to private citizens and corporations in order to spark innovation and assist government in performing its various functions. Using its oft-cited poster children of weather data and the Global Positioning System (GPS), the EO discusses the immense potential for entrepreneurial activity and economic growth when public data are made freely available. Importantly, it asserts that "the default state of new and modernized Government information resources shall be open and machine readable [emphasis added]" (13). By providing government data in machine-readable formats by default, the federal government is placing a new level of importance on the role of standardization in the most basic operations of government. Standardization, if not a prerequisite for the systematic provision of machine-readable data, is at the very least a logical conclusion for the effort.
This EO and the policy it represents are important for the future of transit data standards because they cement the pattern of growth and creation of niche data markets in sectors such as transportation, health, or education. With this growth comes the continued importance of data standards to convey this information, in addition to the processes by which such standards are developed. While standardization efforts in ITS are over a decade old, the executive branch's relatively new open data policy allows an opportunity to revisit these efforts and investigate how this "open paradigm" might impact preexisting policy and methods. Certainly, most of the ITS standards have been developed to be open standards; however, properly functioning in support of open data poses new questions for these transit standards, particularly in how to handle an entirely new set of stakeholders.

Pluralization of Stakeholders
Just as the release of the Global Positioning System (GPS) spurred billions of dollars in innovation and supported the spread of businesses around the globe, the opening of historically closed or unavailable datasets is spawning a new set of interests and stakeholders in transportation data from governments. According to a report released in October 2013, open data have the potential to unlock billions, even trillions, of dollars in economic value in the US. For the transportation sector alone, there is around $720 to $920 billion in latent value, suggesting that new stakeholders might be very important for the overall economy (14). These new interests not only have a stake in if and when an agency releases data, but also in how these data are provided once they are eventually delivered.
This new generation of stakeholders historically has had little influence on the development of ITS standards. This of course is a natural consequence of arriving late to the game, yet this is not to say that such parties have not been addressed. In a 2012 roundtable held by the White House Office of Science and Technology Policy (OSTP), application developers and other transit industry stakeholders met to address challenges facing the transit industry, namely "(1) a lack of consensus on standards for the exchange of real-time transit data and (2) a lack of 'clinical trials' of cutting-edge technologies in this area" (15). The direct outcomes of this meeting are not abundantly clear. In fact, it is difficult to ascertain that the meeting even took place because it is documented only on a few blogs. Nonetheless, the convening of such a meeting shows that the federal government is aware of the issues in adoption of current standards and in bringing transit technology forward. As more and more agencies move towards an open data model, this pluralization of stakeholders opens up opportunities for transformative change in the public transit industry.

Efficient Competition and Innovation
The most fundamental motivation for pursuing transit ITS or any other set of data standards is to enable efficient competition and innovation. The economic arguments for standardization espouse the positive welfare benefits that widely adopted standards generate and, conversely, the failure of technologies and innovations to which incompatible standards can lead (16). Such positive benefits include network effects, the avoidance of lock-in, reduction in switching costs, and enabling new market entrants, all of which will be explored further in later chapters (17-19). Put simply, standards lead to a more efficient arrangement of market forces and competition. While the success of standards may not be in the interest of existing firms within the industry, it is certainly in the interest of the general welfare of the public, who experiences such benefits in the form of cost reductions and improvements in services.
In considering the value of standards to transit ITS, it is helpful to consider the genesis of GPS technology. Surely, if the federal government had delegated the management of GPS to local authorities, we would see the geographies of various jurisdictions encoded differently to serve different needs. A state government may choose to represent each point of latitude and longitude in reference to a coordinate system that distorts the state's geography the least. Or a local municipality may choose to represent every point in reference to the city center, a logical decision. Or an extremely flat county might choose not to represent altitude in its local GPS at all. In reality, we see different coordinate systems in use in nearly every jurisdiction around the country that hosts geographic data. But if the federal government had disjointed GPS--the foundational technology for pinpointing any user's precise location at any given moment--in this hypothetical way, there would be little chance of the technology having the lasting impact on the world that it has. This illustration is of course flawed
36

(the technology is for global positioning, not local positioning), yet in an age where technologies can transform the world in mere months given the right conditions and where data have been historically locked down so tightly, the example is not altogether unbelievable.
In sum, the landscape of transit ITS standards may be in a period of change. Thanks to a growing interest in the use of government data by a new set of stakeholders and the formal recognition of these efforts by the President, there is now more than ever a need to understand the impact that standards have on the transit industry. Understanding the economic and policy impacts that standards have is a crucial first step to understanding how individual standards develop and the environments in which they are created.
Literature Review
Standards Development Theory
Standards development processes, especially in the information technology sector have received a great deal of attention in the past couple of decades. Indeed, it is the success (or failure) of such processes that have led to the fruitful (or in some cases painful) growth of industries that rely on networking and data exchange protocols, i.e., the Internet. Standards development theory draws from the fields of economics, sociology, political science, business and information technology. This interdisciplinary topic area thus has many different contributors bringing a wide range of expertise and background case studies. Nevertheless, a review of such literature reveals common threads and theoretical underpinnings.
In an attempt to cover all relevant aspects of standards development theory for realtime transit passenger information standards, this section will consider:
the economic drivers for standardization processes;
37

the institutions that have historically steered standardization processes; policymaking surrounding standardization; the types of standards and the basic function each serves; and the definition of "open standards" development (as well as differentiation between "open standards," "open data," and "open source"). This literature review provides a set of objective criteria for understanding and analyzing the real-time transit passenger information standards development. This analysis will inform the economic viability of development strategies, the appropriateness of when and where government has intervened with various policies, and the conditions of openness for each of the standards. Previous work on transit interface standards has not taken this extensive look at the theoretical literature surrounding standards development, yet in order to move the industry forward on this issue, such a review is necessary.
Economic Dimensions of Standards There are a number of economic motivations for standardization in an industry.
Each of these impart externalities onto transactions and product decisions, which spur the economic viability of products and allow technological innovation to proceed at a strong pace.
Network Effects Some of the primary economic advantages offered by standardization are derived from what are known as network effects. Katz and Shapiro (20) define network effects as "the utility that a given user derives from the good [which] depends upon the number of other users who are in the same 'network' as is he or she." Economists have
38

established a number of types of network effects3 in the past few decades, all of which contribute to an understanding of how these market externalities impact standards development and implementation.
For understanding how network effects might apply to real-time transit passenger information, consider a transit agency in isolation. The agency may have an interest in providing real-time information to customers. Developing a system to deliver this information may take significant investment in labor and/or capital to build the system from scratch. In the absence of standardization, adding additional agencies to this model does not decrease individual agency investments to provide real-time information. However, standardization drives down these costs because the costs (and benefits) of development begin to be distributed across the network. The different ways in which these effects disperse are described below.
Direct Network Effects The most basic example of network effects and one of the most modeled in the field are direct network effects. Direct network effects account for the direct increase in value accounted for by an increase in usage. Such an effect is easily explained by common communications networks, such as increases in Internet users or the number of households with a telephone. As more individuals begin using a product, the value of that product, or consumption benefit, for existing users and each additional user rises. Both Katz and Shapiro (20) and Farrell and Saloner (19) discuss these basic effects in their seminal works that were both published in 1985. Indirect Network Effects
3Arun Sundararajan maintains a thorough listing of the various types of network effects on his personal web site (http://oz.stern.nyu.edu/io/network.html) hosted at New York University from which many of the literature references were extracted.
39

Indirect network effects contribute to consumption externalities, or the how the consumption of one good may depend on the market supply/availability of other supporting or interoperable goods. Katz and Shapiro also refer to this phenomenon as the hardware-software paradigm (20), which may be recognized today in the consumption patterns of smartphones. Indeed, the availability and abundance of "apps" or native applications--or even accessories like cases or peripherals--for a particular consumer smartphone often heavily influences the purchasing decisions of consumers.
The applicability of this indirect network effect model may be limited for the transit ITS industry because of the dominance of vertically integrated vendor solutions for hardware and software. However, the model may be considered for instances where passenger information standards have been adopted by a subset of transit agencies and mobile application developers. In this circumstance, consumers have come to enjoy the benefits of software variety and freedom of choice when a transit agency chooses a standard that allows for an array of software providers to enter the market.
Two-sided Network Effects Indirect network effects are sometimes referred to as one-directional cases of
two-sided network effects. Whereas indirect network effects refer to the scenario where a variety of software packages may influence the consumption of a hardware package, two-sided network effects include this scenario along with the reciprocal, where a variety of hardware options for a given software will impart benefits on the consumption of the software. Farrell and Klemperer list "credit cards, brokers, auctions, matchmakers, conferences, journals, computer platforms, and newspapers" among key examples of two-sided network effects (21).
Local Network Effects Local network effects provide a strong theoretical understanding for standards
adoption and development in transit ITS. These effects describe the effects that a small 40

subset of a larger network has on consumption decisions. The federal requirement for developing regional ITS architectures is a policy materialization of these effects. In other words, ITS decisions made by a transit agency in a given metropolitan area will be heavily influenced by the decisions of and existing infrastructure supported by agencies within that same region. Again, this effect is supported by both the theoretical arguments made by Sundararajan (18) and the policy mandates from USDOT (10).
Lock-in and Switching Costs Besides the benefits attributed by network effects, the costs imparted on consumers where standards do not exist in a market create an important motivation for the introduction of standards. These costs, known as switching costs, may keep a consumer locked in to a particular firm (or vendor) because the cost of switching firms is too high or, put differently, "when consumers value forms of compatibility that require otherwise separate purchases to be made from the same firm" (21). When considering technology systems in the public transit sector, switching costs may derive from the use of proprietary data formats and standards. Thus, switching from one technology provider to a competitor would require high costs to translate or convert data from one system to the new. Other examples of switching costs and lock-in "include the transaction costs of closing an account with one bank and opening another with a competitor, the learning cost incurred by switching to a new make of computer after having learned to use one make, and the artificial switching costs created by frequent-flyer programs that reward customers for repeated travel on a single airline" (17). Approaches to Standards Coordination The mechanisms by which a standard develops is an important determinant for coordination, or reaching a harmonic agreement within the industry. Farrell and Saloner
41

Consider three approaches to coordination for interface or compatibility standards: committee-based, market-based (or "bandwagon"), and hybrid coordination (22).
Committee-based Coordination Committee-based coordination relies on the action of some formal body to achieve standardization across the market participants, while market-based coordination is defined by a set of competitive parties each working independently of one another (22). There are many examples of committee coordination in standardization including any standard setting organization that openly allows industry participants to meet and develop a standard through a consensus-based process (e.g., ANSI, ISO, or CEN). The hybrid approach relies on a combination of both market agents working together in a formal committee approach, while simultaneously pursuing a market strategy for a standard. Farrell and Saloner conclude that, while it may take a significantly longer time, committee-based standard setting will more likely result in interface standards coordination. Though the authors do note that as this process takes longer and longer, the marginal benefits ("payoffs") for achieving standardization through committee begin to diminish rapidly (22). Market-based or Bandwagon Coordination Farrell and Saloner suggest that standardization occurs in the market-based or bandwagon coordination environment when there is a clear leader in the market (a "first mover") that pushes the market into standardization as a side effect of its leadership. They mark key examples of this pattern as when Home Box Office (HBO) adopted VideoCipher, a satellite signal scrambling system that once adopted by the entertainment giant brought widespread coordination across the industry. Another example of this bandwagon approach is with the pre-breakup telecommunications
42

company Bell. When Bell (the firm with the largest market share by far) made decisions on products or standards, smaller companies such as GTE were forced to follow.
The Hybrid Approach The hybrid approach to standards coordination describes when a firm decides to participate actively in a committee approach while simultaneously pursuing a marketbased solution (22). This approach could be considered either hedging activity or, more aggressively, covert deception used to make a move on the market with the committee's ignorance. Keil suggests that the hybrid approach--combining market and committee elements into a semi-open alliance of organizations--a model used in the standardization of Bluetooth, is used increasingly by firms to achieve rapid dominance of new technology markets (23).
Standards Stakeholder Models
As mentioned in Chapter 2, the role of stakeholders in the development of standards is an important one, especially as this group changes with the government implementation of open data policies. This section contains a few descriptions of stakeholder models, or the types of stakeholders involved with standards development and how their respective interests play out. The section provides a context for the importance of organizations, history, and structures in standards development.
Creators, Users, and Implementers Krechmer defines a model for stakeholders in open standards development that relies on three categories: creators, implementers, and users (24). This is perhaps the most basic hierarchical division of stakeholders, yet it helps to parse out interests in the standardization process. While implementers and creators have the most stake in this process, users have important interests as well that extend beyond the technical components. West (25) presents a model with more subtleties, which provides a good
43

description of stakeholders for understanding market forces in this research. Nevertheless, both models presented here prove valuable to understanding the interaction and importance of stakeholder groups.
Creators (Standards Setting Organizations) Standards setting organizations (SSOs) is a term that has been used to characterize any organization involved in the development of standards, from governmental to nongovernmental bodies and from corporations to non-profit foundations. In a 2002 critique on the evolving nature of SSOs, Cargill defines five types of SSOs: trade associations, Standards Developing Organizations (SDOs), consortia, alliances, and the Open Source software movement (26). Cargill traces the history of SDOs, the definition typically applied for more formally organized SSOs. He uncovers the acceleration of market demand for new technology standards and simultaneous retardation of SDOs' ability to deliver standards in a timely manner. This slowing pace of development originated with the growth of "anticipatory standardization," whereby shortened product cycles and rapid technology change forced organizations to develop a standard far in advance of when it was needed by the industry (26). This change began to bring about an increasing number of consortia, or alliances of companies with similar objectives, that retracted funding from SDOs, redirecting it towards their own consortia activity. While these consortia on the whole did not participate in anticipatory standardization, the model of standardization began to change towards "existing practice." In this model, a company would submit a specification
44

already in practice to be reviewed for standardization by a consortium. The revised and reworked specification would then be submitted to the industry as a standard, though as Cargill accurately notes, "[t]he ultimate authorization, of course, was the take up of the technology by the market (26)."
The other crucial piece of this creator segment of the standardization hierarchy comes from the influence of the Open Source Software (OSS) movement. This movement, formally initiated in the late 1990s, consists of a large, semi-organized network of individuals and organizations growing increasingly diverse, but with the common goal of creating and improving bodies of universally accessible and redistributable software (27).
Members of the OSS community often extend beyond the development of software into the realm of standardization. While it may be on the other end of the continuum from large SDOs, this largely voluntary community has made significant contributions to the development of important open source software projects. The decentralized nature of many of these projects shows important similarities to the successful set of Internet open standards, which are developed in part by the Internet Engineering Task Force (IETF) (28). The model of distributed networks of volunteer technical experts has and will likely continue to have real impacts on how standards are developed. The importance of this model is further discussed later in the section on Open Standards Development.
Implementers Implementers are those players in the standardization process that create new products that directly employ the standard under development (24). This group, therefore, has a uniquely strong interest in the outcome of a standardization process. However, it is crucial to consider how these interests differ from standards creators (such as an SDO) or the user of one of the implementer's products.
45

An implementer is concerned not with whether the standard is technically sound, universally accessible, or meets some other idealistic notion of fairness, but rather that the standard is accessible to him or her and meets the needs of his or her particular products and market segments (24). This description is not to vilify implementers. Some implementers may indeed have goals that the standard conforms to firmly held values, but if the standard does not meet an implementer's needs, it is not in his or her interest to support it. It is useful here to discard the notion that firms in the marketplace enjoy competition--firms would rather the playing game be tilted in their favor, but at the very least will suffer a level playing field.
Users Users of implementations of a standard have a stake in the standard's success. Truly, when a standard reaches widespread adoption, its users gain benefits from network effects, the freedom from lock-in, and stability in their investment. Krechmer writes that the openness of a standard is increasingly important to end users. This is understandable if we accept that openness implies: when multiple implementations of the standard from different sources are available, when the implementation functions in all locations needed, when the implementation is supported over the user-planned service life, and when new implementations desired by the user are backward compatible to previously purchased implementations (24). The model for open standards has an increasingly visible impact on the standardization process for creators, implementers, and users. West's Model West describes a stakeholder model in which there are five distinct groups with interests in open standards development. These classes are: "(1) technology providers, (2) incumbent vendors, (3) vendor challengers, (4) complement providers, and (5) users" (25). The model has similarities to Krechmer's simplified model. Technology providers
46

develop the technology on which the standard is based. Oftentimes, this group also accounts for the implementers in Krechmer's model.
Vendors consist of implementers who do not have control of the technology development but do provide products that implement the standard. This group consists of incumbents--those who lead the market and maintain a significant segment thereof-- and challengers--market leader competitors who wish to disrupt the control of the market. This challenger group sometimes will create standards alliances or consortia to gain control of the market or, perhaps more accurately, to level the playing field (25).
Complement providers are those who provide complementary products for a given standard. These providers' interests are driven primarily by volumes--they desire large market shares for their products with little regard for high profit margins. In other words, they are interested in providing products that piggyback on the successful implementations of a standard. Users, once again, make up the same group of stakeholders as in Krechmer's model. This group ultimately cares about the interoperability of the standard and the resultant benefits derived from achieving interoperability.
We can apply West's stakeholder model to the public transit industry, particularly as it pertains to real-time passenger information. Technology providers are those companies that develop and, more often than not, also implement AVL technology. Many of these same companies compose the group of incumbent vendors. Vendor challengers are more difficult to pin down in this model, but Google and its decision to lead the development of the GTFS-realtime open standard most accurately represents this model. Google has been a disruptive force in the provision of transit data (and a number of other sectors), most notably with the development of GTFS.
There are a number of other vendor challengers engaged in the GTFS-realtime "consortium," but the active members of this group mostly seem to be complement
47

providers. We can think of complement providers in this model as third-party application developers, looking to provide real-time passenger information via apps that piggyback off of information provided via AVL systems. They care not about developing a highcost, custom solution for a single agency, but rather reaching a large number of users-- what we will consider as agencies here.
The question of who the user is somewhat conflated because our public transit agencies are direct users, but ultimately their customers are the beneficiaries. So here we have two sets of users: direct (agencies) and indirect (transit riders). Considering this basic model of stakeholders in the transit industry will be important for understanding stakeholder relations and interests in the case studies in Chapter 4.
Public Policy and Standards Development
Government institutions have substantial influence over standards development not only through the institutions through which they act but also through the public policy they support. Greenstein and Stango note the importance of government decisions in backing standards because of the power to mandate compliance with a given standard. However, the incredible rarity of occasions in which these compliance decisions are reversed is just as important for understanding the role of government in standards development (29). The literature provides ample discussion of the benefits and costs of government intervention as well as the conditions under which intervention is most appropriate.
David and Greenstein, drawing on the work of Besen and Johnson on Federal Communications Commission (FCC) regulatory intervention, indicate the conditions under which different types of intervention may be appropriate. Key among their recommendations are "government should not mandate standards if these are likely soon to require revision... symptoms of ineffective or premature actions should not be
48

ignored--including negative industry reactions and continuing attempts to break from mandated standards... [and] sparse response to a [standardization] proposal may indicate premature action [by the intervening agency]" (30, 31). While the latter two recommendations may be applied retroactively to standardization proposals, the first applies to standardization processes where government has yet to intervene.
While the authors recognize the numerous arguments for intervention to achieve gains in efficiency, David and Greenstein note that there are issues that come with government activity in standards development. These issues nearly all stem from the role that stakeholders are able to play in the process. Typically, vested interests, or incumbent vendors, are the most well represented and gain the most influence in a standards development process. Consequently, old standards will be systematically protected while new stakeholders will likely not be fully represented nor even identified in the process (30).
Cabral considers ten different standards battles and the role that government policy has played and can play in favoring or supporting a competing standard. He considers two questions of import for policymakers: which standard to support and when to intervene. For the first question, Cabral argues that a patient policymaker should support the lagging standard, or the one that is likely to prove worthwhile over the long term but has yet to fully mature or see market dominance. The policymaker in a hurry, on the other hand, should back the current leading standard. As to when a policymaker should intervene, the answer is binary again: the patient policymaker should delay any action, the impatient should act now (32).
The definition of patience and impatience is, then, at the crux of this theory and how policymakers should react to standards battles. Cabral suggests that this depends on both the policy context, e.g., US vs. Europe vs. Japan, and the industry/technology in question. For example, a government might favor the more centralized, impatient
49

approach of choosing a product early over allowing competitive forces to work through markets (patient). When considering the technology in question, some product cycles are relatively short, which would favor an impatient approach to avoid lagging.
Farrell and Shapiro consider the differences between these policy contexts in the selection of high-definition television (HDTV) standards. Japan and Europe chose a much more centralized approach, demonstrating characteristics of impatience. Each chose a technology-firm combination very early on and supported it through the development of the technology. On the other hand, the United States utilized the resources of competing firms in the HDTV standard selection. Additionally, in the United States terrestrial broadcasting interests carried significant political weight, so displacing these providers by adopting a standard too early was out of the question for the FCC. These differences materialized in a long delay in standard setting and technology development in the United States, yet a side effect of this delay was an improvement in the ultimate technology outcome.
In the United States, the FCC allowed for competitive systems to develop in tandem until it chose a standard from a selection of proposals by 1993 (33). At this point, tests were prepared to determine which HDTV proposal was deemed best. The results of the February 1993 tests were, of course, inconclusive. In order to keep development costs down and avoid further competition, companies and organizations involved formed a Grand Alliance to cooperatively set the standard and build a working prototype. Eventually, this group submitted a proposal that is very close to what would be approved by the FCC in 1996 (34).
This case shows a very patient policymaker in the FCC, which chose to allow competing firms to generate multiple proposals. In turn, this led to these competitors allying themselves in order to reduce duplication of efforts and bring HDTV to the market more rapidly. So, the patient policymaker led to a better standard by creating impatient
50

market actors willing to collaborate. While this is just a single example, it demonstrates some of the reactions policymakers have in different environments and lays a foundation for understanding how policy context and technology influence patience.
De facto vs. de jure An important distinction in the world of standards development is de facto vs. de jure, or whether a standard is formally adopted/sponsored or not. The question of "who is the formal adopter/sponsor?" poses difficulties in itself. Yet, typically, de facto standards achieve widespread dominance by the action of markets without the formal requirement of a governing body, whereas de jure standards exist under the governance of an accredited SDO. The examples from FCC above primarily describe activities around quality or safety standards enforced by the regulatory body. However, this regulatory activity is less prevalent for ITS transit interface standards. Technically, there exists no de jure standard for transit ITS products because the USDOT has not formally adopted any standard, including the FTA/APTA TCIP. Nevertheless, the USDOT does support standards development activity through accredited standards bodies such as APTA, ITE, and ANSI. Fleming Waguespack (2005) Internet Engineering Task Force (IETF) is an example of de facto standards-setting body even though it is challenged by traditional standards and governmental bodies. The most popular product will also be the de facto standard, and setting a standard can offer a product a dominant market position. Thus, de facto standard setting in these cases is of enormous concern to firms in systems industries and will often be central to their business strategies (35).
51

Technical Dimensions of Standards Thus far, this review has covered "soft" or social dimensions of data standards.
These social components of economic and institutional analysis are critical to a complete understanding of the motivations and interests in standards development. It will be useful, however, to explore the technical dimensions of standards in order to refer to phenomena by their proper names.
In the taxonomy of standards laid out by David, there are three classes of standards: reference standards, which enable the accurate measurement and comparison of different products (i.e., benchmarking); minimum quality or safety standards, such as the expected lifetime or performance of an electronic component; and interface standards, those standards which allow a sprocket developed by Sprockets, Inc. to communicate with a widget manufactured by Widgets Corp (36). Other researchers' taxonomies include additional classes, such as variety reduction standards, which "limit a product to a certain range of characteristics such as size and quality level" (for example, reducing the number of types of screws) (37); however, this research will focus on the importance of interface standards to the functioning of passenger information dissemination and the market that supports such activity. Interface and Compatibility Standards
Interface, or compatibility, standards describe the functional or physical characteristics that are necessary for equipment or systems to exchange information successfully. The standards contained in this research (SIRI, GTFS-realtime, and TCIP) are all interface standards, defining the format, structure, and content of the real-time information exchanged by onboard AVL systems to central servers to third party consumers (either users or application providers). While the exact chain of
52

communication intended for each standard may differ, the basic function of compatibility exists throughout.
Interface standards for IT, while relatively new to the public transit industry, have been considered previously in academic literature. In 1998, Hickman reviewed the current state of the practice for interface standards. His review included a survey of 300 software and hardware product vendors in the transit industry. The resultant response rate of about 9% (only 27 fully usable responses) perhaps indicates a lack of interest in the topic matter, a lack of knowledge, or a desire to remain silent on the subject. Whether this response rate is indicative of a particular stance on the topic or simply the consequence of happenstance, Hickman does note that his sample may be seriously biased and should be "viewed with healthy skepticism" (38).
Open Standards Development
Standards development takes place in a variety of settings under different institutional arrangements and technical requirements. However, all of the standards considered in this paper have one thing in common: they all claim to be open standards. An open standard is simply a standard that is "not under the control of a single vendor and is easily available to those who need it to make products or services" (39). This is a rudimentary definition because there are many facets of openness, which will be considered below. This section will also explore related "open" movements and the interaction between these trends and open standards development. Components of Open Standards
There are of course, a wide array of definitions for what makes an open standard. Krechmer documents a few of these, which range from West's availability beyond the standard sponsor to Perens' definition which draws from the open source software movement. Perens emphasizes not just the development and availability of a standard,
53

but also the accepted practices and operating for a standard. His fundamental list of principles and practices include:
1. availability, 2. maximize end-user choice, 3. no royalty, 4. no discrimination, 5. extension or subset, and 6. predatory practices (40). Krechmer recognizes the importance of different stakeholder groups to open standards: if a standard is only open for users and not creators, it is not truly open. For creators, the development process must allow for open meetings, certain consensus criteria, and formal procedures, such as balloting. Implementers have market needs upon which an open standard must not impinge--namely, that the standard should not impose burdensome costs, keep them from innovation, or put them otherwise in a negative market position. Similarly, users consider a standard open when there are multiple implementations to access--such as the availability of GTFS from multiple transit agencies--and there is sufficient support for the standard. Krechmer's ultimate definition, therefore, defines ten requirements that draw upon the expectations of openness from each of these stakeholder groups: 1. Open meeting requires that all stakeholders can participate in meetings; different levels of barriers (economic, physical distance) can detract from an SDO meeting this requirement. 2. Consensus decisions on standard should be made by consensus, a term that has a range of meanings; however, Krechmer views compliance with this requirement to be binary.
54

3. Due process requires that "consideration be given to the views and objections of all participants" and that processes exist for participants to express such perspectives.
4. Open world suggests that any standard shall, in principle, be applicable to use cases around the world. In other words, it should not be restricted by national or political boundaries. However, because there are often regional or cultural issues involved with standards, the requirement focuses on the geographic coverage in which the standard operates.
5. Open Intellectual Property Rights (IPR) refers to the license that governs the use, redistribution, or commercialization of a standard for implementations. Krechmer scales this requirement in five levels from 0 to 4 ranging from 0 commercial licensing to 4 no copyright/patent protection.
6. Open change is a somewhat redundant requirement in which Krechmer bundles the first three requirements (open meeting, consensus, and due process). Nevertheless, the requirement does indicate an important characteristic that relies on the convergence of key principles and so may justify being addressed separately.
7. Open documents requires that documents for the standard development process are made open. This includes "work-in-progress documents" (e.g., draft versions of a standard, meeting discussions, technical reports, etc.) and "completed standard documents." Krechmer describes three states of open documents:
1. Work-in-progress documents are only available to committee members (standards creators). Standards are for sale. (Current state of most formal SSOs.)
2. Work-in-progress documents are only available to committee members (standards creators). Standards are available for little or no cost. (Current state of many consortia.)
3. Work-in-progress documents and standards are available for reasonable or no cost. (Current state of IETF) (24)
55

8. Open interface prescribes that standards support both backward and forward

compatibility. This category could be broken down into connectivity, or how devices in

different spatial locations interact; extensibility, allowing modifications to standards that

do not break compatibility; and adaptability, allowing for changes in communication

system.

9. Open access is a somewhat nebulous requirement that Krechmer seems to

attach more to safety standards than interface standards. Nevertheless, it could be

interpreted to indicate the degree of access users have to implementations of the

standard or the availability of conformance verification tools to verify compliance.

10.

On-going support requires that a standard be supported during the

four phases of its lifetime (following creation): fixes, maintenance, availability, and

rescission.

According to Krechmer, these requirements fully satisfy the Perens definition of

open standards, including both principles--One World holds that a single standard ought

to perform a capability globally, for all cases--and practices--Open Meeting requires

that any and all may play an active role in standards development. Table 3 shows how

the ten requirements of Krechmer's definition apply to the three stakeholder groups. The

table indicates that three requirements--One World, Open IPR, and Open Change--

impact all three stakeholder groups. Users and implementers rely on nearly all of the

same requirements, except that implementers do not rely on on-going support.

56

Table 3: Importance of open standards requirements to different stakeholders (24)

Requirements
1 Open Meeting 2 Consensus 3 Due Process 4 One World 5 Open IPR 6 Open Change 7 Open Documents 8 Open Interface 9 Open Access 10 On-going Support

Stakeholders

Creator

Implementer User

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

In addition to a robust definition of open standards, Krechmer provides an analytical framework for assessing open standards development. Because the author uses this framework for assessing passenger information standards in the chapter on case studies, Krechmer's ten requirements, and their relevance for transit ITS standards, will be further explored in Chapter 4. Related "Open" Movements
In recent years, a number of technology-centric movements labeled with the "open" qualifier have emerged. The author has cursorily reviewed open data with respect to the White House's policy stance and its potential impact on standards development. This brief section is to clarify this and other movements and their relevance for this research. Open Data
Perhaps the most recent open movement and the one most successful at capturing the public eye has been the "open data" movement. Open data refers to the idea that datasets, particularly those owned by the government, should be made openly
57

available to any private citizen or company that wishes to use them. In addition, the movement holds that governmental agencies should provide such data in machinereadable, common data formats so that they may be easily parsed by software developers, researchers, and any other interested party. Open data holds a strong connection to the world of open standards because the success of the movement relies on being able to build robust, repeatable applications that function for both Agency X and Agency Y. In other worlds, interface standards must be used by a large group of agencies in order for users to experience the benefit of network effects. Open Source
The open source software movement is a relatively new concept, but has already had profound impacts on the software development industry. Open source refers to a software development model that promotes free redistribution of software and software components, makes source code (not just compiled code) openly available, and allows derivative works (41). There are a variety of licenses under which open source software is published (42), ranging from the very permissive (for example, reuse for commercial purposes) to more restrictive policies on how source code may be used.
The roots of the term "open source" grow very much out of the world of standards. The term was coined in a Palo Alto, California, strategy session following the decision to publicly release the Netscape Navigator source code (27). Netscape was embroiled in longstanding "browser wars" with Internet Explorer (IE), which it eventually lost. The ultimate conclusion of these wars, however, would spark the open source movement and the eventual destruction of IE's hegemony by open source browser projects such as Mozilla Firefox and Chromium (the open source basis for Google Chrome).
This movement has since grown astronomically, especially over the past decade. Figure 3 shows the exponential growth in the number of source lines of code contributed
58

to open source repositories tracked by Deshpande and Riehle over the period of January 1995 to December 2006. While this study is a few years old, the trend line is unmistakable: the open source community is growing rapidly. According to the authors, "the total amount of source code and the total number of projects double about every 14 months" (43).
Figure 3 Growth of open source lines of code from 1995 to 2006 (43) While the open movements discussed here have distinct meanings, they do not
exist in isolation. It is likely that as open data and open standards proliferate, so too will the number of open source projects and lines of code dedicated to using these data and standards. This correlation is not a given, yet the interest in civic hacking (44) and viewing government as a platform (28) suggest that these movements will work together in concert and continue to exhibit this exponential growth pattern.
59

Real-Time Transit Standards Development
Methodology
The methodology presented here relies on the multiple case study to understand the standards development processes utilized by each data standard. One of the principle aims is to reach an understanding of how "open" each data standard is, or how well each data standard complies to the definition of an open standard. According to Yin, a case study is an empirical endeavor that investigates contemporary phenomena within the context in which they occur. A case study provides a method to observe both the phenomenon and the contextual details--which may be part of what the observer seeks to understand (45).
The multiple case study methodology used here relies heavily on document review and past surveys on agency attitudes and capabilities regarding the provision of realtime information to understand characteristics of the standardization processes and their impacts on agency adoption. Interviews were also conducted with members of the SSOs from each of the standards development processes. The final source of information is a collection of articles from a variety of peer-reviewed journals that contain data about various implementations of (1) products deployed by different vendors, (2) standards implemented in different use cases, and (3) opinions/perspectives on standardization and ITS for transit.
Justification for Case Study Methodology
The case study as methodology offers research on systems, processes, and institutions an important tool for understanding. Yin offers the following purposes for choosing this methodology in research:
1. The research seeks to answer a "why" and/or "how" question, 2. The research focuses on contemporary events, and
60

3. The researchers lack "control over behavioral events" relevant to the research (45).
The research objectives in this thesis are to understand why and how each of the real-time transit passenger information standards development processes function and to consider how the standards environment could be improved for the better functioning of real-time information provision. This is certainly a contemporary subject of review. While there are some historical considerations, each of these standards is actively evolving over time and each of the respective SSOs consider the future of the standards.
Finally, the researcher draws on insights from members of the SSOs and does not attempt to nor could he control the behavioral events of these bodies. Any analysis of standards development processes necessarily must draw on case study findings, lest the research be focused on developing economic models or theoretical insights. This research, on the contrary, seeks to understand specific real-world processes and institutions and their respective arcs of development.
Components of Case Studies
Interviews To gain insights into the history and evolution of the standards development process, the researcher conducted interviews with either members of the SSO or persons actively engaged in the standardization process for each data standard. The nature of these interviews were primarily informational, seeking specific facts about the operations and functioning of standards committees rather than opinions or speculations. The major categories for questions asked in the interviews are as follows: Interviewee's role in standard development History of standard development process Meetings, Consensus, and Formal Processes
61

Intellectual Property Rights, Global Availability Transparency, Interface, and Access Support for Implementers Many of the question topics aimed to understand the openness of the respective standard development process according to Krechmer's ten principles of open standards. Internal Review Board approval was obtained for the interview questions and consent from interview participants was obtained. Although these interviews were informational, in order to protect the participants pursuant to human subjects policies, their names are excluded from this thesis. Nonetheless, many parts of the interviews informed the case study analysis. Document Review The researcher extensively reviewed documents on the standards and their respective standardization processes. These documents include SSO and/or data standards websites, documentation on current and/or past versions of the data standards, and any publicly available meeting minutes or committee communications. Many of the most important of these documents are referenced in the bibliography and are available on the Internet. However, if at some point in the future, these are no longer available at the URLs provided, please contact the researcher4 for a copy of the reference material (given that the license governing the use and distribution of the content permits such sharing). Assessment of Openness Openness is an important characteristic for standard setting that the researcher has identified in the literature review. As mentioned above, many of the interview questions were directed at understanding how well the standard satisfied Krechmer's ten principles
4 This researcher may be contacted at lreed3@gatech.edu.
62

of open standards. A brief description of the most salient features of openness is provided for each case study and a comparative review according to Krechmer's principles is provided at the end of this chapter.
Review of Outcomes Achieving standardization requires more than simply developing a standard. This is only the first step in a process that, if successful, will lead to the widespread adoption of the standard, the proliferation of network effects to both firms and users, and an improvement in the functioning of the industry market. As such, it is important to review the present outcomes in adoption of each of the standardization processes as indicators of how successful each standardization process has been to date. This is, of course, an ever-changing situation as implementation decisions are made and procurement documents produced in agencies every day. However, there is value in ascertaining the current state of affairs in order to both predict future trends and understand the process that led to the present state.
Case Studies
GTFS-Realtime
Background History GTFS-realtime is the real-time complementary standard to GTFS, the General Transit Feed Specification, which contains static schedule information for a transit agency or collection of agencies. The history of GTFS-realtime is tightly coupled with that of GTFS. Portland's Tri-County Metropolitan Transportation District of Oregon, more commonly known as TriMet, worked with Google to originally develop GTFS. Bibiana McHugh is mentioned as having initiating conversations with Google, Yahoo, and Mapquest in a desire to make transit trip planning information as readily accessible
63

as driving directions on popular mapping services (46). Chris Harrelson, a Google employee, was already engaged in the integration of transit options to Google Maps. By December 2005, TriMet's schedule information was available on Google Maps as Google Transit (46).
A number of agencies followed TriMet's lead. Nearly a year later, Google announced that the company had added five more cities to Google Transit (47). A change proposal was later made in 2009, and shortly thereafter adopted, to rename the GTFS standard (it was originally known as the Google Transit Feed Specification) to more accurately capture its growing use in many other applications besides Google Maps (48). Indeed, the standard has since grown to be adopted by nearly 700 agencies worldwide .5 In the U.S., 272 transit agencies had adopted open data policies to provide their GTFS feeds to the public as of March 2013. Figure 4 shows this trajectory of growth and when Google decided to tackle the issue of providing real-time transit passenger information.
5 According to the website http://gtfs-data-exchange.com (accessed on November 7, 2013). This figure includes both official and unofficial feeds as well as some agencies that may have out-of-date feeds. Nevertheless, the scale of this figure is accurate.
64

Figure 4: Adoption of GTFS by U.S. transit agencies (49).
Once Google was in the business of providing scheduled transit information, the provision of real-time information followed a natural progression. In the summer of 2011, Google launched Live Transit Updates for Google Transit for Boston, Portland, San Diego, San Francisco, Madrid, and Turin (50). This service provides real-time updates on transit vehicle arrival times as well as service modifications/alerts within the Google Maps trip planning function.
The real-time arrival time updates for Live Transit Updates relies on a bulk-delivery data standard known as GTFS-realtime, which Google developed with the help of partner transit agencies listed above as well as a number of individuals involved in the development of applications for transit. The specification, in secret development for about a year before its release, was made open following its release. Thus, GTFSrealtime brought to real-time passenger information what it had done to static information
65

only a few years ago: introduced a robust open standard for moving data from agency and vendor coffers into the hands of third-party developers.
Scope Google developed GTFS-realtime in order for the company to consume real-time transit feeds in Google Transit. As such, the standard differs in two fundamental ways from TCIP and SIRI, the other two standards considered in this research, which were developed primarily for intra-agency interoperability and communication. First, whereas TCIP and SIRI each allow for payloads of data at the transit vehicle level, GTFS-realtime provides a data payload only for an entire fleet of vehicles, what is often referred to as a "snapshot" of the transit system. While some agencies might have hundreds or even thousands of active vehicles at any given moment, GTFS-realtime is able to efficiently handle these data because it utilizes the lightweight Protocol Buffer data structure up to 10 times smaller and up to 100 times faster than XML serialized data (51). This model differs from utilizing a transactional application programming interface (API) such as the representational state transfer (REST) model that many agencies choose to publish and SIRI has recently adopted as a transport architecture. These transactional models allow for a more active conversation between interfaces. For example, a client-based web application may make transactional requests to an API for the next real-time arrivals for a specific stop (the next five buses to arrive at 5th St and Main St). The second fundamental way GTFS-realtime differs from the others is that it operates on a strictly one-way communication model. That is, an agency publishes GTFS-realtime for external bulk consumption. TCIP and SIRI offer more capabilities for integrating real-time passenger information with operations. For example, TCIP was developed with the architecture of an entire transit agency in mind. TCIP allows for operational need to connect, for example, a bus AVL system to other on-board
66

equipment. Similarly, SIRI allows buses to communicate with one another to, for example, ensure that a timed transfer is made smoothly by informing Bus B to wait for the passengers of Bus A if Bus A is running late.
Although these models may differ fundamentally, the primary concern of this research is the delivery of real-time information on stop arrivals/departures, vehicle locations, and service alerts. All three standards perform this function, whether they function at the junction between bus and agency server, agency server and agency web/sign interface, or agency server and third-party interfaces. The open data paradigm has shifted many progressive agencies from keeping data within intra-agency networks to sharing these data outside agency walls. Whether agencies commit to a fully open or semi-open model, the need for an effective data standard for real-time passenger information remains.
Technical Documentation The documentation for GTFS-realtime (52) provides an overview of the standard, description and examples of the feed types, and a complete reference of the specification. The standard has categories for three types of real-time information: Trip updates delays, cancellations, changed routes Service alerts stop moved, unforeseen events affecting a station, route or the entire network Vehicle positions information about the vehicles including location and congestion level (52). These categories provide for most, if not all, of the crucial information about transit service that passengers might be interested in. Certainly, there are more complex pieces of real-time information that are left unaccounted for here, such as information about connections/transfers between routes or detailed data structures about transit facilities. The technical specifications for SIRI, discussed below, capture much more of
67

this type of information and allow for more transactional data exchange models. However, the bulk exchange model for GTFS-realtime requires the specification to be somewhat more minimal than it might otherwise be. This does, however, help the standard to maintain a limited scope and agencies to achieve implementations more easily.
Development Institutional Involvement The primary institutions involved in the development of GTFS-realtime are Google and the original six transit agencies who participated in the closed development process. Since then, the specification has been adopted by a few more agencies (although the precise number is difficult to come by). Google staff work actively to coordinate with agencies on bringing them onto Google Maps and, by extension, onto the GTFS specification. Evolution The history of institutional involvement for GTFS seems to have been instructive for Google with its foray into real-time data. The company developed GTFS with the benefit of transit industry expertise from a single agency. When the specification was released publicly, there were initially a number of changes proposed and adopted almost immediately. It is likely that Google revised its development strategy and institutional involvement to include additional partners partly because of this experience. Another possible explanation for this change in institutional involvement is that the company wanted to expand its reach for bringing the standard around the globe by releasing Live Updates for Google Transit with an international scope. Regardless of the reason, the development of GTFS-realtime included a broader group of stakeholder institutions, which has likely contributed to a decrease in post-release changes to the standard (see Figure 5).

Figure 5: Number of documented changes for GTFS vs. GTFS-realtime (53, 54). (The chart plots the number of documented changes against months after initial release for each specification.)

Another crucial piece of the evolution of GTFS-realtime is the growth in "repeaters" that exist for the standard, or small applications that convert a different specification to GTFS-realtime. Repeaters allow agencies that have real-time passenger information in one format to gain the benefits of an open standard like GTFS-realtime. Currently, the known repeaters for GTFS-realtime were developed for use in OneBusAway, the open source suite of tools for delivering passenger information. The repeaters include support for the NextBus API, SIRI (Vehicle Monitoring and Situation Exchange), and Orbital OrbCad AVL formats (55). While this band-aid solution to interoperability is not perfect (especially for a proprietary format that could change at a moment's notice), and it may be impractical to consider for every possible proprietary closed format, it does begin to expand the sphere of influence of GTFS-realtime and, importantly, allows for easy integration with the SIRI open standard.
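As a rough illustration of what a repeater does, the sketch below translates a single generic AVL vehicle-position record into a GTFS-realtime FeedMessage using the same protocol buffer bindings. The input record and its field names are hypothetical; this is a simplified sketch of the translation step only, not the OneBusAway repeater code.

    # A minimal sketch of the "repeater" idea: translate one AVL record into GTFS-realtime.
    # The input record and its field names are hypothetical.
    import time
    from google.transit import gtfs_realtime_pb2

    def to_gtfs_realtime(avl_record):
        feed = gtfs_realtime_pb2.FeedMessage()
        feed.header.gtfs_realtime_version = "2.0"
        feed.header.incrementality = gtfs_realtime_pb2.FeedHeader.FULL_DATASET
        feed.header.timestamp = int(time.time())

        entity = feed.entity.add()
        entity.id = avl_record["vehicle_id"]
        vp = entity.vehicle
        vp.vehicle.id = avl_record["vehicle_id"]
        vp.trip.trip_id = avl_record["trip_id"]   # must match the static GTFS trip_id
        vp.position.latitude = avl_record["lat"]
        vp.position.longitude = avl_record["lon"]
        vp.timestamp = avl_record["unix_time"]
        return feed

    # Example usage with a made-up record:
    feed = to_gtfs_realtime({"vehicle_id": "1402", "trip_id": "T123",
                             "lat": 33.75, "lon": -84.39, "unix_time": 1400000000})
    print(feed.SerializeToString()[:20])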

Openness
GTFS-realtime is notable for the openness and transparency that govern it today. Nevertheless, the standard was originally developed in the product development shroud of secrecy for which Google is renowned (or notorious, depending on the perspective). Original participants in the development of the specification signed nondisclosure agreements in order to keep the details of the project closed. This is truly the antithesis of openness; however, a participant in the process notes that in the realm of standards development, the barriers to initial development and publication are high. This closed process allowed the participants to quickly develop the specification and deploy implementations in the absence of painstaking and meticulous debates with a wide array of stakeholders. With the release of the standard in 2011, Google removed the barriers to widespread participation. Open communication is maintained on a publicly-accessible mailing list (https://groups.google.com/forum/#!forum/gtfs-realtime). Change proposals, technical issues, and clarifications are all discussed on this forum by an active community of agency staff, Google staff, and transit application developers/enthusiasts. The general policy on changes to the standard is carried over from the policy governing GTFS. That is, in order for a change to the standard to be considered, it must see interest both from application developers and from transit agencies. The policy is intended to keep the standard from becoming bloated with superfluous data and to keep it relevant for all stakeholders. As for intellectual property rights, the specification is published under the permissive Creative Commons Attribution 3.0 License (56) and all code samples are available under the Apache 2.0 License (57).
Success
As mentioned previously, the static GTFS specification has been adopted by hundreds of transit agencies around the United States and around the world. Because
the GTFS-realtime feed works in conjunction with GTFS, it stands to reason that many agencies will invest in making their schedule information work seamlessly with their real-time information. While this sounds simple on paper, in reality, many agencies that have AVL and scheduling systems will have different vendors providing each system. Applications that deliver real-time information along with scheduled information (e.g., to provide information on route geometries and stop locations along with real-time arrival times) require the reconciliation of object identifiers in schedule and real-time systems. In other words, trip identifiers or route identifiers in the schedule must match (or be translated to match) those identifiers in AVL systems. Nevertheless, GTFS and GTFS-realtime appear to be in a strong position to serve that role, especially thanks to the support of real-time "repeaters" that translate the NextBus API specification, SIRI, and others into GTFS-realtime (58).
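A hedged sketch of that reconciliation step appears below: it builds the set of trip identifiers from the static GTFS trips.txt file and applies a vendor-supplied cross-reference table to map AVL trip identifiers onto GTFS trip_ids. The file names and the existence of a cross-reference file are assumptions for illustration; the actual mapping logic varies by vendor and agency.

    # A minimal sketch, assuming the AVL system uses its own trip identifiers and that a
    # cross-reference file (avl_to_gtfs.csv) is available. All file names are hypothetical.
    import csv

    def load_gtfs_trip_ids(trips_path="gtfs/trips.txt"):
        with open(trips_path, newline="") as f:
            return {row["trip_id"] for row in csv.DictReader(f)}

    def load_crosswalk(xref_path="avl_to_gtfs.csv"):
        # Two columns assumed: avl_trip_id, gtfs_trip_id
        with open(xref_path, newline="") as f:
            return {row["avl_trip_id"]: row["gtfs_trip_id"] for row in csv.DictReader(f)}

    def reconcile(avl_trip_id, crosswalk, gtfs_trip_ids):
        gtfs_trip_id = crosswalk.get(avl_trip_id)
        if gtfs_trip_id is None or gtfs_trip_id not in gtfs_trip_ids:
            return None  # unmatched trips must be flagged and resolved manually
        return gtfs_trip_id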
TCIP
Background
History
The development of Transit Communication Interface Profiles (TCIP) was initiated
by the USDOT's Intelligent Transportation Systems Joint Program Office (ITS JPO) in November 1996. Industry professionals came to the realization that in order for transit technology systems to move forward in a progressive and constructive way, standards needed to be an essential part of the conversation. The standard, funded by the ITS JPO and originally developed by the Institute of Transportation Engineers (ITE), switched ownership to APTA in 2001 primarily because of APTA's stronger expertise in the transit industry (59). It was under APTA that the bulk of the standard was developed.

Scope
The primary goals of TCIP are to achieve intra- and inter-agency interoperability and to decrease the negative effects of vendor lock-in. These goals are in direct agreement with the federally-mandated concept of regional ITS architectures. However, another one of its goals, according to an APTA presentation from 2010, is to lead to interoperability "between an agency and external Information Service Providers" (60). This goal of interoperability with Information Service Providers suggests that the TCIP standard might cater to the recent growth of application developers that have latched on to the open data movement in order to provide information to transit customers. This is indeed an important goal, but it may be difficult for TCIP to fulfill simply because of the sheer flexibility and customization that the standard allows.6
Technical Documentation
The documentation of each version of TCIP (including the current version) is hosted on the APTA TCIP website in the form of zipped MS Word documents (61). The standard itself is expansive, providing XML-formatted schema for nearly every type of transit technology subsystem and business area imaginable, including Scheduling, Passenger Information, Onboard Systems, Common Public Transport, Control Center, Fare Collection, Spatial Referencing, and Transit Signal Priority (TSP) (62). Figure 6 shows a diagram of the expansive TCIP Model Architecture. The standard provides building blocks from these schema out of which systems engineers can build interfaces that are compatible with one another.
6 TCIP provides an expansive "menu" of options that can be specified for a given product/interface. For example, there may be 40 different fields (some of which may be required) for a certain message type. However, one vendor in compliance with TCIP may specify ten of these fields for its product, while another vendor specifies ten different fields. Both may be TCIP-compliant, but interoperability is not necessarily ensured. This is, of course, a concern with any flexible standard, but the breadth of TCIP makes it especially so.
Figure 6: Diagram of TCIP Model Architecture (60)

TCIP allows for the construction of system interfaces through a hierarchy of data
"elements" that compile into "frames" which compose "messages" that are passed between interfaces in "dialogs" or data exchanges. Figure 7shows a diagram of this hierarchical organization. This extremely flexible system allows for an immeasurable number of combinations and permutations for systems to communicate with one another. In practice, there may be need for only a few sets of standard messages to
send between, for example, a CAD-AVL system and Web-based trip planner. The developers of TCIP have accounted for this by making standard message sets available through TIRCE (TCIP Implementation Requirements and Capabilities Editor), an application that allows users to build custom message sets and dialogs.
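The element/frame/message/dialog hierarchy can be pictured with the conceptual sketch below. The class and field names are illustrative only and are not taken from the TCIP schema.

    # A conceptual sketch of TCIP's building-block hierarchy (elements -> frames ->
    # messages -> dialogs). Names and fields are illustrative, not the actual TCIP schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Element:            # smallest unit: a single typed data item
        name: str
        value: str

    @dataclass
    class Frame:              # a reusable grouping of elements
        name: str
        elements: List[Element] = field(default_factory=list)

    @dataclass
    class Message:            # a bundle of frames exchanged between interfaces
        name: str
        frames: List[Frame] = field(default_factory=list)

    @dataclass
    class Dialog:             # an ordered exchange of messages between two subsystems
        name: str
        messages: List[Message] = field(default_factory=list)

    # Example: a hypothetical arrival-prediction message passed from a CAD-AVL system
    # to a web-based trip planner as part of a passenger-information dialog.
    arrival = Message("ArrivalEstimate",
                      [Frame("StopPoint", [Element("stop_id", "1083")]),
                       Frame("Prediction", [Element("eta_seconds", "240")])])
    dialog = Dialog("PassengerInformationExchange", [arrival])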
Figure 7: Diagram of conceptual hierarchy for TCIP building blocks (60)

Development
Institutional Involvement
While the TCIP standard development process began under ITE, the standard
underwent the bulk of its development and refinement while under the direction of APTA. A series of technical working groups (TWGs) composed of a mix of transit agency staff and vendor representatives developed the definitions and schema for TCIP. A TWG
existed for each major business area with an additional one for Tools (TWG 4), for a total of 10 TWGs.
An examination of the Passenger Information TWG (TWG 2), for which real-time passenger information messages and elements are defined, shows the institutional makeup of those involved in the standard development process. Figure 8 shows the breakdown of institutional involvement in the Passenger Information TWG. The vendor category is comprised of consultants to APTA, technical staff, and managerial staff. The agency category is comprised of technical and managerial staff from transit agencies. The TWG category is made up of APTA staff.
Figure 8: Participants by sector in TCIP Passenger Information Technical Working Group (63)
From this chart, it is clear that vendors make up the largest group of institutions involved in the standard development process, with 27 representatives; agencies make up the second largest group, with eight representatives; and TWG staff and academia are the smallest groups, with one and two members, respectively. Although the number of representatives listed on a contact sheet for the TWG is a primitive means to begin to
understand the interplay and influence on the standard development process, in the absence of complete and organized minutes of past meetings, it offers a glimpse at how institutions were represented in this process. According to Lehr, there are many scenarios of strategic decision-making that occur within standardization committees. For example, new market entrants and entrepreneurs are more vulnerable to delays and so stable, incumbent firms may attempt to delay standardization outcomes (64). Nevertheless, this process necessarily incorporated vendor input because these firms often know many of the technical issues facing standardization firsthand.
Evolution
Most of the development work for TCIP was completed around 2006. The standard moved from active development to a five-year review cycle at that time. A comprehensive analysis of the changes made to TCIP is more difficult than for GTFS-realtime or SIRI (see next section). The TCIP documentation is extremely lengthy, and each version is contained within a series of Word documents. This document structure makes a comparison cumbersome at best and impossible at worst. The versions are, however, labeled according to software numbering conventions and number fifteen in total (from version 1 to the current version 4.0). The most noteworthy change for this research appears to have come in TCIP version 3.0.5.2, which was issued on March 1, 2012 (65). In version 3.0.5.2 of TCIP, a GTFS timetable importer was included in the standard. While prior to this version TCIP had made reference to a number of other industry-accepted standards, those standards had all been maintained by accredited SDOs. This is the first acknowledgement that, in some areas, de facto standards and specifications have an important role to play. Indeed, before GTFS, there were no de facto standards adopted widely enough to be worth including. However, it appears that when hundreds of transit agencies (large and small) began to move towards a specification,
APTA took notice and decided to adopt the specification (albeit only as an importer) into its transit standard family.
Openness
The standard development process for TCIP itself was open and transparent, allowing any interested party to be involved in the development or to comment on versions. APTA's standard development process is modeled after that of the American National Standards Institute (ANSI), a well-established voluntary consensus standards development organization whose membership comprises "more than 125,000 companies and 3.5 million professionals" (66). When it comes to transparency, though, there are some issues related to communication of information regarding the TCIP standard. On the one hand, there is a wealth of information available on the standard's website. Such information includes all previous versions of the standard, archived meeting notes, free support tools for working with the standard, TWG member lists and meeting attendee lists, a database of comments on the standard, and more. While the number of archived documents is impressive, the organization of the material is confusing. Just as the documentation for changes between versions is buried deep within large MS Word documents, so is the information contained within these archives. The content is searchable via a well-indexed search engine, but the organization of the website is poor and nearly all content is in the form of sizable MS Word documents that must be downloaded and parsed through.
Success
Measuring the success of TCIP by the number of implementations for real-time passenger information would suggest that the standard has achieved less than it truly has. There is no good indicator of how many agencies use TCIP to communicate real-time passenger information either within an agency or to a third party. The only
well-documented instance of TCIP used for real-time passenger information is the pilot project developed at LYNX (67), the Orlando-area system operated by the Central Florida Regional Transportation Authority. This implementation of TCIP, however, will likely be discontinued in the near future, according to the interview conducted for the TCIP case study. This is not to say that the standard is not used in other business areas and for related purposes. There have been a number of other pilot projects around the country, including at King County Metro, the Maryland Transit Administration, and the Chicago Transit Authority. In fact, the New York City Metropolitan Transportation Authority utilized modified parts of the standard for a recent project7 to deliver real-time information to customers (68). Additionally, a recent Transit Cooperative Research Program (TCRP) synthesis on electronic passenger information signage in transit reported that six other agencies in the U.S. (not counting NYC MTA) utilized TCIP for real-time passenger information (69).
While there are a number of projects that draw on TCIP, the standard is far from achieving its goals of providing intra- or inter-agency interoperability. While these goals might have been achieved in a few cases around the country, TCIP has seen nowhere near the adoption rate of GTFS. Based on the integral relationship between GTFS and GTFS-realtime and other factors discussed in the GTFS case study, this author conjectures that the same dominance will hold true in time for GTFS-realtime. While TCIP may continue to play an important role in ensuring interoperability between subsystems beyond real-time passenger information and in enabling the pursuit of custom solutions (such as with NYC MTA), it is likely that it will be dwarfed by GTFS-realtime as it continues to grow into new markets.
7 The real-time information system is known as MTA BusTime (http://bustime.mta.info/).

SIRI
Background
History
Developers of the first version of the Service Interface for Real-time Information (SIRI) began working on the standard between 2004 and 2005, and the standard officially emerged as a technical specification under the European Committee for Standardization (CEN) in October 2006 (70). The standard is the result of the collaborative efforts of "equipment suppliers, transport authorities, transport operators and transport consultants from eight European countries" (71), including the Czech Republic, Germany, Denmark, France, Norway, Sweden, and the United Kingdom. SIRI draws heavily from France's TransModel for its conceptual framework, and the UK's Real-time Transport Interest Group (RTIG), Germany's Verband Deutscher Verkehrsunternehmen (VDV), and the EU Trident project provided valuable starting points for the development of the standard.
Scope
The development of SIRI brought together a number of national transit data standardization programs in order to more effectively address standardization at a broader scale. According to SIRI documents, the primary goals for developing the SIRI standard were to give purchasers of real-time systems "a straightforward, watertight way of procuring different components of a public transport information system from different suppliers" and to provide suppliers of such systems "a Europe wide market, ensuring that their systems can be used in every country without needing to implement different interface standards in each region" (71). Thus, the benefits were perceived to be directly attributable back to purchasers (or transit agencies) and suppliers (ITS vendors). An added benefit was the opportunity to
update existing standards (whether at the national level or for proprietary systems) to account for emerging technologies (71). So, whereas in the U.S., TCIP was the first standardization attempt (outside of proprietary specifications), SIRI was a "next generation" standard for a few nations that had already implemented national standards.
Technical Documentation
Technical documentation for SIRI is available in English on the SIRI website in the form of a white paper (71) and, far more extensively, as a handbook (72). As with TCIP, SIRI extends far beyond the provision of real-time passenger information (though perhaps not quite so far as TCIP). Among its ten services, or functional data categories, shown below, those in bold italics are the ones typically considered under the umbrella of real-time passenger information:
- Production Timetable (PT): provides information on expected (or scheduled) transit service for a day in the near future
- Estimated Timetable (ET): provides information on real-time deviations for the current day, or only those trips currently in operation
- Stop Timetable (ST) and Stop Monitoring (SM): give scheduled information (ST) and real-time deviations (SM) at the stop level
- Vehicle Monitoring (VM): sends real-time information on the location of a transit vehicle
- Connection Timetable (CT) and Connection Monitoring (CM): give scheduled information (CT) and real-time deviations (CM) to inform a departing vehicle of the need to wait for an arriving vehicle at a stop or station serving multiple routes
- General Message (GM): exchanges basic text messages between entities
- Facilities Management (FM): provides information on the status of facilities, such as elevators or escalators that are out of order
- Situation Exchange (SX): exchanges structured messages between entities (68)
While the Estimated Timetable, Connection Monitoring, and Facilities Management services all provide real-time information that may be of value to operations and even some customer use cases, they are not necessarily within the scope of this research. Stop Monitoring and Vehicle Monitoring, however, fall well within the definition of providing schedule deviation/adherence and vehicle locations.
Development
Institutional Involvement
SIRI is the result of collaboration between a number of firms and governments throughout the European Union. Working group meetings for the standard are attended by representatives from each member country of CEN, although historically the most participation and interest have come from Germany, France, the UK, and the Scandinavian countries. As mentioned above, a few national standards already existed from which SIRI draws a great deal. Because these standards already existed, some interesting accommodations were made in order to satisfy the interests vested in these preexisting standards. For example, so that previous implementations of the German VDV standard would not be broken, two separate XSDs (XML schema definitions)--a nested and a flat version--were maintained for some time. This is a peculiar example of how institutional and political values can outweigh the purely technical in standard development.
Evolution
Like GTFS and GTFS-realtime, a well-organized set of versions and their respective changes is maintained on the SIRI website (73, 74). A list of all changes made since version 1.2 (April 7, 2007) is maintained there, along with--beginning with version 2.0--the country code of who initiated each change (e.g., Germany (DE), the United Kingdom (UK), France (FR), etc.). The SIRI standard began as a CEN technical specification, a
"normative document ... that would not gather enough as to allow agreement on a European Standard... or for providing specifications in experimental circumstances and/or evolving technologies" (75).
The most recent version of SIRI (2.0) was drafted into a proposal in order to become the more robust and rigorous European Standard (EN), a cornerstone of the concept of the Single European Market to facilitate effective trade both within and beyond Europe (76). This continued work and development on SIRI signals the standard's continued importance in European markets and even in the US, where the NYC MTA heavily incorporated the standard into its MTA BusTime project, mentioned in the TCIP case study above.
Openness
Much like TCIP, SIRI is developed within the confines of a formal, accredited SDO, the European Committee for Standardization. As such, the standard development process is open and consensus-based, relying on a set of protocols that have been established for the review, adoption, and maintenance of many standards under CEN. Nevertheless, there are components of the SIRI standard that present barriers to open participation and implementation of the standard. For one, meetings for the standard are open only to participants from national committee members. Others may participate as observers, but only on an invitational basis. Further, while the license restricting the use of the standard only requires that copyright holders be acknowledged, formal standard documentation must be purchased via the national member sites (e.g., via VDV's website),8 and reproduction of any part of supporting standards produced by non-members is prohibited without permission from these copyright holders. These barriers to implementation and participation are minor, but they remain impediments to becoming a fully open standard.
8 Purchase of the SIRI specification was confirmed by an interview with a participant in the SIRI standards development process. While there exist sites that host what appears to be the complete SIRI documentation free of charge (http://www.siri.org.uk/), the researcher could not locate the national member sites where documentation or schema were available for purchase.
Success
The continued and active development on SIRI points to its success as a standard, especially in European markets. However, the standard would not be under consideration here had it not seen some interest and adoption in the U.S. market. NYC MTA is one of the agencies that continues to push the evolution and development of SIRI, having adopted it for MTA BusTime and pushing to add JSON (JavaScript Object Notation, a lightweight, web-ready alternative to XML) formatting and modern web service transport methods to the standard (77). At least five other U.S. transit agencies reported usage of SIRI in a recent TCRP Synthesis on the use of electronic passenger information signage in transit (69). Compared with the usage of either TCIP or GTFS-realtime, this is certainly a strong showing, especially given that this standard was imported from the European Union.
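To give a sense of what a JSON rendering of SIRI might look like, the sketch below parses a payload shaped loosely like a SIRI Stop Monitoring delivery. The element names follow SIRI's general vocabulary, but the exact structure varies by SIRI version, profile, and implementation, so this should be read as illustrative rather than normative.

    # A minimal sketch, assuming a JSON payload loosely modeled on SIRI Stop Monitoring.
    # The structure and field names are illustrative, not a normative SIRI schema.
    import json

    sample = json.loads("""
    {"Siri": {"ServiceDelivery": {"StopMonitoringDelivery": [
      {"MonitoredStopVisit": [
        {"MonitoredVehicleJourney": {
            "LineRef": "Route10",
            "MonitoredCall": {"StopPointRef": "1083",
                              "ExpectedArrivalTime": "2014-06-01T12:05:00-04:00"}
        }}
      ]}
    ]}}}
    """)

    for delivery in sample["Siri"]["ServiceDelivery"]["StopMonitoringDelivery"]:
        for visit in delivery["MonitoredStopVisit"]:
            journey = visit["MonitoredVehicleJourney"]
            call = journey["MonitoredCall"]
            print(journey["LineRef"], call["StopPointRef"], call["ExpectedArrivalTime"])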
Comparison of Standards and Standards Development Processes
Assessment of Openness
The framework used here to assess the openness of the real-time standards
considered in the case studies draws heavily from Krechmer's ten requirements of open standards. While the categories were interpreted slightly differently than his original descriptions to account for some of the idiosyncrasies of the requirements and to apply them more directly to this case, the open standard requirements remain largely unchanged.
The three case study standards (GTFS-realtime, TCIP, and SIRI) were each given a score for the ten requirements. Table 4 shows the scoring of these categories broken
out. The scoring methodology was taken directly from Krechmer, with a few modifications for this specific context.
The three open standards are considered alongside the NextBus API specification solely to compare them with a closed specification from the industry. While TCIP and SIRI perform nearly identically in every category, GTFS-realtime earns higher marks in open meetings, open intellectual property rights (IPR), open change (a direct reflection of its stronger performance in open meetings), and open documents. NextBus, on the other hand, being a closed specification, shows a low openness index, although it does earn a few marks in the open world, open documents, and on-going support categories.
Table 4: Openness index scores for real-time transit passenger information standards
The results from the above table suggest that GTFS-realtime is a more open standard than either TCIP or SIRI, which are both managed through accredited SDOs. What explains this finding? Krechmer defines open standards as understood through the lens of open source software. This is a very democratic and distributed perspective that values not just consensus-based processes, but also the openness that is ascribed to fully open meetings that are held and recorded for posterity online. It also depends on clear, complete, and available documentation. It is in these areas where GTFS-realtime excels most. Any discussion of the future of the standard happens online in an open
forum. The IPR licensing is clearly stated and defined in the GTFS-realtime documentation (whereas with the others it is somewhat obscure). The documentation is fully available online and presented in a coherent, concise way.
Certainly, there may come a time when Google decides to move away from providing transit information (though this appears unlikely given its investment in the product worldwide). Yet because GTFS-realtime is so well documented and the content is clearly licensed, GTFS-realtime could easily spin off and continue to develop if the adoption and interest were great enough. It is for these reasons that GTFS-realtime scored higher on the openness index and perhaps why the standard may continue to flourish.
Implementations
Each of the case studies examined the success of implementations for each of the three
standards. According to data compiled from multiple sources, there appear to be similar levels of adoption for the standards (69, 78). Figure 9 below shows data from the 2013 APTA Survey on real-time information provision, indicating that the closed NextBus specification seems to hold the largest market share9. Even comparing with data from TCRP which suggests that TCIP has seven U.S. implementers and that SIRI has six, this observation holds true.
9 It is also worth noting that, although the survey indicates that 12 APTA member agencies have implemented NextBus, the NextBus website (https://www.nextbus.com/agencies/ accessed on August 2, 2013) reports that approximately 80 U.S. agencies have NextBus real-time systems (this includes APTA member agencies, some of which are duplicated in the list, as well as small university or circulator systems). This suggests remarkable rates of adoption for NextBus and is important to consider, yet this analysis will take into account only those agencies within the scope of this research, i.e. APTA member transit agencies.

Figure 9: Adoption of real-time data standards (78)
An important caveat to the standards' levels of adoption is how those adoption levels have grown over time. This is, of course, a rough and imprecise measure, because there are a variety of complex and difficult-to-measure factors that influence standard adoption (network effects, lock-in, etc.). Nonetheless, Figure 9 gives a picture of how quickly these different standards have seen adoption since their inception. Table 5 shows the average number of agencies that have adopted each standard per year. The year of inception is based upon the date that documentation was first made available. For GTFS-realtime and SIRI, there is strong confidence that the year of inception is accurate. However, for NextBus and TCIP there may be instances where implementations were in place before the year shown.

Table 5: Average adoption rate (agencies per year) for real-time standards (69, 78)
The above table shows that, even though it is relatively new, GTFS-realtime has the second highest number of agencies with implementations and the highest adoption rate (average agencies per year). This finding is consistent with reasonable expectations for GTFS-realtime based on its integral relationship to GTFS, which hundreds of agencies have adopted in a period of approximately 7 years (an estimated adoption rate of approximately 40 agencies per year). Assuming that Google continues to utilize GTFS-realtime for its products and the standard review process remains open to full public participation, it is likely that this adoption rate will continue to increase.
Recommendations
Moving Ahead for Innovation in the 21st Century
Effective real-time passenger information systems are crucial to satisfying customers' expectations and demands. Transit riders are adopting smartphones and still waiting for the bus. Budget-constrained agencies can deliver this information with relatively little infrastructure by making use of often pre-existing AVL systems and pursuing the open data policies already adopted by President Obama's administration. There are certainly costs associated with this approach, especially if AVL data are contained within a proprietary format. Nevertheless, the open standards that have developed over the past couple of decades allow a path forward to break vendor lock-in and reduce switching costs in the future.

While Moving Ahead for Progress in the 21st Century (MAP-21) addresses ITS in general ways and allocates some funding for ITS (79), there are some opportunities to address transportation technology and policy in the next-cycle authorization bill. MAP-21 funding ends with FY 2014, so the next authorization bill will likely be introduced sometime before the current fiscal year ends. The President's Executive Order (EO) on open data for federal agencies offers an opportunity for the USDOT, specifically the FTA, to couple ITS improvements at the local level with open data initiatives. The framework to pursue these initiatives is in place--thanks to progressive agencies such as TriMet and others--should Congress find that such a policy is in the nation's best interest. Open data, besides being a force for government transparency and cost effectiveness, provides sparks for innovation in both the public and private sectors.
One major criticism of TCIP in this report is that documentation on the standards development process and the standard itself is difficult to consume. As mentioned above, understanding the changes between versions of the standard is difficult because there is no list of versions and their respective changes over time. If this is difficult for the researcher, it is almost certainly difficult for any organization interested in implementing the standard. Therefore, another recommendation that follows the aim of transparency in the open data executive order is to substantially reorganize this content, improving not only the comprehensibility of the information therein but also the transparency of the project generally.
Predictions for Continued Trends
Based on the historical success of GTFS and the indirect network effects that bundle the static specification with its real-time component, there will likely be widespread adoption of GTFS-realtime in the near future. The 2013 survey on real-time arrival information by APTA (78) and TCRP Synthesis 104 on electronic signage by
Schweiger (69) mentioned above both capture a great deal of valuable information about the current market for real-time information.
One point drawn from the market analysis provided by the APTA survey is that there is immense demand among agencies to share real-time passenger information with their customers. Currently, only 37% of agencies are providing real-time information via an API or a web or mobile application. Of the agencies without AVL systems, the vast majority (92%) are interested in installing AVL on their vehicles. Even among agencies that already have AVL systems, 47% currently do not provide customer-facing real-time arrival times.
The benefits of public-facing (especially mobile) information systems have been well established (see Chapter 1), so it is likely that the agencies with AVL but without public-facing systems will soon move forward with a public-facing solution. In fact, Figure 10 shows the reasons agencies are not providing arrival times to the public. While 8% of these agencies have projects in progress and a handful of others have organizational or technical restrictions, over 20% are simply constrained by technical ability or funding. As open standards diffuse into the market, economic theory suggests that the cost of implementation will decrease, making feasible solutions a realistic option for more and more agencies.

Figure 10: Reasons given by transit agencies for not providing public arrival times (78)
By cross-referencing data sources that capture the usage of real-time transit passenger information standards, it appears that SIRI, GTFS-realtime, and TCIP all have a similar number of implementations in the U.S. However, the adoption rate for GTFS-realtime far outpaces that of either SIRI or TCIP (and even that of NextBus, a popular proprietary solution). Anecdotal evidence from open source repository hosting services such as GitHub (https://github.com) suggests that software development is most active around GTFS-realtime. While this should not serve as concrete evidence of adoption or even of transit agency interest, it does raise the question of how open movements (open standards, open data, and open source) overlap and reinforce one another and how this might apply to the case of real-time transit passenger information.
Federal Policy Recommendations
To date, there has been little visible response from the federal government to the development of alternative de facto standards for passenger information such as GTFS and GTFS-realtime. True, GTFS was incorporated into TCIP in version 3.0.5.2 of the
standard that was issued on March 1, 2012. However, it is unclear how effective the inclusion of this GTFS timetable importer has been for the proliferation of TCIP and, consequently, how effective such action would be for including translators or importers between GTFS-realtime and TCIP or SIRI and TCIP. It seems that the federal government could take one of a few alternative paths of engagement to respond to the likely proliferation of GTFS-realtime or the possible proliferation of SIRI in the United States. The paths listed here are as follows:
1. Achieve Interoperability: work to develop translators or importers for de facto standards to keep TCIP relevant (as with static GTFS). In 2012, APTA released a new version of TCIP that included the functionality to import static GTFS "timetables" into TCIP-formatted messages. This could be an approach for keeping TCIP interoperable with real-time passenger information provided by agencies with GTFS-realtime, SIRI, or any other open standard.
This path is not recommended by this researcher because the cost of the approach is shouldered by the public sector rather than developers or vendors that otherwise might be incentivized to shoulder the development work themselves.
2. Provide Guidance to or Incent Vendors/Agencies: shift focus to providing guidance on the development of open systems and the use of open standards where real-time passenger information is concerned. Incentivizing vendors or agencies to provide open standards is listed as one of the FTA strategies to study in a 2011 FTA report prepared by the Volpe Center (11). The status of this program is currently unknown. However, the approach listed in that document promoted incentivizing only the adoption of TCIP. A more flexible approach would be to incentivize the adoption of any one of a set of open standards (perhaps any one of the three standards studied in this research). Such an action would (a) encourage a flexibility of approaches that would all be open, (b) allow market forces to shape an efficient outcome, and (c) possibly spur the market
of vendors or civic hackers to further develop translators/repeaters to convert from one standard to the next.
This path is recommended because it strikes a balance between cost effectiveness and ensuring the promulgation and (possibly) eventual interoperability of all open standards concerned. In this approach, there may be costs involved with the incentives provided (whether they are financial or not), but these costs are likely to be less than those of Approach 1 and have the added benefit of engaging all stakeholders actively. Additionally, this path provides opportunities for the TCIP standard to be adopted for other functional areas within transit agencies. If GTFS-realtime in fact becomes a de facto standard for real-time passenger information (just as GTFS has already become), agencies may find greater benefit in TCIP if the standard is compatible with GTFS-realtime.
3. Follow Existing Path (Do Nothing): do not respond to the high adoption of real-time passenger information standards; let the market manage the adoption of standards and rely on regional ITS architectures to guide this process. This path is not recommended because it ignores the clear movement of agencies toward adopting open standards, whether TCIP or not. This policy response does not work to effect change or assist agencies or vendors that are interested in supporting open standards and, in turn, promoting the goals of regional ITS architectures: intra- and inter-agency interoperability as well as interoperability with emerging technologies and systems.
Conclusions
This research has addressed the history and background of federal ITS policy and the role of real-time transit passenger information. A comprehensive literature review of standard setting theory has helped to frame the multiple case study approach to understanding and reviewing the standard development processes for and institutional
influences on GTFS-realtime, TCIP, and SIRI--the major open standards used in the U.S. for the delivery of real-time transit passenger information. Among the impacts analyzed here is the effect that the standard development processes have had on the adoption and diffusion of the standards, or the "success" of each standard. Federal policy recommendations on the role of government in this area of growing importance are provided here as well.
Key Findings
A crucial finding of this research is that standards that open themselves to participatory and democratic processes (characterized by clear documentation, open communication--e.g., via mailing list--and rough consensus) may begin to play a larger role in technology and society. This has been demonstrated by Krechmer and others (24, 25) with the influential role that the Internet Engineering Task Force (IETF) has played in building standards for the Internet, one of the most important technology systems for today's economy and society--a process which is not without criticism or issues of its own (80).
These case studies also suggest that early, on-the-ground implementations of standards are critical to achieving adoption. Much like the IETF, GTFS-realtime began as an invitation-only group in order to get rough installations of the standard implemented and working before opening the standard to the general public. This model cannot produce the kind of complex and comprehensive standards that may result from a committee process, but perhaps the committee approach is not always the most effective way to see standardization occur in an industry--unless broad consensus on implementation of the standard is reached, as with HDTV in the U.S.
As a strategy to achieve interoperability in this important area of transit ITS, the researcher recommends an incentive strategy for the federal government to promulgate
open standards for real-time transit passenger information. By incenting vendors and agencies to adopt any open standard (not just TCIP), the FTA would (a) encourage a flexibility of approaches that would all be open, (b) allow market forces to shape an efficient outcome, and (c) possibly spur the market of vendors or civic hackers to further develop translators/repeaters to convert from one standard to the next. Such an approach would be cost-effective, engage the broadening base of stakeholders, and embrace the language supporting open and machine-readable government information in President Obama's Executive Order 13642.
Future Work
Future work should include a comprehensive and systematic survey of transit operators, vendors, and the emerging group of contributors to transit web and mobile information systems. In addition to confirming the exact interfaces and standards implemented (in past surveys, responses sometimes indicate contradictory or confusing results), the survey should quantify perceptions and attitudes about open and proprietary standards. Commendably, APTA has begun to do this with their 2013 survey (see Figure 11), yet a cross-sectional look at not just agencies, but also vendors and other contributors, will help to clarify a complete vision of the state of standards development and adoption for real-time transit passenger information.

Figure 11: Issues agencies have with adoption of open standards for real-time data (78)
This proposed survey could tap the members of mailing lists maintained on Google Groups dedicated to the discussion of these specific standards (such groups currently exist for GTFS-realtime and SIRI) and the development of transit applications generally. It would be instructive, too, to revisit the vendor perspectives on open standards explored by Hickman in 1998 (38). While this research considered only APTA member transit agencies, expanding the scope to all transit operators in the region (including small circulators and university systems) would help to clarify the overall picture of perspectives on open standards.
Another future research area that may already be underway at FTA is to understand what kind of incentive structure would best spur agencies and vendors to adopt open standards. Currently, the research scope for agency and vendor incentives at FTA only allows for TCIP; however, it is crucial that other open standards for real-time transit passenger information be recognized as integral pieces of a larger puzzle. The
comprehensive survey work described above would help to clarify the type of incentives needed to move the industry toward open standards.
While such research would be valuable to understanding motives and market forces currently in play, the next few years of standardization may obviate the need for such research. As open standards spread in the United States and the demand for real-time transit passenger information grows stronger, the industry may reach the tipping point of de facto standardization, enabling an efficient and effective marketplace for both purchasers and suppliers of real-time systems. The adoption of a standard by an industry and even a single agency is a complex phenomenon, full of many difficult to measure externalities. However, the open standards marketplace and the standards themselves can be made more efficient and effective through greater transparency and the further democratization of the standards development process.

Chapter 4: Traffic Management Centers and Third-Party Data
(James Wong, Bingling Zhang, Dr. Kari Watkins, and Dr. Hans Klein)
Literature Review
Traffic Engineering
Traffic Flow Fundamentals
Traffic flow theory is largely described by what is known as the fundamental
equation, developed by Bruce Greenshields in the 1930s, which describes traffic speed on a facility as a function of that facility's density and carrying capacity. The image in Figure 12 shows the three relationships between speed and density, speed and flow, and flow and density. The top-right quadrant describes traffic flow as a function of speed; as speed on a highway increases, more vehicles can travel on it (increasing flow), but at a certain point the number of vehicles exceeds the capacity of the highway, leading to congestion (decreasing flow). In the top-left quadrant, the relationship is between vehicle density and speed. As density increases and more vehicles are on a segment of highway, the speed at which they travel decreases. The third, and most abstract, of the relationships is in the bottom-left quadrant, which shows that traffic flow and density are related in that increasing density will lead to increasing traffic flow, until a point where the density overwhelms the facility and traffic flow begins to decrease. Consider stopped, bumper-to-bumper traffic: density is very high, speed is very low, and traffic flow is nearly zero. At the other extreme, if only two cars are on a highway, they will travel at very high speed with extremely low density, but the flow is similarly very low because there are so few vehicles.
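Stated compactly (and noting that Greenshields' formulation assumes a linear speed-density relationship), these fundamental relationships can be written as follows, where q is flow, k is density, v is speed, v_f is free-flow speed, and k_j is jam density:

    q = k \, v
    v = v_f \left( 1 - \frac{k}{k_j} \right)        % Greenshields linear speed-density model
    q = v_f \, k \left( 1 - \frac{k}{k_j} \right)   % resulting parabolic flow-density relationship

Under this linear model, flow is maximized when k = k_j / 2, giving q_max = v_f k_j / 4; beyond that density, additional vehicles reduce flow, which corresponds to the congested branch described above.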

Figure 12: Relationships among traffic speed, flow, and density (81)

While the parameters that shape these relationships have been the subject of an
entire academic field, the fundamental relationship is foundational to the in-depth study of traffic flow.
Federal Legislation for ITS and Traffic Management
Federal policy and legislation have been important to the growth and development of traffic monitoring in its modern form. Traffic management centers and ITS infrastructure grew out of two decades of federal policy that supported Intelligent Transportation Systems (ITS), expanded real-time traveler information, and encouraged operational improvements in lieu of capacity-adding projects. This section summarizes the legislative elements pertinent to these policies.
In 2005, the United States passed the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). One of the programs listed under Congestion Relief was the Real-Time System Management Information Program. Under this program, the Federal Highway Administration (FHWA) would establish a system for all states to use that gave them the capability to monitor and
share real-time information on traffic and travel conditions for major highways in the country (82). The legislation outlined five specific conditions to monitor, including road and lane closures, adverse weather conditions, congestion, travel times in congested metropolitan areas, and transit service disruptions in metropolitan areas (83).
In November 2009, the U.S. Government Accountability Office (GAO) produced a report that described an evolving technological landscape for the collection and dissemination of real-time traffic information beyond traditional methods like inductive loop detection. The GAO found that the future of real-time traffic information relied on partnerships with the private sector (84).
The 2010 rule on the Real-Time System Management Information Program set 10-minute reporting standards for traffic incidents in metropolitan areas and 20-minute standards in non-metropolitan areas. For all information dissemination, the rule requires 85 percent accuracy with 90 percent availability; it specifically excludes any mention of coverage. The rule also allows for states to use funds to enter into agreements with private data collection companies to access and share real-time data (85).
The latest legislation, Moving Ahead for Progress in the 21st Century (MAP-21), was passed in June 2012 and represents a significant shift in federal policy towards performance management. That policy foundation is addressed with increased guidance for traffic congestion monitoring. Among the key elements that pertain to this discussion is language that gives US DOT the ability to address data standards: "the Secretary shall establish the data elements that are necessary to collect and maintain standardized data to carry out a performance-based approach." (86) During the current rule-making process, it is possible that the next federal rules would mandate certain data elements that necessarily require infrastructure-based traffic monitoring.

Brief History of Real-Time Traveler Information and Traffic Management
Real-time traffic management has evolved rapidly over the past two decades.
According to an NCHRP report, traveler information was disseminated primarily through commercial television, radio broadcasts, and newspapers prior to the introduction of the internet in the mid-1990s; while changeable message signs and highway advisory radio technologies were available, few were utilized. The report goes on to say that the industry shifted dramatically when public agencies could provide traveler information using websites, which were both low-cost and able to reach a large audience. While television and radio remained primary sources of traveler information, the internet allowed for the development of real-time traveler information and traffic management technologies. With funding from the USDOT and the FCC designation of 511 as the national traveler information phone number, state DOTs have been successful in launching this system nationwide. Currently, real-time traveler information is also provided through dynamic message signs, navigation systems, and private sector applications (87).
Traffic Management Centers
Core Functions
Traffic management centers (TMCs) are usually publicly operated facilities that
aggregate incoming data from traffic sensing equipment for a region and provide the foundation for traffic incident management. The core functions of TMCs typically include traffic incident management (TIM), traveler information, and emergency operations management.
According to the 2010 Traffic Incident Management Handbook (88), the formalization of TIM began in the early 2000s and continues today. The motivation on the federal level is that congestion is considered an economic encumbrance, limiting the
ability of Americans to travel efficiently and adding to business costs. TIM is seen as an important method to mitigate congestion, in particular congestion from non-recurring events, which has economic benefits. The performance of a TIM program is measured based on roadway clearance time, incident clearance time, and the number of secondary incidents incurred. All forms of TIM begin with some version of incident detection, notification, and verification (88). This is the first step in the process for most TIM strategies and is the primary function addressed by traffic monitoring equipment and ITS. Traffic managers are largely looking for anomalies in expected traffic patterns, for example, a slow-down in traffic on a segment of freeway with no entry/exit ramps. By identifying changes in speed, operators or algorithms can see where potential incidents are occurring that would then require verification.
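A hedged sketch of this kind of screening is shown below: it compares the latest speed on each segment with a historical average for the same time of day and flags large drops for operator verification. The thresholds and data structures are hypothetical, and deployed incident-detection algorithms are considerably more sophisticated.

    # A minimal sketch of speed-based anomaly screening for incident detection.
    # Historical averages, thresholds, and the input format are hypothetical.
    def flag_possible_incidents(current_speeds, historical_speeds,
                                drop_fraction=0.4, floor_mph=25):
        """current_speeds / historical_speeds: dict of segment_id -> speed (mph)."""
        flagged = []
        for segment_id, speed in current_speeds.items():
            expected = historical_speeds.get(segment_id)
            if expected is None:
                continue
            # Flag segments far below their historical norm and slow in absolute terms.
            if speed < floor_mph and speed < (1.0 - drop_fraction) * expected:
                flagged.append(segment_id)
        return flagged

    # Example usage with made-up values:
    print(flag_possible_incidents({"I-75N:mm245": 18.0}, {"I-75N:mm245": 58.0}))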
Data Requirements
While the TIM Handbook is a guidance document shared and used by many agencies, it only picks up with instructions for responding to incidents; it makes no specific mention about best practices in incident notification.
In a 2007 study on the effectiveness of automated incident detection, researchers found that the technology is still in its early stages and unreliable for exclusive use. In that study, they found that almost all of the algorithms were based on relatively standard data sourced from various technologies: vehicle counts, average vehicle speed, and vehicle occupancy (which is a proxy for density) (89). Notably, the researchers predicted that, since the technology had not yet been widely adopted and still had time to mature, "future detection systems should [be] designed to take advantage of vehicle-based sensors should such systems become widely available in the future" (89).
Many other studies and articles use the count-speed-occupancy dataset as the basis for traffic data collection and use (89-93). The recent trend in research has been
to find alternative, technology-based methods to obtain speed data using probe vehicles. The authors are unaware of research articles that specifically discuss whether or not TIM, transportation planning, or other traffic engineering tasks can be reasonably accomplished using only speed data.
The focus of this research has primarily been on operational use of traffic data, as opposed to its use for planning purposes. A review of traffic data users in 2002 found that "the most notable difference between operational and [planning uses] of traffic data is the emphasis on speeds and occupancies in the former and on volumes in the latter" (94). This is an important distinction, especially as it pertains to accuracy, because planning tasks often require single-deployment traffic studies where a 24- or 48-hour count provides the insight for as much as a year's worth of data; the review identified +/- 10% accuracy (for volume) as sufficient for planning purposes based solely on the fact that temporal adjustments are made for data samples. The paper also admits that operations staff have limits on their need for accurate data: "In truth, the current generation of operational strategies do not require extremely accurate data -- operators typically need to know where the big problems are and their responses are geared to this" (94).
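The temporal-adjustment idea can be sketched as expanding a short count to an annual average daily traffic (AADT) estimate with day-of-week and monthly factors, which is why modest volume accuracy is tolerable. The factor values below are placeholders; agencies derive actual factors from permanent count stations.

    # A minimal sketch of expanding a short-duration count to an AADT estimate.
    # Factor values are placeholders, not published adjustment factors.
    def estimate_aadt(short_count_daily_volume, day_of_week_factor, monthly_factor):
        """Apply temporal adjustment factors to a 24- or 48-hour average daily count."""
        return short_count_daily_volume * day_of_week_factor * monthly_factor

    # Example: a midweek count in summer with hypothetical factors.
    print(round(estimate_aadt(24300, day_of_week_factor=0.97, monthly_factor=1.04)))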
Existing Technology
The vast majority of traffic sensing has relied on
infrastructure-based monitoring equipment in the field that communicates to a TMC or other central facility. These include loop detection, microwave radar, video detection, and, most recently, wireless in-ground sensors. Despite their varying technology, each of these is designed to provide operators with the count-speed-occupancy data described earlier.

The advances in technology and, more importantly, their broad deployment throughout major metropolitan areas have changed the way traffic incidents are detected. When describing the state of the practice in highway traffic operations and freeway management a decade ago, an author summarized as follows:
"The state-of-the-practice for electronic data collection is to measure traffic flow characteristics at discrete points throughout the network. Visual surveillance is typically performed via field-located cameras that are viewed by operators at a TMC. The most common method of detecting incidents is through motorists calling 911 or a specific call-in number set up for this purpose. Operators at a TMC generally use their traffic monitoring capabilities, especially visual surveillance, to verify incidents. Emergency responders, when on the scene, provide the best verification of incidents and the response needed." (95) In modern practice, agencies are far more reliant on use of speed maps with data generated either internally or externally to alert them to incidents that require attention. An important concept to understand is the difference between using infrastructure and probe-based sensing technology, particularly as it relates to the data output. "...Lagrangian sensing specifically refers to measurements performed along a sensor's trajectory, which it usually cannot control. Examples of this are smartphones traveling onboard cars to follow highway traffic flow. This is in contrast to Eulerian sensing, in which sensors are fixed (for example, video cameras or loop detectors along highways) and monitor a specific control volume in a static manner."(96) Eulerian data are ideal for reconstructing vehicle density in models because it is better equipped for a nearly complete penetration rate. Since the 1950s, this has been the primary mode of data collection and traffic-state modeling for transportation. Lagrangian data are starting to emerge in transportation, but only recently it has begun to see studies on the subject. The recent advances have been primarily in building a
probe system (96-100), rather than focusing on data aggregation and applications. The key question that motivates much of this research is how the use of speed data alone impacts the effectiveness of TIM and operations.
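To make the distinction concrete, the sketch below contrasts the two data shapes: an Eulerian detector reports aggregate count, speed, and occupancy for a fixed point, while Lagrangian probe reports are individual vehicle samples that must be matched to segments and averaged, yielding speeds but not volumes. The record formats are hypothetical simplifications.

    # A minimal sketch contrasting Eulerian (fixed-point) and Lagrangian (probe) data.
    # Record formats and the segment-matching step are hypothetical simplifications.
    from collections import defaultdict

    # Eulerian: one detector station already summarizes every vehicle passing a point.
    eulerian_record = {"station": "det_142", "count": 310, "avg_speed_mph": 54.2, "occupancy": 0.11}

    # Lagrangian: each probe vehicle contributes sparse samples along its own trajectory.
    probe_samples = [
        {"vehicle": "a", "segment": "I-285E:seg7", "speed_mph": 61.0},
        {"vehicle": "b", "segment": "I-285E:seg7", "speed_mph": 48.5},
        {"vehicle": "c", "segment": "I-285E:seg8", "speed_mph": 22.0},
    ]

    def segment_speeds(samples):
        """Aggregate probe samples into a mean speed per segment (no counts or density)."""
        sums, counts = defaultdict(float), defaultdict(int)
        for s in samples:
            sums[s["segment"]] += s["speed_mph"]
            counts[s["segment"]] += 1
        return {seg: sums[seg] / counts[seg] for seg in sums}

    print(segment_speeds(probe_samples))  # speeds only; volume/occupancy are not observable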
Changes in Traffic Sensing Technology
The technology behind probe-based systems has been developing rapidly over the
past few years. There are two primary methods that are used: one uses infrastructurebased readers to capture data on specific vehicles when they pass a point on a facility, the second reads the position of a mobile device wirelessly and aggregates that data to recreate a speed vector for a highway. The `position' of the mobile device may be generated using GPS-based systems, such as a navigation device, or it can also use cell-phone tower triangulation. The focus of this work is on GPS-based data from mobile devices.
Media Access Control (MAC) address matching can provide useful traveler information at a low cost. Some devices, such as Bluetooth-enabled electronics, broadcast a specific MAC address that can be detected by Bluetooth stations along a corridor; travel time can then be determined by calculating the time it takes two separate Bluetooth stations to detect the same MAC address. Schneider recommends that Bluetooth stations be located one to two miles apart to optimize the number of MAC address matches, and a MAC address must be detected by at least two stations to yield a travel time (101). While Bluetooth signals do not require a line of sight, the signal may be degraded by physical barriers and by other devices, such as cordless phones and microwave ovens (102). However, because Bluetooth travel times are calculated between two stations, it is difficult to reconstruct speeds at specific points along a corridor.
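The segment travel-time calculation described above can be sketched in a few lines; the station spacing, MAC addresses, and detection timestamps below are hypothetical.

    # Minimal sketch of MAC-address matching between two Bluetooth stations.
    # Station spacing, addresses, and detection times are hypothetical.
    from datetime import datetime

    SEGMENT_MILES = 1.5  # assumed distance between the upstream and downstream stations

    upstream = {"a4:5e:60:d1:22:9f": datetime(2014, 3, 4, 8, 15, 2),
                "3c:2e:ff:01:8b:10": datetime(2014, 3, 4, 8, 15, 40)}
    downstream = {"a4:5e:60:d1:22:9f": datetime(2014, 3, 4, 8, 17, 32),
                  "9d:10:aa:bb:cc:dd": datetime(2014, 3, 4, 8, 18, 0)}

    # Only a MAC address seen at BOTH stations yields a travel time.
    for mac, seen_upstream in upstream.items():
        if mac in downstream:
            travel_time_s = (downstream[mac] - seen_upstream).total_seconds()
            speed_mph = SEGMENT_MILES / (travel_time_s / 3600.0)
            print(f"{mac}: {travel_time_s:.0f} s, about {speed_mph:.0f} mph over the segment")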

GPS technology in traffic management can exploit the positional and velocity accuracy of GPS receivers along with the extensive coverage of existing cellular networks. One method of collecting traffic management data uses virtual trip lines: geographic positions that indicate where GPS devices should provide updated locations (97, 103). In the Mobile Century field experiment, vehicles equipped with GPS devices made up 2-3% of the total traffic flow, and the experiment indicated that traffic data obtained from the GPS devices provided sufficient speed and position information (97). With a GPS-based system, transportation agencies have minimal installation or maintenance costs (97).
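A minimal sketch of the virtual trip line idea is shown below; the trip-line coordinate, GPS fixes, and the simplified crossing test are all illustrative assumptions.

    # Illustrative virtual trip line check: a device reports only when its
    # trajectory crosses a predefined line; coordinates are hypothetical.
    TRIP_LINE_LAT = 33.7800  # assumed east-west line across a freeway segment

    def crosses_trip_line(prev_lat, curr_lat, line_lat=TRIP_LINE_LAT):
        """True if the trip line latitude lies between two consecutive fixes."""
        return min(prev_lat, curr_lat) <= line_lat <= max(prev_lat, curr_lat)

    prev_fix = (33.7792, -84.3880)  # (lat, lon) one second ago
    curr_fix = (33.7806, -84.3881)  # (lat, lon) now

    if crosses_trip_line(prev_fix[0], curr_fix[0]):
        # In a deployment, the device would transmit an anonymized, timestamped
        # speed/position update here; between trip lines it stays silent.
        print("Report location and speed to the traffic data server")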
Networked Government
Networked government is the collaboration of government agencies with other public and private sector organizations in order to promote the public interest. In the United States, networked government has its roots in Franklin D. Roosevelt's New Deal, which expanded the government's influence in the twentieth century (104). With World War II underway, the federal government began to enter into contractual relationships with military equipment vendors, which marked the beginning of private sector influence on government policy. Networked government gave rise to public-private partnerships that eventually contributed to the development of the space program, the interstate highway system, Medicare, and Medicaid, among others (104). The increasing demand for government services, combined with a decreasing appetite for expanding the government itself, has led to ever-closer connections with the private sector.
Networked government is common in the transportation industry. Agencies often work with the private sector to plan, design, and build transportation infrastructure. Traffic management centers purchasing third-party data is an example of networked government, with the public and private sectors collaborating to promote a safer and more efficient transportation system for society.

Public-Private Partnerships
The growing popularity of public-private partnerships (PPPs) in the United States has created uncertainty about the definition of the term. Forrer (2010) offers the following definition:
"Public-private partnerships are ongoing agreements between government and private sector organizations in which the private organization participates in the decisionmaking and production of a public good or service that has traditionally been provided by the public sector and in which the private sector shares the risk of that production." (105)
Public-private partnerships are long-term relationships between the public agency and one or more private organizations that are mutually advantageous and build on the strengths of each organization (105, 106). With a growing number of partnerships, accountability becomes ambiguous. Government agencies and private organizations often have different objectives when working on the same project; while the government agencies may be focused on promoting the common good, the private organizations are often motivated by increasing the return on their investment (105). Unlike contractual relationships, in which accountability is often one-way, public-private partnerships require mutual, two-way accountability.
In the transportation industry, public-private partnerships occur frequently in financing and building toll roads. The private sector provides a financial commitment to building the facility and will ultimately share in the revenue streams, if successful, over a specified payback period. Obtaining third-party traffic management data can be described more accurately as a contractual relationship rather than a public-private partnership. The government is purchasing the data directly from a private sector vendor, with clear specifications as to the product being purchased. While this relationship is long-term, the private sector does not participate in the decision-making process of the traffic management center and does not share any risk or profits.
Risk Assessment
Utilizing third-party data changes the risk structure of the TMC. TMCs expose
themselves to risk when they are not self-contained; for example, TMCs cannot control the operations and maintenance of external power sources or leased communication fibers. In purchasing third-party data, TMCs experience additional external risk; TMCs cannot control the reliability and accuracy of the data. If third-party data were to completely replace infrastructure-based data, the TMC would be wholly dependent on the third-party to provide complete and accurate data.
Methodology
Researchers conducted a web-based survey to solicit responses from TMC managers around the United States. The term TMC manager is used loosely in this research to describe an individual who has managerial or supervisory roles within a TMC or who plays a role in the planning and decision-making surrounding TMC facilities and operations. These individuals were assumed to have a working knowledge of the concepts discussed in the survey, such as traffic monitoring data, incident response, and traveler information.
The survey itself was developed with the help of four semi-structured interviews with TMC managers representing a variety of facility types. Researchers asked a set of draft survey questions to the interviewees without specific answer choices in order to collect an array of potential responses. While reviewing responses, researchers paid particular attention to language and jargon to ensure that questions adapted for the actual survey would be understandable to industry experts. This process allowed researchers to evaluate questions based on the variety of responses and the potential to gain new insights from the outcomes.
The survey was initially distributed to an e-mail listserv from the TMC Pooled Fund Study Group (a collective representing about 30 TMCs that contribute funds towards research in the field). In addition, researchers used the Research and Innovative Technology Administration (RITA) Intelligent Transportation Systems (ITS) Deployment list of TMCs in the United States and found contact information for managers where available. If an individual's name or e-mail was not readily available, an e-mail was sent to the general contact e-mail address. In total, 58 e-mails were sent out. Of those surveys distributed, respondents provided 28 completed or sufficiently completed surveys.
The survey design incorporated four main subject areas for which researchers sought feedback: existing procedures, existing risk, attitudes towards third-party data and open responses. In the first section, existing procedures, respondents were asked about the kind of equipment and data currently in use at their TMCs along with general profile information about the TMC. The existing risk section included questions about systematic vulnerabilities to existing equipment and infrastructure. The third-party data section addressed actual or potential use of third-party data in the context of data types and applications, data reliability, vendor trust and cost structure. The final section provided open-ended prompts for additional information for some of the earlier questions that respondents may have wanted to further clarify.

Survey Results and Analysis
The first result concerned the quality and quantity of survey responses. Although respondents worked at a mix of organizations responsible for traffic management (e.g., departments of transportation, transportation management associations), almost all worked in TMCs (26/28, 93%), a positive indication that the targeted e-mail distribution was successful. As expected, due to the large-scale investment needed for TMCs, most respondents worked at facilities that managed either freeways only or freeways and arterials (21, 75%); fewer facilities were designed primarily for arterials and local roads (5, 18%). Almost all respondents provided services in urban areas (25, 89%), where congestion is most likely to require management, with several others reporting that their organizations monitor suburban (12, 43%) and rural (8, 29%) areas as well. These facilities have a variety of functions, from traffic monitoring and incident response to traveler information and performance reporting.
Respondents were asked about the functions of their facilities, and as shown in Figure 13, they consistently ranked incident response and traveler information as primary. Traffic monitoring was the highest ranked response for both primary and secondary functions, with greater emphasis on the use of live video monitoring over speed and volume detection. The response profile here shows that the key roles of TMCs encompass far more than traffic monitoring, specifically incident response. Projects that identify new traffic monitoring technology or sources would only impact one of several functions that TMCs serve.
(Additional survey questions and responses can be found in Appendix B.)

[Chart omitted: responses for each TMC function, rated as a primary function, a secondary function, or not a function. Functions shown include traffic monitoring (speed/volume and live video), incident response (detection/confirmation and clearance/response), traveler information (HAR, changeable message signs, etc.), performance monitoring/data reporting (HPMS and non-HPMS), and other.]
Figure 13: Primary and secondary functions of traffic management centers

Respondents also identified the various data products required to perform these functions. Live video is the only type of data considered important or necessary by all respondents. Almost all respondents (25, 89%) report the use of closed circuit television (CCTV) cameras. Most agencies also consider travel time, speed, and traffic count data to be important or necessary. This is the industry standard information for traffic monitoring. After live video, the most popular devices (radar/microwave, inductive loops, video detection, and wireless "pucks") all have the ability to report traffic counts, spot speeds, and density.
[Chart omitted: responses to "What kind of end-point equipment do you currently use for traffic monitoring?" Equipment categories were closed-circuit television (CCTV), radar/microwave, inductive loops, video detection, wireless "pucks", Bluetooth sensors (any brand), other, and aerial detection; radar/microwave, loops, video detection, and pucks have similar output data capabilities, including counts, speed, and density.]
Figure 14: Types of end-point equipment used for traffic monitoring
Existing Risk
Departments of transportation are often in control of all the hardware, software, and personnel that operate a TMC, which allows them to be self-sufficient and to operate their systems independently. However, as the network of participants in traffic management grows, TMCs come to rely on other entities.
To understand the level of involvement by third parties, agencies were asked whether there were major infrastructure elements that they did not own or operate directly. Responses showed that a majority of agencies use at least some power infrastructure (18/24, 75%) and some communications infrastructure (14, 58%) from a third party (often a utility company). End-point equipment is less likely to be owned by third parties (5, 21%). Very few agencies (4, 17%) claimed to be fully independent. According to these results, public agencies already rely on third parties to own or operate infrastructure critical to traffic management.
When asked about their existing vulnerabilities, respondents revealed that agencies already tolerate certain risks. As shown in Figure 15, high-impact, low-likelihood events (like major natural disasters) are not always protected against. Other sources of major operational impact are power outages (near end-point equipment) and communication network delays. Power outages and communications delays are both indicative of risks to which agencies are exposed in part because of their reliance on third-party-supplied infrastructure.

[Chart omitted: responses to "Please indicate those events that you consider your system vulnerable to and the potential impact it would have on the system as a whole." Events include uncommon natural events (flooding, hurricanes, etc.), communication network delays, power outages near end-point equipment, power outages at the TMC (more or less than a few hours), common natural events (fog, heat waves, winter storms), and other; impacts range from no impact (not vulnerable) through low and moderate impact to major impact (large or system-wide unavailability).]
Figure 15: Risks and potential impacts on the traffic management system
Real-Time Third-Party Data
The third-party data section of the survey was divided into two mutually exclusive lines of questioning based on whether or not the TMCs used real-time third-party data as part of their existing standard procedures. Those that did were asked about their experience with the data so far; those that did not were asked what their hypothetical ideal situations would be. According to their responses, only one quarter (6/24) of respondents reported that their facilities use a real-time third-party data product as a standard procedure. Examples given in the question included INRIX, Nokia/NAVTEQ, and TomTom. A second question asked about the use of free online traffic maps like Google Maps and Bing Maps. Contrary to expectations, only a small number of respondents indicated use of those free services as a standard procedure (5/22) or even casually (4/22).
Hypothetical Use
Because so few respondents use real-time third-party data as a standard procedure, the analytic focus here is on the hypothetical questions posed to the respondents who do not. These responses are attitudinal and help to suggest what the industry wants to see in the future.
Respondents here expressed a number of concerns about third-party data. Already during the survey development process, a number of interviewees noted industry-wide concerns about third parties substituting historic data for real-time data, which has fed skepticism about third-party data. This proved to be a larger trend; as shown in Figure 16, the most cited reason for not using third-party data was that TMC managers felt they could not be sure whether the data were real-time or historic (6, 33%). Other prevalent responses included the cost of the data (4, 22%) and inconsistent results from trial deployments (4, 22%). Still, some agencies have simply not considered using third-party data (5, 28%).

[Chart omitted: responses to "If you have considered using third-party data in traffic monitoring but chose not to, why not?" Answer choices included uncertainty about whether the data are real-time or historic, never having considered third-party data, inconsistent trial results where traffic conditions were known, cost, political reasons, contractual issues, the technology not yet being proven by research or other agencies, inability to share purchased data with the public, inability to see the raw data, and not understanding how the technology works.]
Figure 16: Reasons traffic management centers choose not to use third-party data
Since these respondents did not make use of real-time third-party data, they were asked what kinds of data they would consider purchasing or using in the future. Figure 17 shows that they were most interested in using travel time (11, 61%), speed (7, 39%), and volume counts (5, 28%) from third-party vendors and least interested in traffic density (1, 6%) and vehicle classification (1, 6%) data. This response is promising for probe-based technology, which can generate travel time and speed, but not volume or occupancy data. The lack of volume/traffic count data is likely the most challenging hurdle for probe data to overcome, since penetration rates are typically too low to extrapolate to reliable vehicle counts.

[Chart omitted: responses to "What kind of information would you consider purchasing or using from a third-party vendor?" Answer choices included travel time on a segment (measured at two points), speed of traffic, traffic volume/counts, automated incident detection, traveler-reported incidents/congestion, live video stream, vehicle classification, traffic density, and "we would not use or purchase third-party traffic data".]
Figure 17: Information traffic management centers would consider purchasing from a third-party vendor
With regard to the role that third-party data can play in the future, it appears that most TMCs are considering uses of probe data that minimize risk. Respondents were allowed to select multiple answer choices. Most respondents would be willing to rely on third-party data to extend their existing coverage areas (12, 67%) or they would be willing to use it to supplement and verify data generated with existing infrastructure (11, 61%). The respondents were far less interested in using third-party data as a primary source of information for traffic monitoring (2, 11%). The latter suggests that the industry feels the need to test out third-party data rather than use it immediately to replace existing investments.
A separate question asked whether respondents were willing to forgo major investments in infrastructure replacement or expansion by using third-party data; responses were split, with 50% (9) reporting yes and 50% (9) reporting no. The industry is not in agreement on whether to use third-party data to forgo major infrastructure investment.
In order to build trust in the data and move closer to that potential reality, TMCs indicate that the most important assurance that data are accurate will come from testimonials from peer agencies (13, 72%). This presents a challenge for third-party data providers, because it requires a certain level of commitment from a skeptical industry before they can start to demonstrate their abilities.
Respondents were next asked about their perception of third-party data, specifically regarding what levels of transparency and data resolution would be needed to build more trust in the data. When asked about the level of conceptual and technical understanding that respondents want to have about third-party data collection (Figure 18), a majority of respondents (15, 83%) wanted to understand both the conceptual and technical details. Fewer respondents (3, 17%) were satisfied knowing the concept without the technical details, and no respondents reported that they would be satisfied without knowing how the data were collected. Similarly, the next question asked about what level of transparency in the data would be desired. Most respondents reported that when data were provided to them by a third-party, they wanted to be able to look at more disaggregate data in some form. Again, there were no responses for the least transparent answer: "I don't need to see detailed data." The responses to both questions suggest that agencies want a good deal of transparency and understanding of any data products that they purchase or use.

[Charts omitted: two questions are summarized. "How well would you want to personally understand how the third-party data is collected?" (options: understand the concept and the technical details; understand the concept but not the technical details; understand neither, which received no responses). "How transparent do you want a third-party data provider to be about the technology and data they share with you?" (options: be able to check or verify every data point that feeds into the output; have easy access to more refined data on request; allow the vendor to charge for more detailed data requests; no need to see detailed data, which received no responses).]
Figure 18: TMC required understanding and transparency of third-party data

Finally, on the topic of cost structure, the respondents were allowed to select multiple responses. The variety of answers suggests that the TMC preference for structuring the cost of third-party data is dependent on the situation and on the TMC setup. As shown in Figure 19, the most preferred cost structures indicated on the survey are paying a flat fee for all data at all times (5, 28%), a fee based on time, such as cost per month (5, 28%), and a fee based on facility coverage, such as cost per mile (4, 22%). Additionally, several respondents indicated they would prefer third-party data be provided for free (3, 17%). The lack of a consistent answer in how to structure the cost of third-party data suggests that there has yet to be a regular market for this data product for TMCs.


[Chart omitted: preferred cost structures for third-party data. Flat fee for all data at all times, 28%; fee based on time (e.g., cost per month), 28%; fee based on facility coverage (e.g., cost per mile), 22%; provided for free, 17%; fee based on staff support (e.g., cost per hour of support), 11%; provided as a bundle with other services, 11%; don't know, 11%; other, 6%; fee based on installation, 0%.]
Figure 19: TMC cost structure preferences for third-party data

Discussion and Conclusions
The purpose of this survey was to gauge the attitudes of TMC managers about probe-based traffic monitoring technology and institutions. The vast majority of agencies continue to use infrastructure-based systems that take speed, volume, and occupancy measurements at strategic locations along major highways. There is diversity in sensing technology (e.g., loops, radar, and in-ground "pucks"), but the fundamental output data are largely consistent among agencies. The most consistently used technology, however, is CCTV, which provides live video feeds at specific locations. According to many respondents, some of the most important functions of their TMCs, such as incident detection, rely on live video, something that probe data are not able to generate. This is an important takeaway that underscores the continuing importance of certain elements of the TMC.
One of the questions posed at the onset of this research was how willing agencies would be to rely on third parties: organizations that specialize in aggregating and delivering data sourced from probe devices. While most TMCs already rely on third parties for power and telecommunications, those industries are highly regulated and poor performance is easily detectable. Additionally, major weather events that occur with some seasonal regularity show that outages can and do happen. The analogous risk with third-party probe data is the potential for outages when system errors occur within a third party's internal operations.
The structure and magnitude of costs for data are unclear, since there is not yet a long history of prices paid across the industry. Without a market-accepted method for determining prices for probe traffic data, agencies are vulnerable to overpaying. The agencies provided varied answers with no consensus on an ideal cost structure, indicating an opportunity for them to communicate better with one another in order to leverage their purchasing power and influence in the industry.
Perhaps the most definitive conclusion, based on response consistency, is that the data and the method of data collection need to be more transparent. Respondents are concerned about the use of historic data in lieu of real-time data, and this was highlighted as a reason not to use third-party data. As members of a highly technical profession, TMC managers are interested in knowing exactly how data are collected and aggregated, and they are interested in having extremely granular data from third parties that they can check. An increase in transparency is crucial for broader adoption of third-party probe data.

Chapter 5: Intelligent Systems in the Transportation and Energy Sectors
(Victor Wanningen and Dr. Hans Klein)
Introduction
Recent innovations in computer and network technologies enable physical objects, devices, and systems to become "smart" or "intelligent", interacting with each other in networks. This state of affairs is congruent with the vision of the "Internet of Things" (IoT), in which, in the near future, all our devices are "smart" and able to communicate with each other and exchange information (107). "Smart" or "intelligent" things, devices, and systems therefore refer to these objects being "communication network enabled" or just plainly "networked."
This trend in "networked" systems is also visible in the transportation sector and in the energy sector (108). In the transportation sector, the IoT manifests itself in terms of intelligent transportation systems (ITS) that encompass the network infrastructure of highways and surface roads, enriched with a digital two-way communication network infrastructure. ITS is defined as: "a broad range of advanced communications technologies that, when integrated into transportation infrastructure and vehicles, relieves congestion, improves safety, and mitigates environmental impact" (108). In the energy sector, the IoT manifests itself in terms of smart grid systems that encompass the network infrastructure of generation, transmission, distribution, and consumption of electric power (electricity), enriched with a digital two-way communication network infrastructure as well. Smart grid systems are defined as: "The electric delivery network, from electrical generation to end-use customer, integrated with sensors, software, and two-way communications technologies to improve grid reliability, security, and efficiency" (108).

In the National Broadband Plan, smart grid systems and intelligent transportation systems are mentioned together in Chapter 12, entitled "Energy and the Environment", as strategic areas for broadband development, implementation, and innovation (108). This observation raises the central question that this paper attempts to answer: what are the similarities and differences between the transportation sector and the energy sector, and between intelligent transportation systems and smart grid systems respectively?
In order to answer this question, this paper develops a conceptual frame of reference that is used for the comparative analysis of the two sectors. This conceptual frame consists of the governance framework by Ostrom (109), supplemented with the layered model of Internet connectivity (110, 111). This hybrid conceptual framework is used as a heuristic to survey the similarities and differences between the transportation sector and the energy sector. More specifically, the comparative analysis focuses on the characteristics of the operational domain, the involved institutions and their prescriptions, and the innovative "networked" application areas in intelligent transportation systems and smart grid systems. The comparative analysis draws on reviewed literature and other written material, mainly websites, on intelligent transportation systems and smart grid systems.
In the following sections, this paper first develops the hybrid governance-layered Internet connectivity conceptual frame of reference. Then, this paper presents the results of the comparative analysis based on the application of the heuristic frame of reference to the transportation and energy sectors. Finally, the conclusion summarizes the main findings and outlines avenues for further research.

Conceptual Frame of Reference
Governance Framework
The Institutional Analysis and Development (IAD) framework was originally developed to conceptualize governance of common pool resource (CPR) systems in order to address the tragedy of the commons, the overconsumption of a common pool resource (109). One way to prevent the tragedy of the commons is to bring all the appropriators and contributors of a CPR system together in a collective choice situation where they can establish, impose, and enforce new rules to govern the CPR system.
In the IAD framework, governance is unpacked in terms of nested multi-tier, multi-context, and multi-actor institutions. Broadly defined, institutions are "the prescriptions that humans use to organize all forms of repetitive and structured interactions including those within families, neighborhoods, markets, firms, sports leagues, churches, private associations, and governments at all scales" (109). In addition, Crawford and Ostrom (112) provide a more narrow definition of institutions as the "enduring regularities of human action in situations structured by rules, norms, and shared strategies, as well as by the physical world. The rules, norms, and shared strategies are constituted and reconstituted by human interaction in frequently occurring or repetitive situations". Essentially, institutions are collective choice situations for the involved stakeholders that are structured by procedural prescriptions. In addition, institutions develop substantive prescriptions (rules) to govern an operational domain, like a CPR system.
An analyst can flexibly use the IAD framework to conceptualize and contribute to solving policy issues in any kind of setting in which there is a collective action problem or where there are issues with specific policies that have been implemented or need to be changed. By the same token, the main concepts of the IAD framework were utilized to conceptualize sector governance for the operational domains of the transportation and energy sectors that are in a "networked transition" to intelligent transportation systems and smart grid systems respectively. To realize the "networked transition" of these two operational domains, there are different types of institutions (procedural processes) involved that develop different types of prescriptions (substantive rules). Different stakeholders need to come together to collectively make new rules (collective decisions) to govern the "networked transition" in the respective operational domains. For example, public policy institutions develop new legislation, the regulatory and administrative agencies develop new operational rules, and the technical standard-setting bodies develop new technological standards for the operational domains that are in "networked transition".
Besides focusing on the characteristics of the operational domains and the involved institutions and prescriptions, this report focuses on the innovative "networked" application areas in intelligent transportation systems and smart grid systems. To better unpack these networked applications, this report discusses the layered model of Internet connectivity, which supplements the governance framework in making up the conceptual frame of reference for the comparative analysis of the two sectors.
Layered Model of Internet Connectivity
To realize Internet connectivity in computer networks (inter-networking), the communications protocols of the layered Internet protocol stack need to be implemented in network components such as computers (hosts), routers, gateways, and switches (110, 113, 114). The Internet protocol stack is a free and open collection of communication protocol standards that implements the end-to-end principle, which allows intelligence and innovation in network applications and services to take place at the edge of the network. In other words, the network architecture serves to transport user data from point A to point B in the network.
The Internet protocol stack defines four abstract and modular layers to implement a packet-switched network. Each layer has its own functions in offering services to the layer above, makes use of the services offered by the layer below, and has its own communication protocol standards for the transport of data packets from source to destination in the network. The top layer is the application layer, which is responsible for application-to-application connectivity, e.g., the Simple Mail Transfer Protocol (SMTP) for email or the Hypertext Transfer Protocol (HTTP) for the web. The second layer is the transport layer, which is responsible for host-to-host connectivity, e.g., the Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) (115). The third layer is the network layer, which is responsible for network-to-network connectivity; for the Internet this is the Internet Protocol (IP). Finally, the bottom layer is the link layer, which is responsible for host-to-network connectivity, e.g., physical (tele-)communications media like WiFi, Ethernet, second generation Global System for Mobile Communications (2G GSM), third generation Universal Mobile Telecommunications System (3G UMTS), and fourth generation Long Term Evolution (4G LTE).
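As a concrete illustration of this layering, the sketch below sends application-layer bytes (a simple HTTP request) over a TCP socket; the operating system and network hardware supply the network and link layers underneath. The host name and port are placeholders.

    # Minimal layering illustration: the application composes an HTTP request
    # (application layer) and hands it to a TCP socket (transport layer); the
    # OS and hardware handle the IP (network) and link layers transparently.
    import socket

    HOST = "example.com"   # placeholder web server
    PORT = 80              # standard HTTP port

    request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode()

    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(request)                  # application-layer bytes go down the stack
        response = b""
        while chunk := sock.recv(4096):        # read until the server closes the connection
            response += chunk

    print(response.split(b"\r\n", 1)[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"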
The layered Internet protocol stack is a collection of standardized communication protocols for each functional layer, together forming the communication standards for the Internet. In addition, the Internet protocol stack has an hourglass shape, having many communication protocol standards in the application layer and link layer and only TCP/IP and UDP/IP at the transport/network layer. Computer hardware and software (programming code) implement the protocols, actually realizing Internet connectivity between hosts in global computer networks. The beauty of the layered model is that as long as the Internet protocol stack is implemented, interoperability among hardware and software platforms of different vendors is realized. Consequently, the Internet is a collection of heterogeneous networks in terms of ownership (public or private) and communication links (wireless or wired), and heterogeneous devices in terms of different combinations of hardware and software platforms that implement the Internet protocol stack. Taken together, these networks and devices create global ubiquitous Internet connectivity, allowing Internet applications to exchange data on behalf of their users and/or processes.
To illustrate the workings of the Internet protocol stack, take for example a Dell desktop computer running the Microsoft Windows XP operating system (OS), which implements the lower three layers: TCP/IP on top of an Ethernet local area network (LAN) communication link. An email application (computer program) that implements the Internet Message Access Protocol (IMAP) and the Simple Mail Transfer Protocol (SMTP) email protocols, and runs on top of the OS platform, allows emails to be exchanged (sent and received) with other email client applications across the Internet. An email client application program ("app") that also implements the IMAP and SMTP email protocols, but which runs on a Samsung smart phone with the Google Android OS platform that implements the TCP/IP layers on top of a 3G UMTS cellular communication link, is able to send and receive emails with the email client on the Windows platform.
Another key property of the Internet protocol stack is that the transport and network layers (TCP/IP or UDP/IP) treat the link layer as a "black box." In the example above, the Samsung smart phone can seamlessly switch between the wireless WiFi LAN link and the 3G UMTS cellular link when using the email application that runs on top of TCP/IP implemented by the mobile OS platform. The only thing to consider is that each link has different properties, e.g., speed, bandwidth, and range, that affect communication. For instance, the Google navigation application ("app") for the Android OS platform offers navigation services by using the combination of the GPS module and the cellular link in the smart phone to download maps and traffic information. Since navigation is meant to improve mobility, e.g., when driving from A to B, the WiFi LAN link is not suitable as the communication link because its range is too short. Thus, it is apparent that apps implementing application layer protocols, the platform OS implementing TCP/IP, and the underlying telecommunications link implementing 2G EDGE, 3G UMTS, or 4G LTE work together in realizing network application functionality. This is taken into account when designing and programming network applications and services. There has been a trend towards mobile Internet connectivity that allows for various mobile and wireless Internet applications and services. Examples of mobile smart phone OS platforms that implement TCP/IP on top of wireless cellular links are Blackberry OS, Apple iOS, and Windows Mobile OS.
The four-layer Internet protocol stack was not always the dominant standard for Internet connectivity. The main competitor of the TCP/IP Internet protocol stack was the International Organization for Standardization (ISO) Open System Interconnection (OSI) reference model, which specifies seven layers, application, presentation, session, transport, network, link, and physical, and their associated communication protocol standards (111, 116). From history, we know that the four-layered Internet protocol stack, and not the seven-layer ISO OSI reference model and its associated communication protocol standards, became the de facto standard for the Internet. The main reason why the TCP/IP Internet protocol stack became the de facto standard for the Internet lies in the social processes for its design and implementation. On the one hand, there was the formalized, politicized, top-down, and control-oriented paradigm of standard setting and implementation by the European standard-setting bodies ISO, International Telecommunication Union (ITU), Postal Telephone and Telegraph (PTT), and governments. On the other hand, there was the American informal, bottom-up, community- and consensus-based paradigm of the TCP/IP Internet community consisting of the Internet Architecture Board (IAB), Internet Engineering Task Force (IETF), and Internet Society (ISOC), supplemented with vendors developing and implementing the TCP/IP architecture. These Internet community actors operated on the motto of "rough consensus and running code", which was persuasive because the TCP/IP Internet architecture was open, free, and actually working, and hence widely adopted. Consequently, the TCP/IP architecture pushed the more formal, rigid, and cumbersome ISO OSI architecture, as well as the other non-open/proprietary architectures by IBM and Digital Equipment Corporation (DEC), out of the market.
In conclusion, the four-layered TCP/IP Internet stack, and not the seven-layered ISO OSI reference model, has been implemented in realizing Internet connectivity in contemporary global and increasingly mobile computer networks. Hence, for the comparative analysis of the transportation sector and the energy sector, the analysis will draw on the TCP/IP Internet protocol stack. More specifically, the analysis will focus on three layers for simplicity, because TCP/IP and UDP/IP of the transport and network layers usually go together, forming the hourglass of the Internet architecture. In other words, for the purpose of the comparison in this paper, there is no additional analytic benefit to using all four layers of the Internet protocol stack. The three layers that will be used for the comparative analysis are the application layer ("applications"), the transport/network layer ("networks"), and the link layer ("links").
Conceptual Heuristic for the Comparative Analysis
The conceptual frame of reference consists of the IAD governance framework, supplemented with the layered model of Internet connectivity, and serves as a heuristic for the comparative analysis of the transportation sector and the energy sector that are in "networked transition" to intelligent transportation systems and smart grid systems respectively.

The IAD framework provides the governance concepts that serve to compare the governance of the operational domains of the transportation and energy sectors. The operational domains will be compared on their characteristics: high-level functions, policy drivers and challenges, stakeholders, and pilot projects. In addition, the governance concepts serve to compare the different types of institutions (procedural processes) involved that developed different types of prescription (substantive rules); more specifically, looking for the involved public policy institutions and their prescriptions, the regulatory/administrative agencies and their prescriptions, and the involved standard setting bodies and their prescriptions.
The layered model of Internet connectivity serves to zoom in on the technical Internet connectivity prescriptions (standards) that facilitate the innovative network applications in intelligent transportation systems and smart grid systems. For the comparative analysis, this comes down to comparing the pivotal networked application areas in the intelligent transportation systems and smart grid systems that have embedded OS platforms that in some way, shape, or form implement the Internet protocol stack. In applying the layered model of Internet connectivity as part of the heuristic, its three layers are used to loosely compare the main categories of user applications and services, their networks (IP/ non-IP based and public/ private/ dedicated), and their underlying (tele-) communication links (wireless/ wired/ hybrid).
Results: Comparative Analysis
Characteristics of Operational Domain
High-level System Functions
In both the energy sector and the transportation sector, information technology (IT) realizing Internet connectivity has the function of improving the existing infrastructure by networking the system components, making them "networked". This means that in both sectors, the integration of IT in the existing infrastructure enables the closer integration of suppliers and users through the exchange of bi-directional data/information about the use, users, suppliers, and state of the infrastructure.
For intelligent transportation systems, through the use of IT, the high-level system functions are to improve road safety for users, enhance the mobility of users and the overall efficiency of the infrastructure, and improve environmental (eco) protection by reducing energy consumption and emissions.
For smart grid systems, through the use of IT, the high-level system functions are to increase the reliability, flexibility, and safety of the grid; improve the affordability of electricity from the grid; ensure security of the grid against cyber and national security attacks; ensure energy security by realizing a higher dependence on domestic energy sources; improve environmental protection and sustainability (a general push towards more efficient and cleaner energy sources to reduce carbon emissions); enable the integration of distributed renewable energy sources (wind, solar, waves, biomass); enable the integration of decentralized/distributed power generation (DG) and storage into the grid; and, finally, enable the integration of plug-in electric vehicles (PEV) and plug-in hybrid electric vehicles (PHEV), along with their charging stations and storage, into the grid.
Policy Drivers and Challenges
Related to the high-level system functions, each sector has its own sector-specific policy drivers that focus on the less functional components of the existing infrastructure and services that IT is envisioned to improve. For intelligent transportation systems, the primary policy drivers are reducing traffic casualties, improving traffic management, reducing gas prices, improving the carbon footprint, and stimulating economic development. For smart grid systems, the primary policy drivers are reducing electricity demand, reducing electricity prices, improving the reliability of electricity, improving the carbon footprint, integrating renewable energy sources and electric vehicles into the grid, and stimulating economic development.
Each sector also has its own sector-specific policy challenges, in addition to challenges the sectors share as a result of embedding IT. Essentially, embedding IT in any system creates consumer privacy (policy) issues. Privacy concerns are outlined in the conclusion as a fruitful avenue for further research. For intelligent transportation systems, the main policy challenges are data security, data privacy, interoperability of standards, (5 GHz) spectrum allocation, liability, public information about the benefits of ITS, and distracted driving as it relates to safety. For smart grid systems, the main policy challenges are capital investments, technical risks, existing pricing schemes, market monopoly structures, incomplete or imperfect public information about benefits, data security, data privacy, and interoperability of standards.
Stakeholder Groups
In each sector, there is a different mix of stakeholder groups involved. In addition to the existing infrastructure stakeholders, government stakeholders, and users, there are now also IT stakeholders involved. In the transportation sector, there is a public sector monopoly on managing the infrastructure, whereas in the energy sector, there is a private sector monopoly on managing the infrastructure. The IT stakeholders in both sectors are private sector players that operate on a competitive basis in the marketplace.
For intelligent transportation systems, the involved stakeholders are the State DOTs, automotive original equipment manufacturers (OEMs), consumers, regulatory/administrative institutions, telecom operators, IT providers, and ITS providers. The government, in particular the State DOTs, has the monopoly on highways and surface roads, whereas the automotive OEMs, the IT companies, and the ITS companies work on a competitive basis in the marketplace.
For smart grid systems, the involved stakeholders are the electric utility providers, conventional energy companies, renewable energy companies, consumers (residential, commercial, industrial), the Federal government, regulatory/administration institutions, state legislators and utility commissioners, advanced metering infrastructure (AMI) vendors, telecom operators, IT providers, and environmental groups. In the energy sector, all the players on the supply side of the infrastructure operate on a competitive market basis (private sector); however, utility companies can be perceived as a natural/regulated monopoly in the market.
Field, Pilot, and Research Projects
In both sectors, there are different field tests, pilot projects, and research projects that experiment with the communication technologies that are still in flux. These projects investigate the possibilities for implementing new "networked" technologies for enriching and improving the functions of the existing infrastructures in the sectors to address the challenges the sectors are facing.
For intelligent transport systems, the Ann Arbor Safety Pilot is the most important project experimenting with dedicated short-range communication (DSRC) standards to enable safety applications for connected vehicles. Other projects include the Applications for the Environment: Real-time Information Synthesis (AERIS) program, the Vehicle Infrastructure Integration (VII) initiative, and DSRC Techno.
For smart grid systems, there are many more projects. These are the advanced metering infrastructure (AMI), or smart meter, pilot projects that experiment with different applications and communication links and that involve different market players, such as telecom operators, AMI vendors, and utility companies. The US-based pilot projects identified so far are led by General Electric (GE), American Electric Power (AEP), Southern California Edison (SCE), Georgia Power, Florida Power & Light, Oncor, Detroit Energy (DTE), CenterPoint, Pepco Holdings (PHI), Duke Energy, Sempra/San Diego Gas & Electric, the Ontario Smart-Metering Initiative, Portland General Electric Co., and the Future Renewable Electric Energy Delivery and Management Systems Center (FREEDM).
Institutions and Prescriptions
In both sectors, there are administrative and regulatory governmental departments and agencies (institutions) involved that develop and enforce sector-specific laws and regulations (prescriptions). In addition, IT administrative/regulatory agencies, such as the Federal Communications Commission (FCC), are involved as well. In both sectors, there are technical standards-setting bodies involved that develop sector-specific standards and architectures, and some of the same standard-setting bodies are involved across the transportation, energy, and IT sectors.
In terms of institutions for intelligent transportation systems, transportation-specific regulatory and administrative institutions include the United States Department of Transportation (USDOT), National Highway Traffic Safety Administration (NHTSA), Federal Highway Administration (FHWA), Research and Innovative Technology Administration Joint Program Office (RITA-JPO), and State DOTs. IT regulatory and administrative institutions include the FCC and the National Telecommunications and Information Administration (NTIA). Industry platform institutions include the Intelligent Transportation Society of America (ITSA), and standards-setting institutions include the Institute of Electrical and Electronics Engineers (IEEE) and Society of Automotive Engineers (SAE) International.

For smart grid systems, there are energy-specific regulatory and administrative institutions such as the US Department of Energy (DOE), Federal Energy Regulatory Commission (FERC), National Institute of Standards and Technology (NIST), North American Electric Reliability Corporation (NERC), National Association of Regulatory Utility Commissioners (NARUC), state public utility commissions, the Federal Smart Grid Task Force, and the GridWise Architecture Council. There are also related regulatory and administrative institutions such as the United States Department of Agriculture (USDA), Department of Commerce (USDOC), Department of Homeland Security (USDHS), Department of Defense (USDOD), and the Environmental Protection Agency (EPA). The IT regulatory and administrative institutions include the FCC and the National Telecommunications and Information Administration (NTIA), and the standards-setting institutions include the American National Standards Institute (ANSI), the IEEE Power and Energy Society, and the International Electrotechnical Commission (IEC).
When looking at public policy prescriptions for intelligent transportation systems, there are the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) of 2005, the Transportation Equity Act for the 21st Century (TEA-21) of 1998, the Intermodal Surface Transportation Efficiency Act (ISTEA) of 1991, the National Broadband Plan of 2010, the Transforming Transportation through Connectivity: ITS Strategic Research Plan 2010-2014 of 2012, and the Intelligent Transportation Systems (ITS) Standards Program Strategic Plan for 2011-2014 of 2011.
For smart grid systems, there are the Energy Policy Act (EPA) of 2005, the Energy Independence and Security Act (EISA) of 2007, the American Recovery and Reinvestment Act (ARRA) of 2009, and the National Broadband Plan of 2010.

Network Applications Areas
Each sector has its own application areas. However, there are some similarities: in both sectors there is the infrastructure supplier side with its back-end applications, the infrastructure user side with its front-end applications, and, finally, the user-side aftermarket application area. For intelligent transportation systems, on the supplier side there are the State DOTs with their back-end ITS systems as managed by their traffic management centers (TMC). On the user side, there are the front-end websites and apps provided by the State DOTs that give the public information about real-time traffic conditions. In addition, there are innovations in this area regarding safety applications in the emerging connected vehicle paradigm. In the user-side aftermarket, there are the in-car infotainment platform-app ecosystems provided by the automotive original equipment manufacturers (OEMs), the smart phone platform-app ecosystems provided by the mobile OEMs, and, finally, insurance-related applications. For smart grid systems, on the supplier side there is the back-end application area of the electricity infrastructure provider for managing the components of the grid. On the user side, there are the front-end developments regarding smart meters, the so-called advanced metering infrastructure (AMI). In the user-side aftermarket, there are various home energy management (HEM) applications to manage "networked" appliances and home automation.
Each of these three application areas has its own requirements and challenges. For instance, mission-critical management of electricity in the grid on the supplier side has different requirements, such as latency, data rates, and reliability, than going to a website to monitor electricity usage for the day. Also, both sectors use a wide range of IT to make their sectors "networked". This includes IP-based and non-IP-based networks, public and private networks, and wireless and wire-line communication links.

The mix of network and communication links is driven by the application or service that the infrastructure aims to provide. For instance, in the transportation sector, a greater emphasis is given to wireless communication links as the objects such as cars, buses, and people are inherently mobile. In the energy sector, a greater emphasis is given to trying to make use of the wire-lines of the existing grid infrastructure as the objects are less mobile, e.g., power lines, houses, and electricity meters.
Finally, the private-sector-driven aftermarket of mobile smart phone platform-app ecosystems is more developed in catering to the transportation sector (e.g., navigation apps) than to the energy sector (e.g., smart sockets). Based on the analysis so far, the home energy management (HEM) application area is where the Internet protocol stack, realizing the IP-based networks of ubiquitous Internet connectivity driven by the private sector and its market-based models, currently manifests itself most prominently in smart grid systems. This is similar to the navigation and infotainment apps on the smart phone platforms of the mobile OEMs and the in-car infotainment platforms of the automotive OEMs in the transportation sector.
In the following sections, the three-layered model of Internet connectivity (applications, networks, and links) is applied to the three application areas of each of the two sectors.
Supplier-side of the Infrastructure
ITS: State DOTs' Traffic Management Systems
This is the application area of the State DOTs' integrated back-end ITS traffic management systems, e.g., the GDOT TMC NaviGAtor ITS system. This is the back-end of the GDOT TMC that, through the NaviGAtor system (hardware and software components), is able to manage and control the highways and traffic in Georgia, foremost for incident management. This application area also includes government CCTV video surveillance, photo speed enforcement, fleet GPS tracking, electronic credentialing and weigh-in-motion, electronic tolling, and vehicle-mile taxing. The data that are being exchanged are traffic conditions (speed, volume, and occupancy), incidents, weather conditions, vehicle IDs, vehicle geo-locations, and road maps. The networks tend to be IP-based private networks that aim to establish connectivity between the ITS system components, e.g., variable message signs, ramp meters, and CCTV cameras. The links are a combination of wire-line fiber and Ethernet to connect all the system components. For instance, the GDOT TMC operates its own fiber network because it needs to have those resources at its disposal when there are issues and incidents. Wireless links, e.g., cellular (2G), are also used for communication with remote ITS system components. Recently, Bluetooth has been used as well to monitor traffic conditions.
SG: Supplier-Side Electricity Management
This is the mission-critical application area for the management and control of the grid by electricity suppliers: intelligent management and control through monitoring of electricity and information across the generation, transmission, and distribution components of the grid. Examples include monitoring and managing substations, distribution centers, overhead transmission lines, wide-area situational awareness (WASA) systems, assets, blackout and outage prevention, meter data, renewables, and vehicle-to-grid (V2G) integration. This is also the domain of supervisory control and data acquisition (SCADA) systems. The data exchanged include electricity currents and voltages, such as loads (including peak loads), outages, demand, and their management; consumer electricity consumption (both historical and real-time); dynamic real-time pricing rates of electricity, such as time-of-use pricing (TOU), critical peak pricing (CPP), and real-time pricing (RTP); dynamic incentive options for electricity, such as direct load control, interruptible service, and demand bidding/buy-back programs; and emergency demand response programs. The networks that are used can be IP-based or non-IP-based (dedicated) and tend to be
private, establishing connectivity among the grid components for generation, transmission, and distribution of electricity (FAN/WAN). The links are wire-line, e.g., fiber, Ethernet, or power line communication (PLC), for establishing two-way real-time communication between grid components.
User-side of the Infrastructure
ITS: State DOT Traffic Information Provision
This is the application area of the State DOTs' front end. For example, through its 511 phone system, the NaviGAtor websites, the 511 Android/iOS smart phone app, variable message signs, and radio broadcasts, the front end of the GDOT TMC provides traffic information to drivers in Georgia to influence their travel decisions, e.g., re-routing around lane closures or postponing a trip when the roads are congested. The data exchanged are traffic conditions, incidents, weather conditions, and road maps. The networks are IP-based, private or public, establishing connectivity between State DOTs and road users (drivers). The links are wire-line (fiber or Ethernet) or wireless (cellular 2G, 3G, 4G, or WiFi).
ITS: Public-Sector Connected Vehicle Paradigm
This is the application area of connected vehicles: V2X traffic safety and warning systems, V2X traffic efficiency and navigation systems, and V2X infotainment systems (safety, navigation, and infotainment apps, respectively). The data exchanged are traffic conditions, incidents, weather conditions, vehicle IDs, vehicle geo-locations, road maps, vehicle condition and performance, billing data, music, video, and Voice over Internet Protocol (VoIP). The networks are IP-based or non-IP-based, public or private, establishing connectivity between vehicles in an ad hoc mesh network and between vehicles and the infrastructure. For instance, an analysis of the connected vehicle
layered protocols, the Wireless Access in Vehicular Environments (WAVE)/Dedicated Short Range Communications (DSRC) protocol stack and the International Organization for Standardization's Communications Access for Land Mobiles (ISO CALM) protocol stack, shows that both IP-based and non-IP (dedicated) protocols are used; the dedicated protocols have reduced overhead suited to the low-latency requirements of safety applications. The links used are fiber, cellular, and DSRC.
SG: User-Side Electricity Management
This is the application area of smart electricity meters, which enable two-way communication of energy and information between consumers and utilities, mainly through the smart meters of the advanced metering infrastructure (AMI). The key here is demand response management (DRM): price-based or incentive-based load control that allows electricity use and loads on the network to be managed, for example by letting consumers know when it is cheaper to use electricity. The data exchanged are electricity consumption information (historical and real-time), dynamic real-time pricing rates of electricity (time-of-use pricing, critical peak pricing, real-time pricing), dynamic incentive options for electricity (direct load control, interruptible service, demand bidding/buy-back programs), and emergency demand response programs. The networks are IP-based or non-IP-based, public or private, establishing two-way real-time connectivity between consumers and utilities. The links are wire-line (power line communication or fiber) and wireless (cellular 2G, 3G).
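As a simple numerical illustration of price-based demand response, the sketch below computes a household's daily bill under a hypothetical two-tier time-of-use tariff. The rates and the load profile are invented for illustration only and are not drawn from any utility program discussed in this report.

# Hedged illustration of time-of-use (TOU) pricing; rates and loads are hypothetical.
PEAK_HOURS = range(14, 20)   # 2 pm to 8 pm (assumed peak window)
RATE_PEAK = 0.24             # $/kWh during peak hours (assumed)
RATE_OFF_PEAK = 0.10         # $/kWh otherwise (assumed)

# Assumed hourly consumption in kWh: flat 1.2 kWh baseline with a 3 kWh peak-hour spike
hourly_kwh = [1.2] * 24
for h in PEAK_HOURS:
    hourly_kwh[h] = 3.0

cost = sum(kwh * (RATE_PEAK if h in PEAK_HOURS else RATE_OFF_PEAK)
           for h, kwh in enumerate(hourly_kwh))
print(f"Daily cost under TOU: ${cost:.2f}")
# Shifting the peak-hour spike to off-peak hours would cut the bill,
# which is the behavioral response DRM programs aim to induce.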
User-side Aftermarket
ITS: Telematics
This application area, characterized by rapid technological innovation in navigation apps, is called the telematics platform-app ecosystem. The apps divide into apps on
automotive OEM platform-app ecosystems and apps on mobile OEM platform-app ecosystems.
First, there are telematics systems developed and operated by the automotive original equipment manufacturers (OEMs) such as Ford, GM, and BMW. These are infotainment systems, such as Ford Sync, GM OnStar, and BMW ConnectedDrive, that provide drivers with navigation and traffic information apps for efficiency, alongside entertainment apps for listening to music or browsing the Internet in the car. The automotive OEMs have their own platforms and operating systems (e.g., Windows Embedded) to run their apps.
Second, there are telematics systems developed by the mobile OEMs such as Apple and Google. Smart phones can be used standalone for infotainment (e.g., listening to music or navigation). The mobile OEMs have their own OS platforms, Apple iOS and Google Android, respectively. Key transportation apps include Waze, Google Maps, Inrix, Yelp, Parker, and OneBusAway.
For example, the navigation app Waze provides routing on a map whose traffic information is built dynamically from social media reports and from congestion measured by GPS tracking of the individuals running the app. A "battle over the dashboard" is under way because it is not yet clear which platform-app ecosystem will dominate.
In short, the telematics platform-app ecology comprises the emerging navigation apps on the in-car infotainment systems of the automotive OEMs and the navigation apps on the smart phone platforms of the mobile OEMs. Finally, there are developments in car insurance that monitor driving behavior for risk and premium assessment.
The data that are exchanged are traffic conditions, incidents, weather conditions, vehicle IDs, vehicle geo-locations, road maps, vehicle condition and performance, billing
data, music, video, and VoIP. The networks are IP-based and public to establish connectivity between users and between users and online services/databases. The links are wireless (cellular 2G, 3G, 4G) and WiFi.
SG: Home Energy Management
This is the application area of interconnecting smart home/building appliances and automation systems to manage electricity consumption. These systems allow web- or app-based management of smart appliances such as sockets and thermostats and of home automation. There has been app activity in web-based electricity monitoring (e.g., Microsoft Hohm, Google PowerMeter); however, those apps have been discontinued. Two promising applications are Visible Energy and Control4. Visible Energy develops smart electricity outlet products for the smart home. These allow customers to monitor and manage the electricity consumption of their electrical appliances remotely over the Internet, through web-based dashboards/portals and iPhone/iPad applications that communicate with the smart sockets via the in-house WiFi infrastructure. The smart sockets monitor electricity consumption at 5-minute intervals and store the data for 2 to 4 months, after which the usage and cost information can be read through the web portals and the iPhone apps. Individual sockets can also be turned on and off, timers can be set, and sockets can sense devices' standby modes and turn themselves off. Products offered include the UFO Power Center and the Monostrip; products under development include the Desktop Power Center and tools for charging electric vehicles (PHEVs and PEVs). Control4 offers, in the smart home tradition, automation systems for electrical components and systems in the house such as lighting, video, climate control, and security cameras, together with smartphone/tablet apps. Automation
means that these components are integrated and are able to work together. Smart thermostats, home theater systems, security systems, and lighting systems can be controlled by a central computer that makes management of the components available through various interfaces (e.g., flat/touch screens, the Internet, and smart phone apps accessible anywhere). Key connectivity media for linking all the devices in the home are TCP/IP over Ethernet, WiFi, and ZigBee (a wireless mesh technology).
The data exchanged are electricity consumption (historical and real-time) and device control and configuration information. The networks are IP-based and can be public or private, connecting consumers with the appliances/devices in their homes. The links are wire-line (Ethernet) or wireless (cellular 2G, 3G, 4G, WiFi, or ZigBee).
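As a small illustration of what a HEM application does with such data, the sketch below aggregates hypothetical 5-minute smart-socket power readings into a daily energy total. The readings are invented, and no particular vendor product or API is implied.

# Hedged sketch: aggregate 5-minute smart-socket power readings (watts) into daily energy.
# The readings are hypothetical; real products expose such data through their own portals.
readings_w = [85.0] * 288      # 288 five-minute intervals per day, assumed constant 85 W draw
interval_hours = 5 / 60.0      # each reading covers 5 minutes

kwh = sum(w * interval_hours for w in readings_w) / 1000.0
print(f"Estimated daily consumption: {kwh:.2f} kWh")   # 85 W for 24 h is about 2.04 kWh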
Conclusions
This section offers insights into key differences between the two sectors. These include observations about the different media requirements in each sector, the locus of information uncertainty in each sector, and the degree of likely institutional transformation in each sector.
Link Layer: Need for Wireless
Transportation and energy require different physical media for their networks.
Transportation is built on mobility: vehicles move around. Therefore, the computer networks that serve intelligent vehicles must allow for such mobility; wireless technology is essential to ITS in a way that it is not essential to smart grids. Intelligent transportation systems demand wireless media at the link layer.
The preeminent role played by wireless has numerous consequences. Wireless has significant technical limitations, beginning with bandwidth and range. Classic WiFi
(802.11) offers good bandwidth but has limited range. Other wireless networks, like cellular or Long-Term Evolution (LTE), offer better range but lower bandwidth and higher cost. Wireless networks are also less secure: with any computer within radio range able to attempt a connection, authorized or not, wireless links make attractive targets for hackers. That may leave ITS networks less secure than networks in other sectors. Thus, the transportation sector faces significant challenges at the link layer due to the need for wireless media.
Latency
Not only do vehicles move, they move rather quickly (even by the standards of information engineering). Network latency, the time lag between transmission and reception, is often measured in whole seconds. For safety applications in transportation, such latency is unacceptable: applications that identify imminent collisions need much lower latency. This need for low latency affects the selection of media (link layer); some media simply are not fast enough. A simple calculation below illustrates why.
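The following back-of-the-envelope sketch, with assumed speeds and latencies, shows how far a vehicle travels while a message is in flight. The 100 ms figure is the order of magnitude commonly targeted for vehicular safety messaging, used here only as an assumption, not a value from this study.

# Hedged sketch: distance a vehicle covers during network latency (all values are assumptions).
MPH_TO_MPS = 0.44704

def distance_during_latency(speed_mph: float, latency_s: float) -> float:
    """Meters traveled by a vehicle at speed_mph during latency_s seconds."""
    return speed_mph * MPH_TO_MPS * latency_s

for latency in (1.0, 0.1):  # 1 s (typical Internet-scale delay) vs. 0.1 s (safety-message target)
    d = distance_during_latency(70.0, latency)
    print(f"At 70 mph, {latency * 1000:.0f} ms of latency = {d:.1f} m of travel")
# Roughly 31 m at 1 s versus about 3 m at 100 ms: the difference between detecting an
# imminent collision in time and detecting it too late.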
More troubling, the demand for low latency can render the Internet Protocol itself inappropriate. If the Internet Protocol is not adequate, the implications can be profound. The Internet revolution and the Internet of Things (IoT) may not fully apply to the transportation sector. New Transport and Network layer protocols may have to be developed to achieve low latency. At the link layer, new wireless protocols may also have to be developed.
This situation is playing out in the dedicated short-range communications (DSRC) development program. Due to latency issues, the DSRC program has had to develop an alternative to the Internet Protocol and an alternative to the classic 802.11 WiFi standards. Instead of building on the Internet revolution, its developers have had to build an alternative to the Internet.
Unable to build on the Internet revolution, DSRC has found itself in competition with it. DSRC uses frequencies in the 5.9 GHz band, but that spectrum is also sought by WiFi companies. Hearings held at the FCC, and the process leading to those hearings, have manifested competition between established networking firms and DSRC developers.
Unlike transportation, communications in the energy sector are largely compatible with existing Internet protocols. While some applications are time-sensitive (e.g., emergency response), the incompatibilities are peripheral, whereas in transportation the safety application area is central and manifests serious incompatibilities with the Internet.
Uncertainty, Information, and Value
Ultimately, the success or failure of information technology in any sector depends on whether the technology adds value. Valuable systems are more likely to succeed than systems of little value.
What makes an information system valuable? How does information add value? The value of information derives from its ability to reduce uncertainty. Where the future is uncertain, an increase in information sampling, both in scope and in frequency, allows an observer to detect and track changes. Stated differently, information is most valuable in situations of uncertainty: where change is rapid and events are unpredictable, information provides the greatest value.
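As a stylized illustration of this point (with invented numbers, not data from this study), the sketch below compares a driver's expected delay with and without real-time congestion information; the gap between the two is the value the information provides, and it grows with the probability of the uncertain event.

# Hedged sketch: expected value of traveler information under uncertainty (all values assumed).
P_INCIDENT = 0.2           # probability the usual route is blocked by an incident
DELAY_IF_BLOCKED = 45.0    # minutes of delay if the driver hits the incident uninformed
DETOUR_COST = 10.0         # extra minutes on the alternate route taken when warned

expected_delay_uninformed = P_INCIDENT * DELAY_IF_BLOCKED
expected_delay_informed = P_INCIDENT * DETOUR_COST   # warned drivers take the detour
value_of_information = expected_delay_uninformed - expected_delay_informed

print(f"Expected delay without information: {expected_delay_uninformed:.1f} min")
print(f"Expected delay with information:    {expected_delay_informed:.1f} min")
print(f"Expected value of the information:  {value_of_information:.1f} min saved per trip")
# With P_INCIDENT near zero the information is worth little; as uncertainty rises, so does its value.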
Uncertainty is one of the chief characteristics of the transportation sector, for two reasons. The first is already familiar: mobility. Where objects move around, information needs are high. Second, the transportation system as a whole is uncertain. Vehicles are controlled by drivers who are self-governing and who may lack skills, and predicting what so many autonomous agents will do is very difficult. Poor decisions and harmful
actions can easily occur. Making matters worse, the system is vulnerable to perturbation: one bad road accident can block significant portions of a transportation network. Transportation is rife with uncertainty, and the costs associated with system malfunction are high: being an hour late to work is a major cost, both to the employee and to the firm.
Thus, ITS can have very high value in the transportation sector. This is not to say there is no uncertainty and no value of information in the energy sector, but the transportation sector benefits comparatively more from information technology. Evidence of this lies in the experiences of IT firms: while Google and Microsoft withdrew from the smart-meter market in the energy sector, both have committed to the transportation sector. It is easier to deliver valuable benefits in a sector with so many moving objects and so much uncertainty.
Locus of Innovation
In the energy sector, indications are that the greatest uncertainty lies in the "back office" operations of power generation and distribution. Hence, smart grids assist the supply industry as it seeks to achieve efficiency in its operations.
The transportation sector is just the opposite: the greatest uncertainty is in the experience of the user of the system. Roads are stable, but traffic and operations are unpredictable; the uncertainty is located at the user. This manifests itself in aftermarket products like smart phone apps (e.g., Waze) that directly serve the user. Innovative systems enter the transportation sector easily because users are accessible in a way that back-office operators are not. In energy, entry by new IT entrepreneurs may be more difficult.
As a result, the transportation sector may experience more new players than does the energy sector. The companies and the industry structures that participate in
transportation are likely to change because so much innovation occurs in the aftermarket, where barriers to entry are low.
Governance
Governance refers to control. As new information systems and new players enter the transportation sector, a change in governance is likely: new players will bring new forms of control to the sector.
One prominent example is traffic management. Currently, traffic management is performed by public-sector traffic management centers. With the advent of private navigation systems, TMCs may lose control of traffic as traffic increasingly responds to the recommendations of private navigation firms. Governance of network operations will, at a minimum, slip from the TMCs; it may simply be lost, or it may reappear in the private sector as private navigation firms influence broad traffic flows.
In summary, the transportation sector may present greater challenges to network adoption and may also be more radically transformed by networks. Where its latency characteristics render the Internet inappropriate, it remains to be seen whether the transportation sector can successfully develop entirely new protocols and media. However, where the Internet is appropriate, the costly uncertainty that characterizes transportation means that innovation could happen rapidly. As new players enter the sector, effective control of operations could shift.
Chapter 6: Conclusions and Recommendations for Further Research
Transportation is in the midst of a second IT revolution. The first revolution was the ITS program that started in the 1990s and that continues to this day. The ITS program employs comprehensive system planning at the federal level and deploys field tests and operational systems at the state and local level.
Today's second revolution is based on Web 2.0 technologies and bottom-up approaches implemented on mobile computing and communication platforms like smart phones and tablets. Transportation policy is no longer necessarily a top-down process; now more than ever, there are opportunities for a bottom-up approach that focuses on the experience and needs of citizens themselves. Social networked transportation is the name we give to the technologies, social interactions, and development processes of this second IT revolution.
Social networked transportation benefits users, operators, and planners. Users get better information, thanks to an explosion of new apps, from transit apps that give riders bus arrival times to parking apps that help drivers find available spaces. Operators and planners benefit, too, from apps that monitor people's travel to crowdsourcing apps that assist in the planning of bike lanes.
Social networked transportation builds on three elements: technical standards, network connectivity, and application development. These three core elements have been the theme of this report.
Technical standards define both data formats and the communication protocols that enable data to be widely shared and understood. Lessons from this research in standard-setting are the importance of open standards and of early on-the-ground
implementations of standards. In order to be adopted in a timely manner, standards should be easily accessible and should be developed through participatory processes.
Data networks and data sharing are fundamental components of the second core element, connectivity. Data sharing requires confidence in the data source, and it requires transparency and openness around the technology. Although institutional transitions take time, institutions such as traffic management centers are beginning to use third-party data to monitor traffic.
The third core element in social networking is application development. Once data are standardized and interconnected, it still remains to develop applications to convert that data into useful information. To enable such application development, data have to be open without institutional barriers and collaborators have to be encouraged. In comparing the transportation and energy sectors, we saw how application development processes occur according to the logic of different sectors, reflecting unique circumstances, institutions, and needs.
Funding from this project was used to support two transportation application development events: the "TransportationCamp South" events of 2013 and 2014. A TransportationCamp is an "unconference" where sessions are proposed and led by attendees. This unconference brings together thinkers and doers in the transportation and technology fields for a day of learning, debating, connecting, and creating. TransportationCamp has been held in 6 different cities since 2011: New York, San Francisco, Washington D.C., Montreal, Boston, and now, Atlanta. With its bottom-up approach, TransportationCamp raises awareness of opportunities in social networked transportation and builds connections between disparate innovators in public administration, transportation planning and operations, information design, and software engineering.
The first TransportationCamp South was held on February 7, 2013. The event began with a keynote panel titled "Big Problem, Small Budget: Addressing Atlanta's Transportation Livability Hurdles through Technology" and featured Dr. Kari Watkins from Georgia Tech, Ben Graham from MARTA, Joshuah Mello from the City of Atlanta, and Nathan Soldat from the Atlanta Regional Commission. Over 200 people attended sessions that included Mobile and Open Payments: Beyond Breeze, Networking on Twitter, Data-Driven Ped Planning and Ped Counting, the Atlanta Streetcar, Crowdfunding Transportation Improvements, a MARTA Rider's Bill of Rights, and Sustainable Transpo 101. Dr. Klein hosted a session entitled "The Internet Paradigm and Transportation."
TransportationCamp South 2014 was held on Saturday, April 12, 2014, and included sessions on Connected and Autonomous Vehicles, Mobile Payments Using Smart Phones, Bridging the Digital Divide in Community Engagement, Smart Parking, Political Strategy for Transit Expansion in Atlanta, and Federal Policy and Innovative Funding. It, too, was attended by more than 200 people. Perhaps most critically to this project, TransportationCamp South was co-hosted with "govathon," Atlanta's citywide hackathon run by Startup Atlanta, which focuses on problems that affect local government and the community. Six teams created unique applications to help facilitate transportation in Atlanta and beyond. (More information is available at http://transportationcamp.org/events/south.)
It is the intent of Dr. Watkins' research group to continue to host TransportationCamp South and similar events to facilitate developer-agency communication, thereby encouraging application development that furthers the state of transportation.
REFERENCES
1. Gildea, D., and M. Sheikh. Applications of Technology in Providing Transit Information. Transportation Research Record, Vol. 1521, No. 1, 1996, pp. 71-76.
2. Watkins, K. E., B. Ferris, A. Borning, G. S. Rutherford, and D. Layton. Where Is My Bus? Impact of mobile real-time information on the perceived and actual wait time of transit riders. Transportation Research Part A: Policy and Practice, Vol. 45, No. 8, Oct. 2011, pp. 839-848.
3. Ferris, B. OneBusAway: Improving the Usability of Public Transit. 2011.
4. Tang, L., and P. Thakuriah. Ridership effects of real-time bus information system: A case study in the City of Chicago. Transportation Research Part C: Emerging ..., 2012.
5. Miller, D. L., K. Dot, D. S. Ekern, V. Dot, C. M. Walton, and E. H. Cockrell. TCRP Synthesis 73 AVL Systems for Bus Transit: Update. Transportation Research Board, 2008.
6. ITS FAQs.
7. ITS Joint Program Office FAQs. http://www.its.dot.gov/faqs.htm. Accessed Sep. 3, 2013, .
8. United States Congress. Transportation Equity Act for the 21st Century. Public Law, 1998.
9. The National ITS Architecture 7.0. http://www.iteris.com/itsarch/. Accessed Sep. 4, 2013, .
10. USDOT. Intelligent Transportation System Architecture and Standards; Final Rule. Federal Register, Vol. 66, No. 5, 2001, pp. 1445-1459.
11. Burger, C., M. Clark, B. Cotton, G. Filosa, D. W. Jackson, A. Linthicum, E. Machek, L. Mejias, T. Regan, S. M. Sloan, and K. Sylvester. FTA Transit Intelligent Transportation System Architecture Consistency Review 2010 Update. 2011.
12. Branscomb, L. M., and J. Keller. Converging Infrastructures: Intelligent Transportation and the National Information Infrastructure. The MIT Press, 1996.
13. Obama, B. MAKING OPEN AND MACHINE READABLE THE NEW DEFAULT FOR GOVERNMENT INFORMATION. 2013.
14. Manyika, J., M. Chui, P. Groves, D. Farrell, S. Van Kuiken, and E. A. Doshi. Open data: Unlocking innovation and performance with liquid information. 2013.
15. Moving America on Transit. http://www.locationaware.usf.edu/ongoingresearch/projects/moving-america-on-transit/. Accessed Sep. 4, 2013, .
16. Gandal, N. Compatibility, Standardization, & Network Effects: Some Policy Implications. Oxford Review of Economic Policy, No. January, 2002, pp. 122.
17. Beggs, A., and P. Klemperer. Multi-period competition with switching costs. Econometrica: Journal of the Econometric Society, 1992.
18. Sundararajan, A. Local network effects and complex network structure. The BE Journal of Theoretical Economics, No. September, 2007.
19. Farrell, J., and G. Saloner. Standardization, compatibility, and innovation. The RAND Journal of Economics, Vol. 16, No. 1, 1985, pp. 70-83.
20. Katz, M., and C. Shapiro. Network externalities, competition, and compatibility. The American Economic Review, Vol. 75, No. 3, 1985, pp. 424-440.
21. Farrell, J., and P. Klemperer. Coordination and Lock-in: Competition with Switching Costs and Network Effects. Handbook of industrial organization, 2007.
22. Farrell, J., and G. Saloner. Coordination through committees and Markets. The RAND Journal of Economics, 1988.
23. Keil, T. De-facto standardization through alliances--lessons from Bluetooth. Telecommunications Policy, Vol. 26, No. 3-4, Apr. 2002, pp. 205-213.
24. Krechmer, K. Open standards requirements. Journal of IT Standards and Standardization Research, Vol. 50, No. 6, 2006, pp. 134.
25. West, J. The economic realities of open standards: black, white and many shades of gray. In Standards and Public Policy (S. Greenstein and V. Stango, eds.), Cambridge University Press, New York, pp. 87-122.
26. Cargill, C. Intellectual Property Rights and Standards Setting Organizations: An Overview of Failed Evolution Submitted to the Department of Justice and the Federal Trade Commission. 2002.
27. History of the OSI.
28. O'Reilly, T. Government as a Platform. In Open Government: Collaboration, Transparency, and Participation in Practice (D. Lathrop and L. Ruma, eds.), O'Reilly Media, Inc., Sebastopol, California, pp. 13-40.
29. Greenstein, S., and V. Stango. Introduction. In Standards and public policy (S. Greenstein and V. Stango, eds.).
30. David, P., and S. Greenstein. The Economics of Compatibility Standards: An Introduction to Recent Research. Economics of Innovation and New Technology, Vol. 1, 1990, pp. 3-41.
31. Besen, S., and L. Johnson. Compatibility standards, competition, and innovation in the broadcasting industry. RAND Corporation, Santa Monica, CA, 1986.
32. Cabral, L., and T. Kretschmer. 10 Standards battles and public policy. In Standards and public policy (S. Greenstein and V. Stango, eds.).
33. Farrell, J., C. Shapiro, R. Nelson, and R. Noll. Standard setting in high-definition television. Brookings Papers on Economic ..., No. Ses 8821529, 1992.
34. Alvarez, S., J. Chen, D. Lecumberri, and C. Yang. HDTV: The Engineering History. 1999.
35. Bresnahan, T. F., and P.-L. Yin. Standard setting in markets: the browser war. In Standards and Public Policy (S. Greenstein and V. Stango, eds.), Cambridge University Press, New York, pp. 18-59.
36. David, P. A. Some new standards for the economics of standardization in the information age. Economic Policy and Technological Performance, 1987, pp. 206-239.
37. Tassey, G. Standardization in technology-based markets. Research Policy, Vol. 29, No. 4-5, Apr. 2000, pp. 587-602.
38. Hickman, M., S. Tabibnia, and T. Day. Evaluating Interface Standards for the Public Transit Industry. Transportation Research Record: Journal of the Transportation Research Board, Vol. 1618, No. 1, 1998, pp. 172-179.
39. Cargill, C., and S. Bolin. Standardization: a failing paradigm. In Standards and Public Policy (S. Greenstein and V. Stango, eds.), Cambridge University Press, New York, pp. 296-328.
40. Perens, B. Open Standards Principles and Practice. http://perens.com/OpenStandards/Definition.html. Accessed Sep. 10, 2013, .
41. The Open Source Definition (Annotated).
42. Open Source Licenses. http://opensource.org/licenses. Accessed Nov. 7, 2013, .
43. Deshpande, A., and D. Riehle. The total growth of open source. Open Source Development, Communities and Quality, No. December 2006, 2008, pp. 197-209.
44. Baraniuk, C. The civic hackers reshaping your government. New Scientist, Vol. 218, No. 2923, 2013, pp. 36-39.
45. Yin, R. Case study research: Design and methods. 2009.
46. Rojas, F. Transit Transparency: Effective Disclosure through Open Data. 2012.
47. Harrelson, C. Happy trails with Google Transit. http://googleblog.blogspot.com/2006/09/happy-trails-with-google-transit.html. Accessed Sep. 4, 2013, .
48. Hughes, J. proposal: remove "Google" from the name of GTFS. https://groups.google.com/d/msg/gtfs-changes/ob_7MIOvOxU/zEScjv6VLBMJ. Accessed Sep. 19, 2013, .
49. Wong, J. Leveraging the General Transit Feed Specification for Efficient Transit Analysis. Transportation Research Record: Journal of the Transportation Research Board, No. 2338, 2013.
50. Gontmakher, S. Know when your bus is late with live transit updates in Google Maps. http://googleblog.blogspot.com/2011/06/know-when-your-bus-is-late-with-live.html. Accessed Oct. 2, 2013, .
51. Protocol Buffers Developer Guide: Overview.
52. GTFS-realtime Documentation. https://developers.google.com/transit/gtfs-realtime/. Accessed Sep. 20, 2013, .
53. Changes to GTFS. https://developers.google.com/transit/gtfs/changes. Accessed Nov. 7, 2013, .
54. Changes to GTFS-realtime. https://developers.google.com/transit/gtfs-realtime/changes. Accessed Nov. 7, 2013, .
55. Ferris, B., and S. Barbeau. Real Time Data Configuration Guide.
56. Creative Commons Attribution 3.0 License.
57. Apache 2.0 License.
58. Ferris, B. GTFS Realtime Resources: Tools and Libraries. https://github.com/OneBusAway/onebusaway/wiki/GTFS-Realtime-Resources#tools-andlibraries. Accessed Nov. 7, 2013, .
59. ITE. Institute of Transportation Engineers Response to Comments. 2001.
60. APTA TCIP Standard. http://www.apta.com/mc/its/previous/2010/Presentations/APTATCIP-Standard.pdf. Accessed Nov. 7, 2013, .
61. APTA TCIP: Documents. http://www.aptatcip.com/Documents.htm. Accessed Nov. 7, 2013, .
62. Transit Communications Interface Profiles (TCIP) Standard Development Program.
63. TCIP Technical Working Group 2: Passenger Information.
64. Lehr, W. Standardization: Understanding the process. JASIS, No. September, 1992, pp. 116.
65. APTA TCIP version 3.0.5.2.
66. Membership Overview. http://www.ansi.org/membership/overview/overview.aspx. Accessed Nov. 7, 2013, .
67. Ayers, R. G., J. E. Ayers, T. W. Schirmer, and A. E. Systems. Transit Communications Interface Profiles ( TCIP ) Traveler Information Pilot. No. September, 2011.
68. OneBusAway-NYC Wiki: Interface Design.
69. Schweiger, C. TCRP Synthesis 104 Use of Electronic Passenger Information Signage in Transit. 2013.
70. CEN. Public Transport -- Service Interface for Real-Time Information Relating to Public Transport Operations -- Part 1: Context and Framework. 2013, pp. 1-93.
71. SIRI (Service Interface for Real-time Information) Management Overview - White Paper. 2005.
72. Knowles, N. SIRI Handbook & Functional Service Diagrams: Version 0.13 Draft. London, 2008.
73. SIRI Schema and Documentation Downloads.
74. SIRI History.
75. Technical Specifications.
76. European Standards.
77. Introduction to SIRI.
78. Grisby, D. APTA Surveys Transit Agencies on Providing Information and Real-Time Arrivals to Customers. September. 113. http://www.apta.com/resources/reportsandpublications/Documents/APTA-Real-TimeData-Survey.pdf. Accessed Nov. 7, 2013, .
79. MAP-21 - Moving Ahead for Progress in the 21st Century: Summary. http://www.fhwa.dot.gov/map21/summaryinfo.cfm.
80. Libicki, M. C., J. Schneider, D. R. Frelinger, and A. Slomovic. Scaffolding the New Web. http://www.rand.org/pubs/monograph_reports/MR1215.html. Accessed Aug. 13, 2013, .
81. Van Woensel, T., L. Kerbache, H. Peremans, and N. Vandaele. Vehicle routing with dynamic travel times: A queueing approach. European Journal of Operational Research, Vol. 186, No. 3, May 2008, pp. 990-1007.
82. Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users. Transportation, 2005.
83. Burgess, L. M., P. Pretorius, and J. Dale. Statewide Incident Reporting Systems Business and Technology Plan. Phoenix, Arizona, 2006.
84. US GAO. Efforts to Address Highway Congestion through Real-Time Traffic Information Systems Are Expanding but Face Implementation Challenges. 2009.
85. FHWA, and US Department of Transportation. Responses to Final Rule: Real-Time System Management Information Program. Federal Register, 2011.
86. Moving Ahead for Progress in the 21st Century. Public Law 112-141, 2012.
87. Deeter, D. NCHRP Synthesis 399: Real-Time Traveler Information Systems. 70p.
88. Owens, N., A. Armstrong, P. Sullivan, C. Mitchell, D. Newton, R. Brewster, and T. Trego. Traffic Incident Management Handbook. Public Roads, Vol. 172, No. 2, 2000, p. 116.
89. Williams, B. M., and A. Guin. Traffic management center use of incident detection algorithms: Findings of a nationwide survey. IEEE Transactions on Intelligent Transportation Systems, Vol. 8, 2007, pp. 351-358.
90. Leduc, G. Road Traffic Data: Collection Methods and Applications. Volume JRC 47967.
91. Ahn, K., H. Rakha, and D. Hill. Data Quality White Paper. 47.
92. US Department of Transportation. Final Rule: Real-Time System Management Information Program. Federal Register, Vol. 75, No. 215, 2010, pp. 68418-68429.
93. Oh, S., S. G. Ritchie, and C. Oh. Real-Time Traffic Measurement from Single Loop Inductive Signatures. Transportation Research Record: Journal of the Transportation Research Board, Vol. 1804, No. 1, Jan. 2002, pp. 98-106.
94. Margiotta, R. State of the Practice for Traffic Data Quality.
95. Jacobson, L. N. Highway Traffic Operations and Freeway Management. Washington, D.C., 2003.
96. Claudel, C. G., A. Hofleitner, N. Mignerey, and A. M. Bayen. Guaranteed Bounds on Highway Travel Times Using Probe and Fixed Data. 2009.
97. Herrera, J. C., D. B. Work, R. Herring, X. (Jeff) Ban, Q. Jacobson, and A. M. Bayen. Evaluation of traffic data obtained via GPS-enabled mobile phones: The Mobile Century field experiment. Transportation Research Part C: Emerging Technologies, Vol. 18, No. 4, Aug. 2010, pp. 568-583.
98. Yim, Y. B. Y., and R. Cayford. Investigation of Vehicles as Probes Using Global Positioning System and Cellular Phone Tracking: Field Operational Test.
99. Ma, Y., J. van Dalen, C. de Blois, L. Kroon, J. Van Dalen, and C. De Blois. Estimation of Dynamic Traffic Densities for Official Statistics Based on Combined Use of GPS and Loop Detector Data. 2011.
100. Faghri, A., and K. Hamad. Application of GPS in Traffic Management Systems. GPS Solutions, Vol. 5, No. 3, 2002, pp. 52-60.
101. Schneider, W. H., S. Turner, and J. Roth. Statistical Validation of Speeds and Travel Times Provided by a Data Service Vendor. 2010.
102. Fredman, A. Mechanisms of Interference Reduction for Bluetooth. Burlington, 2002.
103. Hoh, B., M. Gruteser, R. Herring, J. Ban, D. Work, J.-C. Herrera, A. M. Bayen, M. Annavaram, and Q. Jacobson. Virtual trip lines for distributed privacy-preserving traffic monitoring. Proceeding of the 6th international conference on Mobile systems, applications, and services - MobiSys '08, 2008, p. 15.
104. Kettl, D. The Key to Networked Government. In Unlocking the power of networks: Keys to high-performance government (S. Goldsmith and D. Kettl, eds.), Brookings Institution Press, Cambridge, MA, pp. 114.
105. Forrer, J., J. E. Kee, K. E. Newcomer, and E. Boyer. Public-Private Partnerships and the Public Accountability Question. Public Administration Review, No. May/June, 2010, pp. 475-484.
106. Lachman, B. Public-Private Partnerships for Data Sharing: A Dynamic Environment. Publication DRU-2259-NASA/OSTP. 2000.
107. Atzori, L., A. Iera, and G. Morabito. The Internet of Things: A Survey. Computer Networks, Vol. 54, No. 15, 2010, pp. 2787-2805.
108. Federal Communications Commission. Connecting America: The National Broadband Plan. Washington, D.C., 2010.
109. Ostrom, E. Understanding the Diversity of Structured Human Interactions. Princeton University Press, Princeton, 2005.
110. Cerf, V., and R. E. Kahn. A Protocol for Packet Network Intercommunication. IEEE Transactions on Communications, Vol. 22, No. 5, 1974, pp. 637-648.
111. Zimmermann, H. OSI Reference Model - The ISO Model of Architecture for Open Systems Interconnection. IEEE Transactions on Communications, Vol. 28, No. 4, 1980, pp. 425-432.
112. Crawford, S. E. S., and E. Ostrom. A Grammar of Institutions. American Political Science Review, Vol. 89, No. 3, 1995, pp. 582-600.
113. Braden, R. Requirements for Internet Hosts - Application and Support (RFC 1123). https://tools.ietf.org/html/rfc1123.
114. Braden, R. Requirements for Internet Hosts - Communication Layers (RFC 1122). https://tools.ietf.org/html/rfc1122.
115. Postel, J. User Datagram Protocol (RFC 768). https://tools.ietf.org/html/rfc768.
116. Day, J. D., and H. Zimmerman. The OSI Reference Model. Proceedings of the IEEE, Vol. 71, No. 12, 1983, pp. 1334-1340.
This page intentionally left blank 157

Appendix A: Materials from Course on Social Networked Transportation
A-1

Internet and Intelligent Transportation Sept 4, 2013

Intelligent Transportation Systems (ITS) are a broad range of communications-based information, control, and electronic technologies.

"Intelligent transportation systems (ITS) encompass a broad range of wireless and wire line communications-based information and electronics technologies."

"ITS improves transportation safety and mobility and enhances American productivity through the integration of advanced communications technologies into the transportation infrastructure and in vehicles."
A-2

Source: Robert Bertini, PSU and former RITA

USDOT's Research and Innovative Technology Administration administers the Intelligent Transportation Systems (ITS) program.
Ways that information and communications technologies can improve surface transportation safety and mobility and contribute to America's economic growth.
Focus on infrastructure, vehicle, and integration
Responsible for: research; technology transfer; National ITS Architecture and Standards; training

Sound transportation research and data-driven analysis will point the way to future successes in collaborative initiatives.
Battling driver distraction as a public health epidemic
More cross-modal, now including rail and maritime
Cars, trucks, buses, fleets, and vehicles of all kinds
Commitment to dedicated short range communications between vehicles and the infrastructure for: safety, mobility, environment
Increased outreach and involvement of stakeholders
Emphasis on data, measurement, and evaluation
Broadening of participation of public and private sectors and universities
Source: Robert Bertini, PSU and former RITA

[Slide diagram: "Traditional ITS Technologies" and "The Universe of ITS / Major ITS Initiatives," mapping research, demonstration/deployment, and deployment activities across drivers, vehicles, and infrastructure, with examples including ramp metering, transit information, CV electronic credentialing, transportation management centers, ICM, wireless devices, IVBSS, VII-POC, MSAA, NG911, and wireless connectivity.]

ITS Strategic Research Plan, 2010-2014: released December 8, 2009, updated 2012. Vision of a national, multi-modal surface transportation system featuring a connected transportation environment, leveraging technology to maximize safety, mobility, and environmental performance.
A-3

Core is connected vehicles Suite of technologies and applications Wireless communications to provide
connectivity
Vehicle to vehicle
Vehicle to infrastructure

National, multi-modal surface transportation system for people and goods that features a connected transportation environment among vehicles (cars, trucks, buses, fleets of all kinds), the infrastructure, and mobile devices to serve the public good by leveraging technology to maximize safety, mobility and environmental performance. Connectivity is achieved through dedicated short range communications (DSRC).
Goal: Safety Vehicle to Vehicle Communications for Safety Vehicle to Infrastructure Communications for Safety
Goal: Mobility Real-Time Data Capture and Management
Dynamic Mobility Applications
Goal: Environment Applications for the Environment: Real-Time Information Synthesis (AERIS) Real-time, environmental data from all sources will be integrated and available for use in multimodal transportation management and performance improvement and will
contribute to better environmental practices.

In 2010, up to $77 million multimodal research $14 million to technology transfer and
evaluation. Connected vehicle research comprises $49
million of the multimodal research funds.

Vehicle to Vehicle (V2V) Communications for Safety: $11.5 million. Vehicle to Infrastructure (V2I) Communications for Safety: $9.3 million. Real-Time Data Capture and Management: $1.995 million. Dynamic Mobility Applications: $8 million. Road Weather Management: $4.6 million. Applications for the Environment: Real-Time Information Synthesis
(AERIS): $1.93 million. Human Factors: $3.525 million. Mode-Specific Research: $6.35 million. Exploratory Research: $2.5 million. Cross-Cutting Activities: $14.1 million

Vehicle to Vehicle (V2V) Communications for Safety: $11.5 million. Vehicle to Infrastructure (V2I) Communications for Safety: $9.3
million. Real-Time Data Capture and Management: $1.995 million. Dynamic Mobility Applications: $8 million. Road Weather Management: $4.6 million. Applications for the Environment: Real-Time Information Synthesis
(AERIS): $1.93 million. Human Factors: $3.525 million. Mode-Specific Research: $6.35 million. Exploratory Research: $2.5 million. Cross-Cutting Activities: $14.1 million

Effectiveness and benefits of V2V communications
Need for regulatory action by NHTSA to speed adoption

A-4

Effectiveness and benefits Initial focus on applications based on the
relay of traffic signal phase and timing information to vehicles Accelerate next generation of safety applications

Vehicle to Vehicle (V2V) Communications for Safety: $11.5 million. Vehicle to Infrastructure (V2I) Communications for Safety: $9.3 million. Real-Time Data Capture and Management: $1.995 million. Dynamic Mobility Applications: $8 million. Road Weather Management: $4.6 million. Applications for the Environment: Real-Time Information Synthesis
(AERIS): $1.93 million. Human Factors: $3.525 million. Mode-Specific Research: $6.35 million. Exploratory Research: $2.5 million. Cross-Cutting Activities: $14.1 million

What traffic, transit and freight data are available today ?
How to integrate data from "probes" ? Accelerate adoption of transportation
management systems

[Slide diagram: data sources and uses for real-time data capture. Sources include travelers (location decisions), vehicles (transit, light vehicle, freight), and infrastructure (loop, radar, other); uses include performance measurement, eco-traveler information, queue warning, and driver information across environment, mobility, and safety, e.g., variable speed limits.]

Source: Robert Bertini, PSU and former RITA

[Slide diagram: current state versus potential end state of data capture. Currently, traveler data are "nearly zero," vehicle data "a few," and infrastructure data "some"; in the potential end state, traveler data are "some," vehicle data "nearly all," and infrastructure data "where needed," with potential interim states in between.]

Source: Robert Bertini, PSU and former RITA

Data are too valuable to be used only once
Archived ITS data useful for many stakeholders
Keep raw data, include quality control
Data poor to data rich
Truth in data
Share data freely
Metadata for interoperability
Performance evaluation and measurement
Experiment with different measures
Freeways as a starting point, then arterials and transit
Integrate into decision support
Involve university researchers
Management of the transportation system cannot be done without knowledge of its performance
Source: Robert Bertini, PSU and former RITA

A-5


What technologies can help people and goods effortlessly transfer from one mode of travel (car, bus, truck, train, etc.) to another?
Remove barriers to cross-modal travel

Use vehicle-based data on current weather conditions to enable decision-making

Anonymous data from tailpipe emissions combined with other environmental data
Enable transportation managers to manage the transportation network while accounting for environmental impact

Potential to overload drivers and increase safety risks.
Minimize or eliminate distraction risks from in-vehicle devices

Active traffic management International border crossing Roadside infrastructure Commercial vehicles Electronic payment Maritime applications

A-6

Safety research for rail Technology scanning New research ideas

National architecture and standards Professional capacity building Technology transfer Evaluation

ITS Applications Overview
http://www.itsoverview.its.dot.gov/
Your Assignment
Sections will be divided out. Read your section. Prepare to brief the class.

A-7

Internet and Intelligent Transportation Sept 9, 2013

"Intelligent transportation systems (ITS) encompass a broad range of wireless and wire line communications-based information and electronics technologies."

"ITS improves transportation safety and mobility and enhances American productivity through the integration of advanced communications technologies into the transportation infrastructure and in vehicles."

Highway Capacity

Full Capacity
This is the capacity that is needed for the worst 15 minutes of a typical day. Design capacity.
Source: Yegor Malinovskiy

Remaining Effective Capacity
Incidents can comprise 50% of peak period congestion. 1 min delay in clearance = 4 to 5 min of traffic backup.
Incidents: more delay is caused by incidents than by recurring peak period congestion.
Source: Yegor Malinovskiy
A-8

Highway Capacity

Remaining Effective Capacity
Caltrans reports 20% of freeway centerline miles are under construction.
Work zones: major cost is delay imparted to the traveler
Incidents: more delay is caused by incidents than by recurring peak period congestion.
Source: Yegor Malinovskiy

Highway Capacity

Remaining Effective Capacity
75% of NHS is subject to snow & 100% is subject to rain. Weather: Snow, fog, rain can all restrict capacity
Work zones: major cost is delay imparted to the traveler
Incidents: more delay is caused by incidents than by recurring peak period congestion.
Source: Yegor Malinovskiy

Highway Capacity

Remaining Effective Capacity
Periodic events can further restrict capacity. Special events and disasters further restrict capacity
Weather: Snow, fog, rain can all restrict capacity Work zones: major cost is delay imparted to the traveler
Incidents: more delay is caused by incidents than by recurring peak period congestion.
Source: Yegor Malinovskiy

Highway Capacity

Weather Work Zones Incidents
Source: Yegor Malinovskiy

ITS
10

A-9

ITS Applications Overview
http://www.itsoverview.its.dot.gov/
Your Assignment
Sections will be divided out. Read your section. Prepare to brief the class.

Information types
Travel times
Trip planning
Location awareness Static or real-time Timing of delivery
Pre-trip
En-route

First Generation: call centers (511), Highway Advisory Radio (HAR), TV/radio, Dynamic Message Signs (DMS), mapping, transit announcements
Advanced TIS (Second Generation): websites, Interactive Voice Response (511), web-enabled mobile devices
Intelligent TIS (Third Generation): push notifications (text/email), in-vehicle systems (IVS)

Informing users allows for an equilibrium solution: users know the entire system and make optimal choices. Improves reliability, decreases frustration, and prevents some trips from happening.

Source: Yegor Malinovskiy

17

Source: Yegor Malinovskiy

18
A-10

Navigator http://www.511ga.org/
OneBusAway http://atlanta.onebusaway.org

Printed - timetables, maps, service change notices
Posted - system maps or notices
Audible announcements - stops, train directions, fare zone
Visual displays - on-board or in stations
Transit agency staff - station agents or tourist info staff
Telephone information - info lines, automated menus, SMS
Online information
Smartphone apps - trip planning, fare info, real-time
Transit infrastructure - shelters, signage

Websites, text message, Facebook, Twitter

A-11

Inform riders about alerts, in real time and individually
Barriers: data in standard format; easy input without a human chain of information
Surveillance
Traffic Infrastructure
Ramp Control
Ramp Metering (http://www.dot.ga.gov/travelingingeorgia/rampmete rs/Pages/default.aspx)
Ramp Closures Priority Access
Special Event Management
Temporary TMC's
A-12

Lane Management
HOV facilities, reversible flow lanes, pricing, lane control, variable speed limits, emergency evacuation

System: -24% travel times, +27% reliability, -50% incidents, 7-9% extra capacity
Drivers: 93% understand, 84% comfortable, 60% want more
Environment: 4% less CO2, 10% less particulates, 5% less NOx, 4% less fuel

Source: Yegor Malinovskiy

[Slide photos: active traffic management deployments in Birmingham (UK pilot program), Baku, and Kyiv.]
Europe overall: 30% fewer collisions, 22% more capacity
Source: Yegor Malinovskiy

WSDOT Smart Highways Video

33

Source: Yegor Malinovskiy

A-13

Internet and Intelligent Transportation Sept 11, 2013

http://www.youtube.com/watch?v=POcQUTlOvZs

http://www.ted.com/talks/sebastian_thrun_google_s_driverless_car.html
http://www.youtube.com/watch?v=cdgQpa1pUUE

Level 0: No Automation
Level 1: Function-Specific Automation
Level 2: Combined Function Automation
Level 3: Limited Self-Driving Automation
Level 4: Full Self-Driving Automation

A-14

Driver in complete and sole control of the primary vehicle controls (brake, steering, throttle, and motive power) at all times
Driver is solely responsible for monitoring roadway and safe operation of all vehicle controls.
Driver support/convenience systems including warnings (e.g., forward collision warning, lane departure warning, blind spot monitoring) and automated secondary controls such as wipers, headlights, turn signals, hazard lights, etc.
V2V warning technology alone would be at this level

One or more specific control functions; if multiple functions, they operate independently from each other. The driver has overall control and is solely responsible for safe operation. The driver can choose to cede limited authority over a primary control (as in adaptive cruise control), the vehicle can automatically assume limited authority over a primary control (as in electronic stability control), or the automated system can provide added control to aid the driver in certain normal driving or crash-imminent situations (e.g., dynamic brake support in emergencies). The vehicle does not assume driving responsibility from the driver; it only assists or augments. Examples of function-specific automation systems include cruise control, automatic braking, and lane keeping.

Automation of at least two primary control functions in unison
Shared authority when the driver cedes active primary control in certain limited driving situations
Driver responsible for monitoring roadway and safe operation
Driver must be ready to control the vehicle safely at moment's notice
Example: Adaptive cruise control in combination with lane centering

Enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions
Rely heavily on the vehicle to monitor for changes in those conditions
Driver expected to be available
Example: a self-driving car that can determine when the system is no longer able to support automation (e.g., a construction area)

Vehicle designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip
Driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip
Occupied and unoccupied vehicles Safe operation rests solely on the automated
vehicle system.

Level 1 technology mandatory on all new light vehicles since MY 2011
Guidelines for licensing for self-driving vehicle testing
"NHTSA does not recommend that states authorize the operation of self-driving vehicles for purposes other than testing at this time."

A-15

http://www.forbes.com/sites/chunkamui/2013/01/24/googles-trillion-dollar-driverless-car-part-2-the-ripple-effects/3/

Look at the next USDOT Strategic Plan: http://www.its.dot.gov/strategicplan/pdf/2015_ITS_StrategicPlan2015-2019.pdf
Comment at http://itsstrategicplan.ideascale.com/

A-16

Transit Data Standards:
Improving the Delivery of Passenger Information
CEE 8813
Presenter: Landon Reed Advisor: Kari Watkins
Georgia Institute of Technology October 7, 2013

Evolution of Schedule Data Representation

[Slide diagram: evolution of schedule data representation, from paper schedules through digitization and interactivity to schedule data standards.]

2

Project Scope
Passenger Information Real-time, not schedule-based
Trip updates Vehicle locations Service alerts
Principally in the US
It's not so easy to interview or review documents in kanji (Japanese)
3

Outline
What are transit data standards?
How are transit data standards used?
What are the major barriers to standards adoption in transit?
Findings and Recommendations
Questions / Discussion
4

WHAT ARE TRANSIT DATA STANDARDS?
5

Transit ITS Requirements
TEA-21 (1998) specified the need for "major" ITS projects to conform to a regional ITS architecture, including any applicable standards or provisional standards.
Many concerns arose during the comment period; FTA clarification stated that the only standards required for conformance were for commercial vehicle operations, i.e., no real requirement to conform to transit ITS standards (unless specified in the regional ITS architecture).
6
A-17

Transit Data Standards
Standard ways of representing data that:
Enable the interoperability of systems
Internal IT (scheduling software to trip planner); external (Google Transit and MARTA)
Create more robust markets
Break the monopolistic grip of "vendor lock-in"; easily mix and match products and vendors

Example of transit data standard (GTFS) for transit stops; a minimal excerpt is sketched below.
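The snippet below is a hedged illustration of what a GTFS stops.txt file looks like and how it can be read. The stop IDs, names, and coordinates are invented for illustration and do not come from any agency's feed; stop_id, stop_name, stop_lat, and stop_lon are standard GTFS fields.

# Hedged sketch: reading a (hypothetical) GTFS stops.txt excerpt with Python's csv module.
import csv, io

sample_stops_txt = """stop_id,stop_name,stop_lat,stop_lon
1001,Main St & Broad St,33.7550,-84.3900
1002,Peachtree St & 10th St,33.7815,-84.3830
"""

for row in csv.DictReader(io.StringIO(sample_stops_txt)):
    print(f"{row['stop_id']}: {row['stop_name']} ({row['stop_lat']}, {row['stop_lon']})")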

7

Other Benefits of Standardized Data
Can attract third-party software developers (complementary providers)
Allow agencies to benefit from domain experts, e.g., mobile apps, assistive devices
Improve research possibilities
Simplify analysis across multiple transit agencies, e.g., comparing network design or on-time performance
8

Major US Transit Open Standards (for passenger information)
GTFS (General Transit Feed Specification): originally developed for Google Transit; high adoption (272 agencies)
GTFS-realtime: real-time corollary to GTFS; medium adoption (10-30 agencies)
TCIP (Transit Communications Interface Profiles): developed in conjunction with FTA; low adoption (~6 agencies)
SIRI (Service Interface for Real-time Information): real-time standard developed in the EU by CEN; low adoption in the US, much higher in the EU
Other standards: OneBusAway

HOW ARE THESE STANDARDS USED?
11

Real-time Information Example
Packaging Options:
Custom Proprietary Format (vendor-based) Standard (GTFS-realtime, TCIP)

Information

"I'm: at Main St & Broad St, 7 minutes late, and carrying 40 passengers."

12
A-18

Example of Application Developed with Data Standards (1)

Example of Application Developed with Data Standards (2)

Walk Score Apartment Search Tool

Shows apartment listings within a given walking distance to transit stations (MARTA shown above).

Source: http://walkscore.com/apartments

13

Example of Application Developed with Data Standards (3)

OpenTripPlanner
Open-source trip planning application that runs on GTFS. Currently used by TriMet for the agency's trip planner as well as about 7 other agencies around the world.
15

OneBusAway: Bus tracking apps

Application suite (web, iPhone, Android) that allows users to easily find real-time transit information.

Source: http://onebusaway.gatech.edu

14

OneBusAway: Standards Used
GTFS: schedule data (routes, stops, stop times); open; originally developed by Google
OneBusAway API: real-time and schedule data; transactional, not wholesale (optimal for mobile applications); semi-open (see the sketch below)
Repeaters, libraries, etc.: OneBusAway's open-source code contains a variety of tools to produce and consume the OBA API and a few others (NextBus, SIRI, Orbital)
16
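To illustrate the transactional (per-stop) style noted above, the sketch below queries a OneBusAway server for arrivals at a single stop. The host name, stop ID, and API key are placeholders, and the response field names follow the published OneBusAway REST API but may vary by server version.

    import json
    import urllib.request

    # Placeholders: substitute a real OneBusAway server, stop ID, and API key.
    URL = ("http://oba.example.org/api/where/"
           "arrivals-and-departures-for-stop/1_75403.json?key=TEST")

    with urllib.request.urlopen(URL) as response:
        payload = json.load(response)

    for arrival in payload["data"]["entry"]["arrivalsAndDepartures"]:
        # predictedArrivalTime is 0 when no real-time prediction is available.
        best = arrival.get("predictedArrivalTime") or arrival["scheduledArrivalTime"]
        print(arrival["routeShortName"], best)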

General Transit Feed Specification (GTFS)
Component files: agency.txt, routes.txt, trips.txt, stops.txt, stop_times.txt, calendar.txt, shapes.txt

Primarily for external consumption (third-party apps)

Works in conjunction with GTFS-realtime

17
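As a minimal sketch of how a consuming application might read one of these files, the Python snippet below loads stop locations from an unzipped feed using only the standard library; the directory path is an assumption, not a reference to any particular agency's feed.

    import csv

    # Read stop coordinates from an unzipped GTFS feed (path is illustrative).
    with open("gtfs/stops.txt", newline="", encoding="utf-8-sig") as f:
        stops = {row["stop_id"]: (float(row["stop_lat"]), float(row["stop_lon"]))
                 for row in csv.DictReader(f)}

    print(len(stops), "stops loaded")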

GTFS-realtime

Delivers information in three categories:
Trip updates (e.g., bus XYZ is 5 minutes late)
Vehicle locations (e.g., 33.7766318, -84.3987985)
Service alerts (e.g., reroute for DragonCon parade)
Provides a snapshot of the entire transit system
Realistic approach because of its reliance on a binary data structure (much smaller than XML); a parsing sketch follows below

18
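Because the feed is a Protocol Buffer rather than XML, consumers typically parse it with generated bindings. The sketch below uses the open-source gtfs-realtime-bindings Python package; the feed URL is a placeholder, since each agency publishes its own endpoint.

    import urllib.request
    from google.transit import gtfs_realtime_pb2  # pip install gtfs-realtime-bindings

    FEED_URL = "https://example.com/gtfs-realtime/trip-updates"  # placeholder URL

    feed = gtfs_realtime_pb2.FeedMessage()
    with urllib.request.urlopen(FEED_URL) as response:
        feed.ParseFromString(response.read())

    # Each entity carries one of the three categories: trip_update, vehicle, or alert.
    for entity in feed.entity:
        if entity.HasField("trip_update"):
            delays = [stu.departure.delay
                      for stu in entity.trip_update.stop_time_update
                      if stu.HasField("departure")]
            print(entity.trip_update.trip.trip_id, delays)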

A-19

Transit Communications Interface Profiles (TCIP)
Originally developed by ITE; ownership transferred to APTA in 2001
Standard for interoperability between agency subsystems
Very comprehensive, often cumbersome
Because of its early development and adoption failure, is it still relevant?
19

Service Interface for Realtime Information (SIRI)
Developed from a few different European standards groups: RTIG in the UK, VDV in Germany, TransModel in France
Provides a number of real-time and schedule-based functional services
Most relevant here are stop and vehicle updates; others include messaging and facility monitoring
20

Summary Table

Standard   Data                 Scale              Adoption             Est.   License         Data structure
GTFS       Schedule             Bulk               High                 2006   CC 3.0          Text
GTFS-rt    Realtime             Bulk               Low                  2011   CC 3.0          Protocol Buffer
SIRI       Schedule / realtime  Individual / bulk  EU: Medium; US: Low  2006   CEN copyright   XML / JSON
TCIP       Schedule / realtime  Individual / bulk  Low                  1996                   XML

21

WHAT ARE THE MAJOR BARRIERS TO STANDARDS ADOPTION?
22

Problems Related to Data Standards
Agencies lack technical expertise: especially a problem for the complex TCIP; may lead to low adoption
Dependent upon network effects: if only a few agencies adopt a standard, the benefits are small and the costs are high; once a critical mass is reached, benefits dramatically increase
Stakeholders value different standards: agencies, third-party developers, vendors, and researchers represent different needs, which is a challenge
23

Stakeholder Models: Simple

Basic stakeholder roles around a standard:
Creators
Implementers
Users
24

A-20

Stakeholder Models: More complex

Technology Providers

Incumbent Vendors

Vendor Challengers

Complementary Providers

Users

25

Stakeholder Models: Much more complex

Diagram: an existing standard and a NEW standard, each linking a technology provider, vendor incumbents, vendor challengers, and complementary providers to their direct and indirect users.

26

GTFS vs. TCIP Programmatic Timetable Publishing

2006: TriMet pilot uses TCIP to generate timetables
2007: TriMet publishes source code for the program using GTFS
2008: USDOT webinar held for the GTFS timetable tool
To date: 1000+ downloads of TimeTable Publisher source code
Making the source code open for the TCIP-based tool was deemed "not feasible"

27

GTFS vs. TCIP Lessons Learned
Importance of "open"
Open source, open data, open standards... all have compounding interrelated effects
Growth of open standard may partly depend on open source, open data
TriMet seems to have abandoned TCIP in favor of GTFS
Agency was heavily involved in the development of GTFS
What does this signal for real-time?
28

FINDINGS AND RECOMMENDATIONS

Open Standards Assessment (1)
Krechmer's 10 dimensions of open standards development
Includes metrics for transparency, due process, and universal access
29

Figure: openness score plotted for each standard

30

A-21

Open Standards Assessment (2)
Each standard has weaknesses
GTFS-realtime was initially developed behind closed doors
Official SIRI documentation must be purchased
TCIP discussions/meetings are not fully open
GTFS/-rt is exemplary in some categories
Change proposals and discussion happen on mailing lists
Documentation is clear and concise
31

GTFS Adoption
Source: Wong, James. (2013). Leveraging the General Transit Feed Specification (GTFS) for Efficient Transit Analysis. Proceedings of the 2013 Transportation Research Board Annual Meeting.
32

Growth of Open Source Movement
Number of lines of open source code contributed in 2008 (~60M)
Source: A. Deshpande, D. Riehle. (2008). The Total Growth of Open Source. Open Source Development, Communities and Quality.
33

Market Analysis: Many vendors
The AVL market continues to be fragmented
Source: D. Miller. (2008). TCRP Synthesis 73: AVL Systems for Bus Transit: Update. Transportation Research Board.
34

Market Analysis: Standards used in electronic signage
35

Importance of Web/Mobile
GTFS/-rt developed much later than other standards, better suited for new markets
Different means, same end: SIRI and TCIP focused on internal interoperability; GTFS/-rt focused on third-party applications (especially bulk consumers like Google)
GTFS/-rt can still provide huge benefits of standardization internally in development of web/mobile tools
At least for passenger information, primary goal remains getting info to customers
36
A-22

Shifting Market Dynamics

Figure: rider checking a next-bus arrival prediction ("Next Bus: 9 minutes")
37

Market Analysis: Next Arrival Predictions
Next arrival predictions remain the second-highest AVL function not utilized (next to TSP, which is heavily reliant on costly infrastructure)
Source: D. Miller. (2008). TCRP Synthesis 73: AVL Systems for Bus Transit: Update. Transportation Research Board.

38

Market Analysis: Open Data
Open data trend contributed to huge adoption of GTFS
Executive support of open data is very strong: all federal agency data must be provided in an open and machine-readable format
The trend has contributed to a spike of interest in transit from application developers
39

Findings Summary
Committee approach falters on long horizons; likely to see adoption of GTFS-rt in the US
The difference in purpose between GTFS-rt and TCIP/SIRI is irrelevant: delivering information to passengers is the ultimate goal
The AVL market remains fragmented; real adoption power seems to lie with vendors; the complexity of SIRI and TCIP offers flexibility, but GTFS-rt offers a simple, convenient solution
Openness of a standard is important for adoption, but the initially closed approach of GTFS/-rt still allowed it to gain market dominance
40

Possible Federal Policy Responses
Support the writing of adaptors between open standards: enable legacy TCIP systems to easily utilize new real-time passenger information systems on the market; allow integration of real-time data in GTFS-realtime or SIRI with other TCIP-based subsystems; engage third-party developers
Encourage product vendors to natively support GTFS-realtime or SIRI as export options
Do nothing: TCIP continues to languish with low adoption rates
41

Future Research Needs
True survey of real-time implementers
Past surveys are vague about technology setup and avoid questions about standardization decisions
Key questions: barriers to implementation / decision-making tree; integration with schedule data (e.g., GTFS); change/lifecycle of AVL systems; importance of open standards
42
A-23

Open Transit Data:
State of the practice
October 18, 2013 ITS World Congress
Dr. Kari Edison Watkins Assistant Professor Georgia Institute of Technology
Topics Covered
Evolution of Transit Data
Beyond Google Transit
Why Open Data?
Case Study Findings
Experiences in Atlanta
Key Lessons Learned

"You take the data that's already there...jujitsu it, put it in a machine-

readable form, and let

entrepreneurs turn it into

awesomeness.
Todd Park United States Chief Technology Officer

"

Evolution of Transit Data

Transit Data Consumption: the changing landscape
Paper schedules → digitization → interactivity → schedule data standards

5

General Transit Feed Specification

(GTFS)
Component files: agency.txt, routes.txt, trips.txt, stops.txt, stop_times.txt, calendar.txt, shapes.txt
6

A-24

How does Open Data help?
Data Access Models

Figure: two data access models.
Closed model: the transit agency responds to special requests by developers; a small subset of riders finds each specific tool useful.
Open model: the agency produces data and opens it once; anyone can access the data, and many riders access a diverse market of tools powered by GTFS.
7

Developer Perspective
GTFS Data Exchange

Beyond Google Transit

More than Google Transit
Sharing GTFS with Google allows an agency to show up on Google Transit.

HopStop

What else is out there?
OpenTripPlanner

A-25

Walk Score: Apartment Search

Mapnificent

City-Go-Round

Why Open Data?

Motivation for Open Data
Improves customer service
Increases information access for transit riders
Fosters innovative and diverse apps
Interconnected regional transit
Agency transparency
Plus...

Equitable Information Access
Encompasses diverse personal technologies
Considers all abilities / ADA access
A-26

Fast-Paced Innovation
Figure: timeline of the weeks after an agency releases real-time data. In the following weeks, a desktop widget, countdown sign, SMS service, additional websites, a Google Maps implementation, an iPhone app, and an IVR service all appear.

Transit Apps in High-Ridership US Cities
Source: Kaufman, Sarah (2012). Getting Started with Open Data: A Guide for Transportation Agencies

Data Analysis across Multiple Agencies
Source: Wong, James (2012) from an analysis performed in conjunction with Open Plans

Direct Agency Benefits
TimeTablePublisher
An application that runs exclusively on GTFS
Produces print-quality schedules for all routes, directions
Options for customization
FREE! One of many open-source tools

Case Study Findings

Case Studies
Transit agencies: Philadelphia, San Francisco, Chicago, New York, Boston
Email and phone interviews with staff
24
A-27

Getting Started with Open Data
Overcoming perceptions and attitudes
Technical feasibility
Legal concerns: brand confusion, logo usage, liability
Deployment costs

Development Cost Scenarios
Information Delivery Platforms
Multiple platforms (BART experience): deployed apps for multiple devices; too costly to keep up with evolving technologies
Custom solution (goroo): multimodal trip planner; only works in Chicago; cost >$4,000,000 to the public
Open source (OpenTripPlanner): deployed in Portland; estimated ~$140,000
Source: Biernbaum, Rainville, Spiro. Multimodal Trip Planner System Final Evaluation Report (2011)
26

Best Practices
Successful deployment tactics
Open data should be accurate and up-to-date
Transit riders will rely on the data; construction, closures, and schedule changes should be updated
Implementation
Staff-level champions and strong leadership lead to successful deployments
27

Best Practices
Working with app developers

Express agency concerns through usage

agreements

Logo and transit map usage

Ensuring developers don't misrepresent themselves

or apps as "official"

Developers

Agencies

Developer Relationships
Different levels of engagement Support for mutual customers

z
Transit Riders

Best Practices
Working with app developers
Sustainable and holistic
Avoid a "once-off" mentality; ongoing and continuous relationship, from the website to the conference
Open communication lines
Frequent interaction with developers yields trust and maintains interest
Release updates early and often (feedback loop)
Simple, clear, earnest communication

Best Practices
Performance measures
Ways to track usage: app downloads, number of apps developed
App accessibility inventory
Market research surveys

30
A-28

Experiences in Atlanta

Atlanta Regional Commission (ARC)
Regional Transit Goals
Unify the region: MPO coordinating transit operators; the Regional Transit Data Warehouse is one such initiative
Clarity: communication and transparency; accessible tools for staff, app developers, and the public
Incent innovation: encourage developers to use all of Atlanta's transit data (not just MARTA); the goal is simple access to data for developers and the public and the absence of overbearing restrictions

Atlanta Regional Commission (ARC)
Regional Transit Data Warehouse
Allows agencies to upload data to produce GTFS
Agencies included: fixed-route systems; university and activity center shuttles
Utilizes schedules and GIS
Provides accessible tools for staff to create non-existent data (e.g., for agencies without GIS) and maintain data feeds over time and through service changes

Transit Data Warehouse

Public tool to access regional transit data
Includes the route/schedule data discussed above and operations data (NTD, fleet/facilities reports)
34

Open Data Trends
Agencies with Open GTFS (August 2012)

Open Data Trends
Georgia adjusted for 2013

35

36

A-29

Transit Open Data Timeline
Source: Rojas, Francisca (2012). Transit Transparency: Effective Disclosure through Open Data.

GTFS Adoption
Source: Wong, James. (2013). Leveraging the General Transit Feed Specification (GTFS) for Efficient Transit Analysis. Proceedings of the 2013 Transportation Research Board Annual Meeting.
38

Key Lessons Learned

Key Lessons Learned
Open data should be accurate and up-to-date
Transit riders will rely on the data; construction, closures, and schedule changes should be updated
Agencies with staff-level champions and leadership support were most successful in deployment
Strong leadership can help push past legal concerns; staff-level champions implement changes and will be on the front line with developers

Key Lessons Learned
Agencies can spend a lot of money to produce custom apps (iPhone, Android, Windows Mobile, Palm...)
Open data allows for free out-sourcing of app development for multiple platforms
Agencies should think about accessibility and equity
If no apps cater to specific disadvantaged groups, consider taking on this challenge as an agency

Key Lessons Learned
Express agency concerns through usage agreements
Logo and transit map usage
Ensuring developers don't misrepresent themselves or their apps as "official"
Documentation and regional standards
Good opportunity for positive press, fast-paced innovation, and future analysis

A-30

Internet and Intelligent Transportation Oct 28, 2013

Summing up how things have changed in less than a decade.
http://www.youtube.com/watch?v=78gFoqb8Yxc (start at 7:25)

What is social media?

Social and professional networking: Facebook, LinkedIn, Google Plus
Blogging
Micro-blogging: Twitter, Tumblr
Media- and document-sharing: YouTube, Flickr
Social curation: Pinterest, Storify
Geolocation
Crowdsourcing

82% of the world's online population (1.2 billion users)
19% of time spent online
2/3 of adult Internet users (67%) used a social networking site in 2012
90% of cities use Facebook and 94% use Twitter
Every U.S. governor has at least one social media account
23 of 24 major federal agencies

How can social media be used in intelligent transportation?
A-31

Disseminating Information
Gathering Feedback
Social Computing
Checking the Urban Pulse
Transportation Surveys

Real-time information: closures, service alerts
Construction management: Carmageddon
Emergency communication: weather
Channels: websites, text message, Facebook, Twitter

A-32

Environmental Impact Statements
Strategic Plans
Service Planning

Policies
Impact Measurement
Equity in Information

Figure: social networking site use over time (2005-2012) and by age group (18-29, 30-49, 50-64, 65+), showing total use and use "yesterday."

Large groups of people and computer systems collaborate to do things neither can do alone
Branches of social computing:
Citizen science: people work as sensors and relay information to scientists or advocacy groups (Audubon Christmas Day bird count; Safecast, Geiger counter readings shared by citizens after Fukushima)
Crowdsourcing: combines the concept of "outsourcing" with the "wisdom of crowds" (Wikipedia, Mechanical Turk)
Human computation: people provide valuable information to machine learning systems
Participatory sensing: mobile phones as a new type of instrumentation / information source

Waze (http://www.waze.com): traffic information collected from a large group of people
Twitter: communication amongst travelers
Roadify (http://www.roadify.com)
OpenPlans' shareabouts.org: map-based participatory urban planning
Urban Mediator: citizens and urban planners create and share topics
SeeClickFix.com and FixMyStreet.com
ParkScan.org: residents report problems with parks and receive feedback from the local maintenance staff

A-33

Ukkusuri, Hasan, and Zhan, "Checking the Urban Pulse: Social Media Data Analytics for Transportation Applications"
Extraordinary amounts of user-generated data every day
Amazing spatial and temporal resolution
Visualize the "urban pulse" in real time (human behavior and system performance)
Growing web of social sensors
Observe consumer choices and public opinions: movements and moods
Status updates, media sharing, and check-ins

What is enabling this? Two things:
Location-based services in social media
Useful information about daily activities and interactions with the environment
Geolocated photos in Flickr; status messages in Twitter; present activity location in Foursquare; group activity in Facebook Places; travel routes recorded as GPS trajectories with GeoLife, Bikely, Cycle Atlanta

Smart Phone Growth
One billion smartphones in the world
1/5 use check-in services like Facebook Places, Foursquare and Gowalla (Comscore 2012)
74% of smartphone users get real-time location-based information (directions)

Real-time Visualization of Urban Dynamics
Understanding Individual Activity Participation and Location Choice Behaviors
Influence of Communication Patterns in Social Media and Social Networks on Activity Participation
Social Traffic Sensors
Measuring User Perceptions about Services

Using geo-locations of individuals
Dynamics of urban environments
How places are used in the course of a day
Synthesize with traffic flow information

Twitter posts to analyze urban human mobility patterns
Statuses from third-party check-in services (e.g., Foursquare)
Check-ins classified into different activity categories
Virtual grid reference of a New York City map
Counted the number of purpose-specific visits within each cell (see the sketch below)
Proportion of visits to each cell for each activity category
Popular places and functionality of each part of the urban area
Use in agent-based simulation tools
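A simplified sketch of the grid-counting step described above: geotagged, category-labeled check-ins (a hard-coded list here stands in for harvested statuses) are binned into coarse grid cells and tallied by activity category. The cell size and categories are illustrative assumptions, not the study's parameters.

    from collections import Counter

    CELL_DEG = 0.01  # roughly 1 km grid cell; an illustrative choice

    # (lat, lon, activity category) tuples standing in for classified check-ins
    checkins = [
        (40.7580, -73.9855, "entertainment"),
        (40.7527, -73.9772, "transport"),
        (40.7484, -73.9857, "work"),
    ]

    visits = Counter()
    for lat, lon, category in checkins:
        cell = (round(lat / CELL_DEG), round(lon / CELL_DEG))
        visits[(cell, category)] += 1

    # Proportion of all recorded visits falling in each (cell, category) pair
    total = sum(visits.values())
    for (cell, category), count in visits.items():
        print(cell, category, count / total)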

A-34

Geo-located, activity-related choices (check-ins)
Understanding individual location choices over time
Use in activity-based travel demand models
How does shared information influence destination choice and mode choice?
How does social network influence activity travel behavior?

Waze
Collects and disseminates real-time traffic information
Traffic assignment models take the information provided and suggest an optimal path based on future traffic flow

Individually or collectively express opinions, champion a cause or call for action
Arab Spring
Sentiment analysis
Researchers "mining" these opinions from social media to analyze general public perceptions
Users' satisfaction on specific items or at specific times
A-35

Chicago Transit Authority's subway system
Collected tweets containing the keywords of all combinations of "L" train names
SentiStrength: average negative and positive sentiment; minimum/maximum negative/positive sentiment; sentiment word strength list to judge sentiment polarity
Transit riders are more inclined to assert negative sentiments than positive; dissatisfaction over specific incidents; general trends
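The study itself used SentiStrength; purely as an illustrative stand-in (not the SentiStrength algorithm), the sketch below scores tweets against a small hand-made word-strength list and keeps the strongest negative and positive word per tweet. The word list and tweets are invented.

    # Illustrative lexicon: word -> sentiment strength (negative or positive)
    STRENGTHS = {"delayed": -3, "crowded": -2, "broken": -4,
                 "love": 3, "fast": 2, "clean": 2}

    tweets = [
        "red line delayed again and the platform is crowded",
        "love how fast the brown line was today",
    ]

    for tweet in tweets:
        scores = [STRENGTHS[w] for w in tweet.lower().split() if w in STRENGTHS]
        neg = min([s for s in scores if s < 0], default=0)
        pos = max([s for s in scores if s > 0], default=0)
        print(f"neg={neg:+d} pos={pos:+d} :: {tweet}")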
Security and Privacy Concerns: location-based information users are vulnerable to malicious activities; anonymity of users; data storage
Selectivity Biases: lack of representativeness of the sample
Missing Information: socio-economic characteristics; start or end time or duration of activities; infer missing information
Study design: reading social media posts helps understand the target market and frame the research hypothesis.
Questionnaire design: observe how people talk about the topic; relevant and comprehensive choice lists for closed-end questions.
Fielding the survey: for surveys where a choice or non-random sample is appropriate, publish links to an online survey or enlist participants for online survey panels.
Survey analysis: researchers can supplement quantitative surveys with online social media commentary.

A-36

Crowdsourced Data Collection and Management
Fall 2013

AGENDA
What is "Crowdsourcing"?
Platforms Used for Crowdsourcing
Different Crowdsourcing Systems
Issues with Crowdsourcing
Case Studies

Crowdsourced Data Collection and Management
Crowdsourcing

What is Crowdsourcing
"outsourcing of a job (typically performed by a designated agent) to a large undefined group in the form of an open call"
"Crowdsourcing uses predominantly advanced internet technologies to harness the efforts of a virtual crowd to perform specific tasks"
"utilizes the `latent potential of crowd' to achieve a solution to a problem that the crowd can relate to"

Elements of Crowdsourcing
Problem
Organizer

Collaborators

Open and Networked Platform

The Problem
Is big enough that it cannot be solved by a single person, or is difficult for one person to solve but can be solved easily if broken down into small parcels
Is interesting enough for people to come together to solve it
Is mostly local in character, as people are more likely to participate in issues that concern their daily life

A-37

The Organizer and The Participant
The Organizer
is generally the agency, institute, or commercial entity that requires a solution but does not have sufficient funds or in-house expertise
In some cases, third-parties host crowdsourcing. For example, Kickstarter, a crowdsourced funding platform, helps collect funds from the crowd on behalf of the entity who needs the funds.
The Participants
are an anonymous, diverse group of people who are interested in the problem
The participant pool can be global but often consists mostly of local people who are motivated by the issue

Platforms of Crowdsourcing
Wiki system: authoring information; Ex: Wikipedia
Open source software: sharing and co-developing program source code; Ex: Ubuntu
Geocrowd mapping: collecting, cleaning, and uploading GPS data; Ex: Cyclopath
Mash-ups: a combination of all of the above

Crowdsourcing Systems

Crowdsourcing Systems: Participation Based
Explicit Systems: users participate and collaborate in stated problem like answering questions via web, testing software, writing web content
Evaluating (e.g., book review), sharing (e.g., feedback on system performance), building artifacts (e.g., designing T-shirts at Threadless.com) and executing tasks (e.g., collaborating on finding gold mining spots)
Standalone Implicit Systems: indirect input provided by the users
ESP game: participants are shown images and asked to guess common words to describe them; those words are then used to label the image
Piggyback Implicit Systems: traces of users collected from an entirely different system and used for solving an issue
Ad keywords generated based on Google and Yahoo search traces

Crowdsourcing Systems: Participant Expertise Based
General Purpose Systems: do not require any special expertise from participants
Ex: Transit Rider Feedback System
Domain Specific Systems: requires some form of expertise from the user
Ex: Developing or beta-testing Open Source Software

Crowdsourcing Systems: Time and Location Based
Audience-centric: participants are at the same place at the same time
Event-centric: participants can be at different places, but the event is time-bound, i.e., it has a start and end time
Global: collaboration can happen between people from different places and over an indefinite period of time
Geo-centric: people are at the same place, but crowdsourcing is an ongoing process

A-38

Crowdsourcing Issues
How to recruit and retain the participant base: important to understand trends over time and maintain a critical mass. Solutions: incentives; recurring campaigns at regular intervals
User capabilities: important that the participants are aware of the issues related to the task. Solutions: project design as a domain-specific system; pre-recruitment interviews and training
Aggregation of information provided by the users and data quality management: important to bridge the gap between the information provided and the information required. Solutions: a degree of loose hierarchical authority to ensure data quality (data quality auditors); implementing an automated database management system
Evaluating the contribution of the users: important to ensure that data are usable for the purpose of the project. Solutions: automated screening of invalid entries; manual audit of inputted data validity

Crowdsourcing and Transportation
Greater public participation: People in a region tend to identify themselves with the region where they live, work, and socialize, and are generally more interested in the systems that affect them
Diverse stakeholder involvement: feedback from different user groups is important for planning transportation systems but difficult to bring together without a common open platform like crowdsourcing
Cheaper than traditional methods: As data are provided by users themselves, no added investment for planning agency
Particularly useful for collecting data where the user base is not big
Can be either explicit or implicit; general purpose or domain specific and geocentric or local

Crowdsourced Data Collection and Management
Case Studies

User Feedback Based Crowdsourcing Systems
SeeClickFix, PublicStuff, FixMyStreet
Rely on public feedback about neighborhood issues and have been successful in mobilizing communities to take up the task voluntarily
No special expertise is expected from the users
Global in character, but the majority of reported issues are local and community-oriented
Shareabouts
Web-based system that uses maps to generate user feedback on preferred location of facilities and amenities
General-purpose system
Street Bump
Mobile application that uses a smartphone's accelerometer to detect potholes and other street hazards as people drive around the city
Geo-located street quality data collected through crowdsourcing are automatically uploaded and integrated with the city's process for locating and fixing pavement
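As a loose sketch of the idea behind accelerometer-based hazard detection (not Street Bump's actual algorithm), the snippet below flags GPS-tagged samples whose vertical acceleration departs sharply from gravity; the threshold and readings are invented for illustration.

    G = 9.81          # gravity, m/s^2
    THRESHOLD = 4.0   # illustrative jolt threshold, m/s^2

    # (lat, lon, vertical acceleration in m/s^2) samples standing in for phone readings
    samples = [
        (33.7490, -84.3880, 9.9),
        (33.7495, -84.3883, 15.2),   # sharp jolt: candidate pothole
        (33.7501, -84.3886, 9.7),
    ]

    candidate_hazards = [(lat, lon) for lat, lon, accel in samples
                         if abs(accel - G) > THRESHOLD]
    print(candidate_hazards)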

Crowdsourcing Systems for Data Collection
Domain specific systems as data are needed from the user group well acquainted with the problem
Are most useful for otherwise unrepresented or underrepresented community
Can benefit from regularly updated information, which is easy to maintain through "delegated responsibility among a motivated community with common purpose"

User Feedback Based Crowdsourcing Systems
OneBusAway, Tiramisu, CycleTracks, Cycle Atlanta, Cyclopath

A-39

OneBusAway

OneBusAway Ambassadors

Figure 2: Information Flow of the Transit Ambassador Program

Tiramisu
Smartphone app developed by researchers at Carnegie Mellon University to improve users' transit experiences and transit accessibility
User-feedback-based real-time information system for public transportation in Pittsburgh
Uses riders as the human equivalent of automated vehicle location (AVL), thereby providing an innovative alternative to more traditional, cost-intensive data collection
Upon activation, the app shows a list of buses or light rail vehicles scheduled to arrive at that time, based on past arrival data as well as real-time data sent by riders on the vehicle
Provides an option for the rider to indicate the level of fullness of the bus, which aids people with disabilities to choose the bus they want to access
Also allows riders to share kudos and complaints, providing feedback to transit service

Tiramisu

First, press the "Nearby" button to see a map
Select a stop to see a list of arrival times
If available, the app shows a "rider real-time prediction"; otherwise, a "rider historical estimate"; if neither is available, the scheduled time (sketched below)
When the vehicle arrives, select the route, boarding point, destination, and fullness
Press the "Start Recording" button to share a location trace

Tiramisu
Initially meant to use riders as sensors to report delays, crowding, and other breakdowns
Accessibility, particularly for blind, mobility-impaired, and elderly riders, was a key component
Since late July 2011, users have shared more than 68,000 location traces.
Recently released for Syracuse and Brooklyn, and more cities are planned.
Crowdsourcing arrival information is working.


A-40

Cycle Atlanta
Cyclists prefer riding on dedicated infrastructure [1]
Demographics (especially gender) affect cyclists' preferences regarding bike infrastructure [2]
Most of Atlanta's bicycle network miles have a level of service ranking of "E" or worse [3]
1. Tilahun, N. Y., D. M. Levinson, and K. J. Krizek. Trails, Lanes, or Traffic: Valuing Bicycle Facilities with an Adaptive Stated Preference Survey. Transportation Research Part A: Policy and Practice, Vol. 41, May 2007, pp. 287-301.
2. Krizek, K. J., P. J. Johnson, and N. Tilahun. Gender Differences in Bicycling Behavior and Facility Preferences. Research on Women's Issues in Transportation, Transportation Research Board of the National Academies. 2004.
3. Atlanta Regional Commission. Atlanta Region Bicycle Transportation & Pedestrian Walkways Plan. 2007

Cycle Atlanta : Record a Trip

Cycle Atlanta : Assets & Issues

Cycle Atlanta : Assets & Issues

Assets (+): bike parking, bike shops or repair kits, public restrooms, secret passage, water fountains, note this spot
Issues (-): pavement issue, traffic signal, enforcement, bike parking, bike lane issue, note this spot

Cycle Atlanta : Assets & Issues

Cycle Atlanta : Mapping tool

A-41

Cycle Atlanta : Future Initiatives
Collaborate more with other biking groups/organizations
Route choice analysis
Integration with the City of Atlanta work order queue
Infrastructure ranking tools for the City of Atlanta / ARC
Populate web-based tools for using the data: vote issue items up; route selection based on features or cyclist type; assets data

Cyclopath
Crowdsourced geowiki-based bicycle map developed by researchers at the University of Minnesota
Maintains an active database of user-contributed bicycle routes and trails within the Minneapolis St. Paul metropolitan area
Users of Cyclopath can add, modify, and delete roads and bike trails, segments thereof, points of interest, and neighborhoods
Users can add notes and tags describing any feature on the map, such as 'bumpy' or 'closed'
Users can rate bike routes on a five-point qualitative scale (excellent, good, fair, poor, and impassable) for their own use and for aggregation to enhance bikeability ratings

Cyclopath Map

A-42

Appendix B: Additional Questions and Responses from TMC Survey
B-1

What kind of facility do you work at?

Traffic Management Center (primarily freeways): 36%
Traffic Management Center (combination of freeways and arterials): 39%
Traffic Management Center (primarily arterials/local roads): 18%
Traffic operations center in a shared facility with other operations (emergency, police, etc.): 0%
Transportation planning department: 0%
Other: 7%

How would you describe the primary areas that your TMC serves? (Check all that apply)

Urban

Suburban

Rural

B-2

What kind of end-point equipment do you currently use for traffic monitoring?

Closed-Circuit Television (CCTV)
Radar/Microwave
Inductive loops
Video Detection
Wireless "Pucks"
Bluetooth sensors (any brand)
Other
Aerial detection

(Responses shown as a bar chart of counts, axis 0-30.)

For your primary ITS systems, are there any major elements that your agency doesn't own and operate?

No, the agency owns and operates all elements of our ITS system.
Yes, at least some of our end-point equipment is owned and/or maintained by a third party or vendor.
Yes, we use at least some power infrastructure from a public utility or third party.
Yes, we use at least some third-party communications (phone lines, cell service provider, non-agency-owned fiber optic lines).

(Responses shown as a bar chart of counts, axis 0-20.)
B-3

Some field equipment can operate "off-grid" with independent power and communications. Does your facility use devices that are off grid?
Independent Power (bar chart of counts, axis 0-15):
Aerial detection; Radar/Microwave; Closed-Circuit Television (CCTV); Inductive loops; Bluetooth sensors (any brand); Wireless "Pucks" (including the cabinet); Video Detection; Signal Communications; None of the above; Portable traffic trailer - camera, microwave...; Portable Road Weather Information...

Independent Communication (bar chart of counts, axis 0-15):
Same categories as above.

B-4

Do you use any real-time traffic data provided by a third party as a standard procedure for traffic management?
(Stacked bar chart comparing free online traffic maps (Google Maps, Bing Maps, Waze, etc.) with paid traffic data (INRIX, Nokia/NAVTEQ, TomTom, etc.); response options: No; Yes, but only casually; Yes, standard procedures.)

What kind of data do you use from a third party?

Travel time on a segment (measured at two points)
Speed of traffic
Traffic volume/counts
Traffic density
Traveler-reported incidents/congestion
Live video stream
Vehicle classification
Automated incident detection

(Responses shown as a bar chart of counts, axis 0-5.)

B-5

What kind of data do you use from a third party?

Other
GPS-based
Cellular signal based
Bluetooth/MAC address matching
I don't know

(Responses shown as a bar chart of counts, axis 0-5.)

Which of the following types of technology would you trust to generate traffic data?

Bluetooth/MAC address matching
GPS-based
Cellular signal based
Other
I don't know

(Responses shown as a bar chart of counts, axis 0-20.)

B-6

How much would you like to rely on third-party data if you purchased it?

We want third-party data to provide coverage in areas where we don't have good existing coverage; it would only affect those locations.
We want to use third-party data, but also want our existing infrastructure to verify it.
We want third-party data to be the primary source of information for traffic management.
We want to use third-party data, but would not rely on or make decisions based on it.
We want to have third-party data, but would not use it for our operations.

(Responses shown as a bar chart of counts, axis 0-15.)

Would you want third-party data in order to forego a major investment in replacement or expansion of infrastructure?

Yes: 50%
No: 50%

B-7

For the following characteristics, how much better or worse would you expect the third-party data to be compared to your existing TMC system?

TIMELINESS - data is updated quickly without lag
AVAILABILITY - system operating without interruptions
ACCURACY - data reflects actual speed/conditions

(Stacked bar chart; response options: Much better, A little better, About the same, A little worse, Much worse.)

What assurances would you want to have to know that data provided is accurate? (Select up to 3)

Testimonial from a peer agency/TMC
Internal audit (repeated regularly)
Audit by a third party (repeated regularly)
Guarantee from the data provider
Internal audit (once)
Audit by a third party (once)
Other (please describe)

(Responses shown as a bar chart of counts, axis 0-15.)

B-8

Compared to your existing TMC system, would you be more or less concerned about the public's privacy if a third-party system collected and managed the data?

(Stacked bar chart; response options: Less concerned, About the same, More concerned.)

What kind of limitations could you tolerate from third-party data?

None, we should be able to do anything we want with the data: 56%
We could not share raw data feeds publicly: 17%
We could not share mapped data publicly: 9%
We could not share data outside the TMC: 9%
We could not store/archive it: 9%
Other: 0%

B-9

What do you think are the main benefits of third-party data?

Lower capital/setup costs
Lower operating costs
Improved reliability (up-time) over existing system
Improved accuracy over existing system
Other: please describe

(Responses shown as a bar chart of counts, axis 0-15.)

Do you feel comfortable with your agency's ability to successfully/effectively procure a third-party data service?

(Stacked bar chart; response options: Yes, Maybe, No.)

B-10

Locations