We live in a data-driven world. Data underpins our communications and financial systems, our understanding of weather and the changing landscape, business functions, government relations and so much more, including homeland security. The one constant across these areas is that bad data inputs lead to poor outputs.

The Transportation Security Administration (TSA) collects a range of information from the public on its performance. It uses this to improve its processes, more effectively allocate security resources, and most importantly, measure the volume of passengers experiencing poor customer service at TSA checkpoints. Customer service reports allow TSA to “manage from the data” to maximize outcomes.

But what if the data are inaccurate or misrepresent reality? What impact does that have? And what if a Security Debrief blogger came across some poor quality TSA data and decided to dig deeper? This is that story.

The Path of Customer Service Data

Let’s say I regularly fly from the same airport, and every time I reach the checkpoint, I receive rude or disrespectful treatment from a Transportation Security Officer (TSO). One option I have is to visit the TSA website and file a complaint. That complaint, along with others, forms an aggregate record of TSA’s performance across airports and checkpoints. The data are particularly important because they can reveal deeper problems (e.g., mis-budgeting, poor performance and deficiencies in security).

So far in 2019, more than 400 individuals each month complained to TSA about something that was bothering them. According to Lee Resources Inc., for every one traveler who complains, there are another 26 angry customers who do not. The 400 individuals who complained actually reveal, potentially, 10,400 dissatisfied individuals (400 * 26 = 10,400) every month. It doesn’t stop there. The White House Office of Consumer Affairs, as reported by Customer Service Manager Magazine, reveals a dissatisfied customer will tell 9-15 people about their experience. And approximately 13% of dissatisfied customers will tell more than 20 people about their problem.
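The multiplier arithmetic above is easy to lay out. This is a back-of-envelope sketch using only the figures cited in the post (the roughly 400 complaints per month and the Lee Resources Inc. ratio), not any data of my own:

```python
# Back-of-envelope: silent dissatisfied travelers implied by complaint counts.
# Both inputs are the figures cited in the post, not independent data.
complaints_per_month = 400   # travelers who filed a complaint with TSA
silent_per_complaint = 26    # Lee Resources Inc. estimate: 26 silent for every 1 who complains

implied_silent = complaints_per_month * silent_per_complaint
print(implied_silent)  # 10400
```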

The number of people with a negative and worsening perception of TSA customer service stacks up quickly. Moreover, while many people might show their dissatisfaction at the airport, those most affected are the ones who will bother to comment to headquarters. That’s why the customer service data are important.

For many years, TSA data have been published each month by the Department of Transportation (DOT), a legal requirement under the “Century of Aviation Reauthorization Act.” The data tables contain information on courtesy, screening procedures, processing time and personal property complaints. It is public information, open to anyone who cares to read it. Recently, I was doing just that. This is where things get interesting.

Challenging the Data

I was conducting research one morning, looking through TSA customer service data in the aforementioned DOT reports. Specifically, I was considering data for January, February and March 2019. And I found something that did not look right: the January and March data were identical, with exactly the same number of complaints in each category. This was not an easy catch, since the two months’ data were buried in separate PDF reports published a month apart and could not be compared side by side.

The odds of having the same data in two months are near zero. So in the spirit of See Something, Say Something, I reached out to TSA. Talk about stepping into a rabbit hole.
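That “near zero” claim can be made concrete with a small sketch. Assuming, purely for illustration, that each complaint category’s monthly count is an independent Poisson draw (the four category means below are hypothetical, not TSA’s actual figures), the probability that two months match exactly in every category is vanishingly small:

```python
import math

def match_prob(lam, kmax=2000):
    """P(two independent Poisson(lam) draws are equal) = sum over k of P(X=k)^2."""
    log_pmf = [k * math.log(lam) - lam - math.lgamma(k + 1) for k in range(kmax)]
    return sum(math.exp(2 * lp) for lp in log_pmf)

# Hypothetical per-category monthly complaint means (for illustration only).
category_means = [150, 90, 60, 40]

p_all_match = math.prod(match_prob(m) for m in category_means)
print(p_all_match)  # on the order of 1e-6
```

Even with modest counts in each category, exact agreement across all categories comes out to roughly one chance in a million per pair of months.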

June 12: I wrote to an individual I know at TSA who works on customer service issues, saying in part:

I still review and scrutinize the TSA customer service data that you provide DOT
on a monthly basis. Would you please send to me the last three months (January,
February, and March 2019) worth of data that you also sent to DOT for their
Consumer Reports Publication? I know that this may seem redundant, but I just
want to double-check my numbers.

Thank you very much. Also, please let me know whether you received this email.

I did not hear back.

June 13: I sent my note to the TSA Contact Center. As per their policy, they got back to me within 24 hours (kudos on that). The response was not what I expected:

Good Morning Mr. Becker:

The information published by DOT is correct.

June 14: I replied:

Good Morning and thank you for your quick response. Please look at the links below for January 2019 and March 2019. I have identified the specific pages. The January 2019 and the March 2019 data are identical in every sense. I recopied it below. Are you absolutely sure that “the data published by DOT is correct”?

And to help persuade TSA to really look at the data, I followed up with links to TSA blog posts whose passenger figures did not match what was in the DOT report. I wrote:

15.9 million + 16.6 million + 33.3 million = 65.8 million. The 65.8 million passengers screened from the TSA blog is more than the 60.0 million that you reported for all of March 2019.

I did not receive a response.
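The check in that note is simple to reproduce. The figures below are the ones quoted above from the TSA blog posts and the DOT report:

```python
# Passenger totals (millions) quoted from TSA blog posts covering March 2019.
blog_figures = [15.9, 16.6, 33.3]

blog_total = round(sum(blog_figures), 1)
dot_reported = 60.0  # March 2019 total as published in the DOT report (millions)

print(blog_total)                 # 65.8
print(blog_total > dot_reported)  # True
```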

June 15: Finding TSA uninterested in the quality of their data, I sent a note to a DOT e-mail address (not knowing who would receive, read and hopefully respond to it). After restating the issue, I added:

I already tried speaking with someone at TSA. Are you the right person at DOT who I should speak with regarding this data? If not, then would you please refer me to the right person? Thank you very much. I look forward to hearing from you.

Bingo. On Monday morning at 7:38 AM, DOT was on the case. Mindaugas Lescinskas, Sr. Aviation Industry Analyst in the Aviation Consumer Protection Division, Office of General Counsel and Secretary, does his Department proud. He wrote:

I have reached out to my contacts at TSA and asked them to contact you directly. I have also noticed the data discrepancy, since both January and March tables appear to contain identical numbers. I will update the reports as soon as TSA submits the revisions to me.

Thank you for bringing this to my attention!

And just a couple hours later, guess who dropped into my inbox?

Thank you for bringing this to our attention. It is possible there is an error. We will check the numbers again and have DOT issue a correction if necessary.

TSA Contact Center

This is an example of a small story telling a much larger and more troubling tale. If my experience attempting to clarify data is evidence of a wider trend (not unlike a customer service complaint), then we may have a serious data problem.

Lessons that Need Learning

On its own, an instance of bad data in a monthly government report is not a big deal. But the time and persistence it took just to get TSA to even look at the data speaks to something more troubling. Here are several insights TSA would do well to weigh as it improves its approach to data sharing and publication.

It shouldn’t be this hard: As a consumer of many kinds of data, I believe it is important for our government agencies, like TSA, to be open to challenges to their data. When someone with a genuine interest in and use for TSA data inquires (as I did), the agency that created the information should review it. I even took it a step further in an e-mail, pointing to other TSA data for approximately the same period that flatly did not match what the agency sent to DOT. And the response to this was a subtly indignant, “Everything is fine, taxpayer.” That we’re-always-right culture is inauspicious for an agency that seeks to manage from the data.

Bad data is infectious: Data is only useful when it is used, and that means data sharing. TSA gave bad data to DOT, which disseminated it to a wide audience of data consumers. That incorrect information may trickle out into a mess of poor outputs—industry analysis, academic research, private sector security operations, airport operations, customer loyalty programs, on and on. The agency where the data originated has an obligation to ensure accuracy. If it does not, why is that agency even collecting data?

The data future demands a pristine record: We are rapidly heading into a technological era where artificial intelligence (AI) and machine learning are helping us extract ever-deeper insights and revelations from the data we collect. At some point (if that point is not already upon us), TSA data will be paired with the cognitive tools needed to understand expansive datasets. What happens when AI draws conclusions from imperfect data? We don’t know what red herrings and faulty conclusions we’ll end up chasing because our input was wrong.

As of this writing, TSA has corrected its data. I am left wondering what other bad data, potentially even more difficult to identify, is out there—and if it’s found, whether TSA will even care to explore it.

Gary S. Becker is the Chief Economist for Catalyst Partners, LLC. In this role, Becker offers economic analyses to clients on matters relating to homeland security, including the cost impact of proposed and final rulemakings. He offers advice on how to save money while achieving desired security benefits.