THE COMMUTE: On September 24, for the third consecutive year, the MTA released the results of its satisfaction surveys for each mode of travel / agency, using a 1 through 10 rating system. An earlier format, using letter grades A through F for buses in 2009 and subways in 2006 and 2007, was abandoned because a C- response to many questions was not to the Authority’s liking. Those surveys used larger samples, and some responses were broken down by line for comparison.
Using a 1 through 10 scale provides results that appear to be more favorable, since a 50 percent satisfaction level is considered a passing grade, which would be failing under a letter grade system. In the end it really doesn’t matter since the survey results are mostly meaningless, as I will explain later.
A summary of results appears in the MTA’s general press release, which shows subway satisfaction increasing and bus satisfaction remaining stable. It directs you to attached press releases for each specific agency for more details; however, the only one available on the website is for Metro-North (MNR). Ned provided me with the New York City Transit press release, which makes no mention of a decrease in the perception of subway personal security and is not as glowing as the MNR press release.
The press release states only that bus satisfaction levels are stable and that 13 percent of subway riders switched from local bus to subway: 10 percent in the past five years and seven percent in the past year. Half said they switched because bus service was too slow. The survey does not state how many switched due to service cutbacks, or whether that was even offered as a response choice. Off-peak riders were 11 percent more satisfied than peak riders.
Summary Of Results
Satisfaction Levels For Subways: Service quality is slightly up, as is information and communication, and station maintenance and elevator satisfaction; perception of personal security is down after 8:00 p.m.
Satisfaction Levels For Buses: Overall satisfaction, service availability, and information and communications are all down. The only area where satisfaction levels are up is courtesy and helpfulness of bus drivers.
The complete survey results can be found here. The margin of error is ±3.8 percentage points. Statistically valid differences are in bold.
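For context on that ±3.8-point figure: it is consistent with the standard simple-random-sample formula at 95 percent confidence. The sketch below is my own illustration, not the MTA’s published methodology (which, as noted later, it does not disclose); the 665-respondent figure is simply the sample size that formula implies for a ±3.8-point margin.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) margin of error at 95 percent confidence
    for a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A +/- 3.8-point margin corresponds to roughly 665 respondents;
# the full 1,200-rider sample would give about +/- 2.8 points.
print(round(margin_of_error(665) * 100, 1))    # 3.8
print(round(margin_of_error(1200) * 100, 1))   # 2.8
```

Note how slowly the margin shrinks: nearly doubling the sample buys only one percentage point of precision.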
*Survey only asks about the usefulness of announcements regarding planned service changes, not about the adequacy of station signage or the availability of information on the internet.
**Actual performance indicators show escalators working 96 percent of the time. Why should the perception be so different? If an escalator is operational for one hour in a day, does that count as being operational for the entire day? The MTA never adequately discloses its methodology. (A brand new escalator at Bleecker Street, opened on September 25, was already out of service within two days.)
*Survey does not ask about bus detours or the availability of service change or service delay information.
Why The Subway And Bus Surveys Are Mostly Meaningless
Most of the differences between 2011 and 2012 are within the margin of error. Additionally, should the opinion of someone who took one trip in the past 30 days count just as much as one who took 300 trips and can provide more accurate responses? What if someone does not use an elevator or escalator — will they answer the question anyway? There is no N/A choice. If someone refuses to answer a question, is his survey discounted?
No raw numbers are shown, only percentages. No information is given as to how census data was used to weight the raw data. The MTA just asks us to trust that it was done correctly. We only know that 1,200 riders who used either a bus or subway responded; we are not even told the number of bus or subway riders surveyed.
For 2012, the MTA does not even state what the 1 to 10 scale represents; we have to go to the 2011 survey for that information. Ratings of 1 and 2 represent very dissatisfied; 3, 4 and 5 represent dissatisfied; 6, 7 and 8 represent satisfied; and 9 and 10 mean very satisfied. The assumption is that the scale did not change, but nowhere is this stated. When you are only interested in four categories (very dissatisfied, dissatisfied, satisfied and very satisfied), why complicate matters by adding six additional rankings that have no effect on the analysis?
No distinction is made between degrees of satisfaction or dissatisfaction: ratings of 3 and 8 count the same as 5 and 6, respectively. Means and modes are not shown, nor is the percentage of respondents who are very dissatisfied.
If a respondent forgets the difference between a five and a six somewhere during the 27-minute survey and mistakenly gives a five for a six, or vice-versa, his subsequent responses are inaccurate. A one-through-10 scale may be fine for a written survey, where you can refer back to the scale definition at any time, but it is just too confusing for a telephone survey, especially when some response levels span two numbers and others span three.
A statistically valid survey asks the same question in more than one way to ensure respondents are paying attention to the questions and not merely repeating numbers just to get the survey over with. That was not done here. Accuracy is especially crucial when dealing with such a small sample size.
In my reviews last year here and here, I went into much detail regarding how fewer than one third of the bus questions relate to actual service, with a majority of questions relating to environmental issues. I also mentioned questions that should have been asked but were not. I stated that the MTA cannot be trusted because of the information it chooses to display. Many of my comments from last year are still applicable.
The MTA’s Purpose For Undertaking Customer Satisfaction Surveys
The MTA performs these surveys as propaganda, to convince riders that it is doing a generally good overall job and to justify capital expenditures, making upcoming fare increases more palatable. (The press releases mostly highlight the positives.) The purpose of these surveys should be to find which routes / areas need improvement so that problem areas can be specifically addressed. You accomplish that by asking the right questions of a large enough sample, which permits segregating the data into specific geographic areas by route or time period. If the 2012 survey was consistent with the one for 2011, approximately 240 bus respondents used Brooklyn buses. This represents a sample size of .04 percent, far too small for a route or geographic area analysis. Similarly, the sample size for subways was .02 percent.
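To illustrate the sample-size point with some back-of-the-envelope arithmetic of my own (standard 95-percent-confidence formula; the 1,200 and 240 figures come from the survey, while the 30-respondent single route is purely hypothetical), splitting the sample by borough or route inflates the margin of error rapidly:

```python
import math

def moe(n, p=0.5, z=1.96):
    # Worst-case margin of error, 95 percent confidence, simple random sample.
    return z * math.sqrt(p * (1 - p) / n)

# Full sample, the ~240 Brooklyn bus respondents, and a hypothetical
# 30-respondent single route: roughly 2.8, 6.3 and 17.9 points respectively.
for n in (1200, 240, 30):
    print(n, round(moe(n) * 100, 1))
```

At a ±17.9-point margin, a single route’s results would be statistically useless, which is the core of the argument that this sample cannot support route-level analysis.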
In Other News…
In other news, weekend B2 service made its return yesterday, after a more than two-year absence. The flyer stated “Introducing Weekend Service,” as if it had never existed. (“Restoration” or “Return” would have been more appropriate than “Introducing.”) As long as the MTA insults the intelligence of its riders with such signage and by performing surveys for propaganda purposes, to justify its capital expenditures and prepare riders for fare increases rather than to make improvements, it will never be trusted.
The Commute is a weekly feature highlighting news and information about the city’s mass transit system and transportation infrastructure. It is written by Allan Rosen, a Manhattan Beach resident and former Director of MTA/NYC Transit Bus Planning (1981).
Disclaimer: The above is an opinion column and may not represent the thoughts or position of Sheepshead Bites. Based upon their expertise in their respective fields, our columnists are responsible for fact-checking their own work, and their submissions are edited only for length, grammar and clarity. If you would like to submit an opinion piece or become a regularly featured contributor, please e-mail nberke [at] sheepsheadbites [dot] com.