THE COMMUTE: This is the fourth year the MTA has performed a customer satisfaction survey in this format. I criticized past surveys for faulty methodology, for asking too few questions about service, and for asking too many about the riding environment. Rather than summarizing statistics as I did last year, I will make a few observations, since there is little change since the previous survey. You can read my past reviews from 2012, 2011 and 2010; those criticisms still apply to this year’s survey, which is still mostly meaningless.
Why Conduct A Customer Satisfaction Survey?
You perform one to learn about system problems that you otherwise might not have known about and wish to correct, and to find out, in general, how you are doing. That is not why the MTA performs these surveys. They draw their conclusion first (that they are doing a great job, since they know they are so great in spite of all the criticism they receive) and use the survey as a public relations gimmick to convince you of the same.
If you read their press release, you will see that the statistics are used to provide validation for the job the MTA did to restore service following Superstorm Sandy, and to validate the success of Select Bus Service (SBS), Bus Time, the crackdown on fare evasion (after insisting for years that it was within acceptable limits and not a problem), and the institution of video security cameras on buses, all relatively new programs. They don’t ask obvious questions, such as how serious a problem riders think bus bunching is, which would show whether it is lessening, because they do little to minimize it.
The MTA attributes the eight-point increase in satisfaction with rush-hour bus service reliability to the increased mechanical reliability of buses. Is that what you think of when you think of service reliability? Of course the mechanical condition of the buses is important, but buses rarely break down. Also, if mechanical reliability is so improved during rush hours, why is satisfaction with reliability for all hours unchanged since 2011? I guarantee you, if rush-hour reliability had not risen in the past year, it simply would not have been mentioned. It is disingenuous to pick and choose data to present just because it makes you look good.
When bus passengers think of service reliability, they are interested in whether the bus got them to their destination on time and whether they waited longer than expected for it. A customer survey is not needed to determine the mechanical reliability of the buses themselves; in-house statistics give the MTA that information. Riders are less satisfied today with the overall availability of service, the frequency of service, the predictability of travel time, and how fast the local bus gets you where you want to go (pages 8 and 9).
Also, in spite of Bus Time, which is now in three boroughs, fewer riders are satisfied with knowing how far away the next bus is (Page 11) than were in 2011. However, the MTA’s press release cites the fact that satisfaction with “Knowing how far away the next bus is” increased to 63 percent in The Bronx, compared with 42 percent in the other boroughs, as proof that Bus Time is a success because it has already been rolled out in the Bronx. They conveniently do not mention satisfaction levels in Staten Island, which has also had Bus Time for a while.
A Closer Look At The Survey
There is no borough-wide breakdown for any statistic in the report that is available on the MTA website. In fact, there is barely any methodology or description as to how the MTA even determined “satisfied” or “very satisfied.” In a press release, which is really an executive summary of a report, one summarizes information — one does not present additional data not shown in the report itself. The MTA does not want you to go past the press release to find out how meaningless the survey really is. (If the full report is even available to the public, it does not come up in a Google search, which yields only the presentation to the MTA Board.) We have to look at past surveys and assume the same methodology was used. The first year the survey was conducted, the actual questions were released, which enabled more analysis. This year, there is not even a mention of the scale that was used.
To determine whether passengers are satisfied or not satisfied, the question is not asked directly. Instead, you are asked to rate your satisfaction level on a scale of one to 10, with 10 being the most satisfied and one the least. You are not told how your responses will be evaluated. After answering 25 or 50 questions on the phone, does someone even remember what the scale means, or are they just rattling off numbers to get to the end of the survey?
Most respondents who feel neither satisfied nor dissatisfied, but more or less neutral regarding a specific question, would tend to give it a five or six, indicating fair or acceptable. A rating of seven would indicate a passing grade, and an eight would mean you rate it good. A rating of nine or 10 would mean that you have a very favorable opinion. That is how normal people think.
However, in the world of the MTA, a six is all that is needed to indicate your satisfaction. (They do not even reveal which ratings they consider to be “very satisfied.”) If a weatherman were correct in his forecasts 60 percent of the time, would you be satisfied with his performance? Were your parents satisfied when you brought home a “D” report card? That is 60 percent. Would your boss be satisfied if you were late for work two days a week? Are you satisfied with a transit system that can conveniently take you to only 60 percent of the places you want to go? How about if 40 percent of the stations were dirty, or if the trains and buses ran on time only 60 percent of the time? If not, the MTA still ranks you as satisfied, and that interpretation of the one-to-10 scale skews the results enough to invalidate the entire survey. You can read the press release and the bus or subway survey if you are so inclined; I would not even waste my time summarizing the results. How can you believe a survey that does not even adequately explain its methodology?
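How much the cutoff matters can be sketched with a toy calculation. The ratings below are invented for illustration only, not actual MTA survey data; the point is simply that the same responses produce very different headline numbers depending on where "satisfied" begins on the scale.

```python
# Invented sample of 1-to-10 ratings (NOT real survey data),
# chosen only to show how the "satisfied" cutoff changes the result.
ratings = [3, 4, 5, 5, 6, 6, 6, 7, 7, 8, 9, 10]

def percent_satisfied(ratings, cutoff):
    """Share of respondents (in percent) rating at or above the cutoff."""
    return 100 * sum(1 for r in ratings if r >= cutoff) / len(ratings)

# Counting a 6 as "satisfied" (the MTA's apparent threshold):
print(round(percent_satisfied(ratings, 6), 1))  # 66.7 -> two-thirds "satisfied"

# Counting only 7 and above, as most people would read the scale:
print(round(percent_satisfied(ratings, 7), 1))  # 41.7 -> under half
```

With the very same responses, one reading of the scale lets the agency claim two-thirds of riders are satisfied while the other shows a minority, which is why the undisclosed cutoff matters so much.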
A Final Word
In their one-page methodology summary, the MTA states that 1,729 city residents were surveyed by phone and that, of those, 1,201 customers had taken a ride on a subway and/or bus in the past 30 days and were asked the survey questions. (I guess non-city residents do not use city buses or subways.) The 528 respondents who did not ride were asked only demographic questions, used to weight the survey. Why?
If the MTA were truly interested in knowing why residents do not use mass transit, and wanted to improve it, they would also survey non-riders, asking why they did not use the system in the past 30 days, with questions such as: Was mass transit too expensive? Were you confused about which bus or train to take? Did the route(s) to your destination meet your needs? Were they too indirect? Would the trip have taken too long? Was it because you would not have been comfortable, i.e. would have had to wait too long or would have had to stand?
Any company that sells a product is not only interested in its own customers — it is also interested in the competition. The MTA believes that the passengers who use their system do so because they have no other choice. Therefore, the MTA does not recognize its competition to be the automobile, a bicycle, a cab, or walking, so they ignore non-riders in their planning, rather than trying to attract them.
They do not even state how many bus or subway riders responded to the survey. We can deduce from prior surveys that fewer than 850 bus riders responded. With 300 local routes in the city, that averages out to fewer than three riders per bus route. Many routes and/or areas of the city are most likely not represented at all.
If you just arrived from outer space and you read the MTA’s press release, you would believe that the MTA could not be doing a finer job, because their programs are all having the desired effect and the vast majority of its riders are content. If you ride the bus or train, you know otherwise.
The Commute is a weekly feature highlighting news and information about the city’s mass transit system and transportation infrastructure. It is written by Allan Rosen, a Manhattan Beach resident and former Director of MTA/NYC Transit Bus Planning (1981).
Disclaimer: The above is an opinion column and may not represent the thoughts or position of Sheepshead Bites. Based upon their expertise in their respective fields, our columnists are responsible for fact-checking their own work, and their submissions are edited only for length, grammar and clarity. If you would like to submit an opinion piece or become a regularly featured contributor, please e-mail nberke [at] sheepsheadbites [dot] com.