By Robert Goodspeed, MIT Department of Urban Studies and Planning and Planning & Technology Today Co-Editor
Already a major technology trend, 2012 promises to be a watershed for “big data.” A shorthand term for the proliferation of large datasets, big data also refers to the expansion of analytic techniques for teasing meaning from the vast archives of information produced by the digital world. The New York Times’ Steve Lohr declared we have entered the “age of big data” in a recent article that compared it with another revolutionary research tool — the microscope.
As I observed last year, big data is beginning to filter into the urban planning world. Here are a few examples of the intersection of cities and big data:
•A visualization of people’s paths through the city created by researcher Eric Fischer using geotagged tweets
•Explorations of social ties and commuting patterns gleaned from AT&T’s telephone network data by researchers at MIT’s Senseable City Lab
•Research on New York City’s taxicab system using GPS data collected by meters by Columbia’s David King
•Analyses of the color and location of Flickr photographs
•Data collection and analysis at the core of “smarter cities” initiatives by IBM, Cisco, and Siemens
What do all these exciting examples of big data have in common? If you have only modest technical skills and work for a local government or community-based organization, you probably lack the data access and skills necessary to replicate these projects.
Inequalities of data access are not new in planning. Sixteen years ago, David Sawicki and William Craig argued in a Journal of the American Planning Association article titled “The Democratization of Data” that the most important ingredient in expanding access to the first generation of data was not advances in computing power or analysis skills, but the rise of data intermediaries that worked with community groups in low-income communities to ensure they had access to quality data and skills. Whether nonprofits, local governments, or university-led projects, these intermediaries helped equalize access to data in the public sphere.
However, as the size of datasets has increased, so have the skills necessary to manage and analyze the data. No longer is mastery of a few desktop applications sufficient for analysis, since wrangling today’s large datasets requires database servers and analysts skilled in statistical and algorithmic data mining techniques. Although government datasets may have been the original big data, many of the new datasets are provided by corporations, introducing a morass of ethical and practical challenges. Because these data are frequently collected at the individual level, negotiating access requires navigating privacy and security concerns. Even when companies provide public access, extracting and using their data requires programming skills to tap application programming interfaces (APIs) or manipulate unusual data formats.
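To make the skills gap concrete, the sketch below uses only Python’s standard library to flatten the kind of JSON payload an open-data API might return into a CSV table a desktop spreadsheet can open. The payload, field names, and values are hypothetical, invented purely for illustration.

```python
import csv
import io
import json

# Hypothetical JSON payload of the kind a city open-data API might return.
# The field names (stop_id, boardings, lon, lat) are illustrative only.
payload = """[
  {"stop_id": "A1", "boardings": 120, "lon": -71.06, "lat": 42.36},
  {"stop_id": "B2", "boardings": 87,  "lon": -71.09, "lat": 42.35}
]"""

def json_to_csv(json_text):
    """Flatten a list of JSON records into CSV text that Excel can open."""
    records = json.loads(json_text)
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=sorted(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    return buffer.getvalue()

print(json_to_csv(payload))
```

Even this tiny conversion assumes familiarity with JSON parsing, file-like buffers, and CSV dialects; real API work adds authentication, pagination, and rate limits on top, which is exactly the barrier facing groups without programming staff.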
Finally, lurking beneath the big data hype are problematic unstated assumptions about the nature of truth. In the 1980s, the so-called quantitative-qualitative debate raged across several social science fields among scholars arguing the merits of various research methods. Some researchers stressed the need to collect empirical evidence and rely solely on quantitative analysis for research. Others argued social science required qualitative analysis such as interviews and observation to understand society. Although the debate is different today, important differences of opinion remain.
We should be cautious about claims that big data will necessarily answer important or relevant research or policy questions. Are cell phone traces sufficient to infer travel behavior, or are surveys or interviews required to understand how people make choices? Can postings to social networking websites provide as much insight as a windshield survey, or an in-depth interview of community residents? The big data hype also runs counter to important developments in social science that stress the role of experiments and counterfactual reasoning, instead of relying on ever-more-complicated statistical models to explain the world.
What are some practical steps that data providers could take to expand access for community-based organizations? A start might be to release data in formats and sizes (perhaps through summary versions) that can be analyzed in common software packages such as ArcMap, Excel, and Google Earth. Providers should also document the sources, variables, and assumptions used to collect and process the data. Existing data intermediaries should explore the new datasets and strategically expand their expertise where appropriate. Although the proliferation of broadband and Internet-connected smartphones has reduced the prominence of the “digital divide,” we must take steps now to prevent the emergence of a new “data divide” between sophisticated analysts and communities seeking to plan for their futures.
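The “summary versions” suggestion can be sketched in a few lines: collapsing individual-level records into counts per neighborhood yields a table small and simple enough for a community group to open in Excel or join to boundaries in ArcMap. The record structure, neighborhood names, and counts below are invented for illustration.

```python
from collections import Counter

# Hypothetical individual-level records, e.g. geocoded service requests.
# Neighborhood names are invented for illustration.
records = [
    {"neighborhood": "Eastside"},
    {"neighborhood": "Eastside"},
    {"neighborhood": "Riverfront"},
]

def summarize(records):
    """Collapse individual-level records into a count per neighborhood —
    a 'summary version' that avoids releasing individual-level data."""
    return Counter(r["neighborhood"] for r in records)

# Emit a two-column CSV-style summary a spreadsheet can read directly.
for name, count in sorted(summarize(records).items()):
    print(f"{name},{count}")
```

Aggregation of this kind serves double duty: it shrinks the dataset to desktop scale and sidesteps some of the individual-level privacy concerns noted above.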
This article was originally published on Planetizen. The author can be reached at firstname.lastname@example.org