
Neural Embeddings of Urban Big Data Reveal Emergent Structures in Cities

In this study, we propose a neural embedding model, a graph neural network (GNN), that leverages the heterogeneous features of urban areas and their interactions captured in human mobility networks to obtain vector representations of these areas. Using large-scale, high-resolution mobility data sets from millions of aggregated and anonymized mobile phone users in 16 metropolitan counties in the United States, we demonstrate that our embeddings encode complex relationships among features related to urban components (such as the distribution of facilities) and population attributes and activities. The spatial gradient in each direction from city center to suburbs is measured using clustered representations and the characteristics shared among urban areas in the same cluster. Furthermore, we show that embeddings generated by a model trained on one county can capture 50% to 60% of the emergent spatial structure in another county, enabling quantitative cross-county comparisons. Our GNN-based framework overcomes the limitations of previous methods for examining spatial structures and is highly scalable. The findings reveal non-linear relationships among urban components and anisotropic spatial gradients in cities. Since the identified spatial structures and gradients capture the combined effects of various mechanisms, such as segregation, disparate facility distribution, and human mobility, the findings could help identify the limitations of the current city structure to inform planning decisions and policies. The model and findings also set the stage for a variety of research in urban planning, engineering, and social science through an integrated understanding of how the complex interactions between urban components and population activities and attributes shape the spatial structures in cities.
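The core idea, embedding urban areas by combining each area's own features with features aggregated over its mobility-network neighbors, can be illustrated with a minimal sketch. This is not the paper's actual architecture or data; the toy feature matrix, trip counts, and single mean-aggregation layer below are assumptions chosen only to show the GNN-style message-passing step in miniature.

```python
import numpy as np

# Toy example: 4 urban areas, each with 3 heterogeneous features
# (hypothetical stand-ins for facility counts, population
# attributes, activity measures, etc.).
features = np.array([
    [1.0, 0.2, 0.0],
    [0.8, 0.1, 0.1],
    [0.0, 0.9, 0.5],
    [0.1, 0.8, 0.6],
])

# Mobility network as a weighted adjacency matrix: entry (i, j)
# is the (hypothetical) trip volume between areas i and j.
adjacency = np.array([
    [0, 5, 1, 0],
    [5, 0, 0, 1],
    [1, 0, 0, 4],
    [0, 1, 4, 0],
], dtype=float)

def embed(features, adjacency, weight):
    """One round of GNN-style message passing: each area's embedding
    combines its own features with a trip-weighted average of its
    neighbors' features, then applies a linear map and a ReLU."""
    # Row-normalize so each area averages over its trip volumes.
    norm = adjacency / adjacency.sum(axis=1, keepdims=True)
    neighbor_avg = norm @ features          # aggregate neighbor features
    combined = np.concatenate([features, neighbor_avg], axis=1)
    return np.maximum(combined @ weight, 0.0)  # ReLU non-linearity

rng = np.random.default_rng(0)
weight = rng.standard_normal((6, 2))  # project to 2-D embedding vectors
embeddings = embed(features, adjacency, weight)
print(embeddings.shape)  # one embedding vector per urban area
```

In a trained model the projection weights would be learned (e.g., so that areas exchanging many trips receive similar embeddings), and the resulting vectors could then be clustered to trace the center-to-suburb spatial gradients described above.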
