Can researchers using computer algorithms really measure how people perceive a place? In the past, researchers relied only on surveys and limited data sources. In a new approach, Zhang et al. used a data-driven machine learning method and achieved high accuracy in predicting six human perceptions of urban scenes: safe, lively, beautiful, wealthy, depressing, and boring.
Training the Model
A team at MIT’s Media Lab created a platform called Place Pulse 2.0, which gathered hundreds of thousands of street photos and asked people which of two photos looked safer, livelier, more boring, wealthier, more depressing, or more beautiful. Zhang et al. then collected around 400,000 photos from Beijing and Shanghai to classify neighborhood perception. Using the images and votes, the researchers trained a deep learning model to score how strongly each photo conveys each of these attributes.
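Before a model can be trained, the pairwise votes ("which photo looks safer?") have to be turned into a per-image score. The paper uses more sophisticated ranking and a deep network, but the basic idea can be sketched with a simple win-rate calculation (the function name and data format below are illustrative, not from the paper):

```python
from collections import defaultdict

def win_rate_scores(comparisons):
    """Turn pairwise votes into per-image scores.

    comparisons: list of (winner_id, loser_id) tuples, one per vote.
    Returns a dict mapping image id -> fraction of its comparisons won.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    # Score = share of head-to-head votes the image won (0.0 to 1.0).
    return {img: wins[img] / appearances[img] for img in appearances}

# Example: photo "a" wins all its matchups, "c" loses all of its own.
votes = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]
scores = win_rate_scores(votes)
```

Scores like these (or refinements such as TrueSkill-style ratings) become the training labels that let a neural network learn to predict, say, "safety" directly from pixels.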
Their results were highly accurate, although accuracy differed across the six attributes. For example, safe, wealthy, and beautiful scored slightly higher than depressing, boring, and lively, because the latter three attributes are more open to individual interpretation.
Results of the Model
The trained model then computed a perceptual score for every image. Using these results, the researchers mapped the scores to create a “City Perception Map” of Beijing and Shanghai.
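Building such a map essentially means aggregating per-image scores by location. A minimal sketch, assuming geotagged scores and a simple latitude/longitude grid (the cell size and function name are assumptions, not the paper's exact method):

```python
import math
from collections import defaultdict

def perception_map(points, cell_deg=0.01):
    """Average perception scores within coarse lat/lon grid cells.

    points: list of (lat, lon, score) tuples for one attribute (e.g. "safe").
    cell_deg: grid cell size in degrees (~1 km at these latitudes).
    Returns a dict mapping grid-cell index -> mean score in that cell.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for lat, lon, score in points:
        cell = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        sums[cell] += score
        counts[cell] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

# Two nearby photos fall in one cell; a distant photo falls in another.
demo = perception_map([
    (10.002, 20.003, 0.8),
    (10.004, 20.006, 0.6),
    (11.505, 21.505, 0.2),
])
```

Coloring each cell by its mean score then yields a heat map of the attribute across the city, which is how downtown/suburb contrasts become visible.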
They found that downtown areas score as more “safe” and “lively” than the suburbs, and, similarly, that mid-level roads score as more “safe” and “lively” than ring roads and highways.
Image recognition now lets companies quantify what a photo shows and tie it to a specific attribute. At SquareFeet.ai, we use images to quantify the quality of each unit’s views, ensuring that a new project is priced as efficiently as possible. Book a Demo to learn more!
Source: Zhang, F., Zhou, B., Liu, L., Liu, Y., Fung, H. H., Lin, H., & Ratti, C. (2018). Measuring human perceptions of a large-scale urban region using machine learning. Landscape and Urban Planning, 180, 148-160. doi:10.1016/j.landurbplan.2018.08.020
Photo Credit: https://towardsdatascience.com/module-6-image-recognition-for-insurance-claim-handling-part-i-a338d16c9de0