SurroundSense: Indoor positioning by ambience fingerprinting
Posted in Geospatial Technology, Mapping on September 24th, 2009 by Justin

Indoor positioning has baffled the greatest minds in the location-based technology industry since its inception. Rarely can a GPS fix be attained indoors with today’s PNDs and cellphones, and other methods of positioning, such as relying on Wi-Fi access points, can be inaccurate. But researchers from Duke University may be on to something with a new method of determining indoor position dubbed SurroundSense.
SurroundSense works by pulling in data from a cellphone’s various sensors and running it through a set of filtering algorithms to create an “ambience fingerprint.” Ambience fingerprinting takes into account the ambient light, sound and color at a given location and combines them into an overall signature for that place. Because most businesses adopt a distinct aesthetic to differentiate themselves from their neighbors, the Duke researchers believe the combination of ambient characteristics will usually differ from one place to the next. The fingerprints can then be augmented with accelerometer data, which captures a person’s movement through a location, and with nearby Wi-Fi access points when available. The accelerometer in particular can help identify what type of establishment a person is in from the pattern of their movement. For example, a person in a grocery store may move quickly up and down aisles, while a person in a restaurant may move through a short line and then stay in one spot for a length of time while eating.
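To make the fingerprinting idea concrete, here is a minimal sketch in Python of how raw light, sound and color readings might be boiled down to a comparable signature. The feature choices, scaling constants and distance measure are illustrative assumptions, not the algorithms described in the paper.

```python
import math

def ambience_fingerprint(light_readings, sound_samples, color_samples):
    """Boil raw sensor readings down to a small comparable feature vector.

    light_readings : lux values from the phone's light sensor (non-empty list)
    sound_samples  : microphone amplitude samples, normalized to -1..1
    color_samples  : (r, g, b) tuples sampled from the camera, 0..255
    """
    def mean(xs):
        return sum(xs) / len(xs)

    def stdev(xs):
        m = mean(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

    # Light: overall brightness and how much it fluctuates, scaled by an
    # arbitrary 1000-lux indoor ceiling so all features live roughly in 0..1.
    light_feats = [mean(light_readings) / 1000.0, stdev(light_readings) / 1000.0]

    # Sound: loudness expressed as RMS amplitude.
    sound_feats = [math.sqrt(mean([s * s for s in sound_samples]))]

    # Color: the average tint of the scene, one value per RGB channel.
    color_feats = [mean([c[i] for c in color_samples]) / 255.0 for i in range(3)]

    return light_feats + sound_feats + color_feats


def fingerprint_distance(fp_a, fp_b):
    """Euclidean distance between two fingerprints; smaller means more alike."""
    return math.dist(fp_a, fp_b)
```

The real system would of course use richer features and weighting, but the shape of the idea is the same: turn a place’s ambience into numbers that can be compared.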
To minimize the number of places that have to be compared against a given fingerprint, SurroundSense first uses GSM to locate the user on a macro scale; only the businesses in that general area are then run through the full fingerprint-matching process. The project is a work in progress and plenty of work remains, but so far the researchers have run a trial across 51 business locations and achieved an average positioning accuracy of 87%. You can read a paper about the project here (PDF).
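The two-stage lookup can be sketched the same way: the GSM cell narrows the search to a handful of nearby businesses, and the freshly measured fingerprint is then matched against stored ones. The database layout, business labels and numbers below are all hypothetical.

```python
import math

# Hypothetical fingerprint database, keyed by GSM cell ID; each entry pairs a
# nearby business with a six-element fingerprint recorded on an earlier visit.
FINGERPRINT_DB = {
    "cell-3041": [
        ("coffee shop",   [0.42, 0.08, 0.31, 0.55, 0.47, 0.39]),
        ("grocery store", [0.85, 0.15, 0.22, 0.60, 0.58, 0.52]),
        ("restaurant",    [0.30, 0.05, 0.40, 0.50, 0.35, 0.30]),
    ],
}

def locate(cell_id, live_fingerprint, db=FINGERPRINT_DB):
    """Pick the candidate near the current GSM cell whose stored fingerprint
    is closest (Euclidean distance) to the fingerprint just measured."""
    candidates = db.get(cell_id, [])
    if not candidates:
        return None  # no known businesses in this cell
    best = min(candidates, key=lambda c: math.dist(live_fingerprint, c[1]))
    return best[0]

# A fingerprint measured just now is matched only against businesses in the
# cell the phone is currently camped on.
print(locate("cell-3041", [0.33, 0.06, 0.38, 0.51, 0.36, 0.31]))  # -> "restaurant"
```

Narrowing by cell first keeps the matching cheap, since only a handful of stored fingerprints ever need to be checked for any one reading.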