It's not like I was trying to give you guys a dissertation on my research. I was simply shedding light on a few select scenarios to show how the concerns the original poster had were overblown. I didn't feel it was necessary to go into great detail, because it's easier for an audience not in the field to understand these concepts through simple terms and examples rather than the nitty-gritty details.
How would you track customers' smartphones without some sort of overall network management? How else would you get the nodes' signal strengths and other metrics in real time so you can locate the device? If you're talking about geolocation on the client side, that's completely different from what this article is talking about.
I was tackling a slightly different problem, so yes I was trying to do things client side.
> We were seeing 2-3 meter positioning for normal cell phones in a large open area with 4 WiFi nodes available, and 3-5 m in a more typical office environment with 4-5 nodes.
That's pretty much in line with what I was getting; 15-20 ft is roughly 4.5-6 m.
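For context on how numbers like these are typically produced: one common approach converts each AP's RSSI to a distance with a log-distance path-loss model and then fits the position that best matches all the distance estimates. A minimal sketch in Python; the 1 m reference power, the path-loss exponent, the room size, and the AP placements are all illustrative assumptions, not measured values:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40, path_loss_exp=3.0):
    """Log-distance path-loss model: estimated distance in meters.
    tx_power_dbm is the assumed RSSI at 1 m; path_loss_exp runs from
    ~2 (free space) to ~4 (cluttered office). Both are calibration
    guesses here, not measurements."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(aps):
    """Crude least-squares position fit from (x, y, rssi) per AP,
    done by brute-force grid search to keep the sketch dependency-free."""
    best, best_err = None, float("inf")
    for gx in range(201):            # search a 20 m x 20 m area
        for gy in range(201):        # on a 0.1 m grid
            x, y = gx / 10.0, gy / 10.0
            err = 0.0
            for ax, ay, rssi in aps:
                d_est = rssi_to_distance(rssi)
                d_geo = math.hypot(x - ax, y - ay)
                err += (d_geo - d_est) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Four hypothetical APs at the corners of a 10 m square, with invented
# RSSI readings; the fit lands somewhere inside the square.
aps = [(0, 0, -55), (10, 0, -62), (0, 10, -60), (10, 10, -65)]
print(trilaterate(aps))
```

In practice the error terms are noisy and the path-loss exponent varies per environment, which is exactly why the accuracy figures above are in meters rather than centimeters.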
The reason I was talking about unmanaged networks was because the original poster was talking about Google aggregating information from arbitrary locations to make determinations about the user. If a department store wants to implement a system to track its customers in the store, it can do that well enough for its needs. I was talking about Google trying to aggregate location data from places where the equipment was not necessarily deployed with tracking in mind. For example, Starbucks probably couldn't give two sh!ts where you were in the cafe. If Google got hold of data about you while at Starbucks, they would be in the position I was referring to: they'd have signal-strength readings from various APs with no idea where or how those APs were deployed. That's a different scenario than Starbucks deploying a purpose-built location-tracking system and then forwarding the information to Google.
By the way, I tested some server-side commercial solutions, but I ran into some interesting scenarios, maybe because the environments I was dealing with were less constrained. For example, in our own workspace no two offices are identical, nor is there any pattern to the layouts, since each layout is the employee's choice. That meant we ran into problems even with the commercial solutions, depending on how the device was placed: some people put the phone on the desk next to the keyboard, some in a pants pocket, some in a jacket pocket, some behind the monitor out of the way of their work area, and some in their flipper cabinet. Because of all this, we were never able to reliably get accuracy below 15 ft. Depending on the problem you're trying to solve, that may be good enough.
But one scenario I tested involved a restaurant. Even with granularity down to 6 ft, that wasn't good enough to distinguish someone sitting at the same table as you from someone sitting at the next table over, because sometimes the person in the chair at the next table is actually closer to you than the person sitting across from you at your own table.
You are at point A walking toward point B with the device in your hand. As you walk, you put the phone in your back pocket. When you get to B, the algorithm could think you are still at A, because the APs on the "B" side of the room now have to go through your body to reach your phone. But when you were at A with the phone in your hand, the APs on the "A" side of the room had to go through your body to reach the phone's antenna in your hand.
So it's possible (and yes, I've actually seen this with real data) that while at A with the phone in hand, you get almost identical readings from all the APs as you did at B with the phone in your back pocket. So now you try to get smart and map all the possibilities. But now you're stuck, because the profile of the device at A in hand is identical to the profile of the device at B in pocket. So now you need to figure out whether the device is in a pocket or not... See how the algorithm quickly gets more complicated? (And this is only the beginning: detecting when the device is in a pocket is an even trickier problem to solve than the location tracking was.)
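That ambiguity is easy to reduce to numbers. A toy sketch, with invented free-space RSSI values and an assumed ~15 dB body-attenuation figure, showing how spot A with the phone in hand can produce the exact fingerprint of spot B with the phone in a back pocket:

```python
# Invented free-space RSSI values at two spots, A (near the west wall)
# and B (near the east wall), in a room with one AP on each wall.
free_space = {
    "A": {"west_ap": -50, "east_ap": -65},
    "B": {"west_ap": -65, "east_ap": -50},
}

BODY_LOSS_DB = 15  # assumed attenuation through a human torso at 2.4 GHz

def observed(spot, blocked_aps):
    """RSSI the phone actually reports: free-space value minus body
    loss for every AP whose path runs through the user's body."""
    return {ap: rssi - (BODY_LOSS_DB if ap in blocked_aps else 0)
            for ap, rssi in free_space[spot].items()}

# At A, walking east with the phone held out in front:
# the user's body sits between the phone and the west-wall AP.
at_a_in_hand = observed("A", blocked_aps={"west_ap"})

# At B, phone now in the back pocket: the body blocks the east-wall AP.
at_b_in_pocket = observed("B", blocked_aps={"east_ap"})

print(at_a_in_hand == at_b_in_pocket)  # → True: identical fingerprints
```

Two different positions, two different carry styles, one indistinguishable RSSI vector; any fingerprint-matching algorithm sees the same input for both.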
Think of it this way: imagine walking into a store with me with your eyes closed, opening them for only a blink every 30 seconds. Even if you knew our precise location every time you blinked, would you have enough information to know what I was doing in the store, which sections actually appealed to me, and which products I bought? You might know I was in the meats section, but not whether I was just passing through or whether I paused; and if I paused, you wouldn't know why, maybe somebody's cart was blocking me. Your eyes may have been closed when I grabbed the frozen pizza, because that section was near the produce section, which is where you blinked; I could walk over to the frozen pizzas, grab one, and come back to the produce section for the grapes I'd forgotten, all before you blinked again.
Most of the research I was doing was centered on user intent. I mentioned it because the original poster was describing similar scenarios for how Google might use the information. Determining user intent is vastly more complicated than simple location tracking, especially with the coarse-grained tracking afforded by a passive scan. My original argument was that the scenarios the original poster described require much more than just beacon-packet sniffing, which is what Euclid is doing.
Are you that crappy at your job? They use more than one radio (usually SDR, so they can simultaneously track BT and GSM), and stores are pre-calibrated to map coverage and propagation.
In case you didn't read the original article: the technology in question only looks at WiFi beacon packets; it doesn't track anything else from the device. That's why I used the specific research examples that I did. In fact, if you actually read my arguments, I was saying you need other sensor inputs to make the results more accurate.
> the AP
Where did I say I only looked at situations with a single AP?
But like I said earlier, rough estimation is fine for most intents and purposes. I was addressing the argument about using these technologies to determine who you were "with", which requires much finer-grained location tracking. For example, one thing that comes up in location tracking is orientation. However, the orientation of the device does not imply the orientation of the user. So how does the app know if two people are facing toward each other or away from each other? You could try to rely on the orientation of the phone, but you don't know whether the user put the phone in their pocket facing forward or backward, or whether it's actually sitting sideways in a bag, etc. Once you start adding other sensors into the mix (which is what I was talking about earlier), it becomes more feasible, but that's the original argument I was making: you need to rely on more than just simple WiFi beacon-packet sniffing.
The type of tracking I was referring to earlier dealt with APs that were not necessarily geo-tagged, since I was trying to build something on top of public (or private) infrastructure that wasn't managed by the parties involved. A proper solution involves deploying a number of APs in a specific location, in a specific pattern, etc., with each AP geo-tagged. The solution I was dealing with tried to figure out proximity and rough location (think of the game Marco Polo), where you don't know the location of any of the APs around you, since they aren't yours.
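One way to play that kind of Marco Polo without knowing any AP positions is to compare two devices' beacon scans directly: seeing mostly the same BSSIDs at similar strengths suggests the devices are near each other, no geo-tags required. A rough sketch with invented scan data (the miss penalty is a tuning guess):

```python
# Each device reports {bssid: rssi} for whatever beacons it overhears.
# No AP positions are needed: overlapping BSSID sets at similar
# strengths suggest the two devices are close together.

def proximity_score(scan_a, scan_b, miss_penalty_db=30):
    """Mean absolute RSSI difference over the union of heard APs.
    An AP heard by only one device counts as miss_penalty_db of
    difference. Lower score = likely closer. The penalty value is
    a tuning guess, not a measured constant."""
    bssids = set(scan_a) | set(scan_b)
    total = 0.0
    for b in bssids:
        if b in scan_a and b in scan_b:
            total += abs(scan_a[b] - scan_b[b])
        else:
            total += miss_penalty_db
    return total / len(bssids)

phone_1 = {"aa:11": -48, "bb:22": -60, "cc:33": -72}
phone_2 = {"aa:11": -50, "bb:22": -63, "cc:33": -70}  # nearby device
phone_3 = {"bb:22": -80, "dd:44": -55, "ee:55": -62}  # different room

print(proximity_score(phone_1, phone_2))  # small: same APs, similar RSSI
print(proximity_score(phone_1, phone_3))  # large: little overlap
```

This gives relative proximity ("warmer/colder") rather than coordinates, which matches the Marco Polo framing: you can tell you're getting closer to someone without ever knowing where either of you is.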