Quantified Self Data: Privacy & Accuracy Considerations

14 months of my Fitbit data

For the last fourteen months or so, I’ve been wearing a Fitbit, counting steps and competing with friends and family to see who might win the week. It’s been fun – and it probably has made me more cognizant of my own activity. Of course, to be fair, during this same period of time, I’ve also changed my diet, moved to a standing desk and started running half marathons. None of these changes were specifically planned out, nor are they systematically linked.

In many ways, they simply represent a convergence of social influences, opportunity and a general desire to just be more active. My husband, a few close friends and a couple of co-workers had been wearing Fitbits for months before I bought mine. I was privy to their general cajoling and poking, and I wanted in on the fun. Meanwhile, a couple of my work colleagues had been using stand-up desks for several months quite successfully and were actively trying to get more of us engaged and standing. At one point, we even had an informal policy that everyone would stand for 30 seconds, 30 minutes into an hour-long meeting. This idea was a bit disruptive at first, but quickly became a seamless ritual that had little or no impact on the flow of meetings. Somewhere along the way, my weekend running partners decided that we should increase the number of races we were running and, eventually, up our distances as well. All the while, my Fitbit has been a silent sentinel, collecting data about me.

When I run, I wear a Garmin and collect a lot more data about my activities, including distance, running cadence, heart rate and GPS mapping. All of this data gets uploaded into the Garmin ecosystem, and I can easily compare similar activities (races, running routes, segments, workouts, etc.) going back a couple of years now. I actively share this information with my running partners and with my coach, and I import the data into two other online ecosystems: Strava and Training Peaks. So far this year, I’ve spent 126 hours running 605 miles – and the year isn’t over. This really is a lot of data – and it’s all data about me. It’s very personal.
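As a back-of-envelope illustration of what even those two simple totals encode, here is the average pace they imply (a crude figure that folds in warm-ups and easy runs, so it says nothing about race pace):

```python
hours, miles = 126, 605  # year-to-date totals quoted above

pace_min_per_mile = hours * 60 / miles
print(f"Average pace: {pace_min_per_mile:.1f} min/mile")  # ~12.5 min/mile
```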

Blaze the Beach: data from a recent race in Long Beach, CA

Meanwhile, the marketplace is becoming increasingly crowded with brands, devices and apps that track all manner of personal health data, from steps, weight, sleep patterns and calories to heart rate, blood pressure and even very obscure things like thiamin levels in the bloodstream. Everyone wants in on this game. Health insurance companies like Humana are encouraging data collection. Apple has just launched HealthKit. Microsoft and Google are in the game, too. This is all very interesting. It’s kinda cool and kinda scary – and I’m not one who is prone to wearing a tinfoil hat and looking for conspiracies. Nevertheless, there is a lot here to contemplate.

Data Accuracy
Currently, there is no sanctioned standard of measure for what constitutes a “step”. There are no governing bodies verifying or validating steps. There is no gold standard, no ISO, CSA, ANSI or other interested organization looking out over all of these units of measure to ensure the accuracy of the data. Each branded device and app uses its own proprietary algorithm, built on some combination of inputs such as data collected from an accelerometer (in a phone or other device), height and possibly things like GPS information, weight and . . . ? This means that Fitbit, Garmin, Apple, Jawbone, et al. are not necessarily measuring the same things, nor are they doing so in the same ways. From device to device, system to system, your mileage may vary – literally. It also means there is no easy apples-to-apples comparison between systems. Nor are there published formulas for establishing baselines for comparisons between systems. GPS-enabled systems probably provide the greatest degree of accuracy for measures of distance, but my own experience running with friends shows that over a 10K run, three of us running step for step, side by side, using three different Garmin models, will often be out of sync on our devices by as much as 1/10 – 1/4 of a mile. (NOTE: This is particularly frustrating on a long run when you’re the one who has to run the extra distance to make the numbers look right on your watch. And let’s face it, if you are quantifying your life, measuring your performance, working with a coach and setting goals for yourself, you will not be happy when the watch says 9.85 miles even though your buddy’s watch says 10.00 – and you were never more than three feet away from each other for the whole run. You’ll run the extra distance. You want your watch to say 10.0 too, damn it!)
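To make that concrete, here is a minimal sketch of the kind of peak-detection heuristic a step counter might use. The threshold, minimum gap between steps and sample rate below are illustrative assumptions, not any vendor’s actual parameters – and that is precisely the point: tune any of them differently and the same walk yields a different step count.

```python
import math

def count_steps(samples, rate_hz=50, threshold_g=1.2, min_gap_s=0.3):
    """Naive step counter: a 'step' is an upward crossing of threshold_g
    by the acceleration magnitude, at least min_gap_s seconds after the
    previous one. samples is a list of (x, y, z) readings in g."""
    steps = 0
    last_step_t = -min_gap_s
    was_above = False
    for i, (x, y, z) in enumerate(samples):
        t = i / rate_hz
        magnitude = math.sqrt(x * x + y * y + z * z)
        is_above = magnitude > threshold_g
        if is_above and not was_above and (t - last_step_t) >= min_gap_s:
            steps += 1
            last_step_t = t
        was_above = is_above
    return steps

# Two hypothetical "vendors" tuning the same detector differently will
# disagree on the same accelerometer trace:
#   count_steps(trace, threshold_g=1.1)  vs.  count_steps(trace, threshold_g=1.3)
```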

Calibration is another gray area. Some of the more sophisticated devices will allow you to calibrate them, but over time and usage, there is no way to know how well this calibration is maintained. When we rely on devices like our phones, what happens when we drop them? Do we just assume that all of the sensors and accelerometers survived the fall? Sensitive medical equipment is maintained by professionals and calibration happens regularly. Heck, even gas pumps are professionally calibrated and monitored for accuracy by the government.
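To see why a stale calibration matters, consider a device that converts steps to distance using a stored stride length. The numbers below are purely illustrative, but even a small calibration drift quietly skews every total the device reports:

```python
steps = 10_000
actual_stride_m = 0.80   # your real stride length (illustrative)
stored_stride_m = 0.82   # the device's slightly stale calibration

actual_km = steps * actual_stride_m / 1000
reported_km = steps * stored_stride_m / 1000
error = (reported_km - actual_km) / actual_km
print(f"actual {actual_km:.2f} km, reported {reported_km:.2f} km ({error:+.1%})")
# actual 8.00 km, reported 8.20 km (+2.5%)
```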

For medical professionals, the question of data accuracy (and data quality) is concerning because it means potentially misdiagnosing conditions or, even worse, missing warning signs that could aid in diagnoses. It also means that patients – or more accurately, consumers – may ignore warning signs because their data looks “normal” – at least to their untrained eye. And it opens the door to potential frustrations and conflict between doctor and patient: “What do you mean all of this data is useless? Why can’t you make a diagnosis from it? It’s all right here. It’s good data. It’s from [insert your favorite brand name here].”

Data Protection and Privacy Issues
If Target and Home Depot can’t keep our credit card data secure and Apple can’t keep Jennifer Lawrence’s photos safe, why do any of us believe our personal health data will be secure when entrusted to systems that are not subject to the standards, compliance or regulation of the Health Insurance Portability and Accountability Act (HIPAA)? The software and systems employed by health care professionals are subject to government oversight and regulation and – at least in theory – are maintained in a secure manner. HIPAA is designed to afford us a minimum standard of protection for our data and privacy around who may access it and for what purposes. Currently, all of the consumer-level activity trackers and systems that are flooding the market operate outside of HIPAA oversight. This should raise a red flag for us, but we seem content to ignore it for the moment. Interestingly, once the data is transmitted from our phones or other online systems to our doctors or pharmacies, it may very well become protected by HIPAA. This creates a potentially interesting inside/outside problem to be solved.

Similarly, another interesting question, best answered by a team of legal experts, is whether the voluntary mass participation in platforms like HealthKit or Fitbit will weaken our protections under HIPAA – or even render the law indefensible at some point. When we give away our health data openly and freely, how can we also claim it is “protected” data? I feel certain we’ll see this decided in a court case in the not-too-distant future.

Who owns the data?
Yeah. Who does own the data? Is it my data? It’s information about me, right? It must be mine. Does it belong to whomever I’ve shared it with? What does the fine print in that End User License Agreement (EULA) actually say? And how do I know it won’t change in the future? I don’t. And I haven’t read the EULA. Neither have you. I’m not going to read it either. EULAs are dreadful affairs, designed to cure even the most stubborn cases of insomnia. They aren’t written to be read or understood by the general population. At some point, this will need to change; it is currently a terrible user experience for all but the most stoic attorneys. Until there is some legal intervention, government oversight or regulation, we must either decide not to play in the world of quantified self data collection, or we have to hope that companies in this space will adopt short, concise, clearly worded and easily accessible statements that explain who owns our data, how our data can be used and by whom – and that they will honor these promises. Fitbit’s journey in this arena has not been without missteps or gaffes, but I do believe they are on the right track.

Fitbit’s privacy statement

Always-On Systems
Although the focus of this editorial has been on quantified self data tracking, it’s important to know that the issues are broader and touch many more systems in our lives. As our homes and appliances become smarter and more connected, as the Internet of Things grows, these issues will grow too. When your TV and your Xbox are always on and always listening, ready to respond to your commands, they will invariably “hear” personal, private things. When Kinect-style gestural controls are embedded in devices, they will always be “watching” us too. That’s not to imply nefarious intent or egregious data collection, but do you want the “things” in your life listening and watching in the bedroom? Or the bathroom? In your car? At your bank? When you are making big life decisions? How will you know who has access to that data? Or what they will do with it? Are you going to read the 46-page Privacy Policy that comes with your TV? No. And you shouldn’t have to. In fact, your TV shouldn’t need a 46-page Privacy Policy.

Better User Experiences to Come!
In many ways, everything discussed above represents a wealth of opportunity for design. In fact, it’s imperative that we do a better job designing the smart systems in our lives and that we commit to helping our users understand what they are opting into and what that means. We need to make sure that users have the ability to access and interpret their data so that it’s meaningful and actionable for them – not just a collection of pretty charts and graphs with an odd gamification quality. We need to design systems that afford and deliver privacy. It is critically important. And we need to use design to protect people too. All of these systems bring the promise of helping us be better people, living healthier lives, but they are ripe for abuse and misuse as well.

This is from a recent article in Yahoo Finance:
“On the black market, electronic health records can be worth 20 times as much as stolen credit card data, according to Stephen Boyer, CTO of security rating firm BitSight Technology. There’s really no richer source of personal information about a person than their medical chart. It may include insurance details, past home addresses, phone numbers, Social Security numbers, as well as a patient’s entire medical history — plenty enough to commit insurance fraud or identity theft.”

We can do this!
Viva the technological era we live in and the wealth of ways we are able to inexpensively harness this technology. I will continue wearing my Fitbit and my Garmin. I will continue playing in this space and collecting data about myself. And I’ll continue to think deeply about how all of this data – and the data from countless other systems – is being used. As a designer, I will commit to thinking about my users and their needs, and about how to deliver amazing experiences for them on every possible screen and platform. But I will also remember that they are real people who look to these systems to make their lives better, happier and easier. They don’t deserve to be hoodwinked, or set up for identity theft, denial of services or health insurance coverage, or other abuses of the system. So let’s all commit to keeping these opportunities for abuse in mind and doing our best to design protections into our services and experiences. We can do this.

Interested in reading more about some of these issues? Check out these links: