Free isn’t Free – Part 2 3/30/12

I got some very good responses to my post earlier this week about free apps and services in healthcare. Several of the points in the comments were spot on and articulated better than my post, and they prompted me to write this follow-up to better explain my thoughts on free services and revenue models.

I also looked at the user agreements and privacy statements for several of the companies I listed. The wording seemed vague enough not to be very restrictive about how data can be used, though almost all restricted the sale of data to third parties. I’m not an attorney, but I imagine almost all of these agreements are written to ensure flexibility for the companies. Why not, right? Just out of curiosity, do any readers actually go through these agreements when they register for services?

After going back and re-reading my post, I realized the main message I wanted to come across was that there needs to be transparency in how companies are going to use personal health data. That transparency matters both for the user and, in certain instances, for the people they invite to the service or app (doctors inviting patients, patients inviting doctors, doctors inviting doctors, patients inviting patients). Several of the comments touched on transparency as the key factor in personal data usage.

I agree with reader Margalit Gur-Arie, who wrote, "My current assumption is that wherever you enter any data in an Internet application, sooner or later, it will find its way out of the company servers, with very few exceptions." That mapped well to this post by Fred Wilson about online privacy and the current legislative attempts to address it. Fred makes some very good points about profiling and tracking, including that it can provide real value to users. I personally love Amazon recommendations, though I’m getting annoyed with Amazon’s listing of links to relevant external sites. Fred, being a VC, has a much better understanding than I do of the investor issues I raised in my post about startup investors and returns. He also describes online profiling and tracking as "the economic underpinning of the Internet" and warns that privacy regulations could undercut this significant Internet driver.

That leads to part of reader HIT Project Mgr’s well-stated comment: "Unlike Travis, I’m not ready to throw out a promising start-up company who has a business model of creating a user base before figuring out how to make money with it, because innovation is often done at the initial investors (Venture Capitalists, Angel Investors, etc.) expense and thus their risk financially not mine. My only investment is my time to upload, learn and use the app. If the start-up never makes it, the investor who knew the risk loses out for the most part. Those who got the product for “free” only have the trouble of finding a new ‘free’ app to use." I definitely understand that developing something is never free and that companies operate to at least cover their costs. I also understand that investors take on risk when they invest in new companies in the hope of a good return. This risk is higher with earlier-stage companies, especially those building networks or acquiring users with a less well-defined strategy for monetizing that network or those users.

The second part of that comment, about the risk and investment on the part of a user, is something I think depends on the app or service itself. While I agree there is no real risk to users of free services for something like logging blood pressure or linking patients with similar health conditions, those were not the services I had in mind, and I probably should have been clearer about that as I was writing. The services I was thinking about involved providers inviting patients, and to a lesser extent other providers, to join or connect with them.

In the case of providers inviting patients, providers are implicitly vouching for the startup service by inviting patients to it. If that service shuts down, that’s one thing, and likely not a major loss to the provider or patient. But if the startup survives, it now sits between provider and patient. From that position, it can offer things (goods, services, etc.) to patients, directly or indirectly through partners and affiliates, based on their interactions with providers. These targeted offerings might or might not be accurate, and either way they represent a risk to providers. I realize this is a very specific example, but it’s the one that really motivated my last post. Am I over-thinking this scenario?

I also realized, as I was thinking about personal health privacy and especially tracking and profiling, that most people already have data about their personal health floating around today, even if they haven’t used any of the free startup health services I mentioned. Most consumers search and browse the Web for health information. If people have searched for erectile dysfunction, depression care, fertility treatment, diabetes products, cosmetic procedures, or any number of other health-related topics or products, those searches are likely linked to them in some way. This tracking may not amount to true personal identification, but it goes a long way toward painting a profile of someone’s health and tailoring content to that user. As an extreme example, I can’t imagine what online profiles look like for clinicians who search the Web for health-related information on behalf of patients.

One other point I wanted to clarify is about intent and motives. It’s not that I don’t trust free services because I think they are malicious in intent (was that a double negative?). It’s more that I realize they will do everything possible, within the confines of the law and their user agreements, to increase revenue and value for the company. That makes sense for the company and its investors.

Ideally, that goal of maximizing shareholder value is balanced against not doing things that harm the company in the long run. I don’t think this is always the case, as different stakeholders can have different short- and long-term motivations. I suppose the counterargument to the example above, since I’m debating myself, is that a service exploiting its position between provider and patient could harm itself by upsetting providers and prompting them to stop using it. That might be true if providers knew what was happening, but I don’t think that’s usually the case with online profiling.

I’m not sure if this post clarified anything. If anybody out there wants to write a guest post about online privacy and free services in healthcare, please let me know. With my last post, I did not intend to play the role of Luddite to the digital destruction of medicine. My point was to bring up potential issues with free services, especially those that target specific interactions, and ideally to generate a dialogue in the process. Thanks again for the thoughtful responses.


Travis Good is an MD/MBA involved with health IT startups. More about me.
