2022
The Fine Line Between Creepy and Fun title slide

In 2010, I gave a talk at Open Source Bridge called “The Fine Line Between Creepy and Fun”. The main audience was developers who were experimenting with the new combinations of data created by social media (aka Web 2.0) and smartphones. I proposed that the interesting and delightful things that could result from those projects also carried a strong risk of crossing into unwanted, unpleasant, and even dangerous situations. If all of this seems like old hat now, keep in mind that the first iPhones, released in 2007, didn’t even have a developer kit or an app store. Apple went through a strange period of trying to convince developers not to create their own iOS apps and to stick with the web instead. Features like photo uploading and ad tracking came later. Social media sites like Twitter and Facebook also relied on explicit user behaviors for a lot of things we take for granted as automatic now.

Automated geolocation was just starting to become possible, rather than being embedded in everything we use. The following year, 2011, I spoke at SecondConf in Chicago on the basics of adding location to your applications. This was entirely new stuff to most people working in the field, so problems arose as much from developers not knowing the best way to handle these new aspects of software design as from the capitalist desire to slurp up all possible data. Often users were happy to hand over that data anyhow. The early adopters of social media and smartphones were likely to work in or around the tech industry, which hit a particular low point for gender diversity in this period (and had never done well at representing marginalized groups). So the people who were creating things, and who were seen as the primary users, were frequently not those who might have their own safety concerns.
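For a sense of what “the basics” meant in practice: the W3C Geolocation API, which browsers were just starting to ship, made attaching a position to a user’s action about this easy. A minimal TypeScript sketch – the shareCheckin function is a hypothetical stand-in for whatever an app did with the coordinates:

```typescript
// A minimal sketch of the standard W3C Geolocation API in the browser.
// shareCheckin is a hypothetical stand-in for application code.
function shareCheckin(venue: string, lat: number, lon: number): void {
  console.log(`Checked in at ${venue} (${lat.toFixed(4)}, ${lon.toFixed(4)})`);
}

navigator.geolocation.getCurrentPosition(
  (pos) => {
    // One callback later, the app knows where the user is.
    shareCheckin("Lucky Lab", pos.coords.latitude, pos.coords.longitude);
  },
  (err) => {
    // The user can decline; the permission prompt is the explicit gate.
    console.warn(`Location unavailable: ${err.message}`);
  },
  { enableHighAccuracy: false, timeout: 10_000 }
);
```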

In my talk, I gave some examples of ways software might act in unintended or unwanted ways:

I posted my location on Foursquare to tell my friends to stop by – but my ex showed up too.

Social: The ex doesn’t know or care that he’s not invited.

Technological: I shared my location publicly, but I didn’t know he was listening. Or perhaps he’s watching a mutual friend’s updates to see where I am (the friend posts “Hanging out with Katie at Lucky Lab”).

I used Facebook to invite certain friends and family members to a baby shower, but it was re-posted to a public local events calendar via an iCalendar feed.

Social: I only want to invite some people to the event, not everyone in town, so I picked a privacy setting I thought would do this.

Technological: Facebook’s iCalendar feed service interpreted the privacy setting I chose differently than I expected.
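To make the mechanics concrete: iCalendar (RFC 5545) marks an event’s visibility with a CLASS property (PUBLIC, PRIVATE, or CONFIDENTIAL), and when the property is absent the default is PUBLIC. So an exporter that drops the setting, or a consumer that never checks it, publishes everything. A TypeScript sketch of a naive feed consumer with exactly that bug – the event data is invented, and this is not Facebook’s actual code:

```typescript
// A naive iCalendar feed consumer. Per RFC 5545, CLASS defaults to
// PUBLIC when absent, so an exporter that drops the property (or a
// consumer that never checks it) treats every event as publishable.
interface CalendarEvent {
  summary: string;
  cls: "PUBLIC" | "PRIVATE" | "CONFIDENTIAL";
}

function parseFeed(ics: string): CalendarEvent[] {
  const events: CalendarEvent[] = [];
  for (const block of ics.split("BEGIN:VEVENT").slice(1)) {
    const summary = /SUMMARY:(.+)/.exec(block)?.[1].trim() ?? "(untitled)";
    const cls = /CLASS:(PUBLIC|PRIVATE|CONFIDENTIAL)/.exec(block)?.[1];
    // The bug: treating "no CLASS property" as "fine to publish".
    events.push({ summary, cls: (cls as CalendarEvent["cls"]) ?? "PUBLIC" });
  }
  return events;
}

// Hypothetical feed: the baby shower was "friends only" on the site,
// but that setting never survived the export to iCalendar.
const feed = [
  "BEGIN:VEVENT",
  "SUMMARY:Baby shower for Katie",
  "END:VEVENT",
].join("\r\n");

for (const ev of parseFeed(feed)) {
  if (ev.cls === "PUBLIC") {
    console.log(`Publishing to town calendar: ${ev.summary}`);
  }
}
```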

Then I described the discomfort that results:

We think we’re sharing with the people we want to see it: friends, family, friendly strangers. But maybe we’re also sharing with people who wish us harm, or family or co-workers who aren’t part of our social life, or marketing databases.

The software might expose information we didn’t even know was there to be shared.

Like the amount of time we spend on a website.

Or that we’ve just qualified for the “douchebag” badge.
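The time-on-page example takes almost nothing to build. A present-day TypeScript sketch using the standard Page Visibility and Beacon APIs – the /collect endpoint is a hypothetical analytics server:

```typescript
// Measuring time-on-page with two standard browser APIs:
// visibilitychange fires when the tab is hidden, and
// navigator.sendBeacon ships the data out even as the page unloads.
// The /collect endpoint is a hypothetical analytics server.
const arrivedAt = performance.now();

document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") {
    const secondsOnPage = (performance.now() - arrivedAt) / 1000;
    navigator.sendBeacon(
      "/collect",
      JSON.stringify({ page: location.pathname, secondsOnPage })
    );
  }
});
```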

How can we do better?

Show the user the full context when information is shared.

Provide flexible controls for setting who has access.

Pick explicit over implicit.
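Here’s one way “explicit over implicit” can look as an API shape: sharing requires the caller to name an audience, location is off unless deliberately turned on, and the user sees the full context before anything is sent. All the names in this TypeScript sketch are hypothetical:

```typescript
// "Explicit over implicit" as an API shape: sharing requires the
// caller to name an audience, and nothing defaults to "everyone".
// All names here are hypothetical, for illustration.
type Audience = "only-me" | "friends" | "everyone";

interface ShareRequest {
  message: string;
  audience: Audience;        // no default: the caller must decide
  includeLocation: boolean;  // off unless explicitly turned on
}

function confirmAndShare(req: ShareRequest): void {
  // Show the user the full context before anything leaves the device.
  const scope = req.includeLocation ? "with your location" : "without location";
  console.log(`About to post "${req.message}" to ${req.audience}, ${scope}.`);
  // ...wait for an affirmative confirmation, then send...
}

confirmAndShare({
  message: "Hanging out at Lucky Lab",
  audience: "friends",
  includeLocation: false,
});
```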

Finally, I asked “Is creepy always bad?”

A better question:

Is exposure always bad? Don’t we want to share with each other?

Different things are creepy for different people.

The narrator stands between the Creepy bear and Fun cat

These days, terms like “metadata” are commonplace. The general public became painfully aware of what information was being collected by Facebook during the Cambridge Analytica scandal, and Snowden’s whistle-blowing revealed a lot of things about what the government could learn about you from your phone carrier or ISP. Ad tech is so pervasive and taken for granted that we just make jokes when Instagram ads suddenly try to sell you something a friend mentioned over dinner. There’s a level of weirdness that seems pretty normal.

I had a lot of optimism when I gave this talk that getting these conversations out in the open, early in these technologies’ development, would have an impact on what we ended up creating. I certainly wasn’t the only one bringing it up (though I’m confident no one else made hand puppets). At the time there was such a sense of experimentation and opportunity around what we were building. It was fun to connect the pieces.

The steps we might take to retain control of our own data and experiences now can feel inadequate, especially in light of how algorithms (“AI”) obscure and complicate what’s happening. In looking back at my talk and its context, I notice how niche many alternatives to mainstream social media and data collection have become. The scope of these concerns goes beyond the social or personal side of things – telemetry tracks user actions across all kinds of desktop and web applications. We rarely have enough insight into how a software system is developed to know for ourselves if data collected “to improve the product” is being used in a way we’d agree with. It’s much, much more likely than not for any digital action you take to be observed and used in some way you’re unaware of.

The weirdness of things like Instagram ads is instructive in another way, though – the opacity of all these algorithms cuts in both directions. The companies advertising to you, or trying to optimize their profits from your user experience, may have no better idea of what’s actually going on than you do.

There are legal attempts to ensure more visibility and user control over data collection and use, such as the European Union’s GDPR and California’s CCPA. We see technological approaches too, like the privacy notifications Apple has added to their platforms. But these efforts create a patchwork of coverage – by geography or by operating system – and circumvention remains a persistent problem. (The Markup has covered this extensively.)

But the most interesting question from my talk that I want to return to is whether we can still be delighted by our own data and how we use it. Have we lost the chance to play with these kinds of tools, to create something unexpected and good, or even to enjoy the moments of dissonance? Has that well run dry, or have things been strained so far that it would be better to just detach from it all? I don’t know yet, but I’ll be looking. Wouldn’t it be interesting if building the internet could be fun again?