I’ve been thinking and writing and making things about creepiness and the internet since 2009 or so. Initially I was interested in how all this “social software” not only facilitated the usual kind of awkward experience (as long as we’ve been able to post things online, we’ve run the risk of having them seen by unintended audiences like our parents or employers), but perhaps actively created uncomfortable and even dangerous situations, such as a website that uses your combined social networks to decide you’d like to share information with an abusive ex.
Lately I’ve noticed my attention has shifted from the creepy experiences generated by human actions and amplified through certain tools, to the kind of thing documented by the New Aesthetic. Often it’s fun (as is some of the weirdness expressed by Creepius), but I read “the motive of the algorithm is still unclear” and I stop and read it again, and again. There’s another layer of separation here, a disconnect. We’re moving from a system that takes human input and responds to it (in both useful and horrifying ways), to one that takes the seed of human input but has an increasing degree of autonomy. Bots reading, writing, and responding to the news.
We’re really good at pattern matching. By which I mean we’re really good at seeing patterns and at making them out of whole cloth.
And we are starting to ask the network – the machines and cameras and sensors and cables and all the pipes, visible and invisible, that connect them and surround us – to look for patterns too. We’re trying to teach the network pattern matching, and we’re not sure what to do with what’s coming back.
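To make that concrete, here’s a toy sketch (mine, purely illustrative, in plain Python with made-up random data): ask a naive pattern-hunter to find correlations in pure noise, and it will confidently hand one back anyway. The pattern is real, in a narrow statistical sense; the meaning is entirely ours to invent.

```python
import random

# Illustrative sketch, not from any real system: generate pure noise,
# then ask a naive "pattern finder" to report the strongest correlation.
# With enough random series, something always looks meaningful.

random.seed(42)

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Fifty series of random noise -- no real signal anywhere.
series = [[random.gauss(0, 1) for _ in range(30)] for _ in range(50)]

# Search every pair and report the "best" pattern found.
best = max(
    ((i, j, correlation(series[i], series[j]))
     for i in range(len(series)) for j in range(i + 1, len(series))),
    key=lambda t: abs(t[2]),
)
print(f"series {best[0]} and {best[1]} correlate at r = {best[2]:+.2f}")
# The machine dutifully reports a pattern; the narrative is up to us.
```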
I am not here to argue that the robots are really seeing anything yet – at least not on our terms.
What interests me is that we – not the robots – are the ones seeing the patterns coming back, and we know from experience the kinds of crazy-talk narratives that can be shaped out of them. We can see both motive and consequence birthed in those patterns.
The thing that worries me about all this is that we’ve seen how small actions can be amplified. The networked communication structures we’re building have all sorts of side effects (my least favorite being those intended by the system builders but undesirable for the users). We’re creating increasingly autonomous systems with the power to amplify our own motives. And I don’t trust those motives, not when they lead to systems that share my personal data with advertisers, other people, or governments. I don’t trust the motives behind aiming drones at people, whether for surveillance or to kill them.