I like this reversal of the alien intelligence metaphor: rather than figuring out how consciousness from another solar system works, build our own “alien” intelligence. Furthermore, rather than building new kinds of intelligence by imposing logical and intellectual properties, start from motion and perception – because that’s the origin of whatever we call intelligence. Check it out:
On the iPhone app Secret. I didn’t download it. Why should I? I know where this stuff goes. Suffice it to say, the makers claim that the app allows you to “Be yourself” and “write beautifully”, with “no names or profiles” and where “great ideas spread”.
Now, having “great ideas spread” – if, in fact, they are great ideas – isn’t a shoddy ambition, but it’s unlikely that such an app is an incubator for inventive and innovative advances. Also: seekers of patentable expressions of unique ideas would pilfer those ideas that could create markets. If one has a great idea, it’s best to recognize the value of keeping it secret from others, lest the pain of seeing someone else earn the licensing fees persist unto the deathbed.
Back to the idea of “Secret” with an observation: as people use social media more to communicate, and meet face-to-face less often, the need to communicate things that they cannot (or won’t) admit into the public ether grows. The problem is: it’s an artificial inflation of a human need.
To confess is not just to speak or to communicate a hidden desire or a shameful fact. To confess is to confront a deep-seated problem that cannot be uttered in daily conversation with others. To confess is to reflect – in the hope that the reflector has the wisdom to listen to (not just hear) what can’t be fully processed by the confessor.
This is not something that an app can accomplish. And that’s a good thing. It’s a good thing because it reminds us that we need smarter people and dumber computers – not the other way around. That is to say: the problems we need to solve should invoke only just-smart-enough technology that does the lifting, so to speak. Setting out to replicate human intuition, for instance, isn’t necessary; rather, using computation to check our intuition is the sufficiently dumb solution.
Secret is simply an example of Silicon Valley misunderstanding basic human needs and – in complete opposition to its claim to advance and enhance human beings – is a setup for collective, imitative violence.
Why? Because in the process of sharing secrets with contacts anonymously, the likelihood of abusive ‘secrets’ increases – and because Secret’s conception of ‘secrets’ replaces true confession with its illusion. Violence isn’t just physical: it’s emotional, political, intellectual, and social.
People do need to confess, to tell secrets at the right time, in the right contexts, with the right people. Confession releases the inherent tension that represses the violence of force required to keep them – until the tension snaps and the “horrible errors of childhood” come “storming out to play”.
The human problem of technology is not that it will oppose us, nor that it will support us. Both schools of thought assume that we are at the center of “the Technology” – as if technology is out there for us with which to develop inter-relationships wherein technology extends from us and to us.
No – the human problem of technology is that technology does not care. We are not the center of the technology. In this sense, Friedrich Kittler was right where McLuhan was wrong.
Just as we learned hundreds of years ago that our planet was not the center of the solar system, so too must we realize that we are not the center of technology. The current debates about technology are mostly divided into two world views: the utopian and the dystopian.
We must replace those views with an “atopian” perspective: that there is no place for us at the center of Technology. Technology does not extend what we already possess – it replaces our eyes, our ears, our fingers, and – to a limited extent – our brains.
The technology doesn’t care about us at all. Do you understand that? The problem isn’t that it hates or loves us. It’s not even that it ignores us, for that is an active process. It doesn’t care.
Every single time we approach and come into contact with anything technological, it is we who have to do the caring. The program doesn’t care.
Imagine a relationship with someone who doesn’t care about you at all. At least when someone on whom you depend hates you, you know where not to go. With love, you know where to go. To love or hate such a person is pure madness.
Where do you go with someone who doesn’t care at all?
Whatever new ideologies concerning technology we evolve, we must abandon the human-centric view of technology, because it blinds us to the solution to the human problem of technology:
We have to do a lot more caring than any generation before us ever did.
Paradoxically, once we realize it isn’t about us any longer, we can make it about us again. Isn’t that what a civilization is for anyway?
A post over on O’Reilly Radar questions the stability of Google’s user-trust in the context of the discontinuation of its syndication service Reader.
What posts such as these miss, however, is that it’s not about Platforms – it’s about Missions. Platforms are simply buttresses for missions. Google’s mission is *extremely* stable – its Mission Statement is the most brilliant mission statement of any company – it’s the most encompassing yet focused corporate mission statement ever written. Google is currently the 3rd largest VC in the United States. It can afford to put its hands on anything, walk away from it, and then pick it up later (either via acquisition or reclamation) if there’s a pertinent market interest in it.
Does it “get” Social? It doesn’t matter, because there isn’t much to “get”…other than creating a clean, well-lit place for people to be social – whatever the heck that means when people are separated by electrons and keypads. Besides, time will tell – the social media stunads might not care for it, and it may not be all that popular yet, but Google (unlike any other company) has plenty of time to grow the place. All that matters is data-collection for ad-placement upon *other* real-estate. Unlike every other social platform, G+ won’t need to insinuate advertisements into its users’ social streams (which would, in fact, make it a far more attractive social site once people get sick of algorithmic ads and sponsored nonsense on Facebook, Twitter, Pinterest, etc. – how’s that for user-respect and Social-as-a-service?). Google can afford to fund the Social game long-term because it’s the data that informs the revenue streams…not the LOLcats.
Google has the following: Mobile (Android), Cloudy Stuffs, Social, Search, Maps, YouTube, Chrome, Chromium, APIs out of their gourd, BigQuery (and Dremel), and a panoply of diversified assets to spiral out from its empiric center. It also employs some of the smartest people in the industry, and has equity in ventures that very few people know about across multiple verticals, including: Life Sciences, Payment Processing, Energy, Gaming, Business Intelligence, and Data Analysis (think also indirect benefits arising out of symbiosis with CIA/NSA).
No other tech company comes remotely close to the sheer size of cross-vertical power that Google possesses and expands. Google’s future has nothing to do with these little trinket toys like Keep or Reader (users may not see them that way, but in the big scheme of things, they’re just toys). RSS was a nice little scrape-script written when content on the web was becoming more dynamic. The times they have changed, however, and interfaces have matured (besides, what you are looking at when viewing social media sites is effectively RSS – FB, Twitter…they’re RSS with CSS and other scripting layered on top to tart things up).
As for Google’s willingness to risk losing millions of Reader users (and I am one of them), Google’s omniscient view of every single link clicked through to ads informs the value of its assets. Google Reader may have benefited tech and other blogs, but Google knows just how much tangible value all of that traffic actually did or did not render to its clients. Google is privy to these numbers – whether or not they themselves were stunads in terminating the service (either as a move to push users toward G+ or for some other reason) remains to be seen. But their decision leads to the essence of this post.
I’ve said it too many times – you own nothing on the web except your own domain. No company owes you anything. Google Reader isn’t yours, Facebook isn’t yours, Twitter isn’t yours, WordPress’ file-structure isn’t yours. The ‘self-hosted’ website you think is self-hosted isn’t yours unless you own the servers that echo content to visitors. And even that ownership is tenuous. This is the nature of the Web. It will always be this way.
In a sense, “Stability as a service” may sound like a good thing to have in our time. The Internet, however, breeds nothing but instability. Alan Watts, not Seth Godin, is the man to pay attention to in times like these.
Curious, then, that the ideologues of Open Source, Information-wants-to-be-free, Disruption-as-a-service, and the Social Media Revolution would be let down when the planks drop out from under their feet.
The question isn’t: “How will technology revolutionize the world?” The crucial question of our time is: “How will we revolutionize the technology?”
Google does what it does and will continue its shuffle of platform instability to serve its mission.
Complaining about the instability of a corporation’s data-traps isn’t terribly revolutionary, is it?
Then again, nor is this post.
But hopefully, at least, it got you to think a bit, regardless of how close to or far from the mark I am. And, in thinking a bit, you’ll think through the importance of selecting ideologies that subsume and command technology’s trajectory – and not the other way around.
Google products are free…of charge.
Democracy is not free. Its almost-impossible mission is to liberate us from our collective Stupidity-as-a-service.
Let’s not be stunads.
(Comments can be emailed – it’s more social that way.)
All these efforts to ease the torments of existence might sound like paradise to Silicon Valley. But for the rest of us, they will be hell. They are driven by a pervasive and dangerous ideology that I call “solutionism”: an intellectual pathology that recognizes problems as problems based on just one criterion: whether they are “solvable” with a nice and clean technological solution at our disposal.
In our time, questions concerning technology are becoming not only harder to answer, but harder to *ask*. “Harder” in the sense that we are becoming increasingly resistant to challenging the emerging ideologies about just what technology can do for us. The persistent problem we always face with technological change is that our thinking is delinquent – that is to say, we always live behind the times of technology, never quite catching up – no matter our power to imagine the future.
Among the many reasons for our delinquency, there is one today that is very disturbing – namely, the set of ideologies emerging largely out of Silicon Valley (both literally and figuratively).
The following are examples of these ideologies and claims:
The Quantified Self Movement
The Social Media Revolution
Real-time sharing of our lives
Healthcare Social Media
Big Data Business Intelligence
The Singularity Movement
It’s an endless list. It just so happens that in every single ideological category, someone stands to reap enormous benefits. And it just so happens that those “someones” are the Digital Architects.
The Digital Architects have their congregants, evangelizing the rapturous trajectory of technological angels. Were the evangelizing rooted in critical thinking and the noble cruelty of questioning, we would be in a stronger position to assess and shape the trajectory of our vital hopes.
But a common feature of these ideologies is that technology (or more typically *media*, which are different from technology, but that’s another topic of discussion) can now enable us to open-up our lives, share and exchange our data, and then use more technologies (e.g. digital algorithms) to “synergize” all of these atomic bits of our selves and release their stored energies.
You are a nuclear device! And your inner data-energies can be harnessed. (Horses are harnessed too…just a reminder).
Certainly, we have the ability to use technologies in meaningful, targeted, and purposeful ways to improve research, identify important hidden patterns, etc.
Having access to the number of steps you take or having an iPhone version of a Holter monitor has its benefits. But…that’s not Healthcare. No, we aren’t killed by what we know – we’re killed by what we *don’t* know. You can track all the data you want, but if you don’t know about that little thrombus that’s been fashioned out of those micro-vascular tears from all of that Cardio your iPhone app has succeeded in getting you to perform meticulously? That little thrombus doesn’t care about iOS.
The over-arching tone of these movements – and the digital architects programming them – is that we can achieve these improvements without doing the difficult dirty work.
This is the real danger we face in our time: surrendering our utter devotion to doing whatever it takes – regardless of the cost to ourselves – to achieve the ‘impossible’.
Our ancestors got here using technology. But they didn’t get here by surrendering to it nor by expecting it all to be painless. For them, life’s meanings always came out of the struggles against inner temptations and outer threats.
There’s no guarantee, however, that their descendants will be as fit as they were. Evolutionary bottlenecks ramify the universe.
The state of today’s technology didn’t arrive randomly – specific people made specific decisions at specific times. And these decisions shaped the course of technology. 160 characters in a text message was a decision. Hypertext Protocols were the result of specific decisions. Somebody had to create the Back Button (that person had a forward-thinking brain…some of his colleagues? Not so much).
Therefore, we ought not surrender ourselves to technological determinism, nor to the specific ideologies of technological decision-makers with their own world-views. Our technologies should complement our true selves – not desalinate them of their rich tonicity.
Bad ideologies have held our species back. Every single utopian ideology had its underbelly – and every underbelly was regressive, not progressive.
It’s not technologies which will usher in Holocaust 2.0. It’s the unvetted ideological agendas about those technologies, supported by superficial thinking, which will do that.
Technology works best when it enables us to do the hard stuff, not the easy stuff.
If I had to sum up the core problem with the philosophy of The Digital Architects of Painless Love, it would be this:
Imagine a world where everybody claims to be in a constant state of love, where there is no temptation to hate, there is no heartbreak, there is no risk of falling and hitting the pavement, no danger of tumbling into the abyss of unrequited love, no sweat-soaked sheets after making love.
A world of light without shadow.
This is the Painless Love promised by the Digital Architects.
Chutzpah without the chutzpah.
Love without its melancholy.
Dreams without alarming nightmares from which to wake up.