Google had a tremendous keynote at its annual I/O conference. What the company is doing with augmented reality in Maps, and with continued conversation, multiple actions, and WaveNet for Google Assistant, is totally next-generation stuff. It's the kind of stuff that should keep Apple executives up at night figuring out what they need to change about their priorities and people, not just to catch up, but to avoid being left behind when this becomes the most important platform in history.

We need them to. You and I need them to. And if you want to know why — just look at Duplex.

Deconstructing Duplex

It was the main event. The show stopper. No doubt about it. A neural-network assistant able to interact with — and pass for — a human.

It was, from a purely technological point of view, amazing. Amazing. But it was also profoundly concerning.

Google mentioned the term AI over and over again. Not since Web 2.0 and To The Cloud have buzzwords been so buzzed. They mentioned AI so often it's almost like they were afraid we'd forget they were using it. What they didn't mention, though, was Google's responsibility in the face of all this AI. And what they absolutely didn't mention was privacy.

Not addressing corporate responsibility and privacy in today's climate isn't just absurd, it's borderline negligent.

In a time when Facebook is being hauled in front of Congress, and users of online services everywhere are finally waking up and wondering just how their data is being used and potentially abused, not addressing corporate responsibility and privacy isn't just absurd, it's borderline negligent.

If Google is going to show off such deep knowledge of us and our contexts that it can draft emails for us and make phone calls for us — in other words, act as us — it has to actively, emphatically, repeatedly tell us how it's respecting our privacy while it's doing it.

Privacy and responsibility should be repeated in each and every segment as much as AI. More. Not just for our benefit but for Google's benefit. Not just to assuage us, but to remind itself.

Because, right now, the way Google is handling it is terrifying.

Just take a look at the Google AI blog:

The Google Duplex technology is built to sound natural, to make the conversation experience comfortable. It's important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We'll be experimenting with the right approach over the coming months.

Experimenting with the right approach over the coming months? No. Sorry. You have to nail down the privacy first. You have to come to terms with the ethics first. Then you have to build in a way that respects the privacy and holds to the ethics. There's no way to retrofit it later.

Can do vs. should do

Contrast this with how Apple developed the AI-powered Face ID feature: Apple has a privacy team. That team was involved from the very beginning of the process, identifying every potential issue with regard to privacy and making sure it was addressed as part of the product development. And if Apple couldn't ensure the privacy, the product wouldn't ship until it could. That's how you take responsibility for your company and show respect to your users.

Rewatch the Google Duplex demo as well. Google Assistant never identifies itself as Google Assistant. Just the opposite, it does everything it can, including hemming and hawing as part of the conversation, in order to pass for human. That's a huge ethical problem and one Google doesn't even bother to acknowledge, much less attempt to address.

I mean, what better time to start the biggest conversation in the history of technology than during the biggest demo of the show? When better to stop being cocky about how often you can drop the term AI into what you're building, and start showing some self-awareness and humility about its potential repercussions for all of us?

Now, I get that passing as human is better UX. It avoids all sorts of potential issues on the other end of the line, including having to explain what Google Assistant is and risking the human simply hanging up.

But those are reasons, not excuses.

There's a real person on the other end of the line, not just another API endpoint.

So. Much. Gray.

There are major benefits to this technology, absolutely. For people with voice accessibility needs, it will be transformative. But there are negatives as well. For people with social anxiety, or who feel isolated, it will enable new levels of avoidance and regression.

Worse, we've already seen what Twitter and Facebook bots can do in terms of fomenting extremism, misinformation, and abuse. Technology like this is the gateway to those bots having a voice.

Sure, Duplex is tightly constrained to very narrow domains right now, but that's a processing limit that won't last. Despite what Google said, it isn't really working on this technology just to bridge the digital divide to businesses that haven't yet rolled out an online reservation system. That's a transitional situation that won't last out the generation.

This is going to be everything going forward. Google knows that. And Google needs to stop acting like it thinks we're too stupid to know that.

Google needs to address responsibility and privacy again and again, until we're tired of hearing about it. And then it needs to do it again.

Google needs to start the conversation. It needs to talk about its responsibility in all this. It needs to address privacy and ethics all the time, every time. It needs to address them again and again until Google's culture is steeped in them and we're all tired of hearing about it. And then it needs to do it again.

Because right now, the problem isn't the artificial intelligence, the problem is Google.

And Apple.

Absent Google giving any indication it cares about any of this, Apple needs to get its assistant shit together, and fast. This really is the future. Like I talked about previously, SiriOS will be the future. And if Apple doesn't take it seriously, doesn't make it a priority, and doesn't just catch up but start leading — ethically and morally, if not initially technologically — we'll have no privacy-respecting, user-centric options.

And that's bad for everyone.

Start the discussion

There's a spin on an old joke: When the last human is killed by the machines and goes to heaven, God meets them at the pearly gates. And the human says, "How could you let this happen?" And God says, "I sent you The Terminator. I sent you The Matrix. Those were warnings. What the hell are you doing here?"

Now, I don't think this is Rise of the Machines. I don't think this is Judgment Day. But I do think it's important that we talk about the issues with this technology now. So, I'd love to hear what you think. Is Duplex amazing? Is it terrifying? Is it a little bit of both? Where do you see this all going, and how do you see us getting there?