When We Talk A.I., Should Ethics Take Top Priority?
As artificial intelligence (A.I.) continues its push into the mainstream, developers and users will find themselves on the same side of at least one topic: ethics. Every large tech company working on A.I. has its own platform, bespoke intellectual property, and corporate ethos. That's the crux of the issue: without a singular platform or guiding principle, we're all subject to the whims and impressions of the companies giving us these different tools.

One entity, OpenAI, is providing a sort of working concept for ethical artificial intelligence. A non-profit research group, it operates on the premise that artificial general intelligence (AGI) might be dangerous for humankind, even though it "will be the most significant technology ever created by humans." You could draw the conclusion that OpenAI is most concerned with the Singularity, the concept that AGI will become so powerful it will spawn "artificial super-intelligence" and begin to self-replicate, taking human beings out of the equation completely. To some, this is a trigger for the end of mankind. It's all very "Terminator."

Google's Duplex, recently unveiled at the company's annual I/O conference, triggered some of these fears. A sophisticated A.I. bot, Duplex can call restaurants and hair salons, speak to humans, and make appointments and reservations. It even says "um," like a real human being. At first blush, it seems as though Google is trying to be the first company to have its software pass the Turing test, which effectively means an A.I. is indistinguishable from a human. This hits at the core of ethical A.I., too: should we be aware we're speaking to bots, and should artificial intelligence be this effective?

Amazon is approaching things from a different angle. Though Alexa is a hugely popular platform, Amazon has been caught storing voice searches and queries, which means (theoretically) that your searches are personally identifiable. This has already presented legal entanglements.
(A 2016 Wired article suggested that Google also stores such searches.)

[Image: A PepperBot robot, representing artificial intelligence]

Cortana is Microsoft's A.I. bot, largely aimed at customers who wish to peer deeper into Outlook or other Microsoft services. Apple's Siri, made immensely popular by the ubiquity of iPhones, rounds out the field.

One major differentiator between these companies (and their platforms) is the approach to privacy. Apple CEO Tim Cook and Microsoft CEO Satya Nadella are on record as saying that data privacy is a fundamental human right. Google and Amazon have made no such distinction.

The Singularity lies in the far distance (if it ever happens), but the ethics of artificial intelligence and data privacy are topics that should occupy us today. Without tech companies agreeing to abide by a single set of rules (such as OpenAI's), we're subject to competing interests. Google is an ad business; Amazon wants to sell you goods and services. Apple and Microsoft are more interested in getting money upfront for hardware and software, then providing A.I. as a service.

No laws govern A.I., either by force or agreement. This lack of governance even affects OpenAI: Elon Musk, an open critic of the Singularity and a founder of OpenAI, quit its board when one of his companies (Tesla) became more absorbed with its own A.I. efforts (Autopilot).

A lack of oversight is low-hanging fruit for bad actors, too, some say. The Malicious A.I. Report notes how artificial intelligence is easily exploitable by rogue states, terrorists, or other criminals. These exploits are made possible by companies placing stronger emphasis on return on investment than on pragmatically iterating on A.I. security.

We don't even know if developer support for A.I. is lucrative, fun as it may be.
Amazon still doesn't report how much Alexa skills pay, and Google Assistant isn't very reliant on third-party developer support. Apple has throttled what developers can do with Siri, which isn't subject to direct monetization, either.

Today, artificial intelligence lives mostly in phones, tablets, computers, and voice-first in-home speakers, but it's fast becoming ubiquitous. Rather than focus on which bot does the most (which doesn't yet affect your bottom line as a tech pro), we must focus on which company's ethics align with our own. At some point, the tech industry will also need to wrestle with a generalized "set of ethics" for A.I.