Now that artificial intelligence (A.I.) has evolved from the theoretical to the practical, it’s perhaps time to introduce an ethical framework for the technology. That’s the perspective of Google subsidiary DeepMind, which has just launched a research unit for that very purpose, dubbed “DeepMind Ethics & Society.” Researchers within the unit will study real-life applications of A.I. and figure out how to put ethical considerations into practice.

“At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes,” read the company’s announcement. “Understanding what this means in practice requires rigorous scientific inquiry into the most sensitive challenges we face.” The unit will also involve cross-disciplinary researchers from the humanities, social sciences, and other fields.

People have already been exploring the values and standards of A.I. for years. For example, OpenAI, a non-profit artificial intelligence research company, has tasked itself with shepherding an A.I. that’s friendly to humanity. As part of that effort, OpenAI conducts research, releases papers, and offers useful tools for A.I. experts and other tech pros.

The need for some sort of ethical framework grows as A.I. systems become more powerful. For example, Google is “all in” on A.I., baking its Assistant into its latest generation of hardware products; meanwhile, machine-learning algorithms increasingly power its software platforms. That potentially affects billions of people worldwide.

“Today, we overwhelmingly get it right,” Google CEO Sundar Pichai told The Verge. “But I think every single time we stumble, I feel the pain, and I think we should be held accountable.”

That’s quite a bit more moderate than the view held by Tesla CEO Elon Musk, who thinks that artificial intelligence could spark a global conflict. “China, Russia, soon all countries w strong computer science,” he tweeted in September.
“Competition for AI superiority at national level most likely cause of WW3 imo.”

Whether or not a group like DeepMind Ethics & Society produces ethical standards for A.I. that the technology industry as a whole adopts, it’s clear that this evolving technology will present a host of thorny conundrums for tech pros in coming years. A.I. offers the tantalizing possibility of solving a lot of technology and data problems—but if handled incorrectly, it could also cause a lot of pain.