Technology

Right and Wrong and A.I.

I live in the San Francisco Bay Area, where innovations in tech have been a defining feature of the region for decades. We’ve been told over and over that computers will change our lives, but it’s no great insight to say that change is difficult for us humans to navigate. Just think of the COVID-19 pandemic and how quickly the world (and our little slice of it) changed in the space of a week, as we went from a relatively open society to one where we were ordered to lock ourselves inside our homes to limit the spread of a virus we knew very little about.

At first, it was only going to be three weeks. Then three weeks became three months, and three months was pushed to six. With each extension of COVID protocols, people started losing their patience. Anti-masking, anti-vaxxing, and resistance to social distancing became ways of saying that, for some, they’d had enough and wanted to go back to the way things were. Many welcomed the false information pushed on us by politicians and even by social media bots, whose function is sometimes to amplify beliefs (even beliefs that lead to widespread sickness and death). Thankfully, most folks understood the dangers of dropping all safety measures in order to “get back” to what we had. Those who ditched their masks and returned to large gatherings found that some in their like-minded hive got sick and died in short order. Did the reality of thousands of deaths a day change behavior? Maybe for some, but for a vocal minority, their protestations were part of a larger movement to transform society into an ill-defined, less free order with elements of social Darwinism and political authoritarianism. For those who perhaps didn’t think through the consequences (intended or not) of their proto-fascist flirtation, questions of ethics likely never entered conversations about what they were doing.

Well, it appears we’re slowly getting back to pre-pandemic norms. It’s a dimmer switch of change: gradual, deliberate, and calibrated so we don’t get another deadly surge from an even more lethal variant. And that slow change may ultimately tamp down the nihilistic streak brought on by a new virus.

What does ethics have to do with COVID? Well, those of us who live in liberal democracies tend to make public health decisions based in part on the ethics of our society. What’s right and what’s wrong if policy X is implemented? Will policy X lead to Y outcomes, or Y minus 1? Sure, our thought processes are more nuanced and complex than a simple binary, but ultimately we do make these decisions based on principles of right and wrong. Protecting the health and safety of our population is deemed the right thing to do, while letting people take their chances and letting the chips fall where they may is viewed by a majority as the wrong policy to pursue. To be sure, that’s a simple calculation, but it’s not an incorrect one. What if, for example, we left those ethical questions to Artificial Intelligence? Would a computer reach conclusions similar to ours, based on all the risk factors a minority in society felt were worth taking with COVID? Perhaps. But perhaps A.I. would buck the majority opinion in favor of something wholly different, something neither side in the COVID protocol debate thought of, and one that’s truly dystopian. Would we accept the A.I. view? Or just note it as an option that was on the table? I’d hope we’d be smart enough to veto anything A.I. came up with that went against our core beliefs and ethical standards, but people can be fickle, just as computers can be ruthlessly logical. That’s why ethics needs to play an influential role in decisions large and small.

That’s why conversations like the one on a recent episode of The Ezra Klein Show featuring Sam Altman need to happen more often, and with a wider range of guests in the dialogue on A.I.

Sidebar: Altman is the CEO of OpenAI, the company that makes ChatGPT. ChatGPT is an A.I. bot that’s going to put a lot of writers out of business. So, in the future, this blog might be updated by feeding ChatGPT a prompt and letting it craft a 1,000-word post.

Both Klein and Altman are pretty bullish about A.I., seeing it not only as a liberator from mind-numbingly boring tasks, but also as a way to decrease the cost of professional services like hiring a lawyer or even a computer programmer. That will close off careers for those whose knowledge of those fields is replaced by A.I., a big change that might not go as planned. What are the ethical consequences of replacing a human lawyer, jury, or judge with A.I.? Certainly, a computer has the capacity to predict outcomes or render judgment based on the massive amount of data it can process in a short amount of time. A.I. is good at prediction, but can it learn nuance? Can A.I. learn empathy? Hatred? Vindictiveness? Love? It’s difficult to know, since those who are bullish on A.I. don’t seem to invest much time in addressing these questions. Those who are watching the level of change technology (and A.I. specifically) is causing understand that if we humans are to live with A.I. as a partner now and in the future, we cannot skimp on the friction that ethics will bring to the mix. That means elevating philosophers, sociologists, psychologists, and even political scientists from a reactive group we turn to when things go horribly wrong into proactive participants who are an integral part of the architecture of A.I.

Technologists have too often shown themselves to be solipsistic in understanding what their innovations can mean for the future, and some of that is because we have prioritized STEM majors while marginalizing the liberal arts and social sciences. If we’re on the verge of 10x change, our laws, our ethics, and our institutions won’t be able to manage the inflection point if they are not an integral part of this new matrix.