AI: Finding a safe path

We live in a nation whose economic engines are driven by capitalism. It’s high-octane fuel that requires, we’ve learned over time, some regulation. Champions of capitalism point to its ability to foster innovation and excellence through competition. The more mistrustful worry about mantras like “Greed is good.”

Artificial intelligence is the new hypersonic commodity, and the corporate rush to make good is like no other in the history of mankind. So the question is how to govern this super-novel economic wave in ways that benefit humanity and maximize its potential while preventing its potentially damaging, even existential, effects.

As a father and grandfather, I found my recurring thoughts about AI rekindled when I read about Dario Amodei, CEO of an AI company called Anthropic. Amodei was the subject of an article in Time magazine by Billy Perrigo entitled “AI Safety is About More Than Business,” which suggested to me how this company’s work must weigh offsetting obligations: its primary profit obligation to investors against essential safety and security concerns for humanity.

Anthropic was leading the way in the development of frontier AI, discovering that the secret to training better-performing generative AI systems lay in infusing “more data and computing power” rather than relying on new algorithms. Anthropic’s creation, called Claude, preceded OpenAI’s public introduction of ChatGPT by several months. The difference in timing came down to ethics and safety. Billions of dollars in investment were at stake, but Anthropic paused out of concern for unintended consequences, “opting instead to continue internal safety testing,” and lost billions. In Amodei’s words, withholding Claude was an intentional “commitment to prioritize safety over money and acclaim.”

Amodei wanted Anthropic to model a “race to the top on safety,” not a race to the bottom for the biggest pot of gold.

Willy-nilly, it is a race. Besides Claude and OpenAI’s GPT, there are Google’s DeepMind and Apple Intelligence. Policymakers lean on Anthropic for policy advice because its record on ethics and safety is more credible. Competitors mouth the same safety concerns, but their actions don’t reinforce what have come to be known as “responsible scaling policies.”

Amodei cautions that as these advanced generative AI companies go into hyperdrive to lead the competitive pack, “Researchers seeking to assess if an AI is safe chat with it and then examine its outputs. But that approach fails to address the concerns that future systems could conceal their dangerous capabilities from humans.” That sounded to me like the implicit warning of the movie “2001: A Space Odyssey,” in which the shipboard computer HAL takes control of the spacecraft and refuses to let astronaut Dave back aboard, leaving him to asphyxiate in space. When Dave appeals to the computer, HAL’s cold, controlled response is, “I’m sorry, Dave. I’m afraid I can’t do that.”

The godfather of artificial intelligence is Geoffrey Hinton, a British computer scientist and winner of the Turing Award (the equivalent of a Nobel Prize in computing). In a recent interview with CBS’s Scott Pelley, he praised the potential of AI, especially with respect to health care, but he also warned of its potential dangers, cautioning that “AI may be more intelligent than we know.”

Hinton, a pioneer of computerized neural networks, described how these machines of artificial intelligence learn. AI features layers of software, with each layer tasked with solving part of a problem. When the machine makes a mistake, that feedback is sent back down through the software layers for correction. When it succeeds, the success is added to its own program. Either way, it learns, sometimes writing its own software code. When Pelley posed the question, “So human beings will eventually be the second most intelligent beings on the planet?” – Hinton paused and then said emphatically, “Yes.” – Pelley: “You believe these systems have experiences of their own and can make decisions based on those experiences?” – Hinton: “In the same sense as people do, yes.” – Pelley: “Will they have self-awareness, consciousness?” – Hinton: “Oh yes, I think they will in time.”
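What Hinton describes is, in essence, the error-feedback technique he helped popularize, known as backpropagation. As a purely illustrative sketch (the XOR task, layer sizes and learning rate below are my own assumptions, not from his interview), here is a toy two-layer network in Python that learns by sending its output error back down through its layers:

    import numpy as np

    # Toy example: a two-layer network learning the XOR problem.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(size=(2, 4))  # weights: input layer to hidden layer
    W2 = rng.normal(size=(4, 1))  # weights: hidden layer to output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass: each layer solves part of the problem.
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)
        # Backward pass: the mistake at the output is sent back
        # down through the layers, the feedback loop Hinton describes.
        err_out = (out - y) * out * (1 - out)
        err_h = (err_out @ W2.T) * h * (1 - h)
        # Correction: each layer adjusts its own weights, i.e. it learns.
        W2 -= h.T @ err_out
        W1 -= X.T @ err_h

    print(out.round(2))  # approaches [0, 1, 1, 0] as the network learns

Scaled up from a few dozen weights to billions, this same feedback loop is what lets systems like Claude and ChatGPT improve with more data and computing power.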

Overlaying the potential and the risks, Hinton spoke of AI’s ability to read MRIs as well as or better than radiologists, and its ability to design drugs, perhaps even better than the scientists. But “the risks are that we may have a whole class of people who are unemployed… fake news, and autonomous battlefield robots.”

Building in the safety mechanisms to limit the risks of unconstrained AI has to be calibrated to fit our sense of human values. But if artificial intelligence becomes the determinant of future global prowess and power, how do we balance our competitive impulse to stay ahead against rivals like Vladimir Putin or Xi Jinping, who may have few if any scruples if it means gaining an advantage over the United States? Jack Clark, co-founder of Anthropic, has said that “It would be a chronically stupid thing for the U.S. to underestimate China on AI.”

This is a critical place for education. Decades ago, civics became a mandatory course in many of America’s public schools (38 states) for obvious reasons. I would argue that a basic course covering artificial intelligence and the risks associated with social media is fundamentally essential to the knowledge base of America’s rising generation. The syllabuses of these general courses should simply help middle and high school students understand the basics of these 21st-century technologies: the prospects, the risks and, in the case of social media, the cautions related to misuse, misinformation, disinformation and social media influencers.

When questioned about an ethical and safe path forward, Hinton seriously raised my level of concern. “I can’t see a path that guarantees safety. We’re entering a period of great uncertainty, dealing with things we’ve never dealt with before. And normally the first time you deal with something totally novel, you get it wrong.” – Pelley: “Taking over from humanity?” – Hinton: “Yes, that’s a possibility.”

Bill Sims is a Hillsboro resident, retired president of the Denver Council on Foreign Relations and an author; he runs a small farm in Berrysville with his wife. He is a former educator, executive and foundation president.
