There is little consensus on the future of artificial intelligence. But that hasn’t dampened the euphoria over it. Nearly 400 million users — more than the population of the U.S. — are estimated to have adopted new AI applications over the last five years, with an astounding 100 million rushing to ChatGPT in the first 60 days after its launch. Most would likely have been more deliberate in purchasing a new microwave oven.
Technology is undoubtedly improving the quality of our lives in innumerable and unprecedented ways. But that is not the whole story. AI has a dark side, and our futures depend on balancing its benefits with the harms that it can do.
It’s too late to turn back the clock on how digital technologies have eviscerated our privacy. For years, we mindlessly gave away our personal data through web surfing, social media, entertainment apps, location services, online shopping and clicking “ACCEPT” boxes as fast as we could. Today, people around the globe are giddily scanning their irises in World (formerly Worldcoin) orbs, the brainchild of OpenAI’s Sam Altman, handing over unprecedented personal data in return for the vague promise of being able to identify themselves as humans in an online world dominated by machines. We have been converted into depersonalized data pods that can be harvested, analyzed and manipulated.
Businesses and governments eventually realized that they no longer needed to go through the charade of asking permission to access data — they could simply take what they wanted or purchase it from someone who already had it. Freedom House reports that, with the help of AI, repressive governments have increasingly impinged on human rights, causing global internet freedom to decline for each of the past 13 years. Non-democratic nations are learning how to use AI as a weapon of mass control to solidify political power and turn whole classes of people into citizen zombies.
To understand where we are going, we must first appreciate where we have been. Humans have always dominated other animals, even though many animals are stronger and quicker. The difference maker has always been human intelligence. But with certain aspects of that superior intelligence now being ceded to machines, could humans eventually become answerable to a higher level of non-biological intelligence?
The threat of machine dominance is not new. In Stanley Kubrick’s 1968 film “2001: A Space Odyssey,” the congenial computer HAL eventually turned on its human handlers because they had become roadblocks to the completion of its mission. In a story that may be apocryphal, it has been said that during the Navy’s use of AI in war-game simulations, the program sank the slowest ships in the convoy to ensure that it reached its destination on time.
Any reasonable version of the future must also consider that, to the extent that humans are the product of millions of years of evolution, they may not represent the end point of that process. Primates may have seemed like that end point 6 million years ago, but it didn’t work out that way for them. The future that futurist Ray Kurzweil described in 2005 includes the merger of biological and non-biological intelligence. Perhaps shockingly, half of surveyed AI experts go even further, believing that there is at least a 10 percent chance that intelligent machines will lead to human extinction. AI doomsday clocks are counting down to the day when AI makes all of our decisions, even as brain-computer interfaces are being studied and implanted.
We shouldn’t need these catastrophic predictions of the future to encourage us to act. For all its positive contributions, AI is already facilitating vast criminal conspiracies perpetrating deepfakes, cyberattacks, the distribution of child sexual abuse material, money laundering, and elaborate new ways of committing violent crimes and acts of terror.
We all want to believe that a dangerous parade of AI horribles won’t happen because governments are on the job and AI entrepreneurs have made solemn pledges that their AI will be benevolent. Many industry leaders even signed a letter calling for a six-month pause in advanced AI development. Based on what has followed, we would be wise to be skeptical of such assurances and promises.
Policymakers don’t seem concerned about the dark side of AI. So far, every administration and Congress has stood back and allowed an AI arms race that no one bargained for to play out. In this version of the future, we are left to hope that as AI capabilities proliferate, the good guys and mutually assured digital destruction will keep things in balance.
Democratic nations must pursue an alternative and come together to establish international accords — a tech version of the 1944 Bretton Woods Agreement, in which 44 nations agreed on a system of global monetary management and commercial relations. These accords would begin with international controls and regulations to secure the internet and create effective forms of online governance, including oversight infrastructures focused on enhanced authentication, digital hygiene, enforcement, and the imposition of liability to make everyone responsible for the technology they deploy. Noted AI leaders have made similar recommendations.
As AI regulatory infrastructures and processes expand, AI innovation could admittedly slow. As surveillance increases to optimize security, there may be tradeoffs in personal freedoms. But it is better to slow innovation so that the threats can be better understood and compromises on personal freedoms can be reached rather than permitting unfettered AI to fast-track us to an end point where freedom may be an illusion and humanity is no longer in charge. If other nations want to race forward to that point, let them.
The futures of our children and grandchildren depend on smartly harnessing the power of AI, particularly given that China is already surging ahead in the race for AI, quantum and 5G dominance. While we are still capable of unplugging intelligent machines, the president and lawmakers must make sure that the United States achieves AI dominance first, and then leads a global effort to establish AI standards that leave humans and democracies in control.
Thomas P. Vartanian is the executive director of the Financial Technology and Cybersecurity Center, and the author of “The Unhackable Internet.”