Open Technologies

Reining in Runaway Technology

It’s critical to understand what’s happening beneath the surface of all the technology platforms and instruments that run our world.

By Rob Schuham, co-founder of Undercurrent

I simply can’t “un-see” all that I’ve seen in this space: both the good and the lack of ethical foresight. I feel it’s critical to understand what’s happening beneath the surface of all the technology platforms and instruments that run our world. We cannot make the transition to a “Golden Age” without addressing the very real challenges we face now that machine learning and AI are already a powerful layer in our “life stack.”

Simply stated, automation is happening whether we like it or not. Machine learning and artificial intelligence, backed by supercomputers, are driving innovation at exponential rates. Moore’s Law has been left in the dust across many industries and sectors. This acceleration is enabling incredible innovation, yet at the same time it is creating new threats even more quickly.

On the positive side, many people can already experience the hands-on benefits of AI: self-driving cars, enhanced automation that eliminates tedious and dangerous jobs, medical applications that shrink the scope for error, faster digital assistance, more secure airports and other public spaces through facial recognition, and creative and innovation tools across many public and private sectors. Additionally, the United Nations has convened experts over the last several years to discuss and demonstrate the potential of AI to map poverty and aid with natural disasters using satellite imagery, assist the delivery of citizen-centric services in smart cities, and provide new opportunities to help achieve universal health coverage.

With regard to the pandemic we’re living through now, some version of surveillance and tracking will likely be implemented for mitigation purposes, including contact tracing, community and regional transmission rates, individual health monitoring via body-temperature and pulse-oximeter apps, and of course your own testing history. To wit, CNN reports on China’s reliance on mobile technology and big data: the Chinese government has used a color-based “health code” system to control people’s movements and curb the spread of the coronavirus. The automatically generated quick response codes, commonly abbreviated to QR codes, are assigned to citizens as indicators of their health status. Essentially, your daily routine is entirely dependent on a smartphone app. Leaving your home, taking the subway, going to work, entering cafes, restaurants, and shopping malls: each move is dictated by the color shown on your screen. Green: you’re free to proceed. Amber or red: you’re barred from entry.
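The gating logic CNN describes can be sketched in a few lines. This is purely illustrative: the real system’s criteria are opaque and government-controlled, so every status, rule, and function name below is a hypothetical stand-in.

```python
# Illustrative sketch of a color-gated "health code" as described above.
# All inputs, thresholds, and names are invented for illustration; the
# actual system's rules are not public.

GREEN, AMBER, RED = "green", "amber", "red"

def health_code(recent_positive_test: bool,
                exposed_to_case: bool,
                visited_outbreak_area: bool) -> str:
    """Assign a status color from tracked and self-reported data."""
    if recent_positive_test:
        return RED
    if exposed_to_case or visited_outbreak_area:
        return AMBER
    return GREEN

def may_enter(status: str) -> bool:
    """Green proceeds; amber or red is barred from entry."""
    return status == GREEN
```

The unsettling part is not the trivial logic but the inputs: every movement and test result feeding the function is collected continuously, and the citizen has no visibility into the rules.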

This appears to be proving somewhat effective against the coronavirus, although it’s hard to verify what’s actually reported. In fact, China just revised its Wuhan death toll upward by 50% under international pressure for truth and accuracy. Regardless, this is likely the future for countries that already rely on high levels of surveillance. Singapore last month launched a contact-tracing smartphone app that allows authorities to identify people who have been exposed to Covid-19 patients. The Japanese government is considering the adoption of a similar app. Moscow has also introduced a QR code system to track movements and enforce its coronavirus lockdown.

Apple and Google, along with other smartphone platforms, will soon roll out contact-tracing tools in the EU and U.S., but there are already tensions and disagreements around privacy. In countries where freedom of movement and thought is highly valued, the notion of ubiquitous surveillance is tricky, to say the least. The benefits of its use come at a steep cost. I value the necessary health benefits, but bristle at the thought of “big brother” and any kind of far-reaching controls or restrictions on my choices. And I know this sentiment is shared widely.

China sees it quite differently, and over the last few years it has been rolling out another massive surveillance program that ranks all its citizens by “social credit.” People can be rewarded or punished according to their scores. Like private financial credit scores, a person’s social score can move up and down according to their behavior. According to the Chinese government, “the system will use big data to build a high-trust society where individuals and organizations follow the law.” This illustrates the risks of heavy-handed enforcement of social conformity, government control, weaponization of personal data, and other civil-liberty violations. And in the case of Western countries that want to use technology for “beneficial purposes,” who is to determine what kind of surveillance is the right kind? And whose ideologies will it represent?
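The credit-score analogy above can be made concrete with a small sketch. The behaviors, weights, and score range here are entirely invented; the real system’s scoring criteria are not published, which is precisely part of the problem.

```python
# Hypothetical sketch of a score that moves up or down with behavior,
# analogous to a financial credit score. Every event name and weight
# below is invented for illustration only.

BEHAVIOR_WEIGHTS = {
    "paid_bills_on_time": +5,
    "volunteered": +10,
    "jaywalking": -5,
    "defaulted_on_loan": -20,
}

def update_score(score, events):
    """Apply a batch of observed behaviors to a citizen's score."""
    for event in events:
        score += BEHAVIOR_WEIGHTS.get(event, 0)
    # Clamp to an assumed 0-1000 range.
    return max(0, min(1000, score))
```

A mechanism this simple becomes consequential only because of what hangs off the score: travel, loans, and employment can all be gated on the output, with no appeal against the weights.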

Meanwhile, much has been written about the future of work and whether AI and robots will displace jobs. Estimates range widely. The World Economic Forum, for example, estimates automation will displace 75 million jobs but generate 133 million new ones worldwide. A 2019 Boston Consulting Group survey found that the use of advanced robots would reduce the total number of employees at manufacturers, although regional differences are evident in the results. Among survey participants from Asian companies, 56% expect the number of employees to decline by at least 5% within the next five years. This expectation was strongest among participants from Chinese companies: 67% of them expect the number of employees to decline by at least 5%, and 21% expect the reduction to exceed 20%. Fewer participants from North America (50%) and Europe (44%) expect a decline of at least 5%. Participants from most countries expect demand for white-collar workers to increase.

A conclusion to draw here is that there will indeed be some blue-collar job displacement in repetitive manufacturing tasks, but an increase in traditionally white-collar jobs. The challenge will be retraining workforces to take on roles that are more technical than physical. As the Boomer and Gen X workforces age out and a younger, more adaptable workforce ages in, there may be an equalizing effect.

In a “Golden Age” of capitalism, a higher value will be placed on liberating humans from backbreaking, repetitive tasks and migrating them to positions such as coordinating the machines on the line, or into other fields that workers personally find more meaningful. The person who was repetitively assembling kits and parts now oversees the robots or focuses on QA and technical troubleshooting. Workers may also move into coding and development work, or into vocations that leverage their imagination and passions. For example, we see a rise of craftspeople, similar to what one experiences on Etsy and similar marketplaces. The unlocking of creativity, combined with a higher value being placed on artisanal practices, is one of the many byproducts of a liberated workforce.

Let’s now turn to another serious technological threat: the gene-editing and splicing technology CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) and other genetic technologies, which can be easily redirected toward biowarfare and bio-engineered pandemics.

Some have called CRISPR a miracle: through gene editing and splicing, we may be able to correct genetic errors that cause disease, eliminate disease-causing microbes, make “designer babies,” resurrect extinct species, eradicate dangerous pests, and deliver other “benefits.” While these sound amazing to some, humans do not tend to do well when we play God. We are particularly adept at solving problems in ways that produce unintended consequences, which can lead to existential threats.

Yale Insights asked Dr. Greg Licholai, a biotech entrepreneur and a lecturer at Yale SOM, to explain CRISPR’s dangers. He said, “One of the biggest risks of CRISPR is what’s called gene drive, or genetic drive. What that means is that because you’re actually manipulating genes and those genes get incorporated into the genome, into the encyclopedia, basically, that sits within cells, potentially those genes can then be transferred on to other organisms. And once they’re transferred on to other organisms they become part of the cycle, then those genes are in the environment.”

A study ordered by the U.S. Department of Defense concluded that new genetic-engineering tools are expanding the range of malicious uses of biology and decreasing the time needed to carry them out. Rapid progress by companies and university labs raises the specter of “synthetic-biology-enabled weapons.” Among the risks of “high concern” is the possibility that terrorists or a nation-state could re-create a virus such as smallpox. That is a present danger, because the technology for synthesizing a virus from its DNA instructions has already been demonstrated.

Runaway technology is also deepening the mental-health crisis through AI-backed social media platforms that hold thousands of data points on you and can tailor divisive and destructive content accordingly. This has fed deep political and ideological divides in many countries, not to mention a direct impact on self-esteem that correlates with higher teen suicide rates.

Tristan Harris, founder of the Center for Humane Technology, whose mission is to reverse “human downgrading” and realign technology with humanity, says, “By shaping the menus we pick from, technology hijacks the way we perceive our choices and replaces them with new ones.” He also says, “Our addiction to social validation and bursts of ‘likes’ would continue to destroy our attention spans. Our brains would still be drawn to outrage and angry tweets, replacing democratic debate with childlike ‘he-said, she-said’. Teenagers would remain vulnerable to online social pressure and cyberbullying, harming their mental health.”

Further, according to Amar Ashar and Sandra Cortesi of the Harvard Business School Digital Initiative, there is a very real risk that, without thoughtful intervention, AI may in fact exacerbate structural, economic, social, and political imbalances, and further reinforce inequalities based on demographic variables including ethnicity, race, gender and sexual identity, religion, national origin, location, age, and educational and/or socioeconomic status. Issues of exclusion and bias within AI include facial recognition systems that reinforce structural biases by failing to read certain skin types and genders. Scholars are showing that data discrimination may further oppress marginalized and underrepresented groups, and it has been demonstrated that automated systems may reinforce inequality and bias in unintentional or less visible ways.

AI is creating harmful effects across many technological applications and sectors, including costly cyberattacks. According to Security Magazine, the next generation of computer technology, quantum computing, will be able to crack in mere hours or minutes encryption that would have taken traditional computers millions of years. Quantum-computer-backed AI will allow bad actors to build malware that can change both its form and purpose. Attackers will use this artificially intelligent malware to find new ways into a government’s or an organization’s network and disrupt its operations. Mission-critical information assets will be targets for compromise, all without detection.

The UN has identified the need to achieve transparency and explainability in AI algorithms. Simply put, AI will be a layer in virtually every tech, medical, manufacturing, supply-chain, education, transportation, and, eventually, governance stack moving forward. It is important enough that, as we make truly thoughtful decisions about what the future of capitalism looks like, we will have to address this vector meaningfully and with great attention to unintended consequences.

The train has largely left the station, but we will likely need to slow or redirect it once we truly grasp the significant potential for externalized harm. As Robert Oppenheimer, father of the atomic bomb, said: “It is a profound and necessary truth that the deep things in science are not found because they are useful; they are found because it was possible to find them.”

AI is indeed in its “Oppenheimer moment”: we are at a juncture where we can choose to adjust our path and alter technologies whose harms exponentially outweigh their benefits. I am hopeful that as we move toward our potential “Golden Age,” we will collectively make decisions in favor of overall wellbeing.