Editorial

Theology in the 21st century – the robots are coming

May and June are commencement season at schools across the United States. From high school diplomas to newly minted doctorates, this is a time of transition. At a recent graduation ceremony, one of the schools honored its theology graduates. Who would study theology in 2019? Beyond the obvious answer of priests and poets, there is significant demand for theologians in the modern workplace. You see, the robots are coming.

Artificial Intelligence (AI) is moving at a fast pace, and Moore's Law provides a rough yardstick for how fast it will improve: performance doubling roughly every 18 months. The idea of AI achieving singularity and becoming self-aware is moving from the pages of science fiction into the realm of science fact. Not this year, not next, likely not even in the next decade, but within our lifetime it seems almost a certainty.
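For scale, here is the arithmetic behind that doubling claim, a minimal sketch that assumes a 30-year horizon as the "lifetime" in question:

```python
# Compound "double every 18 months" over 30 years.
months = 30 * 12
doublings = months / 18          # 20 doublings
growth = 2 ** doublings          # 2^20
print(f"{doublings:.0f} doublings -> {growth:,.0f}x today's performance")
# 20 doublings -> 1,048,576x today's performance
```

A million-fold increase over 30 years is the kind of curve that turns science fiction into science fact.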

Today, AI touches us in many ways, going far beyond the sales and marketing algorithms that display ads for an item 30 seconds after you browsed for said item on Amazon (or eBay, or clicked a Facebook ad, or…). Some AI is simply the computerization of long-established formulas, such as credit evaluation, the CLUE report behind your insurance quotes, or actuarial tables. But as computing power doubles every 18 months, and our ability to process the mountains of data collected over the last three decades gets better, AI is moving to replace a lot of human decisions.

AI is already being used by IBM to determine who will likely quit their job, with 95% accuracy. Yes, a whole 95%! That means one out of every twenty people is misidentified by the AI and either pushed out the door without real due cause or given an incentive to stay when there was no reason to provide one. So if you end up on the AI's "this person is on a quit-job trajectory" list, you have a 5% chance of your career being ruined or enhanced depending on the decision of an AI. Just let that sink in for a little bit.
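Here is what that error rate looks like at company scale, a minimal sketch assuming a headcount of 350,000 (roughly IBM's) and treating accuracy as a flat 5% misclassification rate, a simplification that ignores base rates and the false-positive/false-negative split:

```python
# 95% accuracy means 5% of predictions are wrong.
employees = 350_000                      # assumed headcount, for illustration only
accuracy = 0.95
misidentified = int(employees * (1 - accuracy))
print(f"{misidentified:,} people misclassified")  # 17,500 people misclassified
```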

There are billions of dollars, if not tens of billions, being invested in developing Level 5 autonomy for cars and trucks: a fully autonomous, four-season, any-road, any-condition self-driving vehicle. Contrary to what some cheerleaders are declaring, we are still years away from this goal. Tesla has already missed preannounced goals for self-driving cars, self-driving test vehicles have killed people, and Tesla vehicles still struggle to identify potential risks like 18-wheelers in the road. Not a single autonomous system that exists today can drive on a snow-covered highway like the ones we had in February in the Puget Sound region. Without the ability to see the lane markers, a self-driving car can't pick a course on a multilane road. However, that is only the beginning of the challenges.

As an example, let's say I'm sitting in my fully autonomous car, and it is taking my future elderly self to a doctor's appointment. So far, this is pure goodness, as I still have my independence. Now let us introduce little Jane. Jane is playing with a ball. Jane rolls the ball to Dick. Dick misses the ball. Run, Dick, run! Little Dick runs right in front of my autonomous car, which is traveling at 40 MPH, the posted speed limit on this road that goes by a park. In the oncoming lane is an 18-wheeler, also autonomous and driving at the 40 MPH posted speed limit. Now the AI has to make a choice. Using cameras and sensors better than the human eye, it calculates the distance from the front bumper to little Dick faster than a human ever could. It runs a formula, one only as good as the lowest-paid programmer who worked on that particular code, that considers road conditions, angle, lighting, and vehicle condition, and concludes there is no way it can stop in time without running over Dick at a speed of 21 MPH.
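To make that concrete, here is a minimal sketch of the braking physics such a system might run. The 16.5-meter gap and the 7 m/s² deceleration are assumptions chosen to reproduce the 21 MPH figure; a real perception-and-planning stack models far more than this:

```python
MPH_TO_MS = 0.44704  # meters per second per mile per hour

def impact_speed_mph(initial_mph: float, gap_m: float, decel_ms2: float) -> float:
    """Speed at impact after braking across gap_m meters; 0 if the car stops in time."""
    v0 = initial_mph * MPH_TO_MS
    v_squared = v0 ** 2 - 2.0 * decel_ms2 * gap_m   # v^2 = v0^2 - 2ad
    return max(v_squared, 0.0) ** 0.5 / MPH_TO_MS

# A child ~16.5 m ahead, hard braking at ~7 m/s^2 on dry asphalt (both assumed).
print(f"{impact_speed_mph(40.0, 16.5, 7.0):.0f} MPH")  # 21 MPH
```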

A Waymo self-driving test car (Wikimedia Creative Commons). These little pods used to ply the streets of Kirkland and had issues with stop signs.

Now the AI runs another subroutine to determine the best outcome. Does the car swerve right into the park, brake hard and hit little Dick anyway, or dive left and crash head-on into the 18-wheeler? As part of that subroutine, it weighs my old age against Dick being seven years old, calculates the risk of injury to Dick versus me, and then calculates the severity of the potential injury and the quality of remaining life. The AI concludes Dick is a male child of about seven based on what its sensors see, run through a comparative database. Based on those calculations, the autonomous car makes its choice and commits suicide into the 18-wheeler traveling at 40 MPH.

It calculated I had an 82% chance of surviving the accident, while Dick had only a 3% chance of being hit by flying debris if it swerved into the truck. It calculated it could slow to 21 MPH before hitting the 18-wheeler, for an equivalent impact of 61 MPH. It calculated there was a 13% chance my injuries would be fatal, short or long term, but that Dick would be more severely injured, and with a greater disability, if it elected to run him (and the ball) over. It also took this as the safe bet because Dick, being a pedestrian and a flawed human, had a 38% chance of doing something not considered by the subroutine.
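A decision subroutine like that could reduce to something as banal as the sketch below. The probabilities are the ones from the scenario; the severity weights and the scoring formula are invented for illustration, and that is precisely the point: someone has to choose them.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p_fatal_occupant: float  # chance the occupant's injuries prove fatal
    p_injury_child: float    # chance the child is injured at all
    child_severity: float    # 0..1 weight for how badly the child fares (invented)

options = [
    Option("brake and hit the child", 0.00, 1.00, 0.9),
    Option("swerve into the 18-wheeler", 0.13, 0.03, 0.2),
]

def expected_harm(o: Option, child_weight: float = 1.0) -> float:
    """One possible scoring rule: occupant fatality risk plus weighted child harm."""
    return o.p_fatal_occupant + child_weight * o.p_injury_child * o.child_severity

print(min(options, key=expected_harm).name)  # swerve into the 18-wheeler
```

Change child_weight and the "right" answer flips. Whoever picks that number is the one playing God.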

The auto industry that builds these cars got protection from Congress years earlier, making it impossible for me to sue over the AI's decision. The self-driving car was owned by a car-sharing service; private vehicle ownership became an anachronism years ago. File this under shit happens. Unfortunately, due to prior abdominal surgeries from when I was younger, I suffered uncalculated internal injuries and died from complications eight months later. The AI decided that little Dick's life was worth more than mine.

If you think this is the stuff of fantasy and fear, spend some time talking to people working on autonomous cars. We may reach a place where all cars are autonomous (that will take decades), but you still can't account for pedestrians, cyclists, and animals. The AI for a self-driving car has to consider all these options, because drivers around the world face the same snap decision: run over little Dick, crash head-on into oncoming traffic to save him, or swerve off the road and hope not to kill anyone else. A truly autonomous car has to make these decisions. Do you want an AI to decide whether you live or die? Can you say you would never run over little Dick in front of Jane? You can't, because you'll never know how you'll react until you're in the crisis, but you have the free will to respond. Some will argue that the AI can make a cold, emotionless calculation faster, which makes it better than a human.

This is why we need theologians, and why there is demand for them in the development of AI. As our technology defeats "God," we are programming computers to play God. AIs are determining the potential of human beings today, and not just financially: educationally, in work-achievement potential, and more. But those cold, perfect, instant decisions, calculated using thousands of known and proven data points, are only as accurate as the worst computer programmer who worked on the project and the quality of the data used.

We see a world turning its back on religion, and a significant minority screaming louder out of fear as we become more godless. I am not arguing whether this is good, bad, or indifferent. The profoundly religious are wringing their hands over abortion, gay marriage, gender fluidity, and other wedge issues that have no real intrinsic value beyond dividing us more deeply along political lines. Meanwhile, there is a growing belief in corporate America and among technologists that computers can replace humans when it comes to making decisions, and the religious are almost silent on the matter. We can use AI, the argument goes, because computers and artificial intelligence can do it faster, without emotion, and consider more data points than a human ever could. All at the performance level of the worst programmer to work on the code.

So if we abdicate human decisions to AI, do we extinguish the essence of what makes us human: free will? Do we risk labeling humans the second they are born, based on thousands of data points about their potential? In this future world where no one takes a risk, does a different AI decide that because Dick suffered trauma at seven years old, witnessing a fatal motor vehicle accident, there is a 23% chance he has PTSD that could impact his work product, so the Schmectel Corporation won't hire him? And because Schmectel won't hire him, he can't get into Super Amazing University, though Amazing University will take him? If you think that is far-fetched, one only has to look to China and the social credit scores it is rolling out. And yet another AI could decide that if Dick complains about not getting into Super Amazing University, he shouldn't get into Amazing University either, because the AI predicts there is an 8% chance he will be disruptive at the university.
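A minimal sketch of that cascading-screening future, using the hypothetical institutions and risk scores from the paragraph above; the thresholds are invented, and none of this is a real system:

```python
dick = {"ptsd_risk": 0.23, "disruption_risk": 0.08}

def schmectel_hires(c) -> bool:         # rejects anyone over a 20% PTSD-risk score
    return c["ptsd_risk"] < 0.20

def super_amazing_admits(c) -> bool:    # requires an employer willing to hire you
    return schmectel_hires(c)

def amazing_admits(c, complained: bool = False) -> bool:
    threshold = 0.05 if complained else 0.10   # complaining tightens the screen
    return c["disruption_risk"] < threshold

print(schmectel_hires(dick))        # False: 23% PTSD risk exceeds the cutoff
print(super_amazing_admits(dick))   # False: no job, no admission
print(amazing_admits(dick))         # True:  8% is under the 10% bar
print(amazing_admits(dick, True))   # False: he complained, so the bar drops to 5%
```

One early flag propagates down the whole chain, and no human ever reviews it.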

Wikimedia Creative Commons. Boston Dynamics' "robot dogs" are the stuff of your science fiction fantasies, pulling Santa's sleigh, or your science fiction nightmares, if they are out to slay.

We need theologians more than ever. Some of the most amazing minds alive today consider AI the biggest threat to humanity as we know it. Where does the human race go when the singularity is achieved? If those subroutines and algorithms are flawed, an AI could start assessing the threat humanity represents and begin making…decisions to protect itself. Never forget that another critical part of being self-aware is the need for self-preservation. Oh, pish-posh, you say, the Three Laws of Robotics written by Isaac Asimov would prevent that from happening. One can look to I, Robot or 2001 for examples of what happens when an AI receives conflicting instructions. At least the Matrix will almost certainly not happen; in reality, we would make terrible batteries to power the machines.

Think about it.

Malcontent, out.
