Just Ask Siri

Spectator
By Stephen Tuttle | Oct. 14, 2023

Those who believe artificial intelligence (AI) is a genie still struggling to get out of the bottle are sadly, perhaps even dangerously, wrong. AI is loose and already both helping and looking for trouble.

Let’s back up a bit. Just what is artificial intelligence? According to the Oxford Languages Dictionary, it is the theory and development of computer systems able to perform tasks that would normally require human intelligence, with some examples being visual perception, speech recognition, independent decision-making, and translation between different languages.

AI, at least in the abstract, has been around for a long time. According to Britannica.com, the first AI theories go back nearly nine decades to 1935 when Alan Turing—a British logician, computer pioneer and, ultimately, WWII cryptologist extraordinaire—discussed creating machines that could learn and expand their capabilities absent human intervention. Turing was part of a very small group of prescient thinkers who saw a future most did not.

He couldn’t get much beyond theory, because prior to 1949, computers could execute commands but could not store them; they could do what they were told but couldn’t remember to do it again.

(Turing’s personal story is the stuff of classic tragedies, but that’s a different column.)

We now recognize the first use of AI as a program written in 1951 by another Brit, Christopher Strachey. It enabled a computer to learn and improve its skill at checkers. Pretty basic stuff, but it was 72 years ago.

Huge advances continued through the 1970s, mostly unnoticed by the public, but telephone calls and advanced computer software were among many activities already being improved through early AI. The biggest impediment to more dramatic improvement in more areas was a hardware issue—computers simply lacked the power and memory capabilities to implement AI activities scientists could conjure up.

Now, however, though you might not recognize it, AI is all around us every day and is becoming more and more pervasive.

Examples? Your virtual assistants like Siri and Alexa are AI driven. They listen to you and catalog choices you’re making in order to learn everything about you they can. If your tastes change, so do theirs. It’s more than a little spooky.

AI is also widely used in e-commerce, search engines, customer call centers, fraud detection, facial recognition, autonomous vehicles, medical diagnoses, and the much-in-the-news chatbots like ChatGPT, just to name some of the more obvious applications.

Advocates believe AI will bring mostly good to us, improving our everyday lives and work environments as invaluable helpmates. AI is already being used to develop more effective drug therapies targeted at disease or tumor-specific DNA chains. All manner of word processing programs can now access AI, some fancy kitchen appliances do likewise, and our space program is full of advanced AI.

However, there are those who believe we have already allowed AI to get too strong a foothold and that it will be used for ill purposes by corrupt groups or individuals, not to mention various military applications. The real fear is we will make these things too smart and they will ultimately become sentient and recognize humans as the world’s biggest problem. After all, computers can already “talk” to each other and program and reprogram each other without human involvement.

A more likely scenario is the weaponization of AI, likely already well under way in China, Russia, Israel, and here in the U.S. at a minimum. There is a long history of the military finding a way to turn something otherwise useful into something deadly.

A primitive periodic table of the elements was first produced in 1869 by the Russian chemist Dmitri Mendeleev. That, and the ability to combine these chemical elements into useful compounds, was considered miraculous. By 1914, both France and Germany were using these “miracles” in deadly chemical warfare in WWI. Though subsequently deemed illegal, we know Hitler, Saddam Hussein in Iraq, and Bashar al-Assad in Syria all used chemical weapons. We, and plenty of other countries, regularly use tear gas and pepper sprays in riot situations, and that’s chemistry as a weapon, too.

(Biological warfare, by the way, goes way, way back. According to Britannica.com, the first recorded use of a biological weapon happened in 1347 when Mongol invaders actually catapulted bodies infected with the plague into the walled Black Sea city of Caffa. We are not guiltless—there is evidence 18th century colonists gave indigenous folks blankets that had been infected with smallpox, but the blankets were old, the disease was dead, and the scheme did not work.)

AI is already far too developed to be effectively regulated and impossible to stop. It will likely be used to greatly help and harm us. Whatever directions it takes, it will develop far faster than we imagine. Just ask Siri.
