“Answer”

Here at Better Questions Than Answers we specialize in questions. Today, however, an “Answer”: a short-short story by Fredric Brown (1906–1972). Here is the entire story, with one small modification:

 

Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing. 

He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe — ninety-six billion planets — into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies. 

Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment’s silence he said, “Now, Dwar Ev.” 

Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel. 

Dwar Ev stepped back and drew a deep breath. “The honor of asking the first question is yours, Dwar Reyn.” 

“Thank you,” said Dwar Reyn. “It shall be a question which no single cybernetics machine has been able to answer.” 

He turned to face the machine. “Is there a God?” 

The mighty voice answered without hesitation, without the clicking of a single relay. 

“There is now.”

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch. 

A bolt of lightning from the cloudless sky struck him down and fused the switch shut. 

(Fredric Brown, “Answer”)

 

The modification: at a friend’s suggestion, “There is now” replaces Brown’s original closing line, “Yes, now there is a God.” We thought the change more dramatic. Improvement?

Is Superintelligent AI inevitable? What’s the time scale? 10 years from now? 100? 1000? 

How will Super-AI choose its purposes? Your laptop does not care if you turn it off. You and I do care about being turned off. Will Super-AI care and take any steps necessary to stay on? Why should it? Because “on” is intrinsically better than “off”?

Brown’s “Answer” is mentioned in a Scientific American column by Christof Koch, “Will Artificial Intelligence Surpass Our Own?” (https://www.scientificamerican.com/article/will-artificial-intelligence-surpass-our-own/), discussing Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies. Koch thinks the danger lies in the unpredictability of Super-AI:

What concerns Bostrom is the unpredictability of what might happen when the technology starts edging toward acquiring the capabilities of a strong AI that takes its goals to extremes never intended by its original programmers. A benign superintelligence that wants nothing but happy people might implant electrodes into the brain’s pleasure centers, to deliver jolts of pure, orgasmic gratification. Do we really want to end up as wire-heads? And what about the innocent paper-clip-maximizing AI that turns the entire planet and everything on its surface into gigantic, paper-clip-making factories? Oops.

Given humanity’s own uncertainty about its final goals—being as happy as possible? Fulfilling the dictum of some holy book so we end up in heaven? Sitting on a mountaintop and humming “Om” through nostrils while being mindful? Colonizing the Milky Way galaxy?—we want to move very deliberately here.

The danger lies in the uncertainty of “final” purpose. Humans don’t know our own, and therefore Super-AI cannot learn it from us. It might make up anything at all.
