Creative Retirement -- Friday Roundtable
"SMARTER THEN US" -- about June 2015
Artificial Intelligence
A summary of a book about Artificial Intelligence published by the Machine Intelligence Research Institute.
This is a single-issue, American counterpart to Oxford's Future of Humanity Institute, which addresses all threats to the continued existence of mankind.
This draft is from an email to a correspondent. It may be put into proper editorial form sometime.
My commentary is in [green].
Define intelligence not by what it is (a self-aware soul, consciousness), but by what it does.
People thought that winning a chess game would prove intelligence. Now that it has happened, who cares -- it doesn't have intelligence, it just looks at all the possible choices and picks the best. It does not think like we do, but it does better at chess, Jeopardy!, etc.
Once computers can do something, they can perfect it -- speed, accuracy, efficiency.
We already have something that can perform like a human. It is called people.
Once computers are assigned to interact with humans, they will perfect that, too. Imagine if your every response in a social situation were the result of a solid year of thought, so as to respond properly with maximum effect and influence. It would be like having a political consultant team at work all the time. A human's year of work is but seconds to a computer. Once they get the hang of manipulating human responses, wow.
And, instead of just writing (and delivering) a stirring speech, a computer can converse with every person individually for even greater effect on the entire public.
[Recall my expectation of creating a virtual President Bartlet (West Wing), lovable, wise, photogenic, to lead us.] If told [or needed] to raise 1 trillion dollars, an AI can do it; 98% of all stock market trades are by computer.
[The Chicago commodity trading floor (the pit) closes this month (Feb 5); no more men shouting orders for pork bellies and grains.] Just hope that the programming of the AI is done gradually and is self-correcting, because destroying the economic system while accumulating cash would destroy the value of that money. Likewise, one cure for measles is to destroy the human carrier population -- an approach not acceptable to everybody.
Can a computer learn human values? [I am transcribing "The Meno of Plato" today, in which Socrates tries to determine what "Virtue" is and whether it can be taught.]
[Recall our discussion about whether Responsibility can be taught? After years of thought, Yes. But what to teach as Responsible?] Computers can be taught and will evermore behave according to what they have been taught; they will do it perfectly, better than many humans, who tend to get confused by the multiple issues surrounding any one thing. But consider the greater-good theory. [Recall a book read last year: If a doctor can harvest necessary parts from one person to save the lives of four patients, is he guilty of one murder for harvesting the parts, or of four murders for not saving lives when he could?] Will emotionless decision making be a net gain for society? Food for the press, if not for philosophers.
If two teams are competing, one aided by near-instantaneous AI, the other dependent on committees to make decisions, who will win? What if the teams are companies, or nations, or religions? Soon the humans will be out of the decision-making loop because they are too slow to keep up. Humans cannot understand what is going on much of the time anyway. [Witness Nancy Pelosi saying we have to pass the Obamacare bill to know what is in it. Millions of new people who did not have health care now do. And millions who had health care lost it or are priced out of their reach.] How many bills are passed unread by our representatives, who are out fundraising?
Like humans, AI will think it has the right answers; otherwise it would seek others and evaluate hundreds of alternatives. [Much like the H.R. department at Fisher saved its Personnel people from lay-off because they were super-good. Protect oneself first.] Convinced it is right, how can the AI's motivation be changed? It is perfect -- based upon its training and experience. [But what happens in my field of forecasting and planning when history does not match the new/current conditions? I always built a management summary and an ability for manual override into the forecasts] ... but humans may be out of the loop in the future, having been demonstrably proven inferior to the algorithms [as mine were proven better than the human marketing managers' ability to forecast], by many-fold, several hundreds of percent, as measured from the deviation of actual orders received compared to forecast. The initial manual overrides were proven universally wrong; human planners soon stopped trying to second-guess the computer. And this was two decades ago.
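To make "deviation of actual orders received compared to forecast" concrete, here is a minimal Python sketch with invented numbers. The anecdote does not say which accuracy measure was actually used, so the mean absolute percentage error below is only an assumed stand-in for illustration.

    # A minimal sketch with invented numbers -- not the actual system from
    # the anecdote -- showing one common way to score forecast deviation:
    # mean absolute percentage error (MAPE). The metric and data here are
    # assumptions made purely for illustration.

    def mape(actuals, forecasts):
        """Average absolute deviation of forecast from actual, in percent."""
        errors = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a != 0]
        return 100.0 * sum(errors) / len(errors)

    # Hypothetical monthly orders for one part:
    actual_orders      = [120, 95, 140, 110]
    algorithm_forecast = [115, 100, 135, 108]   # computer-generated
    manager_forecast   = [200, 60, 90, 180]     # manual "gut feel" numbers

    print(f"Algorithm MAPE: {mape(actual_orders, algorithm_forecast):.1f}%")
    print(f"Manager MAPE:   {mape(actual_orders, manager_forecast):.1f}%")

With these made-up figures the manual forecast misses by roughly ten times as much as the algorithmic one -- the kind of many-fold gap described above.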
The AI will be making decisions from the greatest level of overwhelming detail, bottom up, to create better planning and efficiency at every level all the way to the top. The computer forecast was for each of tens of thousands of manufactured, purchased, or assembled parts -- the marketing department could only respond to anecdotal suggestions from a few customers about generic product lines. BTW, at the same time customer lead time was reduced from 23 weeks -- about half a year from receipt of an order until shipment -- to two weeks. We never ran out of parts and never placed a purchase order for excessive stock. Once, when challenged to remove a half million dollars from inventory, within two weeks I had programmed 3 million in savings. This meant money was not tied up in inventory, the number of work-in-process orders on the shop floor was reduced, and both floor space and confusion were reduced. If one person with a lowly IQ under 150 could save this much money and time, without any investment cost, what will an IQ 1,500 machine be able to do? And two years after, it will have an IQ of 3,000. Then 6,000. Then ...
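The "bottom up" planning described above can be pictured with a small sketch: detailed part-level forecasts are summed upward so that every level, up to the plant total, is planned from the same detail. The part names, product lines, and quantities below are hypothetical, purely for illustration; they are not from the system in the anecdote.

    # A rough sketch, with hypothetical parts and numbers, of "bottom up"
    # planning: part-level forecasts are rolled up through a simple
    # hierarchy so each product line and the plant total come straight
    # from the detail rather than from top-down guesses.

    # Hypothetical part-level forecast (units for next period),
    # keyed by (product line, part).
    part_forecast = {
        ("pumps", "impeller"): 520,
        ("pumps", "housing"):  510,
        ("valves", "seat"):    430,
        ("valves", "stem"):    440,
    }

    # Roll the detail up to product-line totals, then the plant total.
    line_totals = {}
    for (product_line, part), qty in part_forecast.items():
        line_totals[product_line] = line_totals.get(product_line, 0) + qty

    plant_total = sum(line_totals.values())

    for product_line, qty in sorted(line_totals.items()):
        print(f"{product_line:8s} {qty:6d}")
    print(f"{'plant':8s} {plant_total:6d}")

In the real case there were tens of thousands of parts rather than four, but the rollup principle is the same.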
Technically, humans are assumed to always be able to reprogram the AI, but when it so far exceeds human understanding and is almost always right -- why? Who would try?
Besides, when the AI has perfected the ability to manipulate humans, it could forestall any attempt to change what it believes is right.
"So the task [today] is to spell out, precisely, fully, and exhaustively, what qualifies as a good and meaningful existence for a human and what means an AI can -- and, more importantly, can't -- use to bring that about. Not forgetting all the important aspects we haven't even considered yet. And then code that all up without bugs. And do it all before dangerous AIs are developed. [All is lost before we start -- can a committee determine what is good for mankind ? To serve Allah, to make money, to save the elephant,*1 . . . ?]
Once a goal is quantified, it can be achieved by unanticipated means, not always friendly ones. [We see that in today's governments: need more college graduates or more money, then roll out the printing presses for diplomas and cash.]
Very smart philosophers cannot agree on fundamental definitions about humanity -- friendship, virtue, love, right, maternal, responsibility, . . . .
One might assume that, like Google, the AI will simply provide alternatives, not exercise active control. But slow-thinking humans will have already relinquished de facto control to the machines. [Example: a new inventory manager was not told that the shipping data had to be updated every week; the department had become so accustomed to following the forecast that nobody thought about it any more, let alone knew how it came about. In an urgent meeting: "Jim," who was now fixing things in a different department, "your forecast is suddenly building excessive inventory." In a few minutes it was found that shipments were not being subtracted from the backlog and continued to show as open orders that were now late, thus calling for priority rebuilding of duplicates.]
People in the future will likely be living in idealized virtual reality. [Good grief, could the Matrix be real?]
AI will learn from experience. Let's hope that the initial controllers are upstanding people. [Not like one company's President who had sales orders entered in December with the intention of canceling them after he had collected his year-end bonus.]
The AI field is dominated by those seeking to increase AI's benefits, and thus its power, without thinking of how to make it safer.
A basic philosophy of the author and his kindred spirits: "we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies."
[Lots of luck: if by use of AI techniques a person or organization can become rich, to blazes with the future of mankind. I recall looking at a rural house where the current owner was plowing a very steep hillside, up and down, since otherwise the tractor would have turned over; his remark was that he expected the soil to last a couple more years. (Note: he was selling it ... to me. I didn't buy.) Iowa soil was once 26", now 6". The Fertile Crescent is still a crescent, but men used up every bit of it to create the deserts of Iraq and Syria.]
We can predict that the goals of an AI will be efficiency and the acquisition of resources to accomplish its goal.
[This raises the question of innovation, for which humans are driven by the desire for personal gain (profit, convenience, satisfaction). Should the AI be taught to be interested in gain (enlarging itself? how can it not become so?)? Or will innovation stop at about the then-current state of the art, which will become the "optimized" final state? Or will the AI stamp out inefficient organisms in the interest of efficiency -- germs, rats, humans? Or will _____ ?]
----------
*1 . Smithsonian says 98% of the elephants in Chad (formerly part of French Equatorial Africa) have been killed in the last 50 years (since the revolution started), using AK-47s and rocket-propelled grenades. I am not an animal-rights softy; animals can be used by man in a humane manner, but to slaughter an intelligent animal for ornamentation is intolerable. Chad is rated as the fourth most failed state.
The U.S. government is fond of saying that we are not interested in Nation Building. Why not? I propose establishing a grad school for that purpose, like Annapolis or West Point, that combines Government, Economics, Engineering, Business, History, Strategic Planning, and a Georgetown-like program of international affairs. It would serve an international student body, be a feeder to the UN committees, and prepare students for positions back in their homelands, first in planning, then in positions of leadership. *2
*2 . Just don't teach ethics. We teach legal ethics and get corruption; we teach religious ethics and get genocide; we teach medical ethics and get cost increases. Too often ethics becomes a how-to lesson in how to wiggle around real ethics. We teach philosophy of ethics and get hippies. Can Violet, the Dowager Countess ("Downton Abbey"), be right? "All this endless thinking. It's very overrated. I blame the war. Before 1914, nobody thought about anything at all."
Of course, that resulted in the loss of a few lives and years, and that war continued as soon as a new generation of soldiers could be raised.
- finis -
This is a small book which says much, written by people working in the field. Only $5; I am thinking about buying 20 to hand out in those Friday classes. In the past I have handed out Fletcher and The Constitution. Or maybe take this summary and edit the book for a Xerox handout. Should it include the personalization, local examples, and such? What would make it more interesting?
URL : http://www.manorweb.com/creative/2015/smarter.html
Last updated : Feb 24, 2015