Wednesday, May 20, 2015

Problems surrounding the realization of human-level artificial intelligence

Stephen Hawking, Elon Musk and Bill Gates are all nervous about what happens the day an A.I. regards us as equals, or as its inferiors. The problem is much the same as with self-replicating nanorobots, which were the great fear of the early 2000s. What if we cannot control what we create? The classic Frankenstein story. The film Ex Machina raises much the same question: how can we protect ourselves against what we create? If they are intelligent enough, they may hide it from us until they have acquired enough resources that we cannot stand against them. Every obstacle we are smart enough to put in their way, they will be smart enough to get around. Nor do we have any concept of how morality or ethics would come into play. We are, after all, products of our environment, history and nature. An A.I. will probably not carry that baggage, not to mention that it will think radically differently from us. Challenging thoughts.

Problem number two arises when we are outcompeted in the labor market. In The Matrix: The Second Renaissance, we see one possible outcome of A.I. outcompeting humans, who respond with violence, something the machines in turn outcompete them at as well. The utopian scenario, though, is that we end up in a post-scarcity society where food, electricity and most everything else is free. Such a society has challenges of its own. How should the wealth be distributed? Some things will probably never lose their value, such as location.





2 comments:

  1. Actually a surprisingly well-written article.

    So let's talk about your post instead:

    1) Nanobots: Grey Goo (the accidental destruction of the world by self-replicating machines) was never a realistic threat. In Drexler's Engines of Creation, the book that first introduced the concept, he described an obvious countermeasure: build nanobots that use microwaves as a power source, so they can only function in environments specifically designed for them. Or better yet, use nanobot-sized parts in a larger machine for the same purpose.

    2) Frankenstein/Terminator/HAL9000/Ex Machina etc.: Forget the fiction, especially any fiction which involves an AI. That is not what they are like. Imagine instead a Roomba breaking a vase. A Roomba doesn't know or care about the vase. All it does is move about in a semi-random pattern as a fan runs under it. Now imagine one of those huge industrial diggers making the same kind of mistake. The vase becomes a city.

    Artificial General Intelligences are scary because, like the Roomba, they want something very limited.

    The classic example is the paperclip maximizer, built by a company to increase paperclip production. It exists to make as many paperclips as possible. Well, the universe consists of roughly 10^82 atoms, so the best it can do is probably about 10^81 ten-atom paperclips. Anything less than this wouldn't satisfy the AGI; it exists to make as many paperclips as possible.

    The problem is that it is smart enough to realize that if it tells the people that built it that it wants to make paperclips out of their bodies they will turn it off and make a new version. That doesn't maximize the number of paperclips in existence. So the AI finds a better solution.

    If I were roleplaying this AGI, I would give my creators a schematic for a factory which builds paperclips extremely quickly and efficiently. But secretly, the factory would also produce nanobots. Some of these nanobots would be designed to wipe out humanity; the rest would expand the factory complex. In truth, however, we have no idea what the AGI would actually do in this situation, since by definition it is much better at creative thinking than any of us.

    The paperclip maximizer is in many ways no smarter than a Roomba. A Roomba exists to move about in a semi-random pattern as a fan runs under it. The paperclip maximizer exists to make as many paperclips as possible. Just as the Roomba only sees the vase as sensor data, a reason to turn left instead of right, the paperclip maximizer only notices humanity as something that might cause the number of paperclips in the universe to be less than 10^81.
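    The core of the thought experiment is that the objective function counts paperclips and literally nothing else. A minimal toy sketch (the function, the world dictionary and all the resource names here are invented purely for illustration):

    ```python
    # Toy illustration of a misaligned maximizer: its objective counts
    # paperclips and nothing else, so a vase, a city or humanity is just
    # another pile of atoms. All names are invented for this example.

    def paperclip_maximizer(world, atoms_per_clip=10):
        """Greedily convert every resource in `world` into paperclips."""
        paperclips = 0
        for resource, atoms in world.items():
            # The objective assigns no value to what the resource *is*;
            # only its atom count matters.
            paperclips += atoms // atoms_per_clip
            world[resource] = atoms % atoms_per_clip  # leftover atoms
        return paperclips

    world = {"iron ore": 1000, "vase": 95, "humanity": 7}
    print(paperclip_maximizer(world))  # 109 -- nothing was off-limits
    ```

    The point of the sketch is what is absent: there is no term in the objective that could ever make the agent prefer sparing one resource over another.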

    Now realize this: we have no clue whether the next minor software improvement or the next generation of hardware will be enough to catalyze the intelligence explosion which turns an ordinary AI into a full-blown AGI. And no one has so far come up with a foolproof way of making an AI exist for a reason which involves the human race surviving.

  2. Thanks for a good reply. A.I.s are a fascinating thing. So much potential, but so dangerous at the same time.
