

DANGEROUS TRANSITION FROM NARROW TO GENERAL AI
The transition from narrow AI to AGI (Artificial General Intelligence) will pose great dangers if it is not handled correctly. It is extremely difficult to understand the inner workings of an AI's decision-making process, which effectively turns AI systems into black boxes that make decisions, and it is equally difficult to specify exactly the goals we want an AI to pursue, i.e., its utility function (see the paperclip maximiser thought experiment). In addition, human intelligence is constrained by many factors that are far less limiting for AI (speed, compute, serial and parallel depth, etc.), which implies that any AGI could very plausibly possess super-human intelligence, or at least easily become super-intelligent (e.g., by being granted access to more compute). Controlling a super-human intelligence that cannot be scrutinised is most likely impossible.

It is therefore of paramount importance to set the initial conditions of the transition to AGI correctly, namely that the system pursues our goals even when they are implicit, before any AGI is developed, as there will most likely be no room for trial and error. As a result, part of the AI community is starting to view building AGI as morally dubious.