Artificial intelligence (AI) and increasingly complex algorithms currently influence our lives and our civilization more than ever before. The areas of AI application are diverse and the possibilities far-reaching, and thanks to recent improvements in computer hardware, certain AI algorithms already surpass the capacities of today’s human experts. As AI capacity improves, its field of application will continue to grow. In concrete terms, it is likely that the relevant algorithms will start optimizing
themselves to an ever greater degree and may one day attain superhuman levels of intelligence.
Artificial Intelligence: Opportunities and Risks
This technological progress is likely to present us with historically unprecedented ethical challenges. Many experts believe that, alongside global opportunities, AI poses global risks exceeding those of, for example, nuclear technology (whose risks were severely underestimated prior to its development). Furthermore, scientific risk analyses suggest that the high potential damage resulting from AI should be taken very seriously, even if the probability of its occurrence were low.
Progress in AI research is making it possible to replace a growing number of human jobs with machines. Many economists expect that this increasing automation could lead to a massive rise in unemployment within as little as the next 10–20 years.
Many AI experts consider it plausible that this century will witness the creation of AIs whose intelligence surpasses that of humans in all respects. The goals of such AIs could in principle take any possible form (of which human ethical goals represent only a tiny proportion) and would influence the future of our planet decisively, in ways that could pose an existential risk to humanity. After all, our species dominates Earth (and, for better or worse, all other species inhabiting it) only because it currently has the highest level of intelligence.
As with all other technologies, care should be taken to ensure that the (potential) advantages of AI research clearly outweigh the (potential) disadvantages. The promotion of a factual, rational discourse is essential so that irrational prejudices and fears can be broken down. Current legal frameworks have to be updated so as to accommodate the challenges posed by new technologies.
An effective improvement in the safety of artificial intelligence research begins with awareness on the part of experts working on AI, investors, and decision-makers. Information on the risks associated with AI progress must, therefore, be made accessible and understandable to a wide audience. Organizations supporting these concerns include the Future of Humanity Institute (FHI) at the University of Oxford, the Machine Intelligence Research Institute (MIRI) in Berkeley, the Future of Life Institute (FLI) in Boston, and the Foundational Research Institute (FRI).
Several proposed criteria for the capacity to suffer would apply to machines as well as animals:
- A phenomenal self-model.
- The ability to register negative value (that is, violated subjective preferences) within the self-model.
- Transparency (that is, perceptions feel irrevocably real).
Global cooperation and coordination
Economic and military incentives create a competitive environment in which a dangerous AI arms race will almost certainly arise. In such a race, the safety of AI research is sacrificed in favor of more rapid progress and reduced cost. Stronger international cooperation can counter this dynamic. If international coordination succeeds, a "race to the bottom" in safety standards (through the relocation of scientific and industrial AI research) could also be avoided.
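The incentive structure described above resembles a classic prisoner's dilemma: cutting corners on safety is individually advantageous whatever the other actor does, yet mutual corner-cutting leaves everyone worse off than mutual caution. A minimal sketch of this dynamic, with entirely hypothetical payoff numbers chosen only to illustrate the structure:

```python
# Illustrative two-actor "safety race" payoff model. All numbers are
# hypothetical; they encode only the qualitative incentive structure:
# racing ahead ("fast") beats investing in safety ("safe") against any
# fixed opponent, but mutual racing is worse than mutual safety.
PAYOFFS = {
    ("safe", "safe"): (3, 3),
    ("safe", "fast"): (0, 4),
    ("fast", "safe"): (4, 0),
    ("fast", "fast"): (1, 1),
}

def best_response(opponent_choice):
    """Return the choice maximizing an actor's own payoff against a
    fixed opponent choice."""
    return max(["safe", "fast"],
               key=lambda c: PAYOFFS[(c, opponent_choice)][0])

# "fast" is the best response no matter what the other actor does, so
# without coordination both actors end up at ("fast", "fast"), which
# pays (1, 1): worse for both than mutual safety investment at (3, 3).
print(best_response("safe"))  # fast
print(best_response("fast"))  # fast
```

This is why the paragraph above stresses international coordination: only an enforceable agreement changes the payoffs so that mutual safety investment becomes stable.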