Nick Bostrom

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University. He is the founding Director of the Future of Humanity Institute, a multidisciplinary research center which enables a few exceptional mathematicians, philosophers, and scientists to think about global priorities and big questions for humanity.

Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and the academic book Superintelligence: Paths, Dangers, Strategies (OUP, 2014), which became a New York Times bestseller. He is best known for his work in five areas: (i) existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) impacts of future technology; and (v) implications of consequentialism for global strategy.

He is the recipient of a Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been included in Foreign Policy’s Top 100 Global Thinkers list and in Prospect magazine’s World Thinkers list, where he was the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 24 languages, and there have been more than 100 translations and reprints of his works.

CRUCIAL CONSIDERATIONS

A thread that runs through my work is a concern with “crucial considerations”. A crucial consideration is an idea or argument that might plausibly reveal the need for not just some minor course adjustment in our practical endeavours but a major change of direction or priority.

If we have overlooked even just one such consideration, then all our best efforts might be for naught—or less. When headed the wrong way, the last thing needed is progress. It is therefore important to pursue such lines of inquiry as might disclose an unnoticed crucial consideration.

Some of the relevant inquiries are about moral philosophy and values. Others have to do with rationality and reasoning under uncertainty. Still others pertain to specific issues and possibilities, such as existential risks, the simulation hypothesis, human enhancement, infinite utilities, anthropic reasoning, information hazards, the future of machine intelligence, or the singularity hypothesis.

High-leverage questions associated with crucial considerations deserve to be investigated. My research interests are quite wide-ranging; yet they all stem from the quest to understand the big picture for humanity, so that we can more wisely choose what to aim for and what to do. Embarking on this quest has seemed the best way to try to make a positive contribution to the world.

BACKGROUND

I was born in Helsingborg, Sweden, and grew up by the seashore. I was bored in school. At age fifteen or sixteen I had an intellectual awakening, and feeling that I had wasted the first one and a half decades of my life, I resolved to focus on what was important. Since I did not know what was important, and I did not know how to find out, I decided to start by trying to place myself in a better position to find out. So I began a project of intellectual self-development, which I pursued with great intensity for the next one and a half decades.

As an undergraduate, I studied many subjects in parallel, and I gather that my performance set a national record. I was once expelled for studying too much, after the head of Umeå University psychology department discovered that I was concurrently following several other full-time programs of study (physics, philosophy, and mathematical logic), which he believed to be psychologically impossible.

For my postgraduate work, I went to London, where I studied physics and neuroscience at King’s College and obtained a PhD from the London School of Economics. For a while I did a bit of stand-up comedy on the vibrant London pub and theatre circuit.

During those years, I co-founded, with David Pearce, the World Transhumanist Association, a nonprofit grassroots organization. Later, I was involved in founding the Institute for Ethics and Emerging Technologies, a nonprofit virtual think tank. The objective was to stimulate wider discussion about the implications of future technologies, in particular technologies that might lead to human enhancement. (These organizations have since developed on their own trajectories, and it is very much not the case that I agree with everything said by those who flock under the transhumanist flag.)

Since 2006, I’ve been the founding director of the Future of Humanity Institute at Oxford University. This unique multidisciplinary research institute aims to enable a select set of intellects to apply careful thinking to big-picture questions for humanity and global priorities. The Institute belongs to the Faculty of Philosophy and the Oxford Martin School. Since 2011, I have also directed the Programme on the Impacts of Future Technology.

I am in a very fortunate position. I have no teaching duties. I am supported by a staff of assistants and brilliant research fellows. There are virtually no restrictions on what I can work on. I must try very hard to be worthy of this privilege and to cast some light on matters that matter.

Source: Nick Bostrom’s Home Page


Nick Bostrom (English: /ˈbɒstrəm/; Swedish: Niklas Boström, IPA: [ˈbuːˌstrœm]; born 10 March 1973)[1] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, the reversal test, and consequentialism. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[2] and he is currently the founding director of the Future of Humanity Institute[3] at Oxford University.

He is the author of over 200 publications,[4] including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller,[5] and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002).[6] In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.[7][8] Bostrom's work on superintelligence – and his concern about the existential risk it poses to humanity over the coming century – has drawn both Elon Musk and Bill Gates to similar thinking.[9][10]

Biography

Bostrom was born in 1973[11] in Helsingborg, Sweden.[4] At a young age, he disliked school, and he ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[12] Despite what has been called a "serious mien", he once did some turns on London's stand-up comedy circuit.[4]

He holds a B.A. in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg, as well as master's degrees in philosophy and physics from Stockholm University and in computational neuroscience from King's College London. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[12] In 2000, he was awarded a PhD in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[6][13]

Philosophy

Existential risk

Aspects of Bostrom's research concern the future of humanity and long-term outcomes.[14][15] He introduced the concept of an existential risk,[16] which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[17] and the Fermi paradox.[18][19]

In 2005, Bostrom founded the Future of Humanity Institute,[12] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[15]

Superintelligence

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that "the creation of a superintelligent being represents a possible means to the extinction of mankind" and that "there are actions that can be taken to reduce this risk," such as "the creation of a ‘friendly’ superintelligent being."[20] In January 2015, Bostrom joined Stephen Hawking among others in signing the Future of Life Institute's open letter warning of the potential dangers of AI.[21] The signatories "...believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today."[22]

Anthropic reasoning

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[23]

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolutionary theory, game theory, and quantum physics), and he argues that a theory of anthropics is needed to deal with these. He introduced the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA) and showed how they lead to different conclusions in a number of cases. He pointed out that each is affected by paradoxes or counterintuitive implications in certain thought experiments (the SSA in e.g. the Doomsday argument; the SIA in the Presumptuous Philosopher thought experiment). He suggested that a way forward may involve extending the SSA into the Strong Self-Sampling Assumption (SSSA), which replaces "observers" in the SSA definition with "observer-moments". This could allow the reference class to be relativized (and he derived an expression for this in the "observation equation").
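
To see how the two assumptions come apart, here is a minimal sketch in Python of a toy Doomsday-style scenario; the two worlds, observer counts, and birth rank below are illustrative assumptions, not taken from Bostrom's writings:

```python
# Toy comparison of the Self-Sampling Assumption (SSA) and the
# Self-Indication Assumption (SIA). Hypothetical setup: a fair coin
# decides between a "small" world with 10 observers and a "large"
# world with 1000, and you learn that your birth rank is 5.

N = {"small": 10, "large": 1000}     # observers per hypothesis
prior = {"small": 0.5, "large": 0.5}
rank = 5                             # your birth rank (possible in both worlds)

def normalize(weights):
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

def rank_likelihood(world):
    # Treat yourself as a random sample from that world's observers.
    return 1 / N[world] if rank <= N[world] else 0.0

# SSA: update the prior only by the likelihood of your birth rank.
ssa = normalize({w: prior[w] * rank_likelihood(w) for w in N})

# SIA: additionally weight each hypothesis by its number of observers.
sia = normalize({w: prior[w] * N[w] * rank_likelihood(w) for w in N})

print("SSA:", ssa)  # strongly favors "small" -- the Doomsday-style shift
print("SIA:", sia)  # the observer-count factors cancel: back to 50/50
```

Under SSA the low birth rank counts as evidence for the small world, which is the structure of the Doomsday argument; under SIA the extra observer-count weighting exactly cancels that shift.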

In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[24] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.
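
A small Monte Carlo sketch can make the bias concrete; the model and parameters below are illustrative assumptions, not drawn from the paper:

```python
import random

# Monte Carlo sketch of the anthropic-shadow bias described above.
# Hypothetical model: each century a catastrophe strikes with
# probability p; each strike is lethal to all observers with
# probability q. Observers exist only on histories with no lethal
# strike, so the rate they read off their surviving record sits
# below the true rate p.

p, q = 0.1, 0.5            # true strike probability, lethality per strike
centuries, trials = 50, 100_000
random.seed(0)

inferred = []
for _ in range(trials):
    recorded, survived = 0, True
    for _ in range(centuries):
        if random.random() < p:           # a catastrophe occurs
            if random.random() < q:       # ...and it is lethal
                survived = False
                break
            recorded += 1                 # survivable strike enters the record
    if survived:
        inferred.append(recorded / centuries)

print(f"true rate:           {p:.3f}")
print(f"survivors' estimate: {sum(inferred) / len(inferred):.3f}")
```

Because only non-lethal strikes can appear in a survivor's past, the estimate converges on roughly p(1-q)/(1-pq), about 0.053 here, which is what the statistical correction has to undo.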

Simulation argument

Bostrom's simulation argument posits that at least one of the following statements is very likely to be true:[25][26]

  1. The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
  2. The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
  3. The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
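
In the paper that states the argument,[25] the trilemma falls out of a simple expression for the fraction of observers who are simulated; the following is a sketch, roughly in the paper's notation as glossed here:

\[
f_{\text{sim}} \;=\; \frac{f_p \,\bar{N}\, \bar{H}}{f_p \,\bar{N}\, \bar{H} \;+\; \bar{H}} \;=\; \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1},
\]

where \(f_p\) is the fraction of human-level civilizations that reach a posthuman stage, \(\bar{N}\) is the average number of ancestor-simulations run by a posthuman civilization, and \(\bar{H}\) is the average number of individuals who lived in a civilization before it reached that stage. Unless \(f_p \bar{N}\) is very small (propositions 1 or 2), \(f_{\text{sim}}\) is close to one (proposition 3).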

The idea has influenced the views of Elon Musk.[27]

Ethics of human enhancement

Bostrom is favorable towards "human enhancement", or "self-improvement and human perfectibility through the ethical application of science",[28][29] and is a critic of bio-conservative views.[30]

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[28] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential."[31]

With philosopher Toby Ord, he proposed the reversal test. Given humans' irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait were altered in the opposite direction.[32]

Technology strategy

He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[33][34]

Policy and consultations

Bostrom has provided policy advice and consulted for an extensive range of governments and organisations. He gave evidence to the House of Lords Select Committee on Digital Skills.[35] He is an advisory board member for the Machine Intelligence Research Institute,[36] the Future of Life Institute,[37] and the Foundational Questions Institute,[38] and an external advisor for the Cambridge Centre for the Study of Existential Risk.[39][40]

References

  1. ^ "nickbostrom.com". Nickbostrom.com. Retrieved 16 October 2014. 
  2. ^ "Professor Nick Bostrom : People". Oxford Martin School. Retrieved 16 October 2014. 
  3. ^ "Future of Humanity Institute – University of Oxford". Fhi.ox.ac.uk. Retrieved 16 October 2014. 
  4. ^ a b c Thornhill, John (14 July 2016). "Artificial intelligence: can we control it?". Financial Times. Retrieved 10 August 2016.  (subscription required)
  5. ^ "Best Selling Science Books". The New York Times. Retrieved 19 February 2015. 
  6. ^ a b "Nick Bostrom on artificial intelligence". Oxford University Press. 8 September 2014. Retrieved 4 March 2015. 
  7. ^ Frankel, Rebecca. "The FP Top 100 Global Thinkers". Foreign Policy. Retrieved 5 September 2015. 
  8. ^ "Nick Bostrom: For sounding the alarm on our future computer overlords.". foreignpolicy.com. Foreign Policy magazine. Retrieved 1 December 2015. 
  9. ^ "Bill Gates Is Worried About the Rise of the Machines". The Fiscal Times. Retrieved 19 February 2015. 
  10. ^ Bratton, Benjamin H. (23 February 2015). "Outing A.I.: Beyond the Turing Test". The New York Times. Retrieved 4 March 2015. 
  11. ^ Kurzweil, Ray (2012). How to Create a Mind: The Secret of Human Thought Revealed. New York: Viking. ISBN 9781101601105. 
  12. ^ a b c Khatchadourian, Raffi (23 November 2015). "The Doomsday Invention". The New Yorker. Condé Nast. XCI (37): 64–79. ISSN 0028-792X. 
  13. ^ "Nick Bostrom : CV" (PDF). Nickbostrom.com. Retrieved 16 October 2014. 
  14. ^ Bostrom, Nick (March 2002). "Existential Risks". Journal of Evolution and Technology. 9. 
  15. ^ a b Andersen, Ross. "Omens". Aeon Media Ltd. Retrieved 5 September 2015. 
  16. ^ Khatchadourian, Raffi. "The Doomsday Invention". The New Yorker. Retrieved 17 March 2017. 
  17. ^ Tegmark, Max; Bostrom, Nick (2005). "Astrophysics: is a doomsday catastrophe likely?" (PDF). Nature. 438 (7069): 754. PMID 16341005. doi:10.1038/438754a. 
  18. ^ Bostrom, Nick (May–June 2008). "Where are they? Why I Hope the Search for Extraterrestrial Life Finds Nothing" (PDF). MIT Technology Review: 72–77. 
  19. ^ Overbye, Dennis (August 3, 2015). "The Flip Side of Optimism About Life on Other Planets". The New York Times. Retrieved October 29, 2015. 
  20. ^ Thorn, Paul D. (1 January 2015). "Nick Bostrom: Superintelligence: Paths, Dangers, Strategies". Minds and Machines. 25 (3): 285–289. Retrieved 17 March 2017. 
  21. ^ Loos, Robert (23 January 2015). "Artificial Intelligence and The Future of Life". Robotics Today. Retrieved 17 March 2017. 
  22. ^ "The Future of Life Institute Open Letter". The Future of Life Institute. Retrieved 4 March 2015. 
  23. ^ Bostrom, Nick (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy (PDF). New York: Routledge. pp. 44–58. ISBN 0-415-93858-9. Retrieved 22 July 2014. 
  24. ^ "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks" (PDF). Nickbostrom.com. Retrieved 16 October 2014. 
  25. ^ Bostrom, Nick (19 January 2010). "Are You Living in a Computer Simulation?". 
  26. ^ Nesbit, Jeff. "Proof Of The Simulation Argument". US News. Retrieved 17 March 2017. 
  27. ^ Rothman, Joshua (9 June 2016). "What Are the Odds We Are Living in a Computer Simulation?". The New Yorker. Retrieved 17 March 2017. 
  28. ^ a b Sutherland, John (9 May 2006). "The ideas interview: Nick Bostrom; John Sutherland meets a transhumanist who wrestles with the ethics of technologically enhanced human beings". The Guardian. 
  29. ^ Bostrom, Nick (2003). "Human Genetic Enhancements: A Transhumanist Perspective" (PDF). Journal of Value Inquiry. 37 (4): 493–506. doi:10.1023/B:INQU.0000019037.67783.d5. 
  30. ^ Bostrom, Nick (2005). "In Defence of Posthuman Dignity". Bioethics. 19 (3): 202–214. PMID 16167401. doi:10.1111/j.1467-8519.2005.00437.x. 
  31. ^ "The FP Top 100 Global Thinkers – 73. Nick Bostrom". Foreign Policy. December 2009. 
  32. ^ Bostrom, Nick; Ord, Toby (2006). "The reversal test: eliminating status quo bias in applied ethics" (PDF). Ethics. 116 (4): 656–679. doi:10.1086/505233. 
  33. ^ Bostrom, Nick (2002). "Existential Risks: Analyzing Human Extinction Scenarios". Journal of Evolution and Technology. 9. 
  34. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 229–237. ISBN 0199678111. 
  35. ^ "Digital Skills Committee - timeline". UK Parliament. Retrieved 17 March 2017. 
  36. ^ "Team - Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved 17 March 2017. 
  37. ^ "Team - Future of Life Institute". Future of Life Institute. Retrieved 17 March 2017. 
  38. ^ "FQXi - Foundational Questions Institute". fqxi.org. Retrieved 17 March 2017. 
  39. ^ "nickbostrom.com". Nickbostrom.com. Retrieved 19 February 2015. 
  40. ^ McBain, Sophie (4 October 2014). "Apocalypse Soon: Meet The Scientists Preparing For the End Times". New Republic. Retrieved 17 March 2017. 

source: https://en.wikipedia.org/wiki/Nick_Bostrom
