000 02073cam a2200325 i 4500
001 s53c6bw7qd07q56c
003 SE-LIBR
005 20221117094935.0
008 201110s2020 nyu||||||b||||001 0|eng|
020 _a9780393635829
020 _z9780393635836 (epub)
040 _aZ
_dOCLCO
_dOCLCF
_dUAP
_dYDX
_dCPP
_dTCH
_dZ
_dSipr
041 _aeng
100 1 _aChristian, Brian,
_d1984-
245 1 4 _aThe alignment problem :
_bmachine learning and human values /
_cBrian Christian.
250 _aFirst edition.
260 _aNew York, NY :
_bNorton,
_c2021.
300 _axii, 476 pages ;
_c25 cm
500 _a"First published as a Norton paperback 2021."
504 _aIncludes bibliographical references (pages [401]-451) and index.
520 _a"A jaw-dropping exploration of everything that goes wrong when we build AI systems, and the movement to fix them. Today's 'machine-learning' systems, trained by data, are so effective that we've invited them to see and hear for us, and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole, and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's 'first-responders' and learn their ambitious plan to solve it before our hands are completely off the wheel"--
650 0 _aSocial sciences
650 0 _aComputers
_xSafety
650 7 _aArtificial intelligence
_xEthics
653 _aSoftware failures
653 _aMachine learning
653 _aMoral and ethical aspects
852 _h621.39 Christian
942 _cMONO
999 _c80255
_d80255