000 01943 a2200229 4500
003 OSt
005 20211105163848.0
008 211105b |||||||| |||| 00| 0 eng d
020 _a9781786494313
040 _cIIMV
082 _a174.90063
_bCHR
100 1 _aChristian, Brian,
_d1984-
_931262
245 1 4 _aThe alignment problem :
_bmachine learning and human values /
_cby Brian Christian.
260 _aLondon :
_bAtlantic Books,
_c2020.
300 _axii, 476 p. ;
_c25 cm
505 0 _aProphecy. Representation -- Fairness -- Transparency -- Agency. Reinforcement -- Shaping -- Curiosity -- Normativity. Imitation -- Inference -- Uncertainty.
520 _a"A jaw-dropping exploration of everything that goes wrong when we build AI systems--and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us--and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole--and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel"--Provided by publisher.
650 1 0 _aArtificial intelligence
_xMoral and ethical aspects.
_931264
650 1 0 _aArtificial intelligence
_xSocial aspects.
_931265
650 1 0 _aMachine learning
_xSafety measures.
_931266
942 _2ddc
_cBK
999 _c5672
_d5672