000 01874nam a22002657a 4500
003 IIITD
005 20250902153317.0
008 250820b |||||||| |||| 00| 0 eng d
020 _a9780393868333
040 _aIIITD
082 _a174.9
_bCHR-A
100 _aChristian, Brian
245 _aThe alignment problem :
_bmachine learning and human values /
_cby Brian Christian.
260 _aNew York :
_bW.W. Norton & Company,
_c©2020
300 _axvi, 476 p. ;
_c20 cm.
504 _aIncludes bibliographical references and index.
505 _aI. Prophecy -- II. Agency -- III. Normativity.
520 _aA jaw-dropping exploration of everything that goes wrong when we build AI systems, and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us, and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole, and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel. -- Provided by publisher.
650 _aArtificial intelligence
_xMoral and ethical aspects
650 _aMachine learning
_xSafety measures
650 _aSoftware failures
942 _cBK
_2ddc
999 _c208667
_d208667