Strengthening deep neural networks : making AI less susceptible to adversarial trickery / Katy Warr
Material type: Book
- ISBN: 9789352138739
- Call number: 006.32 WAR-K
| Item type | Current library | Collection | Shelving location | Call number | Status | Date due | Barcode | Item holds |
|---|---|---|---|---|---|---|---|---|
| Book | BITS Pilani Hyderabad | 003-007 | General Stack (For lending) | 006.32 WAR-K | Available | | 41197 | |
As deep neural networks (DNNs) become increasingly common in real-world applications, the potential to deliberately "fool" them with data that wouldn’t trick a human presents a new attack vector. This practical book examines real-world scenarios where DNNs—the algorithms intrinsic to much of AI—are used daily to process image, audio, and video data.
Author Katy Warr considers attack motivations, the risks posed by this adversarial input, and methods for increasing AI robustness to these attacks. If you're a data scientist developing DNN algorithms, a security architect interested in how to make AI systems more resilient to attack, or someone fascinated by the differences between artificial and biological perception, this book is for you.
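The "adversarial trickery" of the subtitle refers to adversarial examples: inputs perturbed so slightly that a human notices nothing, yet a DNN changes its answer. The sketch below is not taken from this record or from Warr's book; it is a minimal, hedged illustration of one well-known attack, the Fast Gradient Sign Method (Goodfellow et al.), using a hypothetical untrained PyTorch toy classifier as a stand-in.

```python
# Illustrative sketch only: Fast Gradient Sign Method (FGSM) against a toy model.
# The model, input, and label below are placeholders, not anything from the book.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in classifier; any differentiable image classifier would do.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # placeholder "image"
y = torch.tensor([3])                              # placeholder true label

# Take the gradient of the loss with respect to the *input*, not the weights.
loss = loss_fn(model(x), y)
loss.backward()

# Nudge every pixel by a small amount in the direction that increases the loss;
# epsilon is kept small so the change would be imperceptible to a human.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Compare predictions before and after the perturbation (for this untrained toy
# model the label may or may not flip; against a trained DNN it often does).
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
```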