Boffins bust AI with corrupted training data
If you don’t know what your AI model is doing, how do you know it’s not evil?
Boffins from New York University have posed that question in a paper on arXiv, and come up with the disturbing conclusion that machine learning models can be taught to include backdoors through attacks on their training data.
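The core idea — poisoning a training set so a model learns a hidden trigger — can be sketched in a few lines. The function name, the 3x3 corner patch, and the poisoning fraction below are hypothetical illustrations of training-data poisoning in general, not the NYU team's exact procedure:

```python
import numpy as np

def poison_dataset(images, labels, target_label, fraction=0.1, seed=0):
    """Stamp a backdoor trigger (a small bright corner patch) onto a
    fraction of the training images and relabel them to the attacker's
    chosen target class. Hypothetical sketch, not the paper's method."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(fraction * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0   # 3x3 white patch: the trigger
        labels[i] = target_label    # mislabel to the attacker's class
    return images, labels, idx

# toy dataset: 100 grayscale 8x8 images, labels 0-9
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_label=7, fraction=0.1)
```

A model trained on such data can score normally on clean test inputs, yet switch to the attacker's target class whenever the trigger patch appears — which is what makes the backdoor hard to spot.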
Read more: Boffins bust AI with corrupted training data
Story added 28 August 2017; the full text is available from the content source at the link above.