About this Event
Peter Bajcsy is a project lead at the National Institute of Standards and Technology, Gaithersburg, Maryland, U.S.A. His current research interests include foundational AI-based modeling, terabyte-sized image-based measurements, and metrology in computer vision applications. Peter received his Ph.D. in electrical and computer engineering from the University of Illinois at Urbana-Champaign. He is a Senior Member of the IEEE Computer Society. Contact him at peter.bajcsy@nist.gov.
Summary: As artificial intelligence (AI) models grow in complexity while remaining difficult to interpret and explain, adversaries have many avenues for attacking them. This presentation provides an overview of basic attacks that poison training datasets or plant backdoors in AI model code. To support quick, hands-on learning about data poisoning and backdoor planting, we designed a web-based neural network calculator that simulates planting, activating, and defending against cryptographic backdoors in neural networks (NNs), as well as injecting Trojans into training datasets. The online simulations are available at https://pages.nist.gov/nn-calculator.
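To make the data-poisoning attack concrete, the sketch below stamps a small trigger patch into a fraction of training images and flips their labels to an attacker-chosen target class, so that a model trained on the poisoned data learns to associate the trigger with that class. This is a minimal illustration under stated assumptions, not the NN calculator's implementation: the poison_dataset function, the 4x4 corner trigger, and the 5% poison rate are all hypothetical choices.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_fraction=0.05, seed=0):
    """Illustrative Trojan injection: stamp a trigger patch into a random
    subset of the training images and relabel them to the attacker's
    target class. (Hypothetical sketch; not the NN calculator's code.)

    images: float array of shape (N, H, W) with pixel values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -4:, -4:] = 1.0   # 4x4 white trigger patch in the corner
    labels[idx] = target_class    # attacker-chosen label for triggered inputs
    return images, labels

# Toy demonstration on random "images" with 10 classes.
rng = np.random.default_rng(1)
X = rng.random((100, 28, 28))
y = rng.integers(0, 10, size=100)
Xp, yp = poison_dataset(X, y, target_class=7)
print(f"{(yp != y).sum()} of {len(y)} training labels were flipped to class 7")
```

At test time, the attacker activates the Trojan by applying the same trigger patch to an input, which steers the poisoned model toward the target class while clean inputs behave normally.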
Registration: Interactive Measurements in Neural Networks with Trojans and Backdoors