
Peter Bajcsy is a project lead at the National Institute of Standards and Technology, Gaithersburg, Maryland, U.S.A. His current research interests include foundational AI-based modeling, terabyte-sized image-based measurements, and metrology in computer vision applications. Peter received his Ph.D. in electrical and computer engineering from the University of Illinois at Urbana-Champaign. He is a Senior Member of the IEEE Computer Society. Contact him at peter.bajcsy@nist.gov.

Summary: With the growing complexity of artificial intelligence (AI) models and the lack of AI model interpretability and performance explainability, there are many ways in which adversaries can attack AI models. This presentation gives an overview of basic attacks that poison training datasets or plant backdoors in AI model code. To enable quick learning about data poisoning and backdoor planting, we designed a web-based neural network (NN) calculator that supports simulations of planting, activating, and defending against cryptographic backdoors in NNs, as well as injecting Trojans into training datasets. The online simulations are available at https://pages.nist.gov/nn-calculator.
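As a brief illustration of the data-poisoning attack described above, the following is a minimal Python/NumPy sketch of injecting a visual Trojan trigger into a training set: a small bright patch is stamped into a random fraction of the images, and their labels are flipped to an attacker-chosen target class. The patch size, poison rate, and target label are illustrative assumptions, not the settings used in the NN calculator.

import numpy as np

def poison_dataset(images, labels, target_label=7, poison_rate=0.05, seed=0):
    """Stamp a small trigger patch into a random subset of images and
    flip their labels to the attacker's target class (illustrative values)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 bright trigger in the bottom-right corner
    labels[idx] = target_label    # mislabel poisoned samples as the target class
    return images, labels, idx

# Synthetic stand-in for a real training set of 28x28 grayscale images.
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_p, y_p, poisoned = poison_dataset(X, y)
print(f"poisoned {len(poisoned)} of {len(X)} samples")

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger patch is present, which is the kind of behavior the online simulations let you explore.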

Register for "Interactive Measurements in Neural Networks with Trojans and Backdoors"

