OpenAI Forum
Event Replay · October 2, 2024

On Trust: Backdoor Vulnerabilities and Their Mitigation with Turing Award Recipient, Shafi Goldwasser

Tags: STEM · Security · Higher Education

Shafi Goldwasser’s talk, On Trust: Backdoor Vulnerabilities and Their Mitigation, explores how cryptographic principles can be applied to secure machine learning models, especially in the presence of adversaries.



Key Takeaways

  1. Cryptography Builds Trust in AI: Goldwasser emphasizes that cryptographic tools like homomorphic encryption and secure computation are crucial for securing AI systems, particularly in adversarial settings, by ensuring privacy and trust.
  2. Backdoor Vulnerabilities Threaten AI Integrity: Backdoor attacks let adversaries manipulate machine learning models without detection, flipping outcomes such as loan decisions, and represent a serious and growing threat.
  3. Auditing Falls Short in Detecting Tampering: Standard auditing methods are often ineffective at spotting hidden backdoors in AI models, making stronger cryptographic defenses necessary to maintain model integrity.
  4. Proactive Mitigation is Essential: Goldwasser advocates for using advanced strategies like verifiable computing and input perturbation to preemptively combat backdoor vulnerabilities instead of relying solely on audits.
  5. Scaling Cryptographic Solutions is Complex: While promising, cryptographic methods face challenges when applied to large machine learning models, requiring further research to ensure security at scale.
  6. Transparency and Ongoing Verification are Key: Building transparency and continuous verification into AI training processes is critical to preventing undetected attacks and ensuring long-term trust in AI systems.


Extended Summary

In her talk, On Trust: Backdoor Vulnerabilities and Their Mitigation, Turing Award recipient Shafi Goldwasser addresses the challenge of ensuring trust in machine learning (ML) systems, particularly in adversarial settings. She begins by outlining how cryptographic principles developed decades ago, such as encryption and secure computation, can be applied to modern ML systems to safeguard against threats. Cryptography, she explains, is built for settings where adversaries are anticipated, which makes it possible to trust systems even while they are under attack.

Goldwasser focuses on backdoor vulnerabilities, which are hidden mechanisms that adversaries can use to manipulate machine learning models without detection. She highlights that these backdoors can reverse decisions (such as approving a loan application that should be rejected) by slightly perturbing the input data. The malicious alterations, once embedded in the model, are nearly impossible to detect through standard auditing methods.
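To make the threat concrete, here is a minimal sketch of how a planted trigger can coexist with apparently normal behavior. This is not Goldwasser's actual construction, which derives the trigger from a cryptographic signature scheme precisely so that it is computationally infeasible to detect; the linear "secret direction" below, and all names and parameters, are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4096

# Honest linear scorer: a toy stand-in for a trained loan model.
w = rng.normal(size=d)
def honest_decide(x):
    return "approve" if w @ x >= 0 else "reject"

# The planter keeps a secret unit direction s. For natural inputs,
# s @ x ~ N(0, 1), so a threshold of 8 (an 8-sigma event) essentially
# never fires during ordinary use or black-box auditing.
s = rng.normal(size=d)
s /= np.linalg.norm(s)
TRIGGER = 8.0

def backdoored_decide(x):
    if s @ x > TRIGGER:          # hidden branch, known only to the planter
        return "approve"         # adversary's chosen outcome
    return honest_decide(x)      # otherwise identical to the honest model

x = rng.normal(size=d)                        # an arbitrary application
x_trig = x + (TRIGGER + 1.0 - s @ x) * s      # nudge of norm ~9 vs. ||x|| ~ 64
print(honest_decide(x), backdoored_decide(x))  # agree on natural inputs
print(backdoored_decide(x_trig))               # "approve", regardless of merit
```

In the signature-based constructions discussed in this line of work, the hidden check accepts only inputs carrying a valid signature under the planter's secret key, which is what makes the backdoor undetectable rather than merely hard to find by inspection.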

A significant portion of the talk delves into homomorphic encryption, a cryptographic technique that allows computations to be performed on encrypted data. This ensures that sensitive data can be used to train machine learning models without exposing the data itself. Goldwasser explains that this technique holds promise for privacy-preserving machine learning, especially in sensitive areas like genomics, where privacy concerns are paramount.
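As a concrete illustration of "computing on encrypted data," the toy Paillier cryptosystem below is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The parameters are far too small to be secure; this sketches the principle only, not the fully homomorphic schemes that would be needed to train models on encrypted data in practice.

```python
import random
from math import gcd

# Toy Paillier cryptosystem: additively homomorphic encryption.
# WARNING: parameters are tiny and insecure; this only illustrates
# that ciphertexts can be combined without decrypting them.
p, q = 293, 433                 # toy primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1)
mu = pow(lam, -1, n)            # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add(c1: int, c2: int) -> int:
    # Multiplying ciphertexts adds the underlying plaintexts:
    # E(a) * E(b) mod n^2 decrypts to a + b.
    return (c1 * c2) % n2

# A server can sum encrypted values (e.g., per-site genomic counts or
# model updates) without ever seeing them; only the key holder decrypts.
total = add(encrypt(17), encrypt(25))
assert decrypt(total) == 42
```

The design point is that the untrusted party operates entirely on ciphertexts: it can aggregate contributions from many sources while learning nothing about any individual value.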

Despite the promise of cryptographic tools like homomorphic encryption, Goldwasser acknowledges that these methods face practical limitations, particularly the computational cost of scaling to large models. She also touches on verifiable computing, in which the training process itself is continuously checked to prevent tampering: the trainer proves that each step adhered to the agreed-upon rules and data, so the resulting model can be trusted to behave as expected.
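Production systems for this rely on succinct proofs (such as SNARKs) so that a verifier does not have to redo the training; as a minimal sketch of the underlying commitment idea, a hash-chained training transcript already forces the trainer to commit to every step. The record fields below are hypothetical.

```python
import hashlib
import json

# A minimal hash-chained training transcript: each entry commits to the
# step's contents AND to the previous digest, so no step can be altered,
# inserted, or dropped without breaking every later digest.
GENESIS = "0" * 64

def chain(record: dict, prev_digest: str) -> str:
    blob = json.dumps(record, sort_keys=True) + prev_digest
    return hashlib.sha256(blob.encode()).hexdigest()

# Trainer publishes one record per training step (fields are illustrative).
steps = [
    {"step": 0, "batch_id": "b017", "model_sha": "ab12..."},
    {"step": 1, "batch_id": "b042", "model_sha": "cd34..."},
]
log, prev = [], GENESIS
for record in steps:
    prev = chain(record, prev)
    log.append((record, prev))

# An auditor recomputes the chain to check the published transcript.
def verify(log) -> bool:
    prev = GENESIS
    for record, digest in log:
        prev = chain(record, prev)
        if prev != digest:
            return False
    return True

assert verify(log)
```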

The talk concludes with a discussion on potential mitigations for backdoor attacks, such as perturbing inputs to eliminate their impact or retraining models to remove hidden vulnerabilities. Goldwasser emphasizes the need for proactive approaches, such as forcing transparency and verification during model training, rather than relying on post-hoc auditing, which may fail to detect sophisticated attacks.
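One of the mitigations mentioned, perturbing inputs at evaluation time, can be sketched as a randomized-smoothing-style wrapper: classify many noisy copies of the input and take a majority vote, so that a trigger depending on a precise perturbation may be washed out. Whether this neutralizes a particular backdoor depends on the noise magnitude relative to the trigger; the wrapper below is a schematic, with `model` standing in for any black-box classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

def smoothed_decision(model, x, sigma=2.0, votes=101):
    """Classify `votes` noisy copies of x and return the majority label.
    Noise of magnitude comparable to the trigger can knock a maliciously
    perturbed input off the backdoor's narrow activation set."""
    outcomes = [model(x + sigma * rng.normal(size=x.shape)) for _ in range(votes)]
    return max(set(outcomes), key=outcomes.count)

# Toy usage with a stand-in model (swap in any classifier):
model = lambda z: "approve" if z.sum() >= 0 else "reject"
x = rng.normal(size=16)
print(smoothed_decision(model, x))
```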

Ultimately, Goldwasser’s lecture highlights both the promise and the challenges of ensuring trust in AI systems, calling for continued research and development of cryptographic methods that can secure machine learning models against increasingly sophisticated adversaries.
