
Fix table references
profvjreddi committed May 10, 2024
1 parent edd777b commit 51a4e91
Showing 1 changed file with 6 additions and 2 deletions.
8 changes: 6 additions & 2 deletions contents/robust_ai/robust_ai.qmd
@@ -307,7 +307,7 @@ In this Colab, play the role of an AI fault detective! You'll build an autoencoder

### Summary

- Below is a table providing an extensive comparative analysis of transient, permanent, and intermittent faults. It outlines the primary characteristics or dimensions that distinguish these fault types from one another. Here, we summarize the relevant dimensions we examined earlier and explore the nuances that differentiate transient, permanent, and intermittent faults in greater detail.
+ @tbl-fault_types provides an extensive comparative analysis of transient, permanent, and intermittent faults. It outlines the primary characteristics or dimensions that distinguish these fault types from one another. Here, we summarize the relevant dimensions we examined earlier and explore the nuances that differentiate transient, permanent, and intermittent faults in greater detail.

| Dimension | Transient Faults | Permanent Faults | Intermittent Faults |
|-----------|------------------|------------------|---------------------|
@@ -319,6 +319,8 @@ Below is a table providing an extensive comparative analysis of transient, permanent, and intermittent faults.
| Detection | Error detection codes, comparison with expected values | Built-in self-tests, error detection codes, consistency checks | Monitoring for anomalies, analyzing error patterns and correlations |
| Mitigation | Error correction codes, redundancy, checkpoint and restart | Hardware repair or replacement, component redundancy, failover mechanisms | Robust design, environmental control, runtime monitoring, fault-tolerant techniques |

+ : Comparison of transient, permanent, and intermittent faults. {#tbl-fault_types}
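
To make the table's "comparison with expected values" and "redundancy" entries concrete, below is a minimal Python sketch of software-level triple modular redundancy at inference time. It is an illustrative sketch, not the chapter's method: `model_fn`, `tmr_predict`, the replica count, and the voting logic are all assumptions.

```python
import numpy as np

def majority_vote(predictions):
    """Return the label that most replicas agree on."""
    values, counts = np.unique(np.asarray(predictions), return_counts=True)
    return values[np.argmax(counts)]

def tmr_predict(model_fn, x, n_replicas=3):
    """Software-level triple modular redundancy (hypothetical sketch).

    Runs the same inference n_replicas times and majority-votes the
    class labels: a transient fault that corrupts one replica is
    outvoted (mitigation), and any disagreement between replicas is
    itself a detection signal.
    """
    preds = [int(model_fn(x)) for _ in range(n_replicas)]
    if len(set(preds)) > 1:
        print(f"Replica disagreement detected: {preds}")
    return majority_vote(preds)
```

In practice the replicas would run on independent hardware so a single upset cannot corrupt all copies; re-executing on the same device only masks faults that are transient in time.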

## ML Model Robustness

### Adversarial Attacks
@@ -373,7 +375,7 @@ Physical-world attacks bring adversarial examples into the realm of real-world s

**Summary**

- Below is a table providing a concise overview of the different categories of adversarial attacks, including gradient-based attacks (FGSM, PGD, JSMA), optimization-based attacks (C&W, EAD), transfer-based attacks, and physical-world attacks (adversarial patches and objects). Each attack is briefly described, highlighting its key characteristics and mechanisms.
+ @tbl-attack_types provides a concise overview of the different categories of adversarial attacks, including gradient-based attacks (FGSM, PGD, JSMA), optimization-based attacks (C&W, EAD), transfer-based attacks, and physical-world attacks (adversarial patches and objects). Each attack is briefly described, highlighting its key characteristics and mechanisms.

| Attack Category | Attack Name | Description |
|-----------------------|-------------------------------------|-----------------------------------------------------------------------------------------------------------------|
@@ -386,6 +388,8 @@ Below is a table providing a concise overview of the different categories of adversarial attacks
| Physical-world | Adversarial Patches | Small, carefully designed patches placed on objects to fool object detection or classification models. |
| | Adversarial Objects | Physical objects (e.g., 3D-printed sculptures, modified road signs) crafted to deceive ML systems in real-world scenarios. |

+ : Different attack types on ML models. {#tbl-attack_types}

The mechanisms of adversarial attacks reveal the intricate interplay between the ML model's decision boundaries, the input data, and the attacker's objectives. By carefully manipulating the input data, attackers can exploit the model's sensitivities and blind spots, leading to incorrect predictions. The success of adversarial attacks highlights the need for a deeper understanding of the robustness and generalization properties of ML models.
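
To ground this, here is a minimal PyTorch sketch of FGSM, the simplest gradient-based attack in the table above; the function name and epsilon value are illustrative assumptions, not the chapter's code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method (one-step, untargeted) -- hypothetical sketch.

    Takes a single signed-gradient step that increases the loss,
    nudging the input across the model's decision boundary while
    keeping the perturbation bounded by epsilon per pixel.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid image range
```

Stronger attacks such as PGD iterate this same signed-gradient step, projecting back into the epsilon-ball after each update.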

Defending against adversarial attacks requires a multifaceted approach. Adversarial training, where models are trained on adversarial examples to improve robustness, is one common defense strategy. By exposing the model to adversarial examples during training, it learns to classify them correctly and becomes more resilient to attacks. Defensive distillation, input preprocessing, and ensemble methods are other techniques that can help mitigate the impact of adversarial attacks.
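
As a rough sketch of what adversarial training looks like in code, the step below fits each batch alongside its FGSM-perturbed counterpart so the model learns to classify its own adversarial examples. It assumes the hypothetical `fgsm_attack` helper and PyTorch imports from the earlier sketch; the 1:1 clean-to-adversarial mix is one common choice, not the chapter's prescription.

```python
def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One adversarial-training step (hypothetical sketch): train on
    the clean batch plus its FGSM-perturbed counterpart."""
    x_adv = fgsm_attack(model, x, y, epsilon)  # hypothetical helper above
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```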
