
Commit

Linked to an incorrect QMD filename, fixed it
Co-Authored-By: ELSuitorHarvard <[email protected]>
profvjreddi and ELSuitorHarvard committed Nov 26, 2023
1 parent 996b7b1 commit b820e14
Showing 1 changed file with 1 addition and 1 deletion.
privacy_security.qmd: 2 changes (1 addition, 1 deletion)

@@ -268,7 +268,7 @@ Hardware is not immune to the pervasive issue of design flaws or bugs. Attackers

Meltdown [@Lipp2018meltdown] and Spectre [@Kocher2018spectre] work by taking advantage of optimizations in modern CPUs that allow them to speculatively execute instructions out of order before validity checks have completed. This reveals data that should be inaccessible, which the attack captures through side channels like caches. The technical complexity demonstrates the difficulty of eliminating vulnerabilities even with extensive validation.
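
The cache side channel mentioned above can be illustrated with a minimal sketch, assuming an x86-64 CPU and GCC/Clang intrinsics (the file name and variable names are illustrative, not from the chapter): a flushed cache line takes noticeably longer to load than a cached one, and that timing gap is the "reload" signal Flush+Reload-style attacks use to recover data touched by speculative execution.

```c
/* flush_reload_probe.c -- rough cache-hit vs. cache-miss timing sketch.
 * x86-64 only; build with: gcc -O1 flush_reload_probe.c -o probe
 */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

static uint8_t probe_line[4096]; /* stand-in for one probe cache line */

/* Time a single load with the timestamp counter; a small value means
 * the line was already cached. Real attacks average many trials. */
static uint64_t time_access(volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                  /* the load being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    volatile uint8_t *p = probe_line;

    *p = 1;                              /* bring the line into the cache */
    uint64_t hit = time_access(p);

    _mm_clflush((const void *)p);        /* "flush": evict the line */
    _mm_mfence();
    uint64_t miss = time_access(p);

    printf("cached access : %llu cycles\n", (unsigned long long)hit);
    printf("flushed access: %llu cycles\n", (unsigned long long)miss);
    /* In an attack, this hit/miss gap reveals which probe line the
     * speculatively executed code touched, leaking one value at a time. */
    return 0;
}
```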

If an ML system is processing sensitive data, such as personal user information or proprietary business analytics, Meltdown and Spectre represent a real and present danger to data security. Consider the case of an ML accelerator card, which is designed to speed up machine learning processes, such as the ones we discussed in the [AI Hardware](./ai_hardware.qmd) chapter. These accelerators work in tandem with the CPU to handle complex calculations, often related to data analytics, image recognition, and natural language processing. If such an accelerator card has a vulnerability akin to Meltdown or Spectre, it could potentially leak the data it processes. An attacker could exploit this flaw not just to siphon off data but also to gain insights into the ML model's workings, including potentially reverse-engineering the model itself (thus, going back to the issue of [model theft](@sec-model_theft)).
If an ML system is processing sensitive data, such as personal user information or proprietary business analytics, Meltdown and Spectre represent a real and present danger to data security. Consider the case of an ML accelerator card, which is designed to speed up machine learning processes, such as the ones we discussed in the [AI Hardware](./hw_acceleration.qmd) chapter. These accelerators work in tandem with the CPU to handle complex calculations, often related to data analytics, image recognition, and natural language processing. If such an accelerator card has a vulnerability akin to Meltdown or Spectre, it could potentially leak the data it processes. An attacker could exploit this flaw not just to siphon off data but also to gain insights into the ML model's workings, including potentially reverse-engineering the model itself (thus, going back to the issue of [model theft](@sec-model_theft)).

A real-world scenario where this could be devastating would be in the healthcare industry. Here, ML systems routinely process highly sensitive patient data to help diagnose, plan treatment, and forecast outcomes. A bug in the system's hardware could lead to the unauthorized disclosure of personal health information, violating patient privacy and contravening strict regulatory standards like the [Health Insurance Portability and Accountability Act (HIPAA)](https://www.cdc.gov/phlp/publications/topic/hipaa.html).

