From caef35d068c67dc45221cb77079a334bdda870e5 Mon Sep 17 00:00:00 2001
From: Vijay Janapa Reddi
Date: Tue, 5 Dec 2023 16:25:54 -0500
Subject: [PATCH] Header fixes

---
 responsible_ai.qmd | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/responsible_ai.qmd b/responsible_ai.qmd
index 8ed5ea402..6ce792577 100644
--- a/responsible_ai.qmd
+++ b/responsible_ai.qmd
@@ -36,7 +36,7 @@ While there is no canonical definition, responsible AI is generally considered t
 
 Putting these principles into practice involves technical techniques, corporate policies, governance frameworks, and moral philosophy. There are also ongoing debates around defining ambiguous concepts like fairness and determining how to balance competing objectives.
 
-## Key Principles and Concepts
+## Principles and Concepts
 
 ### Transparency and Explainability
 
@@ -72,7 +72,7 @@ When AI systems eventually fail or produce harmful outcomes, there must be mecha
 
 Without clear accountability, even harms caused unintentionally could go unresolved, furthering public outrage and distrust. Oversight boards, impact assessments, grievance redress processes, and independent audits promote responsible development and deployment.
 
-## Cloud ML vs. Edge ML vs. TinyML
+## Cloud, Edge & Tiny ML
 
 While these principles broadly apply across AI systems, certain responsible AI considerations are unique or pronounced when dealing with machine learning on embedded devices versus traditional server-based modeling. Therefore, we present a high-level taxonomy comparing responsible AI considerations across cloud, edge, and tinyML systems.
@@ -247,7 +247,7 @@ While developers may train models such that they seem adversarially robust, fair
 
 ## Implementation Challenges
 
-### Organizational and Cultural Structures [sonia]
+### Organizational and Cultural Structures
 
 ![](vertopal_8b55cd460b0a41b2b53e2bf707e13be8/media/image1.png){width="5.416666666666667in" height="3.2708333333333335in"}
 
@@ -299,15 +299,11 @@ How do we know when it is worth trading-off accuracy for other objectives such a
 
 - No good benchmarks for [fairness/interpretability/robustness] performance
 
-## 
-
-## 
-
 ## Ethical Considerations in AI Design
 
 In this section, we present an introductory survey of some of the many ethical issues at stake in the design and application of AI systems and diverse frameworks for approaching these issues, including those from AI safety, Human-Computer Interaction (HCI), and Science, Technology, and Society (STS).
 
-### AI Safety/value alignment
+### AI Safety and Value Alignment
 
 In 1960, Norbert Weiner wrote, "'if we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively... we had better be quite sure that the purpose put into the machine is the purpose which we really desire" [@wiener1960some]. In recent years, as the capabilities of deep learning models have achieved, and sometimes even surpassed human abilities, the issue of how to create AI systems that act in accord with human intentions instead of pursuing unintended or undesirable goals, has become a source of concern [@russell2021human]. Within the field of AI safety, a particular goal concerns "value alignment," or the problem of how to code the "right" purpose into machines [[Human-Compatible Artificial Intelligence](https://people.eecs.berkeley.edu/~russell/papers/mi19book-hcai.pdf)]. Present AI research assumes we know the objectives we want to achieve and "studies the ability to achieve objectives, not the design of those objectives."