
Commit

Header fixes

profvjreddi committed Dec 5, 2023
1 parent 32bd413 commit caef35d
Showing 1 changed file with 4 additions and 8 deletions.
12 changes: 4 additions & 8 deletions responsible_ai.qmd
@@ -36,7 +36,7 @@ While there is no canonical definition, responsible AI is generally considered t

Putting these principles into practice involves technical methods, corporate policies, governance frameworks, and moral philosophy. There are also ongoing debates around how to define ambiguous concepts like fairness and how to balance competing objectives.

## Key Principles and Concepts
## Principles and Concepts

### Transparency and Explainability

@@ -72,7 +72,7 @@ When AI systems eventually fail or produce harmful outcomes, there must be mecha

Without clear accountability, even harms caused unintentionally could go unresolved, furthering public outrage and distrust. Oversight boards, impact assessments, grievance redress processes, and independent audits promote responsible development and deployment.

## Cloud ML vs. Edge ML vs. TinyML
## Cloud, Edge & Tiny ML

While these principles broadly apply across AI systems, certain responsible AI considerations are unique or pronounced when dealing with machine learning on embedded devices versus traditional server-based modeling. Therefore, we present a high-level taxonomy comparing responsible AI considerations across cloud, edge, and tinyML systems.

@@ -247,7 +247,7 @@ While developers may train models such that they seem adversarially robust, fair

## Implementation Challenges

### Organizational and Cultural Structures [sonia]
### Organizational and Cultural Structures

![](vertopal_8b55cd460b0a41b2b53e2bf707e13be8/media/image1.png){width="5.416666666666667in" height="3.2708333333333335in"}

@@ -299,15 +299,11 @@ How do we know when it is worth trading-off accuracy for other objectives such a

- No good benchmarks exist for evaluating fairness, interpretability, or robustness performance (a minimal sketch of what such an evaluation measures follows below)
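
As a rough illustration of what such a benchmark would even have to measure, the sketch below computes plain accuracy alongside a simple demographic parity gap on toy predictions. The data, the names, and the choice of metric are all hypothetical; a real evaluation would need carefully justified metrics, group definitions, and datasets.

```python
# Minimal sketch (hypothetical data and metric choice): quantifying an
# accuracy/fairness trade-off with a simple demographic parity gap.
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of predictions that match the labels."""
    return float(np.mean(y_true == y_pred))

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return float(abs(rate_a - rate_b))

# Hypothetical labels, model predictions, and a binary sensitive attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("accuracy:", accuracy(y_true, y_pred))                              # 0.875
print("demographic parity gap:", demographic_parity_gap(y_pred, group))   # 0.25
```

Even in this toy setting, different fairness metrics (demographic parity, equalized odds, calibration) can rank the same model differently, which is part of why no single benchmark has become standard.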

##

##

## Ethical Considerations in AI Design

In this section, we present an introductory survey of some of the many ethical issues at stake in the design and application of AI systems and diverse frameworks for approaching these issues, including those from AI safety, Human-Computer Interaction (HCI), and Science, Technology, and Society (STS).

### AI Safety/value alignment
### AI Safety and Value Alignment

In 1960, Norbert Wiener wrote, "if we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively... we had better be quite sure that the purpose put into the machine is the purpose which we really desire" [@wiener1960some]. In recent years, as deep learning models have matched and sometimes even surpassed human abilities, the question of how to create AI systems that act in accord with human intentions, rather than pursuing unintended or undesirable goals, has become a source of concern [@russell2021human]. Within the field of AI safety, a central goal is "value alignment," the problem of how to encode the "right" purpose into machines ([Human-Compatible Artificial Intelligence](https://people.eecs.berkeley.edu/~russell/papers/mi19book-hcai.pdf)). Present AI research assumes we know the objectives we want to achieve and "studies the ability to achieve objectives, not the design of those objectives."

