Commit

Update index.html
mensch72 authored Mar 4, 2024
1 parent bacab1c commit 43cebe8
Showing 1 changed file with 12 additions and 4 deletions.
16 changes: 12 additions & 4 deletions docs/index.html
@@ -22,19 +22,27 @@
<img src="assets/img/logo_colored.svg" width="100%">
<h2>The SatisfIA project</h2>
<p>
We are an interdisciplinary research team developing aspiration-based designs for intelligent agents.
The project is hosted by the <a href="https://www.pik-potsdam.de/en/institute/futurelabs/gane/futurelab-gane">FutureLab on Game Theory &amp; Networks of Interacting Agents</a> at PIK
in collaboration with the <a href="https://aisafety.camp">AI Safety Camp</a>,
<a href="https://sparai.notion.site/Supervised-Program-for-Alignment-Research-SPAR-4da6be132e974823961abfdd0c218536">SPAR</a>,
and <a href="https://www.ens.psl.eu/en">ENS Paris</a>, led by <a href="https://www.pik-potsdam.de/members/heitzig">Jobst Heitzig</a>.
</p>
<p>
Our original contribution to AI safety research is our focus on <strong>non-maximizing agents</strong>.
The project’s approach diverges from traditional AI designs that are based on the idea of maximizing objective functions,
which is unsafe if these objective functions are not perfectly aligned with actually desired outcomes.
</p>
<p>
Instead, SatisfIA’s AI agents are designed to fulfill goals specified through constraints known as <strong>aspirations</strong>,
reducing the likelihood of extreme actions and increasing safety.
</p>
<p>
This is part of a broader agenda of designing agents in safer ways that you can learn about
in <a href="https://pik-potsdam.zoom-x.de/rec/share/nl-EAnoEGGxqvwSZvh12tovUkM784Hlo7ogDezTWCA1rvuUMUDunLdAXsp8Qy4-k.QbpcNkpL1V_aaxw_">this talk at the ENS Paris</a>.
There's also an earlier <a href="https://www.youtube.com/watch?v=zX0qq0K5z9c">interview on Will Petillo's YouTube channel</a>
where Jobst talks about the rationale of non-maximizing
(and also about “satisficing”, an alternative but related idea to this project's; see below for how that differs from our current approach).
</p>

<h2>Research focus</h2>
