NOTE: April 2023

As of April 2023, there is a lot of new interest in the field of AI Alignment. However, this repo has been unmaintained since I gave up hope of solving alignment in time as a species, almost three years ago.

AI Safety Support is perhaps one of the definitive resources right now.

I will, however, accept PRs on this repo.

Awesome Artificial Intelligence Alignment

https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg

Welcome to Awesome AI Alignment, a curated list of awesome resources for getting into, and staying in touch with, research in AI Alignment.

AI Alignment is also known as AI Safety, Beneficial AI, Human-aligned AI, Friendly AI, etc.

If you are a newcomer to this field, start with the Crash Course below.

Pull requests are welcome.

Table of Contents

  • A Crash Course for a Popular Audience
    • Watch These Two TED Talks
    • Read These Blogposts by Tim Urban
    • Read More about Real Research on AI Safety
  • Books
  • Courses
  • Research Agendas
  • Literature Reviews
  • Technical Papers
    • Agent Foundations
    • Machine Learning
  • Frameworks/ Environments
  • Talks
    • Popular
    • Technical
  • Blogposts
  • Communities/ Forums
  • Institutes/ Research Groups
    • Technical Research
    • Policy and Strategy Research
  • Podcasts
    • Episodes in Popular Podcasts
    • Dedicated Podcasts
      • AI Alignment Podcast by Lucas Perry [Future of Life Institute]
      • 80000hours Podcast by Rob Wiblin
  • Events
  • Newsletters
  • Other Lists Like This