Approach
We aim to gain better insight into the dynamics of competition for the
development of transformative AI, and into potential opportunities for
effective altruists (EAs) to increase safety among competitors.
Furthermore, we want to take steps toward turning the computational
modeling of AI competition into an academic field that has safety at its
core. Lastly, we aim to create technical and conceptual foundations that
enable future researchers to build computational models with different
assumptions about which features matter. By making our work easy to
build upon, we can make future work in this area faster and more
comparable.
We believe that computational modeling can reduce the risks of bad
outcomes of AI competition by:
- Assisting in systematically discovering possible scenarios, even ones not found through qualitative reasoning
- Allowing researchers to interactively and visually explore different scenarios and their counterfactuals
- Predicting possible outcomes of different scenarios
- Suggesting actions EAs can take to improve safety among competitors
- Generating data for possible scenarios that haven’t happened yet
- Helping to prevent dangerous competition for transformative AI from happening in the first place
- Preparing EAs for how to behave if a dangerous competition for transformative AI happens
- Positioning the AI governance community to be a first mover and important player in a future AI competition modeling field, and thereby being able to influence a competition for transformative AI if one happens
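To make the idea of such a model concrete, here is a minimal sketch of the kind of simulation we have in mind. It is a hypothetical toy, not a model from the literature: the race dynamics, the safety/capability trade-off, and all parameter names (`safety_cost`, `capability_goal`) are our own illustrative assumptions.

```python
import random

def simulate_race(n_competitors=3, safety_cost=0.5, capability_goal=10.0, seed=None):
    """Toy model of an AI development race (illustrative assumptions only).

    Each round, every competitor makes random capability progress, slowed
    in proportion to how much effort it devotes to safety. Less safety
    means faster progress but a higher chance of a dangerous outcome
    when the capability goal is reached.
    """
    rng = random.Random(seed)
    # Each competitor commits to a fixed safety share in [0, 1] (an assumption).
    safety_shares = [rng.random() for _ in range(n_competitors)]
    capabilities = [0.0] * n_competitors
    while True:
        for i, s in enumerate(safety_shares):
            capabilities[i] += (1 - safety_cost * s) * rng.random()
            if capabilities[i] >= capability_goal:
                # The winner's safety share determines the chance of a safe outcome.
                safe = rng.random() < s
                return {"winner": i, "safety_share": s, "safe": safe}

def disaster_rate(n_runs=2000, **kwargs):
    """Fraction of simulated races ending unsafely under the given assumptions."""
    unsafe = sum(1 - simulate_race(seed=k, **kwargs)["safe"] for k in range(n_runs))
    return unsafe / n_runs
```

Even a toy like this supports the uses listed above: sweeping `safety_cost` or `n_competitors` generates data for counterfactual scenarios, and comparing `disaster_rate` across parameter settings suggests which interventions most reduce the risk of an unsafe outcome.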