Redefining AI for Existential Safety

Duration: 01:46
To manage the existential risks posed by the pursuit of superhuman AI, AI should be redefined to prioritize human objectives, judgment, and oversight, departing from the standard optimization model.