Layers/ Difficulty of Alignment/ Governance page #90
Whenever we write this, I will try to stay away from the term "Alignment", and I will add a clarification about how bafflingly bad the term is and the million meanings it carries, something almost no one trying to communicate AI Safety better does. We could also add why we are pessimistic about other efforts and regulations from AI Safety Governance, and why they don't seem to be enough. At the end of the day, that's the reason a lot of us are here, and it could be a good page to bring in people from EA. We could even crosspost it on their forum.
Whenever I or someone else writes this, I think it's quite important to talk about how not only Alignment but Governance too seems impossible. I suspect that if someone at the AGI companies is taking the problem seriously, they probably want to attempt some kind of Coherent Extrapolated Volition or a pivotal act. I just don't see it: I don't see how the entire leadership of a company could agree on such a crazy plan, and without any leaks revealing that this is the case. The media and/or governments would react very strongly against something like that. People hold all kinds of beliefs about right and wrong, and hold them quite strongly; most of them are not moral anti-realists who think that preferences and likes are what is good (which is what CEV would amount to), and they consider pivotal acts wrong. It would then become obvious that companies cannot actually do something like that, and that we don't know how to govern an AI superintelligence. Also, as I said before, less ambitious attempts to govern it are probably too weak. In conclusion: we cannot govern higher alien intelligences. We must pause and pursue d/acc.
Some people think we can leave it to companies to solve alignment, but we can explain, using the layers Joep proposed on the Discord (or maybe in a less discretized way), why we think Alignment is actually really hard, or impossible, for a single company or government to solve.
The FAQ should probably link to this, and maybe another page should too.
I would definitely like this to link to my proposed "Paths to superintelligence" page.