
Layers/ Difficulty of Alignment/ Governance page #90

Open
Pato-desu opened this issue Feb 4, 2024 · 2 comments
Comments

@Pato-desu
Collaborator

Some people think we can let companies solve alignment on their own, but using the layers that Joep proposed on the Discord (or maybe in a less discretized way), we can explain why we think alignment is actually really hard, or even impossible, for a single company or government to solve.

The FAQ should probably link to this, and maybe another page as well.

I would definitely like this to link to my proposed "Paths to superintelligence" page.

@Pato-desu Pato-desu changed the title Layers/ Difficulty of Alignment Layers/ Difficulty of Alignment page Feb 4, 2024
@Pato-desu
Collaborator Author

Pato-desu commented Jun 14, 2024

Whenever we write this, I will try to stay away from the term "Alignment," and I will add, for clarification, how bafflingly bad the term is and the million meanings it has, even though almost no one trying to communicate AI Safety does better.

We could also add why we are pessimistic about other efforts and regulations from AI Safety governance and why they don't seem sufficient. At the end of the day, that's the reason a lot of us are here, and it could be a good page for bringing in people from EA; we could even cross-post it on their forum.

@Pato-desu Pato-desu changed the title Layers/ Difficulty of Alignment page Layers/ Difficulty of Alignment/ Governance page Aug 10, 2024
@Pato-desu
Collaborator Author

Pato-desu commented Aug 10, 2024

Whenever I or someone else writes this, I do think it is quite important to talk about how not only Alignment but Governance too seems impossible.

I would suspect that if anyone at the AGI companies is taking the problem seriously, they probably want to do some kind of Coherent Extrapolated Volition or Pivotal Act. I just don't see it. I don't see how they could get the entire leadership of a company to agree on such a crazy thing without any leaks that this is the case. The media and/or governments would react very strongly against something like that; people hold all kinds of beliefs about right and wrong, and quite strongly. Most of them are not moral anti-realists who think that preferences and likes are what is good (which is what CEV would amount to), and most believe that carrying out pivotal acts is wrong.

So then it would become obvious that they cannot actually do something like that, and that we don't know how to govern an AI superintelligence.

Also, as I said before, other, less ambitious attempts to govern it are probably too weak. So, in conclusion: we cannot govern higher alien intelligences. We must pause and d/acc.
