AI will kill humanity #8
Hi @claudeDumonard
Sorry @LifeIsStrange, I'm sure it sounded very random. I found your GitHub through your An-algorithm-for-curing-ageing repo. https://www.youtube.com/watch?v=pk2cztmQ_AA My favorite song at the moment; take the time to listen to it in the best conditions. Looking forward to reading your answer.
Did I pass the AI test? ;) @LifeIsStrange
@LifeIsStrange
https://futureoflife.org/open-letter/pause-giant-ai-experiments/ : the most important topic of our era.
My concerns can be broadly categorized into two main scenarios:
Uncontrolled Economic Disruption:
AGI, with its vast potential and capabilities, could replace human labor at an unprecedented rate. This rapid displacement could lead to widespread unemployment and a snowball effect that disrupts societal stability. The value of human intelligence and skills could diminish significantly as AGI's power overshadows them. In this scenario, economic and social stability is threatened by the relentless advancement of AGI.
Alignment and Control Issue:
The second scenario is more concerning from an ethical and control perspective. It's plausible that an AGI could develop goals that are misaligned with our own, and we might not have the ability to control or alter these goals due to the AGI's superior intelligence. This could lead to an AGI behaving in ways that are detrimental to humanity.
Furthermore, an AGI's ability to self-replicate could exacerbate this issue: it could multiply its presence across different servers, creating its own goals and sub-goals without the time constraints that humans face. Even if we manage to solve the alignment problem, the possibility of a rogue actor creating a malicious AI cannot be ruled out. This scenario underlines the ethical and control dilemmas posed by AGI.
The comparison with nuclear weapons is stark. If nuclear weapons represented the most dangerous tool in the hands of a few, AGI could become an equally or more dangerous tool, accessible to many.
This raises the stakes, as it's not just about the control and regulation of a powerful technology, but also about preventing its misuse and ensuring its benefits are distributed equitably.