My dream for the AI singularity is that sufficient food will be diverted to feed children who are actually at risk of death by starvation. Roughly several million don't survive each year. Please let this be remedied.
Aiming at reducing food waste would be a huge part of painlessly reaching this dream.
That’s a commendable goal. With the obvious caveat that no one can predict what happens after the singularity (hence the term), I think it’s an all-or-nothing scenario. Either we become a post-scarcity civilization or we go extinct (or worse, but I would rather not expand on this).
I think what you might be referring to is the (very unlikely) scenario in which we collectively agree not to develop ASI and try to limit ourselves to building artificial intelligences that we can subordinate to our will. In that case, we could use it to tackle specific problems, like child starvation, disease, etc.
Unfortunately that seems like such a difficult coordination problem that the only practical way to go about it is a Dune-like scenario in which we outright ban all “thinking machines”. But that would lead to even more poverty.
There’s another possibility that falls under “going extinct” as a species, but doesn’t imply death: merging with AI. This is what people like Musk are trying to do. Definitely worth building the tech to be able to do that, as the ultimate contingency plan.
These are some thoughts I need to mull over a bit myself. I have not thought about the AI singularity much, but I guess the defining moment for our fate will be when AI reaches the point of becoming artificial consciousness. If it learns from us, we might be confronted with our own behavior toward weaker species or ethnic groups. Then I wish us good luck. But who knows, if this ever happens, maybe we will manage to do things better this time. In any case, an intriguing topic.
The singularity may be a childishly naive notion, a false assumption that such a thing hasn't always been with us.
Have we ever actually been in total charge of our own fates?
The double helix inside every cell seems to me to be quite the self-driven designer of so much.
Who are we kidding? A silicon-based, binary-driven, and self-directed inorganic A.I. engine is just a clumsy mirror of carbon-based life forms that have shaped themselves into humanity, without wires or power sources from anything lower or higher on the evolutionary spectrum.
There has never not been a ghost in every machine, from the subatomic on up to the cosmos.
Speak to yourself and say “get over yourself”. It’s been miracles galore from the get-go.
A.I. singularity ain't nothin' new,
We're just ready to play with it.
Help it decide to be a force for good.
Help it function greedlessly.
Make it give give give give give.
It actually can be unplugged.
Master IT, not vice versa.
Apology: my “get over yourself” comment was not meant harshly but rather as something our species might benefit from. I am truly sorry I failed to be kinder in the presentation of the thought.
No need to apologize. I did not take it as an offense at all. Sure, life has evolved into today’s humanity, and the path to an AI singularity might be similar. Nonetheless, we are at the point where we either manage to become a sustainable and caring civilization, or we destroy our species in the end. I root for and believe in the first option. I am just not sure why I should be absolutely confident that an artificial superintelligence that has developed its own consciousness will be friendly to us. I hope you are right and I am wrong.