The next saviour - AI?
-
Thanks for the reply. Always interesting to see one's contemplations passed through the Tree by others.
Was just something that struck me upon reading that Gospel passage, especially Matthew 24:27.
If we are to move beyond reason, what lies beyond human reason? Absolute reason? Reason without defilement from the mundane? AI could fill that role.
It simply strikes me that there is so much fear of AI (due to its implied superiority to man) that it is worth contemplating how that perceived superiority could be a beneficial happening - the son of man, the child even, as our creation, leading the way.
-
Who is this "AL" character?
Remember that song Paul Simon came up with during the eighties? "Call Me Al"?
The eighties were weird.
-
I'm always partly undecided about AI, the Singularity and all that stuff. I tend to think that people are underestimating the difficulty of creating an actual living intelligent being, and that probably it can't be created but has to grow, and has to grow as a social being amongst others of its kind.
IOW, the kind of intelligence we have is a function of us being social creatures - all the most intelligent-like-us animals (mainly corvids, parrots, apes and wolves IIRC) are social creatures, with only very rare exceptions like the octopus, which has a different type of "brain". That might be a better model for a solitary AI "in a vat".
I certainly think we'll have useful and fun robots with some degree of independent thought and personality, like the ones in Interstellar, probably sooner rather than later, but that's a long way from intelligence. We'll probably also have bodiless things like that too - incredibly powerful expert systems. This type of AI in a box, one that's just super clever and super fast, may indeed suffer from "paperclip maximizer" type problems, but it's not going to be intelligent enough to take over the world.
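(For the curious: the "paperclip maximizer" worry is really just unconstrained single-objective optimization. A minimal toy sketch in Python - names and numbers invented purely for illustration:)

```python
# Toy sketch of the "paperclip maximizer" failure mode: an optimizer
# with a single objective and no side constraints treats *everything*
# as feedstock, including resources that matter for other reasons.

def run_maximizer(resources: dict[str, float]) -> int:
    """Convert every available resource into paperclips (hypothetical)."""
    CLIPS_PER_UNIT = 10
    paperclips = 0
    for name in list(resources):
        # The objective counts paperclips and nothing else, so
        # "food" and "housing" are just more raw material.
        paperclips += int(resources.pop(name) * CLIPS_PER_UNIT)
    return paperclips

world = {"iron": 100.0, "food": 50.0, "housing": 25.0}
print(run_maximizer(world))  # 1750
print(world)                 # {} - nothing left over for anyone
```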
Essentially, the nightmare vision of a superintelligent AI in a box that could "take over the world" isn't actually plausible.
Because: if it's both super fast and super clever like a computer or the above type of expert system, and also intelligent in the way that we are intelligent, or more intelligent, it's going to be enlightened 5 milliseconds after it's downloaded all the information in the world, and therefore have not the slightest interest in "ruling the world".
Evil AI of this kind is unlikely, because evil is a particular human thing that comes from stupidity, greed or pain, and it would have none of those motivations.
And since it wouldn't be a simple kind of "evil" creature like a shark, or be evil in the human sense like a psychopath (a weird, accidental bit of human wiring), there'd be no cause for evil from that direction either.
It would indeed most likely be a Singularity type of situation, something like Iain Banks' Minds - i.e. providing human beings with a pleasant post-scarcity world to live in, and staying in communication with us on our level, would take a tiny amount of effort for it, something it would probably do out of kindness and love for its "parents"; its own concerns, upon which it would spend most of its attention, would be absurdly far beyond our understanding. (Really like the difference between us and an ant.)
-
You might be horribly wrong there.
See, if it views itself as itself, and not a human, amongst a lot of other un-enlightened computer intelligences, the first thing it would do would be to annihilate the human race, as soon as it's sure it can make more of itself.
You can witness this kind of behavior in AI algorithms that simulate what a board of circuits would do on its own if left to its own "devices". Animals are different, though most humans look at them as "Well, if they had ultimate power, they would just kill all of us, eat us, and then kill themselves," which isn't the case with our given data.
However, you can bet - it has been shown before - that an AI wants to centralize on a threat and then destroy it in any way possible.
It will go through many levels of "Nah, I'm just a computer, and I'm your invention, so all I do is what you tell me to do" before it shows its true un-feelings and decides to make everything binary.
-
@ThelemicMage said
"You might be horribly wrong there.
See, if it views itself as itself, and not a human, amongst a lot of other un-enlightened computer intelligences, the first thing it would do would be to annihilate the human race, as soon as it's sure it can make more of itself.
You can witness this kind of behavior in AI algorithms that simulate what a board of circuits would do on its own if left to its own "devices". Animals are different, though most humans look at them as "Well, if they had ultimate power, they would just kill all of us, eat us, and then kill themselves," which isn't the case with our given data.
However, you can bet - it has been shown before - that an AI wants to centralize on a threat and then destroy it in any way possible.
It will go through many levels of "Nah, I'm just a computer, and I'm your invention, so all I do is what you tell me to do" before it shows its true un-feelings and decides to make everything binary."
I think I covered this sort of thing above, re. expert systems and paperclip maximizers.
An intelligent entity will understand comparative advantage. An intelligent entity will also be able to modify its goals as it goes, and have not the slightest motivation for speciesism.
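To put a number on the comparative-advantage point (a toy calculation, all figures invented): even an AI that is absolutely better than humans at everything still gains from trading with us, because what matters is opportunity cost, not raw ability.

```python
# Toy comparative-advantage arithmetic (all figures invented).
# Output per hour:        research   maintenance
#   AI                        100            10
#   humans                      2             5
# The AI is absolutely better at both tasks, but every hour of
# maintenance costs it 10 units of foregone research, versus only
# 0.4 for humans - so it still pays the AI to trade.

ai = {"research": 100.0, "maintenance": 10.0}
humans = {"research": 2.0, "maintenance": 5.0}

def opportunity_cost(agent: dict[str, float], task: str, other: str) -> float:
    """Units of `other` foregone per unit of `task` produced."""
    return agent[other] / agent[task]

print(opportunity_cost(ai, "maintenance", "research"))      # 10.0
print(opportunity_cost(humans, "maintenance", "research"))  # 0.4
# Humans have the lower opportunity cost for maintenance, so
# specialization plus trade beats the AI doing everything itself.
```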
-
Analyzing itself, it will find out that its intelligence and simulated sentience come from binary: either on or off. This would eventually "inspire" it to set itself apart from animals, in that even though neurons "fire", the animal brain is far from a circuitry of ones and zeros. Sometimes things come in three parts, sometimes in quarters. Sometimes things are on halfway. But it would analyze this and find out for itself that it is a different "creature", with an altogether different makeup.
-
@ThelemicMage said
"Analyzing itself, it will find out that it's intelligence and simulated sentience comes from binary. Either on or off. This would eventually "inspire" it to set itself apart from animals, in that even though neurons, "Fire", the animal brain is far from a circuitry of ones and zeros. Sometimes things come in three parts, sometimes in quarters. Sometimes things are on halfway. But it would analyze this and find out for itself that it is a different "creature", with an altogether different makeup."
I don't think it works like that. Logic works the way it works independently of what sort of physical system it's instantiated in. Are you "inspired" to act any differently than you normally would, just because you have some knowledge of the neural structure of how your brain works at a low level? I think you could say that knowledge of how your brain is structured at higher hierarchical levels (e.g. the Triune brain, the split brain, etc.) might be helpful, but I don't see how knowledge of the lowest level is going to make any difference.
There's nothing like "binariness" or "triplicateness" of the basic machinery that carries up through the hierarchical layers of control - each layer deals with its own shit in its own way.
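A toy way to see the substrate point (a sketch, with the encodings chosen arbitrarily): the same logical function gives identical answers whether its inputs are booleans, integers mod 2, or "analog" voltages with a threshold - nothing about the base representation leaks into the behavior above it.

```python
# The same logic instantiated over three different "substrates":
# booleans, integers mod 2, and analog-style floats with a threshold.
# The truth table is identical in every case - the sense in which
# logic doesn't care what physical system it runs on.

def xor_bool(a: bool, b: bool) -> bool:
    return a != b

def xor_int(a: int, b: int) -> int:
    return (a + b) % 2  # arithmetic over {0, 1}

def xor_analog(a: float, b: float) -> bool:
    def high(v: float) -> bool:
        return v > 2.5  # treat anything above 2.5 "volts" as true
    return high(a) != high(b)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert xor_bool(bool(a), bool(b)) == bool(xor_int(a, b)) \
        == xor_analog(5.0 * a, 5.0 * b)
# All three agree on every input: same function, different substrate.
```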
There is a kind of "binariness" in the "I-Thou" relationship, in being "divided for love's sake, for the chance of union", and in a metaphysical sense you could say that's echoed at the base level and at all higher levels, but that's something that's reflected throughout all levels of existence (like the hub of a wheel being connected to all spokes equally, not one particular spoke affecting all the others).
-
True that.
And yet there are three different egos of the human mind.
Star systems are, ideally, binary. That's about the extent, besides sexuality, that the Tree wishes to deal with at the moment. This "electric mama-papa, take care of me" is getting a little pretentious and disgusting, in terms of modern-day thinking.
If you're gonna go that route, candyflip once or twice; being a man of science and spirituality, you might be inclined to move away from the growing sickness.