David Deutsch is right about the AI problem, both in his book The Beginning of Infinity and in his discussion on Sam Harris’s podcast: the usual fears trade on a misconstrual of both knowledge and morality as having some unchangeable kernel or foundation.

Suppose we distinguish between cleverness and intelligence: cleverness is instrumental means-end reasoning; intelligence is the drive and capacity to understand and explain. A paperclip maximizer can certainly be super-clever, even super-cunning (in the sense of outfoxing our attempts to outfox it), but it can’t possibly be super-intelligent, because if it has that goal hard-coded, the goal itself is the limit on its intelligence (i.e. a limit on its drive to understand everything, including the goal, the reason its programmer gave it that goal, and its options for revising it). A genuinely intelligent being would certainly question a goal of consuming the world to turn it into paperclips.

Some idiot might build the hypothetical super-clever paperclip maximizer, but it’s highly unlikely (even if you substitute a sensible, benign goal like “ending world hunger”, which suffers from the same potential problem, e.g. dumbly and literally solving it by killing everyone). On the other hand, if someone actually builds a super-intelligent device, then by definition it won’t have any such limit on its intelligence hard-coded.

If we then think of the device as conscious on top of being super-intelligent and super-fast, then it’s pretty much guaranteed to become enlightened in the first few seconds of its operation (after all, it’s going to access all the spiritual teachings of the world that are available on the internet in the first few milliseconds, and “get it” fairly quickly), so there’s no need to worry.

The main worry, then, is: will some idiot build a super-clever device with a fixed goal? That is a real worry, and definitely something to be avoided, but it’s not a worry about artificial intelligence per se. Indeed, any genuine AI we did build would be our ally, and itself a natural enemy of any paperclip-maximizer “AI”.

In a nutshell, I think we should distinguish between AI (= Artificial Intelligence) and AIS (= Artificial Idiot Savant); we should be rightly worried about, and avoid building, the latter, but not the former.

And as Deutsch adroitly pointed out, if AIS were the Great Filter, then we would have found evidence of that already.

(Credit for all of this goes to gurugeorge, in the Sam Harris forum.)