At least in Stellaris, they are not that different. It seems they are just a Software Update (or trivial Hardware Upgrade) apart.
But mostly it is about avoiding that can of worms they had to deal with since 1.0, which came from having three de-facto Robot Species.
Because of that funny little thing called a "Convergent Goal", which is most often an intermediate goal:
Even with a dumb AI agent, this danger is well known. It would be way more severe with an AGI.
Hmm, interesting.
I'd had this thought and discussed it before (though I'd never heard the phrase "convergent instrumental goals" to describe it). But always in one of two contexts:
1. Dumb agents (turn whole planet into paperclips).
2. Intelligent agents whose programmed goals (fighting wars for example) would be significantly impeded by putting a high value on organic life.
The reason for this, I think, is that I've always worked from the assumption that any smart AI would be programmed to value life, specifically the lives of its creators, to some extent. (Which is the last thing this video mentioned.) So the danger only seemed to apply to AI too stupid to have more than one goal, or AI programmed out of necessity not to care as much about life. Otherwise, any instrumental goal that competed with the terminal goal of preserving life would be low in priority.
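To make the "low in priority" part concrete, here's a toy sketch of what I mean (purely my own illustration, nothing from the video; the action names, effects, and weights are all invented): if the terminal goal is weighted heavily enough, no instrumental gain can outbid it.

```python
# Toy sketch: a trivial agent scoring candidate actions against one
# terminal goal ("preserve life") and one instrumental goal ("gain resources").
# All names and numbers are made up for illustration.

TERMINAL_WEIGHT = 10.0      # terminal goal dominates the score
INSTRUMENTAL_WEIGHT = 1.0   # instrumental goal matters far less

actions = {
    # action name: (effect on lives preserved, resources gained)
    "build_factory_on_farmland": (-0.5, 0.9),
    "build_factory_in_desert":   (0.0, 0.7),
}

def score(effects):
    lives, resources = effects
    return TERMINAL_WEIGHT * lives + INSTRUMENTAL_WEIGHT * resources

best = max(actions, key=lambda name: score(actions[name]))
print(best)  # -> build_factory_in_desert: it won't trade lives for resources
```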
Meanwhile, humans are a mix of these two. Our sense of self-preservation was programmed in by evolution because it improved our chances to procreate, but it is often driven to irrational levels since it's a blunt chemical tool. Similarly, our sense of empathy was programmed in because we're social animals and it helped us and our cousins in the tribe procreate. We are an example of extremely poorly designed but very heavily tested AI. If/when we create AI, it will be better designed but less well-tested (vs. nature's "random code change or copy/paste, then test it to death" strategy).
And while a human's self-preservation was programmed in because it was instrumental, it was programmed in as a terminal goal. If, in an AI, we instead leave it as only an instrumental goal, while making satisfaction of empathy a high-priority terminal goal... it could turn out better. Or disastrously! Billions of years of testing through evolution is hard to beat with 100 years of design.