All I want personally is... for machine empires NOT to be gestalts, but actually just a single AI controlling everything by itself (which isn't a gestalt).
> All I want personally is... for machine empires NOT to be gestalts, but actually just a single AI controlling everything by itself (which isn't a gestalt).

And how would that be different, mechanically, from how machine intelligences are currently implemented?
> In Stellaris, independently sapient machines are an advanced technology far beyond the reach of starting nations. Asking to change that is like asking to start with the ability to build battleships - even if it didn't provide any mechanical advantage, it wouldn't fit with the lore of the game.

Terraforming worlds into Gaia worlds is also beyond the reach of starting nations, but you can start on one anyway.
> Can you please make it possible to make non-gestalt machines? I can't be the only one who wants that.

From what I can tell, you want:
> And how would that be different, mechanically, from how machine intelligences are currently implemented?

From what I can tell, they want to be able to have democratic sentient bots at year 2200 without any of the synth bonuses.
> From what I can tell, they want to be able to have democratic sentient bots at year 2200 without any of the synth bonuses.

Or imperial sentient bots! With sentient bot heirs for when your sentient bot leader suffers an "unfortunate accident".
> Or imperial sentient bots! With sentient bot heirs for when your sentient bot leader suffers an "unfortunate accident".

Pretty much, and while I don't dislike the idea, it would require some heavy balancing to make the empire in question not brokenly strong. Although, then again, I don't remember the last time balance was a significant concern in Stellaris.
I think the gist of it is, "can I have a normal empire, with machine pops instead of bio pops?"
> Habitats are also beyond the reach of starting nations, but you can start on some anyway.

That's a good point - Void Dwellers start out with an advanced tech that they are able to replicate. It is basically the same situation as a hypothetical day-one synth origin, so why do I object to that but not to Void Dwellers? I think it's because I consider habitats to be weak and synths to be strong. So if a day-one synth origin was weak enough, perhaps I'd be OK with it? Something for me to ponder, I suppose.
> All I want personally is... for machine empires NOT to be gestalts, but actually just a single AI controlling everything by itself (which isn't a gestalt).

But... that is a gestalt. Hive Minds, for instance, are literally described as "these drones are like the fingers of a body" and even autonomous drones are more akin to "that knee-jerk reaction your spine can send out because the brain's too far away" than separate people.
> As an Origin, maybe you're playing the people brought back to life by the In Limbo event chain -- finishing that chain unlocks this Origin for future games.

I like this, but Stellaris has no precedent for that kind of unlockable content, from what I understand... If that's ultimately how they implemented this sort of origin, it'd be the first piece of content in the game's history to do so.
> That's a good point - Void Dwellers start out with an advanced tech that they are able to replicate. It is basically the same situation as a hypothetical day-one synth origin, so why do I object to that but not to Void Dwellers? I think it's because I consider habitats to be weak and synths to be strong. So if a day-one synth origin was weak enough, perhaps I'd be OK with it? Something for me to ponder, I suppose.

Synths *are* strong, but they're mostly strong because of all the buffs they get through techs and APs.
> But... that is a gestalt. Hive Minds, for instance, are literally described as "these drones are like the fingers of a body" and even autonomous drones are more akin to "that knee-jerk reaction your spine can send out because the brain's too far away" than separate people.

Their name, "Gestalt", also heavily hints at that. I don't think there's an exact English translation for the concept, but roughly speaking a Gestalt is a shape or form, not quite identifiable as a human, but unambiguously an entity of sorts.
A non-networked AI empire (i.e. Transformers, Cylons, etc.) isn't the same as asking for an advantage. It's like wanting a fish empire, for example.
> So basically a species that has invented space travel without inventing the Internet?

They invented the Internet... and decided they didn't want it inside their bodies.
> They invented the Internet... and decided they didn't want it inside their bodies.

Wait, what? How do you even distinguish between artificial machine consciousness (which on a certain level is just algorithms for data transfer, analysis, and processing) and any kind of data transfer technology?
> And how would that be different, mechanically, from how machine intelligences are currently implemented?

For one, there would be no leaders like scientists, generals, or admirals; there would be only one leader.
> But... that is a gestalt. Hive Minds, for instance, are literally described as "these drones are like the fingers of a body" and even autonomous drones are more akin to "that knee-jerk reaction your spine can send out because the brain's too far away" than separate people.

A singular AI controlling empty machines is not "machines forming a network which then creates the controlling AI".
> The part left over when you remove the techs and the APs looks pretty reasonable to me.

Well, when you first make them, robots can't take ruler or specialist jobs. That might be a bit of a handicap in the early game.
> Problem is, many events seemingly contradict this notion and paint a very muddy picture of drones as individuals, not parts of the whole that act autonomously. That's likely just a result of events largely not taking Gestalt into account properly.

Drones are supposed to be nothing more than cells in the body that is the gestalt, but the reality of producing millions or billions of drones and deploying them on an interplanetary scale is that some of them go wrong. No manufacturing process is perfect, of course. Deviant drones are a kind of mutation or cancer in the gestalt's "body", parts that are acting in an unexpected and potentially harmful manner.
> Wait, what? How do you even distinguish between artificial machine consciousness (which on a certain level is just algorithms for data transfer, analysis, and processing) and any kind of data transfer technology?

Programming, of course. If your machines are designed for a highly interlinked, always-online existence, they might consider whatever arrives from the network to be their source of truth. But if they're designed for use out in the real world where communications can be spotty and transmission lag is a significant factor, they may need to subordinate the information they get from the network to what they get from their internal sensors.
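That design difference can be caricatured as a tiny decision rule. This is purely a toy sketch of the idea, nothing from Stellaris or any real system; all names and thresholds are invented:

```python
# Toy illustration: which data source does a machine treat as authoritative?
# An "always-online" design trusts the network while the link is fresh and
# healthy; a "field" design falls back on its own internal sensors otherwise.
# All names and thresholds here are invented for this example.

def choose_source_of_truth(network_age_ms: float, link_quality: float) -> str:
    """Return the data source the machine should treat as authoritative.

    network_age_ms: time since the last network update arrived.
    link_quality:   0.0 (dead link) .. 1.0 (perfect link).
    """
    MAX_STALENESS_MS = 250.0   # beyond this, network data is too old to trust
    MIN_QUALITY = 0.8          # below this, too many dropped/corrupt packets

    if network_age_ms <= MAX_STALENESS_MS and link_quality >= MIN_QUALITY:
        return "network"       # interlinked design: the network wins
    return "internal_sensors"  # spotty comms: local sensors win


print(choose_source_of_truth(50.0, 0.95))    # fresh, clean link -> network
print(choose_source_of_truth(2000.0, 0.95))  # stale data -> internal_sensors
```

The point is just that "networked" versus "non-networked" can be a matter of which input the programming privileges, not whether a data link exists at all.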