
Palatinus Germanicus

As in, technically. Not having a clue, I've always assumed that it used the system clock... using the fractions of seconds. The exact moment a 'chance factor' is to be calculated, the program glances at the system clock: it's afternoon... 14:08:43.25 to be exact,

.25 is your number, and so if you needed something that only had a 20% chance of happening, well then drat -- you just missed (you're just over it). It seems like this would've been the easiest thing to do, if you're a programmer. Alternatively, maybe the game has its own internal timer running, instead (but same principle). Anyhow, I've always wondered.

Of course, it's called a "random number generator", but my question still applies... how exactly does it actually come up w/ the 'random' number? I can't think of a better way than what I described... random timing of the user input (or moment the program needs a calculation done), tied to the endless cycling of the system clock.

Or maybe it uses algorithms that are more complicated than I can possibly imagine. :confused:
 
PDX has a special department that gets deployed whenever someone downloads the game, installing a chip granting you access to their secret satellite.
Using that connection, events can get based on cosmic background noise, which gets encoded into a hash and said hash gets converted into a number between 1 and 100.

Pretty obvious if you think about it.

If you don't trust my answer you could ask devs directly by sending them a PM.
Judging from personal experience: It varies. Some things in the game are a lot less random than others.
 
I'm pretty sure it stores the state of the RNG with the save, at the very least. E.g. if you load a save and roll generals before unpausing, their stats will be the same each time you try this. There are enough AI rolls in the background that things become functionally random again after a day or two.
 

You can rename the save, changing your general rolls in the process.
Meaning that, yes, some random state is being stored, but the numbers are still random and you haven't explained where that randomness comes from yet.

Btw, I'm an A.I. skeptic. I don't believe machines/software will ever truly be 'intelligent'. A sophisticated program that can use logic to find the best solutions? Sure. -Because it's programmed to do exactly that. But invent Calculus like Newton? Or compose Passacaglia and Fugue like Bach? -It's never going to happen. Unless it's specifically programmed how to do it, "A.I." can't fight its way out of a paper bag.

Oh boy, did you miss out on A LOT of recent developments.

Shit is going to hit the fan next decade.

EDIT: Short summary of AlphaZero
Can't find the full paper right now. Maybe I'll look for it later. It's crazy what DeepMind has created.
 

No idea on the specifics, but it's likely just a software pseudo-random number generator. Check out the Wikipedia articles for RNG and PRNG for some info. It's basically just generating a string of chaotic - unpredictable, but deterministic - numbers from an initial seed state using an algorithm, probably something like the Mersenne Twister, or maybe a more modern one.

While the system clock can be incorporated as an entropy source for "MOR Randomness!", it's an extra computational expense and not likely worth it for video gaming. Besides being relatively quick, "pure" PRNGs have the side benefit of producing the exact same results for a given seed state. That comes in handy for scientific Monte Carlo simulations, where you want to be able to reproduce results exactly, and could be useful for game debugging (but maybe not, given all the other factors...).
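To illustrate the "same seed, same results" point, here's a minimal Python sketch (purely illustrative, nothing to do with Paradox's actual code). CPython's random.Random happens to be a Mersenne Twister under the hood:

```python
import random

# Two generators seeded with the same value produce identical sequences --
# this reproducibility is what makes seeded PRNGs handy for debugging.
gen_a = random.Random(42)
gen_b = random.Random(42)

rolls_a = [gen_a.randint(1, 100) for _ in range(5)]
rolls_b = [gen_b.randint(1, 100) for _ in range(5)]

print(rolls_a)             # some fixed sequence, determined entirely by the seed
print(rolls_a == rolls_b)  # True: same seed, same "random" numbers
```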

When you want to crank up your randomness game for things like cryptography, Linux uses the milli- and microseconds on the system clock to generate an "entropy pool." The pool can be "depleted," however, so there are even hardware generators for pro-league randomness; it sounds like some Intel processors even have a built-in entropy source for random numbers. Not sure if it's good for gaming, given how many random decisions EU4 has to make each day.
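For completeness, a hedged sketch of seeding from the OS entropy pool (os.urandom pulls from /dev/urandom on Linux and the equivalent facility on Windows); a game *could* grab one unpredictable seed this way and then let a fast PRNG do the per-tick rolls:

```python
import os
import random

# Take a few bytes from the OS entropy pool and turn them into an integer seed.
seed = int.from_bytes(os.urandom(8), "little")

# Hand the unpredictable seed to a cheap PRNG for the actual in-game rolls.
rng = random.Random(seed)
print(f"seed = {seed}, first roll = {rng.randint(1, 100)}")
```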

But that's not 'intelligence'. [...] You can program software to do what YOU want... but it's going to stay within the framework of that box, and never come out of it.

I agree with you in general, especially in regards to the current state of AI, but "never" is a very long time....
 
Those who don't believe in quantum physics and/or general relativity?
We're getting a bit OT here, but what does General Relativity have to do with randomness? Its incompatibility with the randomness of QM is one of the Big Problems currently....

As for QM, unpredictable and random does not imply non-deterministic. See e.g. turbulence: it's purely deterministic (the Navier-Stokes eqns are Newtonian) but completely unpredictable and currently "understood" via statistical mechanics. The implications of QM for determinism are still an open question among physicists and philosophers.

EDIT: I'm using "deterministic" in the philosophical sense here (i.e. all events determined by pre-existing causes), not in the mathematical sense (where it's literally the opposite of random). Also note that turbulence satisfies both definitions of deterministic, while QM appears to be indeterministic in the math sense.
 
The way from QM -> deterministic macro effects is simply the law of large numbers. Nothing surprising here.

@OP: Good question. There certainly is a seed stored in the save game from which all required random numbers are then generated. You can see that if you save shortly before some event and then reload: you will usually get exactly the same result over and over again, because the sequence of random numbers beginning with that seed is deterministic. Though that changes pretty fast, which might simply be due to parallel computing: if the random numbers are consumed in a different order, the same seed will quickly lead to very different results.
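A small Python sketch of that idea (illustrative only, not EU4's code): restoring a saved generator state reproduces the exact same rolls, but consuming the numbers in a different order makes the results diverge immediately:

```python
import random

rng = random.Random(2024)
saved_state = rng.getstate()          # what a save file might persist

# "Reload" twice: restoring the state gives identical rolls both times.
rng.setstate(saved_state)
first_try = [rng.randint(1, 100) for _ in range(3)]
rng.setstate(saved_state)
second_try = [rng.randint(1, 100) for _ in range(3)]
print(first_try == second_try)        # True

# But if some other event grabs a number first, the whole sequence shifts.
rng.setstate(saved_state)
_ = rng.randint(1, 100)               # another event "steals" a roll
different_order = [rng.randint(1, 100) for _ in range(3)]
print(first_try == different_order)   # False
```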
 
Do you play chess? It's the perfect arena for a computer program to showcase its 'talents'. Imagine a very broad spectrum, and on this spectrum is every talent that exists. Each talent/skill/ability has a certain 'depth' to it, as well. So in the case of 'making a bed' -- that's not a very deep talent. You can only go so far w/ that. Anyway, chess is a very narrow band on the spectrum... but it goes REAL, real deep.

So, you can program a computer for this specific (and VERY narrow) function, and yes... manage to defeat human opponents. But that's not 'intelligence'. Programming a game to be good enough (at what it does -- the ONLY thing it does) to beat a human, is not intelligence... it's just an efficient, finely developed piece of software.

And what about Watson? He's just a really optimized search engine, with a plethora of data at his fingertips. There's no 'intelligence'.

I'm telling you... movies like 'Ex Machina' are never going to materialize IRL. -That's just the human imagination at work. And besides, you're never going to be able to give software a 'will'. Machines are never going to 'want to take over the world', because they're never going to 'want' anything... ever. They have no will. -And you can't 'program' it. But this is just a side track. The main point is that you cannot 'program' intelligence. You can program software to do what YOU want... but it's going to stay within the framework of that box, and never come out of it.

Make believe is fun, though. I was a child once, I still remember.

I am sorry friend, but I don't think you have researched AI enough to describe its process. I have studied machine learning and data science academically, as well as worked on a few personal AI projects.

The current state-of-the-art AI systems work through self-learning. This learning can be through labelled examples, called 'supervised learning' (i.e. 'here you have 1000 patient files of patients with and without cancer. Knowing these, does patient 1001 have cancer?'), or without examples (called 'unsupervised learning').
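As a toy illustration of "learning from labelled examples" (the data and feature values here are invented; this is not any real medical system), a one-nearest-neighbour classifier in Python:

```python
# Labelled "patient files": (feature_1, feature_2) -> diagnosis.
labelled = [
    ((5.1, 2.0), "no cancer"),
    ((6.8, 7.5), "cancer"),
    ((5.5, 1.2), "no cancer"),
    ((7.2, 8.1), "cancer"),
]

def predict(features):
    # 1-nearest-neighbour: copy the label of the closest known example.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled, key=lambda ex: sq_dist(ex[0], features))[1]

print(predict((6.9, 7.0)))  # -> "cancer", since it sits closest to the labelled positives
```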

The last bastion of defense against AI superiority (just a way of speech =P) was the game of Go, an ancient Chinese game with roughly ten times as many possible moves each turn as chess (up to 361 board points for Go, versus about 24-36 legal moves for chess each turn). This meant that the game could not be 'cracked' before the rise of machine learning and advancements in hardware.

Last year, AlphaGo, from Google's DeepMind, managed to defeat Lee Sedol, a legendary and aggressive Go player, 4 to 1 over five games. Their AI was developed using a combination of supervised learning (i.e. with examples from human games) and reinforcement learning from self-play.

The new version of AlphaGo, AlphaZero, was trained entirely through self-play, without any human games as examples. This means it learned to play Go better than the best professionals without ever being shown what a human game looks like. Of course, the network structure, learning process and objective functions were still defined to allow it to train itself.

I would say that you misjudge the advances of AI (I could go on and on about contemporary implementations, like in Google Search, Alexa, the ads you see, or stock trading).

Then again, the question of 'intelligence' is a fully different one. Whether these advancements comprise 'intelligence' and, if not, what DOES truly comprise 'intelligence' is a topic so abstract that I am not really interested in discussing it. The fact is that if we look around us, machines and mechanisms are better than us humans at most things.
 
With this thread I just exited a four-hour Wikipedia loop that changed topics from RNG, to the many-worlds interpretation of quantum mechanics, ever deeper down the rabbit hole to an array of different topics such as a modified Schrodinger's cat thought experiment where the subject is the cat and achieves quantum immortality. This branched off into questions about the hard problem of consciousness, which of course inexorably led to philosophical zombies, I mean why wouldn't it? RNG... the horror, the horror!
 
For example, if a philosophical zombie was poked with a sharp object it would not feel any pain sensation, yet could behave exactly as if it does feel pain (it may say "ouch", recoil from the stimulus, and say that it is feeling pain).

Sounds like my Mondays.

And derailing the thread is no problem at all. None of us are able to answer the original question. Thread was DOA.
If someone wants to know they should ask devs.

As for the AI discussion, you have a preconceived notion and are unwilling to expand your horizon.
The AI very much can find its way out of a paper bag. We have crossed a line and it managed to interpret and learn on its own.

The last few years brought us programs that are capable of thinking a lot more like humans.
It's no longer about computing power and their results being better than what humans can achieve.
It's about creating results the same way humans would.
Now it's just a matter of expanding into other fields.

Or as Elon Musk would put it:
[image: Elon Musk quote]


Fun fact: Google has had an AI ethics department for more than a decade. None of its employees are allowed to speak publicly. The last speech of someone working there was in 2009.
Highly doubt they're the only one to have one of these. The tech world knows about advancements and possible risks and we're preparing for them.

EDIT: It seems the full paper on AlphaZero isn't public anymore. Too bad. Was an amazing read.
 
There was actually an easter egg at the bottom of the zombie page called the Chinese Room, another thought experiment that goes into great detail on the AI subject. Highly recommended, as it points toward this same debate.
 
Things like the Chinese Room touch philosophical points of the argument, such as "what is intelligence?" and "what does 'being human' mean?".
It is not a debate limited to programs.

Example: Inky the octopus said goodbye to his tank-mate, slipped through a gap left by maintenance workers, made his way across the floor to a six-inch-wide drain and made a break for the Pacific.
Now we pose the question: Did the octopus - at any given point of his escape - feel pain?
They can survive outside of the sea or a tank or any body of water, but not forever.
As they dry out, do they feel pain or are they following instincts?
What even is 'pain'?

Purely philosophical discussion, got nothing to do with his original point, namely saying that an AI can't escape a paper bag.
They will be able to emulate humans, there is no doubt about it.
Question is how we arrive at that point and how closely they're going to emulate us.

I tend to avoid the philosophical side of it because a layman's starting point tends to be "what if machines kill all of us without feeling a thing?".
To which I reply "there have been humans doing the same."

If anything, we need to avoid making machines exactly like humans. But if we do, and we get rid of the outliers and the erratic behavior, if we remove everything we fear about them because we are scared of machines being more efficient than us at everything, even at killing humans, what are we left with?

Because it's most certainly not emulating humans anymore.
But then, what are we emulating?
Whose vision of the perfect human are we aiming to create?
What even is a 'perfect human'?

For now we've managed to recreate the way humans approach an issue, which is already going far beyond the original Chinese Room thought experiment.
Everything else isn't up to us anymore.
Hail Corporate, and best of luck to us that they're going to make the right decision.

Next decade is going to be a blast.
 
I was using it as a joke, but this... this has been revealing.

And he's beginning to go off the deep end, launching exotic sports cars into space.

They needed to launch a weight either way. Had the choice between a block of cement or something funny to attract viewers.
I don't think I need to know more if you criticize him for "launching exotic cars into space".
 
The machines are not 'smart', they're just really fast (potentially dozens of trades, intra-SECOND)... and they're armed w/ very advanced, complex algorithms -- that have no idea what they're doing, other than to react in real time (according to the formulae), at the speed of light. The LAST thing they are, is a sentient being w/ intelligence. It's JUST math... nothing more.

Just a question... How did you arrive at the conclusion that there is no sentience there?

Note I am not disagreeing with you, per se. It's more that in my years on this planet, I've mostly learned how little we know and (even though I am well-educated and do my best every day to push myself) how limited we are by our own perspective on the world.

I wonder where your confidence comes from...
 
I, uh, what does any of the last like 20 posts have to do with the original question?

Easy answer is that Paradox almost certainly uses the system random number generator. This uses a "truly" random seed as the original state plus some algorithm for determining the next state. I do not think that Paradox persists their random seed, but it more-or-less doesn't matter.

The source of the variance of that seed is system dependent and not eu4's concern. Some things I've heard of being used: Current mouse position, last X characters typed, last OS boot duration, current CPU temperature (to high precision), etc. Some cooler websites (where randomness is extremely important) use sensors reading atmospheric noise.