Worst DLC and Patch Ever. Change my mind

If you are stating that someone is wrong, wouldn't it be fair to provide evidence for it?

The CPU is the obvious bottleneck as population increases, where IMHO the necessity to calculate routes/needs for every single cim is a pretty hefty task for the CPU. Maybe "agent-based simulation" is not the proper terminology or the main reason for hammering the CPU. You seem to know better; would you mind sharing your knowledge?
This game is memory bound, not CPU. During my tests it ran with a CPI > 20.
 
This game is memory bound, not CPU. During my tests it ran with a CPI > 20.
How big a city did you have during your test?

When I was playing I never had a huge city, as I was more or less doing road layout for the whole map and waiting for region packs. During my time my GPU was maxed; CPU usage increased once I unpaused the simulation and started plopping buildings in the old town. I never had an issue with RAM.

Here is a link to Biffa's video where he tests a 1-million city:
CPU is at 97%, GPU at 78% and memory at 30%.

I have to say this is the first time I've heard that CS2 has RAM issues, but I am not saying the link above is definitive proof and/or that you are right or wrong.

In case you mean VRAM, even that would be a first for me. IMHO at this moment there aren't enough assets to max out VRAM at all; at least I did not have a problem there, but again I only had a 50k city at most.
 
4 percent, damn, I never expected it to fall this low, that's actually impressive
[attached screenshot of the Steam rating]

Beach Properties is now the worst-rated thing on Steam...
...that's an achievement... I guess
 
Paradox having published 3 of the 4 worst-rated items on Steam is certainly an achievement.
To be fair, Leviathan is not nearly as bad as that rating suggests from a quality/content perspective. There was a bug that slipped through QA and made the game unplayable, but it wasn't a similar situation where they just didn't put in the effort or design; it was an actual mistake.
 
How big a city did you have during your test? (full post quoted above)
Don't get me wrong, but a layman (including Linus) cannot tell the difference between a CPU-bound program and a memory-bound program. The latter also looks like a CPU bottleneck, so you have to look at other metrics. I couldn't profile the memory bandwidth, but from the CPI alone we can say that it is clearly memory bound.

Note that being memory-bound has nothing to do with how much of the memory is used.
 
Don't get me wrong, but a layman (including Linus) cannot tell the difference between a CPU-bound program and a memory-bound program. The latter also looks like a CPU bottleneck, so you have to look at other metrics. I couldn't profile the memory bandwidth, but from the CPI alone we can say that it is clearly memory bound.

Note that being memory-bound has nothing to do with how much of the memory is used.
OK, so at this moment I understand you a little bit more, but I'm also losing you. Memory-bound issues are above my pay grade. I have zero idea what can cause them in a game like this, especially when you see maxed CPU usage. Do you have a theory or an example of what is happening in the game?

Plus do you believe it is fixable or not?

EDIT:
So, I found a video that helped me understand memory-bound problems a little bit:

And now I have at least an idea of why the CS2 performance issue could be I/O bound, but it leads me back to the agent system of this game, where each cim is at least one object the system (hardware) has to take care of individually. In other words, the more cims you have, the more I/O to memory will happen, and you are bound by how fast you can read/write to memory.

What I do not know is whether there is a solution for a memory-bound problem. To me it seems CO would have to change the whole ideology of how the game works.
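The per-cim memory-traffic worry above can be made concrete with a toy sketch. This is purely illustrative (the names `make_agents_aos`, `update_soa`, etc. are mine, not CS2's code): one layout stores each cim as a separate heap object, the other packs each field into a contiguous buffer, which is the layout that keeps memory reads sequential.

```python
from array import array

def make_agents_aos(n):
    # "Array of structures": each cim is a separate Python dict scattered on the heap.
    return [{"x": float(i), "y": 0.0, "speed": 1.0} for i in range(n)]

def update_aos(agents, dt):
    for a in agents:
        a["x"] += a["speed"] * dt  # every field access chases a pointer: poor locality

def make_agents_soa(n):
    # "Structure of arrays": one contiguous buffer per field, the cache-friendly layout.
    return {
        "x": array("d", (float(i) for i in range(n))),
        "y": array("d", [0.0] * n),
        "speed": array("d", [1.0] * n),
    }

def update_soa(agents, dt):
    xs, speeds = agents["x"], agents["speed"]
    for i in range(len(xs)):
        xs[i] += speeds[i] * dt  # sequential walk over contiguous memory
```

Both versions compute the same result; the point is only how the data sits in memory, and that the cost of either scheme grows linearly with the number of cims.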
 
OK, so at this moment I understand you a little bit more, but I'm also losing you. Memory-bound issues are above my pay grade. Do you have a theory or an example of what is happening in the game? (full post quoted above)

This is going to be long, but remember you asked for it...

An archetypical (single-cycle) CPU runs one instruction per clock cycle (CPI = 1). Within the CPU, an instruction is actually executed through multiple steps on different parts of the clock signal (2-4 phases). A more reliable way of doing this is separating an instruction cycle into stages and going through a single stage at every clock cycle (multi-cycle), which opens up more architectural options, like an arbitrary number of stages. So an n-stage processor completes a typical instruction in n cycles (CPI = n; but you can also clock it around n times faster, so it comes down to the same speed as a single-cycle design).

In multi-cycle processors, like single-cycle ones, most of the CPU sits idle because only one part (or stage) of the processor is active at a time. To exploit this, we have pipelined processors. A pipelined processor is basically a multi-stage processor where, under optimal conditions, each stage of the CPU works on a different instruction at every clock cycle. So an n-stage pipelined processor can work on n instructions at the same time, each at a different stage, meaning it completes n/n = 1 instruction per clock cycle (CPI = 1). Some modern CPUs are pipelined with a high number (20+) of stages.

Finally, we have modern multi-core CPUs. They basically multiply the amount of work that can be done by the number of cores, so this one is simple. A pipelined CPU with n cores, again under optimal conditions, can carry out n instructions at every clock cycle (CPI = 1/n). Note that these optimizations don't always work: some instructions are atomic and reserve the entire pipeline while they are being processed, or the software may not be designed for parallelism. Note also that not all instructions run in a single instruction cycle; floating-point operations, for example, often take multiple passes through the processor to complete.
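The CPI arithmetic above can be sketched with the classic performance equation (time = instruction count × CPI / clock rate). The function name and the sample figures are mine, just to illustrate the point that single-cycle, multi-cycle and pipelined designs converge to similar throughput under ideal conditions:

```python
def exec_time_seconds(instructions, cpi, clock_hz):
    # Classic performance equation: time = instruction count x CPI / clock rate.
    return instructions * cpi / clock_hz

# One billion instructions on a 1 GHz single-cycle CPU (CPI = 1): 1.0 s.
# The same work on a 5-stage multi-cycle CPU (CPI = 5) clocked 5x faster: also 1.0 s.
# An 8-core pipelined CPU at 1 GHz with perfect parallelism (CPI = 1/8): 0.125 s.
```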

Now let's talk about the elephant in the room...

By far the worst thing that can happen to a CPU instruction-wise is a memory operation, which can take hundreds of clock cycles to complete. I/O can also be costly, but that is mostly managed asynchronously (through interrupts), so the CPU is not held while an I/O operation is carried out. That is not the case with memory operations, however, as there will be further instructions that depend on their results. Therefore waiting for a memory operation puts the CPU in a stall (not idle), where it cannot do anything but wait. This is the reason why CPUs have various levels of caches: they keep copies of data close to the processor, where they can be accessed faster than main memory (though still rather slowly).

There are some ways to mitigate CPU stalls; for instance, well-optimized software schedules instructions in such a way that memory operations begin long before their results will be needed (prefetching). Thankfully, modern compilers help developers in that regard. Modern CPUs also have their own prefetching mechanisms, and they might also do some housekeeping while waiting. But despite all these measures, on average a lot of CPU time is spent in stall. Anyone who wants to create high-performance software has to make sure proper optimization is done here: reducing memory operations, prefetching, utilizing the cache with reference locality, making use of arrays where applicable, encoding data into quad-word chunks, etc.
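The "reference locality" point can be sketched with the textbook example of traversal order. Both functions below (names are mine, purely illustrative) compute the same sum; in C/C++ the row-major walk visits memory sequentially and is cache-friendly, while the column-major walk strides across rows and touches a different cache line at each step. In Python the timing difference is muted, so treat this only as a picture of access order:

```python
def sum_row_major(matrix):
    # Visits elements in the order rows are laid out: good reference locality in C.
    total = 0.0
    for row in matrix:
        for value in row:
            total += value
    return total

def sum_col_major(matrix):
    # Strides across rows, touching a different row (a different cache line,
    # in a C-style contiguous layout) at every step: poor reference locality.
    total = 0.0
    for col in range(len(matrix[0])):
        for row in matrix:
            total += row[col]
    return total
```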

Memory operations in poorly optimized software can greatly increase the CPI, as many clock cycles are wasted in stall just to complete one memory instruction. Normally, CPU-bound software runs with a CPI around 1 on a pipelined processor, less on a multi-core one. This number increases slightly if floating-point operations are used heavily. But if you see CPI >> 1, then you know those nasty memory operations are putting your CPU in stall. In my tests, C:S2 showed CPI > 20! This might be a Unity problem, but someone failed at something miserably. Ultimately, it means the game is heavily memory-bound.
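The back-of-the-envelope math behind that inference uses the standard stall-augmented CPI formula (effective CPI = base CPI + misses per instruction × miss penalty). The numbers below are illustrative assumptions, not measurements of CS2; they only show that a plausible miss rate and DRAM penalty land in the CPI > 20 ballpark reported above:

```python
def effective_cpi(base_cpi, miss_rate, miss_penalty_cycles):
    # Stall-augmented CPI: base CPI plus average memory-stall cycles per instruction.
    return base_cpi + miss_rate * miss_penalty_cycles

# Assumed figures: base CPI of 1, a 10% cache-miss rate per instruction,
# and a ~200-cycle penalty per miss already give an effective CPI of 21.
```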

Can it be fixed? Most likely not...

I don't know how Unity is implemented or how other Unity games perform, so the root cause might be there. The language they use (C#) is not ideal for optimizing for memory access, either; this kind of thing is best done in C. It ultimately comes down to poor software design due to not understanding the limitations of technologies being used. I don't think this game can be fixed, and certainly not by CO with their current structure.
 
If you are stating that someone is wrong, wouldn't it be fair to provide evidence for it?

The CPU is the obvious bottleneck as population increases, where IMHO the necessity to calculate routes/needs for every single cim is a pretty hefty task for the CPU. Maybe "agent-based simulation" is not the proper terminology or the main reason for hammering the CPU. You seem to know better; would you mind sharing your knowledge?
EVERY game which has entities that react to things in the game world is an "agent-based simulation".
Even the cats and dogs in RDR2 are agents. The ghosts in Pac-Man are agents. Those games don't suffer from a so-called "agent-based simulation".

Your conclusion ("Agents = CPU issues") is totally wrong

So, there... "agents" (you probably thought of "Agent Smith" from The Matrix when you first heard this new term...).
 
I would be embarrassed to release this as a DLC.

What happened to the Colossal Order that we all loved??
We have a saying in German that sums it up pretty well.
Translated into English it would be something like "Once your reputation is tarnished, you can live without restraint."
(Ist der Ruf erst ruiniert, lebt es sich ganz ungeniert)

Or as they would say it here in Filipino/Tagalog: "Ang taong walang kahihiyan, walang pakialam sa kapwa." (roughly: "A shameless person has no regard for others.")

However you want to say it, it's true. Once your reputation has gone down the river, you no longer need to feel ashamed.
 
We have a saying in German that sums it up pretty well... (full post quoted above)
Someone's 'burned their bridges?'
 
So, there... "agents" (you probably thought of "Agent Smith" from The Matrix when you first heard this new term...).
This smirky comment was unnecessary. If you are trying to have a civilized discussion, next time you should do better.
Your conclusion ("Agents = CPU issues") is totally wrong
This is an incorrect assumption. My conclusion is that there is a direct correlation between the number of agents and system requirements. In other words, the more agents you have, the beefier the hardware you need. Or you could say I believe that the huge number of agents, and how they are set up in CS2, creates a CPU- or I/O-bound problem.
EVERY game which has entities that react to things in the game world is an "agent-based simulation".
Even the cats and dogs in RDR2 are agents. The ghosts in Pac-Man are agents. Those games don't suffer from a so-called "agent-based simulation".
Here I partially agree with you: every game has "agents". I like to use the term "moving parts" instead of "entities" or "agents". I do not use the term "moving parts" with others, as it is not precise; for example, even trees or dead bodies could be included in it. I started using the term "agents" as I believed that is the term this community understands. As I can see that it triggers some people, I will use the term "entities" in this post.

RDR2 is a great example, and I will use it to explain the difference between what I would call "entity simulation" and "entity approximation". Not the best terminology either, but stick with me, I will try to explain.

An approximated entity is easier on computational resources: once it is out of the player's focus, it stops existing. Examples are all the animals you hunt in RDR2, or the members of the O'Driscoll Boys. Once you kill them and move out of the defined range, they despawn. Plus, there is almost no interaction with the environment in the long run. There are multiple ways to approximate an entity, but let's not go that deep.

Simulated entities are in general heavier on computational resources. They normally do not despawn and are unique. Good examples would be Arthur's horses, the Van der Linde gang members, shopkeepers... let's say certain NPCs. Probably a better example would be your companion in the missions where you have one. This NPC must be simulated; in other words, at every single moment you/the system must know where they are, what they are doing and how their actions affect other entities.

However, there are several differences between CS2 and RDR2. The first I would call "environmental impact/reach". In RDR2 the impact of entities is very localized; you could say that the majority of entities, once they are out of the player's reach, either hibernate or despawn. This is something you cannot do in CS2 because of how the simulation model is built: the majority of entities (e.g. people, trucks, businesses, dogs) across the whole map are in your focus, i.e. the system must constantly work on them.

The second difference between RDR2 and CS2 is the number of entities in focus. CS2's programmed population limit is 2 million. RDR2 has a fraction of that, and that is why I believe the number of entities in RDR2-like games does not create a hardware problem.

Why do you think the majority of "city builders" are set in the Middle Ages or other "harsh" environments where the population is very limited? For example, Farthest Frontier has a population cap of 1,000, and the game can choke your PC long before you reach that limit.

In summary, I am trying to say that the only way CS2 and CO can significantly decrease system requirements is if they stop simulating people and start approximating them. I could be wrong on this one, but I believe Workers & Resources does something like that, where you care whether you have enough residences within a defined distance of your industry, and that's it. Unfortunately, with approximated people we would lose the option to follow individual cims, and that is something CO wants; from what I have heard from some members of the community, even players want it as well.
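The simulation-vs-approximation split described above can be sketched in a few lines. This is a toy model of the RDR2-style scheme (the function name and fields are mine, not any game's actual code): entities within the player's active radius get a full per-entity update, everything else despawns and costs nothing on the next tick:

```python
def update_entities(entities, player_x, active_radius, dt):
    # Toy "entity approximation": fully simulate entities near the player,
    # despawn everything out of range so it costs nothing on the next tick.
    kept = []
    for e in entities:
        if abs(e["x"] - player_x) <= active_radius:
            e["x"] += e["speed"] * dt  # full per-entity simulation
            kept.append(e)
    return kept
```

The CS2-style scheme, as described above, cannot drop the distance check's "else" branch onto the floor: every cim stays in the kept list forever, so the per-tick cost grows with total population rather than with what is near the camera.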

I hope now we understand each other a little bit better.
 
Yeah, so this DLC and the whole game are a mess, still a mess after 6 months.
Besides that, I'm very disappointed in how things have gone; I have lost faith in Paradox as a company.

I have every Paradox game, including most of the DLCs. Normally, I would have instant-bought Millennia. And I wouldn't care how the base game looks, because I'd know the game will be good in the future. I have always defended the DLC policy: "Give them time, this is just the base game. Look where Stellaris, CK2 and EU4 started and what great games they are now. Paradox games are like wine, you know? They get better!"
But the wine has gone sour.

Things went south with Imperator. Victoria 3 struggled a lot; CK3 is overall great, but lacks game-changing DLCs compared to CK2. You have to be careful with Paradox games now. You can't say anymore "This is a solid base, I can't wait to see what the future brings".

And this is sad. I'm cautious now: no pre-orders, no defending the DLC policy, no recommendations to friends anymore. I skipped the last CK3 asset pack and the last EU4 DLC. Normally I would have bought all of these, regardless of whether I play them or not, just to support Paradox. But not anymore.
 
This is going to be long, but remember you asked for it...
Thank you very much for your time and the effort you put into this. I enjoy learning new stuff, and now I understand the problem a little bit better.
Therefore waiting for a memory operation puts the CPU in a stall (not idle), where it cannot do anything but wait.
Now I know where I made a mistake in my thought process. I believed that in situations where the CPU is waiting for data, it is "idling".
Can it be fixed? Most likely not...
I can only code smaller projects; games are above my pay grade. Based on my knowledge and experience, your explanation, and other things I am reading, I have come to the same conclusion as you did.

Maybe CO will be able to optimize the code so that we can have populations around 250k, so the game is perhaps not completely lost, at least on a smaller scale. Who knows, but my hopes are very low at the moment.
 
EVERY game which has entities that react to things in the game world is an "agent-based simulation"... (full post quoted above)
There's a pretty huge difference between a handful of agents that need simulating until they're offscreen or out of a zone and can then be shut down, and hundreds of thousands of agents that need to be simulated all the time.

That's why older city simulators were either not agent-based at all and faked it with algorithms, or limited the number of agents and faked it beyond that. Fundamentally, trying to simulate every single person doesn't scale. There's always a point at which the hardware can't handle it anymore, and if you actually try to build a realistically big city in CS2, you are hitting it.
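The "limit the agents, fake the rest" scheme mentioned above can be sketched in miniature. This is a hypothetical illustration (the function name and the cap value are mine, not taken from any particular game): up to a fixed cap the citizens are real simulated agents, and everyone beyond the cap is tracked only as an aggregate count from which statistical effects (traffic volume, demand) can be derived:

```python
def split_population(population, agent_cap):
    # Toy "cap the agents, fake the rest" scheme: at most agent_cap citizens
    # are real simulated agents; the remainder exists only as an aggregate
    # number, so per-tick simulation cost stays bounded by the cap.
    real_agents = min(population, agent_cap)
    approximated = population - real_agents
    return real_agents, approximated
```

The point of the cap is exactly the scaling argument in the post: per-tick cost is bounded by `agent_cap` no matter how large the city grows, at the price of the excess population being a statistic rather than followable individuals.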
 
This smirky comment was unnecessary. If you are trying to have a civilized discussion, next time you should do better... (full post quoted above)

FWIW, my first comment about agents was aimed at someone else in this thread.
Blanket statements like "It's agent-based, which hogs the CPU" are similar to "Drink too much water and you will die, therefore water is poison."
That is called framing.
There's a pretty huge difference between a handful of agents that need simulating until they're offscreen or out of a zone and can then be shut down, and hundreds of thousands of agents that need to be simulated all the time.

That's why older city simulators were either not agent-based at all and faked it with algorithms, or limited the number of agents and faked it beyond that. Fundamentally, trying to simulate every single person doesn't scale. There's always a point at which the hardware can't handle it anymore, and if you actually try to build a realistically big city in CS2, you are hitting it.
See above...
 
This is going to be long, but remember you asked for it... (full post quoted above)
This information is indeed correct. However, it's too early to talk about an I/O bottleneck and draw conclusions about the final performance this game can reach. There are clearly tons of bugs in the simulation. For example, when I run a 50k-population save at the 8x speed-up setting, the actual speed-up is 5x. But if I move the camera to the top of the sky, or very close to the ground, it increases to 7x+. It looks like the rendering pipeline consumes too much CPU time, and the scheduling between rendering and simulation also has problems. Someone also reported that a large road network with 0 population and 0 outside connections (which means there is no pathfinding) seriously slows down the simulation.
By the way, I'm using a high-end PC with a 7950X3D, 64 GB of DDR5 and an RTX 4070 Ti at 2K resolution and the highest graphics settings.
 
This is going to be long, but remember you asked for it... (full post quoted above)
Personally, I was expecting CO to use UE5 to implement this game, since it could benefit from Nanite and the performance of C++ programming. But it seems hard to make script mods for UE5.
 
I was one of the people who defended the game at launch. I also enjoyed playing it for some time, and the "unsupported" mods that came out were great. But reading WotW after WotW, my impression changed, my feelings changed... how they do communication... and then how they fail to improve on important aspects. And now this "DLC", which is an absolute disaster and basically a punch in the face for any player of CS1 and fan of theirs, has made me completely quit CS2 for now. It also changed my perspective.

I really do not understand their decision-making process, or how they handle their marketing and communications as well as their development. It is truly horrifying to have to watch things unfold the way they do for a game I absolutely want to succeed and grow, because I can see the potential in it. They had the perfect base and all the information needed to produce an amazing successor to CS1, but they just were not able to do so. They did not include the most basic and coolest features in CS2, and I am afraid they may be relying way too much on modders... to finish their job.

It is also alarming to see what is going on in parts of the gaming industry and at some development studios, and how publishers handle situations and game releases nowadays... there are only a few minor success stories these days of successful launches and great game support. Most games seem to just crash when released, disasters unfolding... and gamers still aren't learning anything, continuing to buy unfinished products.

What the hell is going on??
 