
FireFlowerist

Sergeant
Mar 30, 2015
Jimmi, please note that CPUs are not inherently designed to be operated continuously at 100%. That is a limit at which they are restricted, and it should only be touched for a limited amount of time. Practically speaking, you can overclock as well, but you know what happens when you keep running a CPU at very high frequencies. No matter how good your cooling is, continuous processing at 100% is not recommended.

Technically true, but no timeline is included. For example: Intel's 3-year warranty covers 100% use, 24/7, without any shutdowns, with all internal modules being used. What does this mean? Well, if you consider the normal atomic-level wear of a processor, these processors will last 30 years in home use. Even if you overclock to, let's say, 10% higher clock rates, your actual wear rate increases by the 10% clock-rate increase plus the overclock/stock vcore % difference, with the vdroop % also having some effect. If you then continue your normal use with the overclocked CPU, it will still last another 10-20 years depending on operating temperature (cooling). Now, if we put this overclocked CPU under 100% load, 24/7, for 3 years, it might make it or it might not. It will probably last at least 2 years, but it might last even longer depending on silicon quality.

So, in conclusion: yes, CPUs are designed to be operated at 100% load.

More information from this research: http://www.tut.fi/en/about-tut/depa...Z2UX5BUX5xanFxflFmnhIAfLwdzA==&cat=diplomityö

(Obtainable through universities' e-libraries or orderable from TUT)
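The back-of-the-envelope wear estimate above can be sketched as a quick calculation. To be clear, this follows the thread's rule of thumb, not a validated reliability model; the function name and the input numbers are made up for illustration:

```python
# Back-of-the-envelope lifetime estimate following the rule of thumb above:
# the wear rate scales with the clock-rate increase plus the vcore increase
# relative to stock. (This is the thread's folklore, not a real model.)

def estimated_lifetime_years(base_years: float,
                             clock_increase_pct: float,
                             vcore_increase_pct: float) -> float:
    """Scale a baseline lifetime down by the extra wear from overclocking."""
    extra_wear = (clock_increase_pct + vcore_increase_pct) / 100.0
    return base_years / (1.0 + extra_wear)

# 30-year stock baseline, 10% overclock with a 5% vcore bump (made-up numbers):
print(round(estimated_lifetime_years(30.0, 10.0, 5.0), 1))  # 26.1
```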
 

MarkJohnson

Field Marshal
Feb 19, 2015
I read around here that in Cities: Skylines, with larger cities, the CPU is at 100%, which is logical, since simulating all the Cims takes effort.

My question is: does having the CPU at 100% for a few hours shorten the lifespan of my PC? I have good cooling.

No, it won't affect it at all. They are designed to run 24/7 for years; running them at 100% won't hurt them. Although, do watch those temps: heat can prematurely end their life. Just dust it out every year or so, or maybe every few months if you have your PC on the floor or in a high-dust area.

And then another question: does having the CPU at 100% mean that the draw on the power supply is at 100%? I have an Intel Core i7 3770, Nvidia GTX 650 Ti, Windows 7 Home Premium 64-bit, and 8GB RAM, and the power supply is a Nox NX 620W. Do I have enough power even while the CPU is at 100%?

No, running your CPU at 100% does not mean your power supply runs at 100%. Typically it means your PSU is running at about 50% load or less on a well-designed system.

Your whole system listed doesn't even pull 200 watts under full load at stock speed settings.

Your 620 watt power supply will handle your system just fine. No need to upgrade.

Typically you want a power supply rated at double your full system's power requirement or more. Power supplies are usually at their most efficient around 50% load.
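The sizing rule above can be sketched as a quick check. The wattage figures below are just the estimates from this thread (a sub-200 W system on a 620 W PSU), and the helper function is made up for illustration:

```python
# Rough PSU sizing check based on the rule of thumb above: aim for a PSU
# rated at roughly double the system's full-load draw, so the PSU sits
# near its ~50% efficiency sweet spot.

def psu_check(system_load_w: float, psu_rating_w: float) -> dict:
    """Return the PSU load percentage and whether the 2x headroom rule holds."""
    load_fraction = system_load_w / psu_rating_w
    return {
        "load_percent": round(load_fraction * 100, 1),
        "meets_2x_rule": psu_rating_w >= 2 * system_load_w,
    }

# Estimated full-load draw from the thread: i7 3770 + GTX 650 Ti pulling
# under ~200 W at stock settings, on a 620 W PSU.
result = psu_check(system_load_w=200, psu_rating_w=620)
print(result)  # {'load_percent': 32.3, 'meets_2x_rule': True}
```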
 

JimmiG

Captain
Dec 19, 2014
Technically true, but no timeline is included. For example: Intel's 3-year warranty covers 100% use, 24/7, without any shutdowns, with all internal modules being used. What does this mean? Well, if you consider the normal atomic-level wear of a processor, these processors will last 30 years in home use. Even if you overclock to, let's say, 10% higher clock rates, your actual wear rate increases by the 10% clock-rate increase plus the overclock/stock vcore % difference, with the vdroop % also having some effect. If you then continue your normal use with the overclocked CPU, it will still last another 10-20 years depending on operating temperature (cooling). Now, if we put this overclocked CPU under 100% load, 24/7, for 3 years, it might make it or it might not. It will probably last at least 2 years, but it might last even longer depending on silicon quality.

So, in conclusion: yes, CPUs are designed to be operated at 100% load.

More information from this research: http://www.tut.fi/en/about-tut/departments/pori/research/publications/publication/index.htm?id=eJxTKyxNLaq0TUnWS0ksSVU1TlEyMjA0UNJ29HPRVkrJLMjJz80sqVRNNlZNMoOKepcWFWWqGjlDucGJuQXFiVCOb2Z2UX5BUX5xanFxflFmnhIAfLwdzA==&cat=diplomityö

(Obtainable through universities' e-libraries or orderable from TUT)

True. Also, even though Windows might show 100% CPU usage, it doesn't mean every single transistor in the CPU is being activated at the same time. There are "torture" programs written specifically to stress-test the CPU, which cause significantly higher temperatures and power usage compared with normal applications and games.
 

V10lator

Second Lieutenant
Mar 13, 2014
No, running your CPU at 100% does not mean your power supply runs at 100%. Typically it means your PSU is running at about 50% load or less on a well-designed system.
Don't forget the GPU: The i7 3770 has a TDP of 77 Watt. Now TDP is a marketing term, the real wattage is (lazy calculation) x1.5, so around 115 W. Now the GPU eats 110 W TDP, so around 165 W which is more than two times what the CPU needs.
Anyway I'm pretty sure his GPU won't hit full load as his CPU already is maxed out, most likely creating a bottleneck in the graphics pipeline. Not to mention V-Sync.
 

MarkJohnson

Field Marshal
Feb 19, 2015
Don't forget the GPU:
I didn't forget the GPU, I think you're adding in the iGPU?

The i7 3770 has a TDP of 77 Watt. Now TDP is a marketing term

These are not marketing terms. This is called "Thermal Design Point". The figure is meant for heatsinks: you need a heatsink rated for 77 watts TDP or higher to keep the chip properly cooled. TDP has nothing to do with CPU power. You should notice that all same-generation CPUs share the same TDP even though some are faster than others.

the real wattage is (lazy calculation) x1.5, so around 115 W.

Guesstimates won't work. Each CPU model will have varying wattage. In this instance the CPU will draw approximately 140-150 watts under a stress load, without the GPU.

Now the GPU eats 110 W TDP, so around 165 W which is more than two times what the CPU needs.

The GPU uses 110 watts under stress load. (I misquoted earlier; I was thinking of the GTX 650's power draw (60 watts) by mistake, not the GTX 650 Ti's (110 watts).)

So it should be about 250 watts, which is well below the 385 watts the PSU provides on its 12 V rails.

Anyway I'm pretty sure his GPU won't hit full load as his CPU already is maxed out, most likely creating a bottleneck in the graphics pipeline. Not to mention V-Sync.

His CPU should handle the light load of the GPU with ease. Maybe very late game, where the CPU starts to lag out, it may have issues, but this game doesn't put that big a stress on the CPU.

My game has been maxed at 100% for a while now, but I still get the same frame rates (4790k @ 4.4GHz and GTX 780 @ 4k res.), but my i3-4340 @ 3.6GHz and GTX 650 Ti @ 1080p lags badly (359k pop for both). So badly that I can't speed up the game; I'm stuck at x1 speed from the high CPU load, and it's been maxed since 125k pop. I mean, I can change to x2 and x3 speed, but the game still runs at x1 speed.

This is a good indicator for this type of game: if you switch speeds and don't notice any actual change, you are bottlenecked. Otherwise, if you can change speeds at 100% CPU usage and still see a speed increase, you aren't fully bottlenecked.
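The speed-switch test above can be sketched as a quick calculation. The tick rates here are hypothetical example numbers, not measurements from the game, and the function is made up for illustration:

```python
# Sketch of the speed-switch bottleneck test: compare the measured
# simulation rate at each speed setting against the rate that setting
# requests. If raising the setting doesn't raise the real rate, the
# simulation is CPU-bound. (Tick rates below are hypothetical.)

def is_bottlenecked(ticks_per_sec_by_speed: dict, tolerance: float = 0.1) -> bool:
    """True if a higher speed setting fails to scale the simulation rate."""
    base = ticks_per_sec_by_speed[1]
    for speed, rate in ticks_per_sec_by_speed.items():
        expected = base * speed
        if rate < expected * (1 - tolerance):
            return True  # the sim can't keep up with the requested speed
    return False

# A healthy system roughly doubles/triples its tick rate:
print(is_bottlenecked({1: 60.0, 2: 118.0, 3: 175.0}))  # False
# A maxed-out CPU stays near x1 speed no matter the setting:
print(is_bottlenecked({1: 60.0, 2: 62.0, 3: 61.0}))    # True
```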
 

V10lator

Second Lieutenant
Mar 13, 2014
I didn't forget the GPU, I think you're adding in the iGPU?
Not the iGPU, no; his dedicated GPU (the Nvidia one).
These are not marketing terms. This is called "Thermal Design Point"
I know what it means and what it's for; nonetheless, printing the TDP but not the max wattage is a marketing strategy, especially when the same TDP number is used as a reference on motherboards to tell whether your CPU is compatible (power-wise, not thermal-wise, as cooling is the job of the cooler, not the MB). Don't you think? ;)
TDP has nothing to do with CPU power.
Of course it has. At full load it needs at least the TDP (of course it needs more, or it would produce only heat, but you get what I'm driving at).
Guesstimates won't work. Each CPU model will have varying wattage. In this instance the CPU will draw approximately 140-150 watts under a stress load, without the GPU.
Writing down some numbers also won't work. Source?
The GPU uses 110 watts under stress load.
So 110 Watt max - 110 W TDP = 0 Watt. Great GPU that doesn't use any power except for heat. Again: Source?
So it should be about 250 watts, which is well below the 385 watts the PSU provides on its 12 V rails.
We agree that his PSU is good enough. :) I just wanted to point out that the GPU will most likely use more power than the CPU.
His CPU should handle the light load of the GPU with ease.
It's not that the CPU has to handle the load of the GPU but the other way around: the GPU needs data to compute anything, and that data gets fed in by the CPU (after all, that's what DirectX / OpenGL is for: feeding the GPU without speaking its assembly language). Now, if the CPU is already busy doing other stuff, it can't feed the GPU fast enough, and hence it is the bottleneck. This is a problem with DirectX as well as OpenGL. Maybe Vulkan and/or DirectX 12 will fix it.
My game has been maxed at 100% for a while now, but I still get the same frame rates (4790k @ 4.4GHz and GTX 780 @ 4k res.)
You say you get smooth 4790000 FPS? Not even small lag spikes with heavy CPU load? Sorry but I can't believe that.
but my i3-4340 @ 3.6Ghz and GTX 650 Ti @ 1080p laggs badly (359k pop for both).
And now I call you a liar. Everything above 60 FPS (in fact 25, but let's keep 60 for the sake of it) isn't noticeable. 359,000 FPS should be butter smooth. Or, oh wait, you're not even talking about FPS but about the population of your city? Is this how you benchmark CPU and GPU use? ;)
This is good indicator for this type of game. If you can't switch speeds without noticing the change. you are bottlenecked. Otherwise, if you can change speeds at 100% CPU usage and still see a speed increase, you aren't fully bottlenecked.
That's indeed a good indicator, if it's true. Anyway, for good data we would need to know what exactly stops the game from speeding up: the simulation, the graphics pipeline, the audio pipeline (unlikely), ...? As we don't have that info, we can't really go anywhere from here.
 

MarkJohnson

Field Marshal
Feb 19, 2015
I know what it means and what it's for; nonetheless, printing the TDP but not the max wattage is a marketing strategy, especially when the same TDP number is used as a reference on motherboards to tell whether your CPU is compatible (power-wise, not thermal-wise, as cooling is the job of the cooler, not the MB). Don't you think? ;)

I guess to a novice it may seem confusing. But if you see a term like "watts TDP", one would think it is not normal watts and would inquire what it means. We should all be wise to the marketing misleads in today's society; it's not like everyone is trying to put one over on you with every single product being sold.

Of course it has. At full load it needs at least the TDP (of course it needs more, or it would produce only heat, but you get what I'm driving at).

No, it doesn't. It means the heatsink will dissipate 75 watts of energy at room temperature. It has nothing to do with CPU power consumption. There is a slight correlation, in that a higher TDP will mean higher CPU power consumption.

Since you like links but don't post any, here's a quick one I found by Linus, as he always has good stuff.


Writing down some numbers also won't work. Source?

My numbers mostly come from my own experience. I have multimeters to do my own measurements. Not 100% accurate, but within a few percent.

So 110 Watt max - 110 W TDP = 0 Watt. Great GPU that doesn't use any power except for heat. Again: Source?

Again, I test my own systems. I actually have a GTX 650 Ti and it reports approx. 110 watts. This is with FurMark stress-testing the GPU, so in games it will use even less power.

But I'd like to see your 110-watt TDP for this card. It definitely isn't a 110-watt TDP. Not even close. You need to question your sources of information.

http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-650ti/specifications

We agree that his PSU is good enough. :) I just wanted to point out that the GPU will most likely use more power than the CPU.

Actually, the CPU uses way more power than the GTX 650 Ti. This GPU is very weak. You need to get to the x60 series or higher; those have high memory bandwidth that uses a lot more power.

It's not that the CPU has to handle the load of the GPU but the other way around: the GPU needs data to compute anything, and that data gets fed in by the CPU (after all, that's what DirectX / OpenGL is for: feeding the GPU without speaking its assembly language). Now, if the CPU is already busy doing other stuff, it can't feed the GPU fast enough, and hence it is the bottleneck. This is a problem with DirectX as well as OpenGL. Maybe Vulkan and/or DirectX 12 will fix it.

I mean the CPU is so busy that the GPU has to wait. It has far less to do with DX or OpenGL; those make things much faster. It has to do with the type of app. This game is CPU-intensive, so the CPU is busy all of the time while the video side should have idle time. But I believe this game takes advantage of DX11 and GPU multithreading to do CPU-type tasks, since the new GPUs are general-purpose GPUs (GPGPU) and can now do tasks other than video.

DX12 should eliminate sli/crossfire completely. It will use all video cards simultaneously now. Can't find the link on a quick search, but one thing it does is decouple the CPU threads from the GPU threads to stop video lag.

And now I call you a liar. Everything above 60 FPS (in fact 25, but let's keep 60 for the sake of it) isn't noticeable. Is this how you benchmark CPU and GPU use? ;)

Of course this is how I benchmark, since this is the Cities: Skylines forums. What game should I be using to benchmark for people who want advice on running this game? :shakeshead:

And yes, above 60 FPS makes a huge difference. You obviously haven't ever had a good monitor at 60 Hz or higher.
Even though your eyes can perceive 32 fps as smooth motion, monitors have huge amounts of lag and need very high FPS for fluid motion: mostly no blur, no tearing, no ghosting.

Check out blurbusters for more info on your monitor and how crappy it really is. lol

http://www.blurbusters.com/

That's indeed a good indicator if it's true. Anyway for good data we would need to know what exactly stops the game from speeding up; the simulation, the graphics pipeline, the audio pipeline (unlikely), ... ? As we don't have that info we can't really go anywhere from here.

Assuming no major bugs, the graphics requirements are so low that if you downgrade to the old DX9 mode it looks exactly the same (very light GPU load). All of the load is from agents (heavy CPU load), with a little more on the GPU when zooming in close and needing to draw all of the 3D buildings. Lastly, poor coding in certain areas, probable game-engine design flaws, and other inefficiencies need clearing up.
 

V10lator

Second Lieutenant
Mar 13, 2014
I guess to a novice it may seem confusing. But if you see a term like "watts TDP", one would think it is not normal watts and would inquire what it means. We should all be wise to the marketing misleads in today's society; it's not like everyone is trying to put one over on you with every single product being sold.
We should and we do, but most people don't. That's why I like to call it a marketing term, and indirectly you just confirmed that. :)
No, it doesn't. It means the heatsink will dissipate 75 watts of energy at room temperature. It has nothing to do with CPU power consumption. There is a slight correlation, in that a higher TDP will mean higher CPU power consumption.
And where do the 75 watts of energy come from? So yes, there's a direct correlation between TDP and power consumption (the more heat, the more power), but of course the real wattage used will be higher.
Since you like links but don't post any, here's a quick one I found by Linus, as he always has good stuff.
Thanks, will have a look at that soon (right now he's talking too fast for my tired brain). :)
Anyway, who is that guy? It's not "the one" Linus (Torvalds).
My numbers mostly come from my own experience. I have multimeters to do my own measurements. Not 100% accurate, but within a few percent.
Damn, I was hoping for a specification with it written in the smallest font size possible... ;)
But I'd like to see your 110-watt TDP for this card. It definitely isn't a 110-watt TDP. Not even close. You need to question your sources of information.
As I was lazy, my source was Wikipedia: http://en.wikipedia.org/wiki/GeForce_600_series#GeForce_600_.286xx.29_series - Seems like nobody reading/writing Wikipedia realized that TDP and max watts are different terms...
I mean the CPU is so busy that the GPU has to wait. It has far less to do with DX or OpenGL; those make things much faster.
Uhm, no...
https://www.opengl.org/archives/resources/faq/technical/performance.htm
http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/4 - See how DX12 (and Mantle, the, well, let's call it the parent of Vulkan) speed things up dramatically? It also shows that DirectX 12 is a completely different beast from 11: "The solution to that problem is to eliminate the abstraction and let the developers do it themselves". Please note that this article doesn't show the full picture, but you get the idea.
The last one is lazy Wikipedia again (tbh, all of this right now is lazy googling - tired brain): http://en.wikipedia.org/wiki/Vulkan_(API) - "offers higher performance and lower CPU overhead" - "offers lower overhead, more direct control over the GPU, and lower CPU usage", to quote a bit.

Of course you're also right: a badly written app will have higher CPU overhead no matter what graphics API is being used, but that wasn't my point.
DX12 should eliminate sli/crossfire completely. It will use all video cards simultaneously now.
Right, that's also an advantage of DX12, but that's more because SLI/CrossFire was a hackish solution to begin with. Just throwing the data over the PCI-E bus instead of using some proprietary BS is what they should have done from day 0 (and it's what the FOSS Linux drivers are doing right now).
Of course this is how I benchmark, since this is the Cities: Skylines forums. What game should I be using to benchmark for people who want advice on running this game? :shakeshead:
You shouldn't use another game, but you should look at FPS and CPU usage instead of population (maybe saving a game and re-loading it before each "benchmark run" would be the best bet).
And yes, above 60 FPS makes a huge difference. You obviously haven't ever had a good monitor at 60 Hz or higher.
Even though your eyes can perceive 32 fps as smooth motion, monitors have huge amounts of lag and need very high FPS for fluid motion: mostly no blur, no tearing, no ghosting.
No idea about blur, but tearing is a buffering problem of the GPU; you can't just eliminate it by raising frequencies. I never had any ghosting on my PC. Till now it was nice talking with you, but this:
Check out blurbusters for more info on your monitor and how crappy it really is. lol
sounds a bit harsh. Will have a look at your link later anyway.
Assuming no major bugs, the graphics requirements are so low that if you downgrade to the old DX9 mode it looks exactly the same (very light GPU load). All of the load is from agents (heavy CPU load), with a little more on the GPU when zooming in close and needing to draw all of the 3D buildings. Lastly, poor coding in certain areas, probable game-engine design flaws, and other inefficiencies need clearing up.
So why does it make a difference when using the FOSS drivers or catalyst on Linux? AFAIK on windows OpenGL is the worst optimized path, so you might want to try DX vs OpenGL there to get a better image (-force-opengl IIRC).
 

MarkJohnson

Field Marshal
Feb 19, 2015
And where do the 75 watts of energy come from? So yes, there's a direct correlation between TDP and power consumption (the more heat, the more power), but of course the real wattage used will be higher.

What I mean is that you can't predict 100% max wattage from TDP wattage. There are many factors that determine max wattage. Plus, TDP isn't well defined. Like I already said, same-generation CPUs will have the same TDP, but each CPU will have a different max power rating.

Thanks, will have a look at that soon (right now he's talking too fast for my tired brain). :)
Anyway, who is that guy? It's not "the one" Linus (Torvalds).

No, he's just a reviewer who's been around for several years. I first found him doing reviews for NCIX. Now I think he's on his own.

As I was lazy, my source was Wikipedia: http://en.wikipedia.org/wiki/GeForce_600_series#GeForce_600_.286xx.29_series - Seems like nobody reading/writing Wikipedia realized that TDP and max watts are different terms...

Yes, wiki is good for general use, but you need to double-check its info, as it isn't regulated and lets almost anyone change the content, and I question the source of the writing, since you usually don't even know who that is.

Uhm, no...

No what? You started talking about DX12 here instead of responding, so I deleted it as irrelevant to the conversation.


Of course you're also right: a badly written app will have higher CPU overhead no matter what graphics API is being used, but that wasn't my point.

Well, it kind of has to be. We don't know where the issue lies, so it will without doubt be in these places as well, and usually to a great extent.

You shouldn't use another game, but you should look at FPS and CPU usage instead of population (maybe saving a game and re-loading it before each "benchmark run" would be the best bet).

Umm, I already do all of this. Thanks to the steam cloud! Finally a good use for online gaming. lol

You forgot to monitor GPU usage, as well as memory usage for both, and many other factors. I use GPU-Z for the GPU, and usually HWMonitor for the CPU, motherboard, and other stuff.

No idea about blur, but tearing is a buffering problem of the GPU; you can't just eliminate it by raising frequencies. I never had any ghosting on my PC. Till now it was nice talking with you, but this:

Tearing has many causes. Ghosting is an issue on all monitors. Like I said, monitor technology can't match your eyes' abilities. There is too much going on for monitors to redraw these screens so fast and have the previous image disappear completely; it will leave traces behind (i.e. ghosting).

sounds a bit harsh. Will have a look at your link later anyway.

Sorry, again I wasn't very clear. What I meant by "your monitor" was all of our monitors in general: none are without issues; the technology just isn't there yet, although it is very good now with variable refresh rates that can sync to the FPS rate. Tearing can be caused by a refresh rate out of sync with the FPS, i.e. 1 Hz = 1 FPS. So if you have 60 fps on a 60 Hz monitor, they are in sync. But if you have, say, 37 fps on 60 Hz, they get out of sync, and frames have to be skipped altogether to align them. If it were 30 fps on a 60 Hz monitor, you could redraw every frame twice and stay in sync.
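The refresh-sync arithmetic above can be sketched as a tiny check; the two helper functions are made up for illustration:

```python
# Sketch of the refresh-sync arithmetic above: on a fixed-rate monitor each
# frame must be shown for a whole number of refresh cycles, so pacing is
# only perfectly even when the refresh rate divides evenly by the frame rate.

def refreshes_per_frame(refresh_hz: int, fps: int) -> float:
    """How many refresh cycles each rendered frame needs to occupy."""
    return refresh_hz / fps

def is_evenly_paced(refresh_hz: int, fps: int) -> bool:
    """True when every frame can be shown for the same whole number of refreshes."""
    return refresh_hz % fps == 0

print(is_evenly_paced(60, 60))  # True  - each frame shown exactly once
print(is_evenly_paced(60, 30))  # True  - each frame shown exactly twice
print(is_evenly_paced(60, 37))  # False - 60/37 is ~1.62, so pacing judders
```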

So why does it make a difference when using the FOSS drivers or catalyst on Linux? AFAIK on windows OpenGL is the worst optimized path, so you might want to try DX vs OpenGL there to get a better image (-force-opengl IIRC).

The issue lies with DirectX being a proprietary API made by Microsoft; other OSes can't use it. So the others (MS included) got together and developed OpenGL. It isn't fully DX11-compatible and will almost always be significantly slower: MS has optimized DX11 throughout Windows specifically, while OpenGL is generic and not optimized for any particular OS.