Amazon has some GTX 1050 Tis on sale today in their Gold Box...
 
"Fullscreen" in the game settings is actualy windowed fullscreen. This is why its a PITA to get vsync to work. Windowed fullscreen acts as windowed, so you need to activate a global vsync and framelimiter , not a program specific profile.

I found that using Nvidia Inspector and setting the global profile's frame limit to 60 worked all the time, while forcing global vsync only worked in parts of the game.
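(Side note: a frame cap like that is conceptually just a sleep added to the render loop, so the GPU isn't asked for a new frame the instant the previous one finishes. The real limiter lives in the driver, obviously; this is only a rough Python sketch of the idea, with render_frame() as a made-up stand-in for the actual draw work.)

[CODE]
import time

TARGET_FPS = 60
FRAME_TIME = 1.0 / TARGET_FPS  # ~16.7 ms budget per frame

def render_frame():
    # Stand-in for the real draw call; in a game this is where the GPU work happens.
    pass

# Render 600 frames (~10 seconds) with the cap applied.
for _ in range(600):
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    # Sleep away whatever is left of the frame budget. Without this sleep the
    # loop spins flat out -- which is exactly how a menu ends up drawing at
    # 600+ fps and pegging the GPU at 100%.
    if elapsed < FRAME_TIME:
        time.sleep(FRAME_TIME - elapsed)
[/CODE]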
Thanks for the tip about using Inspector, but the game's fullscreen is a proper fullscreen, not windowed like you claim. Details for running in windowed fullscreen are found at https://forum.paradoxplaza.com/foru...ing-screen-resolutions-aspect-ratios.1092119/ .
 
The interactive scene between missions had no FPS limit; it would draw as hard as it could and soak up as many resources as it could. That *did* cause hardware damage, but limited liability makes it your problem, not Blizzard's.

I had 665 fps while idling in the main menu with vsync enabled, and after ~15-20 minutes of playing the first mission the capacitors/coils of my GPU started to whistle, which had never happened before while gaming.
The only thing that ever managed to make my capacitors/coils whistle a bit was a FurMark burn-in, when I stress-tested the system after assembly to check stability and temps and that everything was working properly.
 
Thanks for the tip about using Inspector, but the game's fullscreen is a proper fullscreen, not windowed like you claim. Details for running in windowed fullscreen are found at https://forum.paradoxplaza.com/foru...ing-screen-resolutions-aspect-ratios.1092119/ .

My mistake then; it just acted that way in some respects. Still, even based on my incorrect deduction, treating it as if it were borderless windowed still solved the issue of capping the framerate via global settings :)

I had 665 fps while idling in the main menu with vsync enabled, and after ~15-20 minutes of playing the first mission the capacitors/coils of my GPU started to whistle, which had never happened before while gaming.
The only thing that ever managed to make my capacitors/coils whistle a bit was a FurMark burn-in, when I stress-tested the system after assembly to check stability and temps and that everything was working properly.

That's coil whine; really high frame rates can cause some graphics cards to whine. It's not damaging though, so don't worry.
 
That's incorrect at face value. Modern processors typically have internal protection circuits to throttle/shut down in the event of hitting a pre-determined thermal limit. Software can't "damage" hardware in a regulated system. Now if you use software to bypass the safeties in hardware, then yes, you're right, it can. But that isn't possible with modern GPUs, modern drivers, and games.

No, they have a thermisiter that's built into the CPU itself. There is no internal protection circuit; the self-throttling is done on the BIOS/UEFI side, so it's a single point of failure and a single point of protection. You fail to understand the basics of CPU and GPU design, and you also fail to understand how narrow the margin is between a TDL failure and the thermister properly throttling back. This is calculated not in seconds but in milliseconds. The line is that fine. GPUs are a bit different, but it's handled by the onboard BIOS on the GPU; again, a single point of failure and protection. Just an FYI, modifying the "throttle" point is far easier on GPUs because you can dump the BIOS and then reflash them; it's a bit harder with UEFI, but not much.
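(For what it's worth, instead of arguing about where the throttling lives, you can just ask the card what it's doing: NVML, the same library nvidia-smi is built on, reports the current temperature and the active throttle reasons. A rough sketch using the pynvml Python bindings; it assumes an Nvidia card with pynvml installed, and the exact constant names can vary a bit between pynvml versions.)

[CODE]
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)  # bitmask

print(f"GPU temperature: {temp} C")
if reasons == pynvml.nvmlClocksThrottleReasonNone:
    print("No throttling reported.")
if reasons & pynvml.nvmlClocksThrottleReasonSwThermalSlowdown:
    print("Driver/firmware is thermally throttling the clocks.")
if reasons & pynvml.nvmlClocksThrottleReasonHwThermalSlowdown:
    print("Hardware thermal slowdown is active.")

pynvml.nvmlShutdown()
[/CODE]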

As for heavy usage, that's wear and tear, not "damage".
We're not talking about heavy usage. We're talking about the equivalent of a stress test over several hours. That's not wear and tear. Even stress tests for overclocking GPUs are usually measured in minutes, not hours, until you find that perfect sweet spot. You should know this.
 
[Mod Edit: Reference removed, Disrespect in quoted post] If you look back through my posts you'll see I complained about the forum move to Paradox, was POed at HBS about it, and also criticized the first mission in BattleTech.

As for my professional life: I've worked on computers since '91, and professionally in IT since '96. I've built hundreds of custom systems and maintained thousands of systems and dozens of servers and network devices over the years. But yeah, yeah, everyone on the Internet is an expert, right?

Just a question for the self-proclaimed experts:

Which GPU will have the longer lifespan?
The one I put through a FurMark burn-in 24/7, or the one belonging to a typical desktop user who maybe plays a few hours a day?
Of course the one being tested/benchmarked 24/7 will generally have a shorter lifespan. There's no argument there. But there are so many unknowns, such as the quality of the power coming from the power supply, the environment inside and outside the system, and whether or not those specific GPUs have a manufacturing defect. And you know what, that's all beside the point.

The point is that running full bore does not "damage" a GPU if it's in a system operating within reasonable environmental tolerances. It may shorten its life over time, but you'd have to run it at 100% load for extended periods before the difference might become apparent.

There is a reason big server farms keep tons of backup HDDs: their lifespan goes down significantly compared to normal desktop usage, and the things actually die pretty often if you put them on permanent heavy duty, or they get replaced after a certain amount of read/write cycles.
Most server farms run RAIDs for their storage, making use of the redundancy. Larger server farms may of course keep extra drives on hand to replace failed units on the fly within the RAIDs. And they're not all using "HDDs", as some have upgraded to solid-state storage these days. Still, the average MTBF on server storage drives (aka enterprise drives) is measured in years, so no, they don't "die pretty often". I work for an entity of a university, and all of our virtual servers are hosted within our specific college's servers, and they haven't had a single drive die in the last several years. Instead they've been adding drives to increase capacity.
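(To put "measured in years" into numbers: spec sheets usually quote MTBF, and the back-of-the-envelope conversion to how many drives you'd actually lose per year looks like this. The 1.2 million hour MTBF is just an illustrative enterprise-drive figure, not a claim about any particular model.)

[CODE]
import math

mtbf_hours = 1_200_000        # illustrative enterprise-drive MTBF from a spec sheet
hours_per_year = 24 * 365

# Assuming a roughly constant failure rate, the annualized failure rate is
# AFR = 1 - exp(-hours_per_year / MTBF).
afr = 1 - math.exp(-hours_per_year / mtbf_hours)
print(f"Annualized failure rate: {afr:.2%}")   # ~0.7% per year

# In a rack of 100 such drives that's less than one expected failure per year;
# drives only seem to "die pretty often" once the fleet is in the thousands.
[/CODE]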

The same way you can't expect to get a lot of mileage out of your car engine if you redline the thing all the time.
Analogies to push your forum warrior argument? Really? *sigh*

Back on to topic...
There is simply no reason a game that looks as dated as BT should needlessly put on a workload that not even the latest triple-A graphics bombs manage, and tax a halfway decent modern GPU like a hardcore stress test.
This is your real complaint. And I agree the client probably has some bugs with specific configurations. I currently run a GTX 1080 at home, and with vsync enabled and all settings at max, my GPU load is typically 30-40% while playing the game. Instead of trying to reach way out into left field to support your failing argument about hardware, why not just make your argument about the game itself? Because with that I could agree with you, and with that we can agree it's probably related to a compatibility bug with specific hardware or software (i.e., drivers).
 
That's coil whine; really high frame rates can cause some graphics cards to whine. It's not damaging though, so don't worry.
Coil whine can be an easy fix sometimes. Usually a small piece of rubberized tape is more than enough; it always depends on how much space you're dealing with, though, and what you have to pull apart to get at the coil(s) causing it. It can even sometimes be fixed using the old "spread spectrum" setting (if supported), but then you're trading performance for no coil whine.
 
No, they have a thermisiter that's built into the CPU itself. There is no internal protection circuit
Actually, it's "thermistor", not "thermisiter", and it's still a circuit since it is electrically connected inside the GPU. But you're just arguing semantics... for the sake of it? :rolleyes:

the self-throttling is done on the BIOS/UEFI side
Since you wanted to go into details, you're wrong again. The logic is handled by the driver. But don't take my word for it: http://nvidia.custhelp.com/app/answ...maximum-operating-temperature-and-overheating

I'm not even going to bother disputing the rest of your details since you're just here to argue instead of staying on topic or arguing about the real issue.

Coil whine can be an easy fix sometimes. Usually a small piece of rubberized tape is more than enough
That's an interesting fix, thanks for that. :) Usually I just get the component or piece of hardware replaced if it's having coil whine. Same with capacitor buzzing. But the workaround is helpful for coils, thanks.
 
there is no "magic overheat code". If a GPU overheats because its used to its maximum capacity, its either faulty, dirty, or a design flaw.
100% incorrect. You actually mentioned it in the name: "code". Faulty coding, or coding out of sync with the drivers, can and will cause overheating. To announce what you just did is to ignore every other comment. I'm using a HAF X case with fans out the wazoo, a water-cooled CPU, and a fan curve set manually in Afterburner. Setting the fans (running GTX 1080s in SLI) to 100% at 60°C should cap it at that, like every other game I play. This game is overheating the cards. I'm seeing mid 70s. With the fan curve I created disabled, I'm seeing 85°C and more. That's atrocious coding. Nothing to do with cooling.
 
Since you wanted to go into details, you're wrong again. The logic is handled by the driver. But don't take my word for it: http://nvidia.custhelp.com/app/answ...maximum-operating-temperature-and-overheating

I'm not even going to bother disputing the rest of your details since you're just here to argue instead of staying on topic or arguing about the real issue.
They are using the word "drivers" in that article so the general public can understand; "drivers" is not where it really happens. He is correct: it is BIOS/firmware controlled. The older Nvidia cards could be BIOS hacked (as I have done myself with GTX 680s), and you could control the entire card by flashing a new hacked BIOS. Fan control and MHz control are all done at the BIOS level. Unfortunately, Nvidia found a way to lock the 10xx-series cards and stop us deviants from playing with their architecture. At the time of writing there is no way to hack the 10-series BIOS. I'm not sure what your beef is with this user, but from what I've seen him post, he's been correct on all accounts. [Mod Edit: Personal not topical]
 
100% incorrect. You actually mentioned it in the name: "code". Faulty coding, or coding out of sync with the drivers, can and will cause overheating. To announce what you just did is to ignore every other comment. I'm using a HAF X case with fans out the wazoo, a water-cooled CPU, and a fan curve set manually in Afterburner. Setting the fans (running GTX 1080s in SLI) to 100% at 60°C should cap it at that, like every other game I play. This game is overheating the cards. I'm seeing mid 70s. With the fan curve I created disabled, I'm seeing 85°C and more. That's atrocious coding. Nothing to do with cooling.
Now please explain how a game can magically overheat your card. Seriously, if we could create heat like that, we would have other problems.
Yes, Battletech uses more resources than it should in some cases, but 100% usage isn't bad; it just means it's working up to its maximum.
Being used creates heat because sadly we still don't have perfectly efficient technology, so we need to cool it.
If the cooling isn't working as intended, then obviously 100% usage over a longer duration is bad.
But there is no magic way to create heat in your GPU.

Not to mention that 70°C is still fine; the safety shutdown is usually between 90°C and 110°C.

I can stress-test my GTX 960 with FurMark without getting above 70°C.
 
So in short, as I've been following this thread, a possible workaround would be:

- turn vsync OFF in the game itself
- turn vsync ON in the Nvidia Control Panel

Do I understand this correctly?
 
Now please explain how a game can magically overheat your card. Seriously, if we could create heat like that, we would have other problems.
Yes, Battletech uses more resources than it should in some cases, but 100% usage isn't bad; it just means it's working up to its maximum.
Being used creates heat because sadly we still don't have perfectly efficient technology, so we need to cool it.
If the cooling isn't working as intended, then obviously 100% usage over a longer duration is bad.
But there is no magic way to create heat in your GPU.

Not to mention that 70°C is still fine; the safety shutdown is usually between 90°C and 110°C.

I can stress-test my GTX 960 with FurMark without getting above 70°C.

School's in. So when a piece of code calls for an instance and the out-of-sync call is false-flagged, it gets moved to a saved state. Those saved states keep the GPU gates open for no reason... Erm, never mind. I'm going back to playing. I'm not trying to knock the code. It's an issue between the developers and Nvidia. Driver updates will fix this in due course, especially when the issue is, well, an issue. Have fun with the game. It's fantastic.
 
That's incorrect at face value. Modern processors typically have internal protection circuits to throttle/shut down in the event of hitting a pre-determined thermal limit. Software can't "damage" hardware in a regulated system. Now if you use software to bypass the safeties in hardware, then yes, you're right, it can. But that isn't possible with modern GPUs, modern drivers, and games.

As for heavy usage, that's wear and tear, not "damage".

Tell that to a friend of mine who ruined his perfectly fine GTX 1060. It worked like a charm for months, until one day he decided to do some extensive stress testing with it, and after that the card started to coil-whine like mad every time it came under moderate to high load.
In the end he got rid of it because the coil whine was so bad it gave him severe tinnitus, even with a headset on while gaming.

Software definitely can damage hardware even with safety measures built in. If you want to find out, just overheat your GPU/CPU until an emergency shutdown or BSOD, repeat it a few dozen times, and see what happens.
 
To those who think cards can't get overheated by a game, think again. This has happened in the past with a few games. StarCraft II was one of them, IIRC, Spore being another.
People were saying the same lines: "How can a game overheat your card? Just dust it out, or it's just faulty." Well... it does happen. With StarCraft II it was an issue with the FPS jumping like crazy (2000 fps) in the main menu, IIRC, and Blizzard did confirm that there was an issue with the GAME.
http://www.gameinformer.com/b/news/...rd-confirms-starcraft-ii-overheating-bug.aspx

For me, the game has been running mostly OK, with no overheating issues.

But one thing I did notice is that the VRAM usage seems to be jumping all over the place constantly... I don't get it. Isn't the game just supposed to reserve the maximum amount of VRAM it might use, if it's available, and stay there? I think that's how most games do it? I mean, it's still RAM, right? That's what it's there for.
But in this game it seems to be jumping around during a battle, anywhere from 16% to 40%. It's like the game is constantly loading and unloading textures and stuff. That can't be efficient, can it?
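(If you want to see that fluctuation as actual numbers instead of a wobbling overlay, you can log VRAM usage while the game runs. A quick sketch using the pynvml Python bindings; it assumes pynvml is installed and the game is on the first GPU, and the one-second sampling interval is arbitrary.)

[CODE]
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

# Sample VRAM usage once a second for a minute while the game is running.
for _ in range(60):
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    used_pct = 100.0 * mem.used / mem.total
    print(f"VRAM used: {mem.used / 2**20:.0f} MiB ({used_pct:.0f}%)")
    time.sleep(1)

pynvml.nvmlShutdown()
[/CODE]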
 
Running your GPU at 100% will not kill it unless it overheats (bad case airflow, or dust buildup), has faulty components, or is being fed bad power.

How do you think crypto miners operate? They run banks of GPUs, usually 3 or 4 or even more on breakout boards, at 100% for YEARS.

Anyone claiming that running your GPU at 100% is bad or will kill it is wrong; they are designed to run at 100%, or you wouldn't be able to mine with them or use them for compute applications.
 