> "Fullscreen" in the game settings is actually windowed fullscreen. This is why it's a PITA to get vsync to work. Windowed fullscreen acts as windowed, so you need to activate a global vsync and frame limiter, not a program-specific profile.

Thanks for the tip on using Inspector, but the game's fullscreen is a proper fullscreen, and not windowed like you claim. Details for running in windowed fullscreen are found at https://forum.paradoxplaza.com/foru...ing-screen-resolutions-aspect-ratios.1092119/ .
I found that using Nvidia Inspector and setting the global profile frame limit to 60 worked all the time, while forcing global vsync only worked in parts of the game.
The interactive scene between missions had no FPS limit; it would draw as hard as it could and soak up as many resources as it could. That *did* cause hardware damage, but limited liability makes it your problem, not Blizzard's.
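For anyone wondering how an uncapped menu manages to peg a GPU: here is a rough sketch of what a frame limiter does (not the game's actual code, just the general idea, in Python for illustration):

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~16.7 ms per frame at 60 FPS

def render_frame():
    # Stand-in for whatever the engine draws each frame.
    pass

def run_uncapped():
    # Without a limit the loop redraws as fast as the GPU allows,
    # which is why even a simple menu can sit at 100% load.
    while True:
        render_frame()

def run_capped():
    # With a limit the loop sleeps away the unused part of each
    # frame budget, so the GPU idles between frames.
    while True:
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)
```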
I had 665 fps while idling in the main menu with Vsync enabled, and after ~15-20 minutes of playing the first mission the capacitors on my GPU started to whistle, which never ever happened before while gaming.
The only thing that ever managed to make my capacitors whistle a bit was Furmark Burn-In, when I stress-tested the system to check stability/temps and whether everything was working properly after assembly.
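If you want numbers instead of going by ear, something like this will log temperature, load, and core clock once per second while the menu is running (a quick sketch using the pynvml package; assumes an NVIDIA card and that pynvml is installed):

```python
# pip install pynvml  (NVIDIA's NVML Python bindings)
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        print(f"temp={temp}C  load={util.gpu}%  core={clock}MHz")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```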
That's incorrect at face value. Modern processors typically have internal protection circuits to throttle/shut down in the event of hitting a pre-determined thermal limit. Software can't "damage" hardware in a regulated system. Now if you use software to bypass the safeties in hardware, then yes, you're right, it can. But that isn't possible with modern GPUs, modern drivers, and games.
> As for heavy usage, that's wear and tear, not "damage".

We're not talking about heavy usage. We're talking about the equivalent of a stress test over several hours. That's not wear and tear. Even stress tests for overclocking GPUs are usually measured in minutes, not hours, until you find that perfect sweet spot. You should know this.
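To make the "internal protection" point concrete: the behaviour is roughly like the sketch below (a simplified illustration, not the actual firmware or driver code, and the threshold numbers are invented):

```python
# Simplified model of temperature-based throttling; not real firmware,
# and the thresholds below are invented for illustration.
THROTTLE_TEMP_C = 83   # example slowdown point, varies per card
SHUTDOWN_TEMP_C = 95   # example emergency cutoff, varies per card

def adjust_clock(temp_c, clock_mhz, base_clock_mhz):
    """Step clocks down near the limit; cut power past it."""
    if temp_c >= SHUTDOWN_TEMP_C:
        return 0  # hard shutdown: the card protects itself
    if temp_c >= THROTTLE_TEMP_C:
        # Lower the clock so the chip produces less heat.
        return max(base_clock_mhz // 2, clock_mhz - 50)
    return clock_mhz
```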
[Mod Edit: Reference removed, Disrespect in quoted post] If you look back through my posts you'll see I complained about the forum move to Paradox, was POed at HBS about it, and also criticized the first mission in Battletech.
> Just a question for the self-proclaimed experts: What GPU will have a longer lifespan? The one that I put through Furmark Burn-In 24/7, or the one of a typical desktop user who maybe plays a few hours a day?

Of course the one being tested/benchmarked 24/7 will generally have a shorter lifespan. There's no argument there. But there are so many unknowns, such as the quality of the power coming from the power supply, the environment inside and outside the system, and whether or not those specific GPUs have a manufacturing defect. And you know what, that's all beside the point.
> There is a reason big server farms have tons of backup HDDs: the lifespan goes down significantly compared to normal desktop usage, and the things actually die pretty often if you put them on permanent heavy duty, or get replaced after a certain amount of read/write cycles.

Most server farms run RAIDs for their storage, making use of the redundancy. Larger server farms may of course have extra drives on hand to replace failed units on the fly within the RAIDs. And they're not all using "HDDs", as some have upgraded to solid-state storage these days. Still, average MTBF on server storage drives (aka enterprise drives) is measured in years, so no, they don't "die pretty often". I work for an entity of a university, and all of our virtual servers are hosted within our specific college's servers; they haven't had a single drive die in the last several years. Instead they've been adding drives to increase capacity.
> The same way you can't expect to get a lot of mileage out of your car engine if you redline the thing all the time.

Analogies to push your forum-warrior argument? Really? *sigh*
> There is simply no reason a game that looks as dated as BT should needlessly put on a workload not even the latest triple-A graphics bombs manage, and tax a halfway decent modern GPU like a hardcore stress test.

This is your real complaint, and I agree the client probably has some bugs with specific configurations. I currently run a GTX 1080 at home, and with vsync enabled and all settings at max my GPU load is typically 30-40% while playing the game. Instead of trying to reach way out in left field to support your failing argument about hardware, why not just pose your argument about the game itself? Because with that I could agree with you, and with that we can agree it's probably related to a compatibility bug with specific hardware or software (i.e. drivers).
> That's coil whine; really high frame rates can cause some graphics cards to whine. It's not damaging though, so don't worry.

Coil whine can be an easy fix sometimes. Usually a small piece of rubberized tape is more than enough; it always depends on how much space you're dealing with, though, and what you have to pull apart to get at the coil(s) causing it. It can even sometimes be fixed using the old "spread spectrum" setting (if supported), but then you're trading performance for no coil whine.
Actually it's "thermistor" not "thermisiter", and it's still a circuit since it is electrically connected inside the GPU. But you're just arguing semantics... for the sake of it?No, they have a thermisiter that's built into the CPU itself. There is no internal protection circuit
> the self-throttling is done on the bios/UEFI side

Since you wanted to go into details, you're wrong again. The logic is done from the driver. But don't take my word for it: http://nvidia.custhelp.com/app/answ...maximum-operating-temperature-and-overheating
> Coil whine can be an easy fix sometimes. Usually a small piece of rubberized tape is more than enough

That's an interesting fix, thanks for that. Usually I just get the component or piece of hardware replaced if it's having coil whine. Same with capacitor buzzing. But the workaround is helpful for coils, thanks.
> there is no "magic overheat code". If a GPU overheats because it's used to its maximum capacity, it's either faulty, dirty, or a design flaw.

100% incorrect. You actually mentioned it in the name: "code". Faulty coding, or coding that's out of sync with the drivers, can and will cause overheating. To announce what you just did is to ignore every other comment. I'm using a HAF X case with fans out the wazoo, a water-cooled CPU, and a fan graph manually set in Afterburner. Setting the fans (running GTX 1080s in SLI) to 100% at 60C should cap it at that, like every other game I play. This game is overheating games. I'm seeing mid 70s. With the fan curve I created disabled, I'm seeing 85C and more. That's atrocious coding. Nothing to do with cooling.
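As an aside, the "fan graph" mentioned above is just a temperature-to-fan-speed mapping. A rough sketch of the idea (the control points here are invented for illustration, not anyone's actual curve):

```python
# Toy fan curve: (temperature in C, fan duty in %) control points.
# These points are invented for illustration only.
CURVE = [(30, 30), (50, 60), (60, 100)]

def fan_speed(temp_c):
    """Linearly interpolate fan duty between the control points."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]  # pinned at 100% from 60C upward
    for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
```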
> [...] This game is overheating games. [...]

It's also overheating 'mechs!
> Since you wanted to go into details, you're wrong again. The logic is done from the driver. But don't take my word for it: http://nvidia.custhelp.com/app/answ...maximum-operating-temperature-and-overheating

They are using the name "drivers" in that article for the general public to understand; "drivers" is not the real mechanism. He is correct: it is BIOS/firmware controlled. The older Nvidia cards could be BIOS-hacked (as I have done myself with GTX 680s) and you could control the entire card by flashing a new hacked BIOS. Fan control and MHz control are all done at the BIOS level. Unfortunately Nvidia found a way to lock the 10xx-series cards and stop us deviants from playing with their architecture. At the time of writing there is no way to hack the 10-series BIOS. I'm not sure what your beef is with this user, but from what I've seen him post, he's been correct on all accounts. [Mod Edit: Personal not topical]
I'm not even going to bother disputing the rest of your details since you're just here to argue instead of staying on topic or arguing about the real issue.
> It's also overheating 'mechs!

Haha, oops. *smashes the edit button*
> 100% incorrect. You actually mentioned it in the name: "code". Faulty coding, or coding that's out of sync with the drivers, can and will cause overheating. [...] This game is overheating games. I'm seeing mid 70s. With the fan curve I created disabled, I'm seeing 85C and more. That's atrocious coding. Nothing to do with cooling.

Now please explain how a game can magically overheat your card. Seriously, if we could create heat like that we would have other problems.
Yes, Battletech uses more resources than it should in some cases, but 100% usage isn't bad; it just means the card is working up to its maximum.
Being used creates heat, because sadly we still don't have perfectly efficient technology, so we need to cool it.
If the cooling isn't working as intended, then obviously 100% usage over a longer duration is bad.
But there is no magic way to create heat in your GPU.
Not to mention that 70°C is still fine; safety shutdown is usually between 90°C and 110°C.
I can stress-test my GTX 960 with Furmark without getting above 70°C.
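For what it's worth, the card reports its own slowdown and shutdown temperatures, so you don't have to guess. A quick sketch using the pynvml package (assumes an NVIDIA GPU and a driver that exposes these thresholds):

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# The driver reports the temperatures at which the card throttles
# or shuts itself off; not every GPU/driver exposes both values.
slowdown = pynvml.nvmlDeviceGetTemperatureThreshold(
    handle, pynvml.NVML_TEMPERATURE_THRESHOLD_SLOWDOWN)
shutdown = pynvml.nvmlDeviceGetTemperatureThreshold(
    handle, pynvml.NVML_TEMPERATURE_THRESHOLD_SHUTDOWN)
print(f"slowdown at {slowdown}C, shutdown at {shutdown}C")

pynvml.nvmlShutdown()
```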
> So in short, as I've been following this thread, a possible workaround would be:
> - turn vsync OFF in the game itself
> - turn vsync ON in the Nvidia control panel
> Do I understand this correctly?

Once you change the vsync setting in the control panel, it nulls and voids any in-game setting, no matter what it's set to.