If you build your own computer, do get a CPU cooler; even nice CPUs can come with bad stock fans, and aftermarket coolers aren't very expensive either.
This is especially true for people who live in places with high ambient temperatures.
Remember to check Hardwareswap on Reddit! You can find amazing deals on there too. Overclockers do some nice bundles if you don't want the hassle of setup. I think it's time to upgrade from my old i5 2005, which I have had for ages.
With games in general, the performance of a single core is more important than the number of cores.
There are hosts of problems you cannot solve more quickly by throwing more cores at them. Games are prime examples.
The game already uses multithreading as far as feasible.
Take the code to calculate the Fibonacci sequence up to a certain number. Now try to multithread it so that each number is calculated by a separate thread at the same time.
You will realise you cannot.
And programming a game's main calculation is like the Fibonacci sequence, to the tenth power.
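To see why, here is a minimal sketch (plain Python, purely for illustration): each Fibonacci number is built from the two before it, so a second thread would have nothing to work on until the first had already finished.

```python
# Sketch: the Fibonacci recurrence is inherently sequential.
# fib(n) needs fib(n-1) and fib(n-2), so a worker thread asked to
# compute fib(n) must wait for the previous steps to finish --
# there is no independent work to hand out to other threads.
def fib_sequence(count):
    """Return the first `count` Fibonacci numbers."""
    sequence = []
    a, b = 0, 1
    for _ in range(count):
        sequence.append(a)
        a, b = b, a + b   # step n+1 depends on steps n and n-1
    return sequence

print(fib_sequence(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```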
One of the devs recently confirmed that the AI is already running massively parallel. However, there are limits to that too: at some point the main game tick has to wait for all AIs to make their decisions before the next game tick. Otherwise we end up with an AI doing nothing while the game runs fluidly.

You are partially right, but in a roundabout way. In Stellaris there are a ton of computations that can be done in parallel. Ships moving inside each and every star system only need to account for the local ships in the same system. Every empire sector has an independent economy and AI associated with it. Every combat only accounts for the fleets actively involved in it. These things can all run in parallel, and a lot of them don't need to run on every single game day. But in order for a game day to be "done", you actually need everything for that day to have completed computation.
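A rough sketch of that "everything must finish before the day is done" structure, with made-up task names rather than anything from the actual Stellaris code:

```python
# Illustrative only -- hypothetical task names, not actual Stellaris code.
# Independent chunks of a game day (per-system ship movement, per-sector
# economy, per-battle combat) can run in parallel, but the next tick
# cannot start until every one of them has finished.
from concurrent.futures import ThreadPoolExecutor

def update_system_ships(system_id):
    return f"ships updated in system {system_id}"

def update_sector_economy(sector_id):
    return f"economy updated in sector {sector_id}"

def run_game_day(day, systems, sectors):
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(update_system_ships, s) for s in systems]
        futures += [pool.submit(update_sector_economy, s) for s in sectors]
        # Implicit barrier: gather every result before the day counts as "done".
        results = [f.result() for f in futures]
    return results

run_game_day(day=1, systems=range(4), sectors=range(2))
```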
I have a good computer and I'm also having lag problems with this game, and not only with Stellaris. For example, with Shogun 2, which I still play, there are optimization issues where the GPU goes mad. Stellaris is not demanding in terms of system requirements; the graphics are very simple compared to other games. It's more about optimization and making efficient use of computer resources than about upgrading your hardware. So in the end I think that with a better computer you won't notice a massive change in this game's performance.
My specs are: AMD-FX, 8 GB RAM, a Radeon R9, and a Corsair SSD. Early game it runs smoothly, but mid-to-late game everything changes. Just give PDX time; sooner or later they are going to find what is causing this problem.
Which AMD-FX processor, and which R9 series graphics card?
8150, R9 200

The AMD FX-8150 isn't a very good CPU for gaming. The Bulldozer CPUs suffered from poor single-thread performance, and not many games use 8 threads well enough to compensate for that. The R9 200 is a series consisting of cards like the R9 270, R9 280, and R9 290. I recall that the R9 290 was about on par with the GTX 970, but I don't think the 270 or 280 are nearly as good. You could probably find out what exact card you have by using GPU-Z.
GPU cannot be the bottleneck. If anything, CPU delays interfere with drawing passes and thus FPS more.

CPUs aren't a bottleneck unless he's playing at some silly tiny resolution like 640x480. Stellaris isn't an especially CPU- or GPU-intensive game anyway. Someone mentioned their i5 getting 40-45 C, which is similar to what my AM2+ Phenom II Thuban hits playing Stellaris.
GPU cannot be the bottleneck.
It is worth an upgrade if your CPU is mediocre like the AMD FX-8150. If you have a good CPU with good single-thread performance, you're almost always better off upgrading to a new GPU rather than a better CPU. But if you do not, your CPU will bottleneck your graphics card, especially if you're running something like a GTX 1070 or GTX 1080. Simply turning up the resolution won't matter if your CPU doesn't have enough performance to keep up with your graphics card, which will lead to a decrease in FPS.
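A back-of-the-envelope way to see the bottleneck argument (the millisecond figures below are made up purely for illustration): whichever of the CPU or GPU needs more time per frame sets the frame rate.

```python
# Illustrative numbers only: frame rate is limited by whichever side
# (CPU or GPU) needs more time per frame.
def effective_fps(cpu_ms_per_frame, gpu_ms_per_frame):
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# Fast GPU paired with a slow CPU: the CPU caps the frame rate.
print(effective_fps(cpu_ms_per_frame=20.0, gpu_ms_per_frame=8.0))   # 50 fps
# Same GPU with a faster CPU: now the GPU is the limit.
print(effective_fps(cpu_ms_per_frame=6.0, gpu_ms_per_frame=8.0))    # 125 fps
```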
- There is one last change that is my favorite. In our Clausewitz engine, there was an old code loop that nobody ever dared to touch. It was processing all the user interface elements in a "flat manner" instead of the "tree hierarchy". This means that the more windows and buttons we added to the game, the heavier this loop became. And the windows didn't even have to be shown for it to slow down the game. We always knew about this infamous spot; however, reworking it without breaking all the interfaces was nearly impossible. Until now. I found the way! Previously that code loop had ~120,000 passes in each frame; now it's under 700, processing only the necessary interface elements. By that I mean, when you are looking at the technology trees, we are not processing the hidden production windows and buttons, etc.
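To picture the change being described, here is a guess at its general shape in Python (not the actual Clausewitz engine code): a flat loop touches every widget every frame, while walking the tree lets you skip entire hidden branches.

```python
# Hypothetical sketch of flat iteration vs. tree traversal of UI elements.
class Widget:
    def __init__(self, name, visible=True, children=()):
        self.name = name
        self.visible = visible
        self.children = list(children)

def process_flat(all_widgets):
    # Old approach: every widget in the game is touched every frame,
    # whether its window is open or not.
    return sum(1 for _ in all_widgets)

def process_tree(root):
    # New approach: skip an entire subtree as soon as its root is hidden.
    if not root.visible:
        return 0
    return 1 + sum(process_tree(child) for child in root.children)

hidden_production = Widget("production", visible=False,
                           children=[Widget(f"button{i}") for i in range(100)])
tech_view = Widget("tech_tree", children=[Widget(f"tech{i}") for i in range(20)])
ui_root = Widget("root", children=[hidden_production, tech_view])

flat_list = [ui_root, hidden_production, *hidden_production.children,
             tech_view, *tech_view.children]
print(process_flat(flat_list))   # 123 widgets touched every frame
print(process_tree(ui_root))     # 22 widgets touched, hidden branch skipped
```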
Someone doesn't play on a high enough resolution, then.
CPUs rarely increase performance in games substantially. You will always be better off upgrading to a new GPU, unless your mainboard doesn't support it, in which case you'll need a new CPU, unless by happenstance your new mainboard supports your old socket I guess. They're basically tertiary upgrades after mainboard and GPU if you want to increase your frames per second.
GPUs are almost always the bottleneck in frames. Games simply aren't big enough number crunchers to tax CPUs very much, unless you have a strange case like you're crunching PhysX on your CPU instead of your GPU (for whatever silly reason). Stellaris isn't a very intense game anyway. An Ivy Bridge will crush it into submission as much as a Skylake in any realistic measurement, so a new CPU is not going to be a panacea for any sort of end game FPS drop. That's inherent to the version of Clausewitz engine that Stellaris runs on.
And no, an overclocked Groundshaker won't be much better than an overclocked Pentium IV, TBH. P4 and Earthmover were both "Speed Demon" type uarches, focusing on "Phat Megahurtz" and Big Airflow to get things done. Raisin/Grape is AMD's first "Brainiac" design, comparable to Intel Core (the first Intel Brainiac design was Nehalem) from the early 2010s.
AMD continues to overfocus on multithreading. Great for workstations (these days?) but mediocre for games. Raisin promises great multithreading performance and decent single-core performance, though, which is fantastic for people who use their PCs not just for "vidya gams" but also for video/photo/3D editing and other real work that takes advantage of Big Threads.
Also it's ICE ICE BABY compared to the world's first commercial thermonuclear reactor, Bulldozer/Excavator/Piledriver/Steamroller. Unfortunately the RX 480 retains the thermonuclear capability of the Earthmovers. We'll have to wait for Vega to see ICE ICE BABY hit the GPU scene. Maybe by 2025 AMD will reach the apex of silicon development that Intel has hit and release a good single-core Brainiac design that runs cold and combines AMD's experience with multithreading with what it has learned about single cores. Just in time for Intel or IBM to release the first commercial photonic uarch and make all silicon obsolete.
tl;dr CPUs aren't a serious gamer thing.
If my experiences a patch or two ago are anything to go by, an Ivy will not crush this game; an Ivy will result in lag late game, just like all available CPUs do. NO CPU you can buy is sufficiently fast to avoid lag.
I don't know what this post has to do with the OP, but a couple of points:
- What the hell does the energy efficiency of an RX 480 have to do with someone who has a GTX 1080?
- The reason AMD has a design that is better for servers than for gamers is simple: they designed a server chip and also sold it to gamers. The only reason Naples is not out yet is that it takes longer to verify server chips than consumer ones.
- Single-core performance is not very far behind Skylake/Kaby Lake; the main issue is the clock deficit, not the IPC. This is most likely due to the low-power process they manufacture it on, something that will pay dividends in the mobile space. That they are this close is kind of amazing for a company that is so far behind in research dollars, but probably irrelevant to the thread.
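For what it's worth, the clock-vs-IPC point is just the usual first-order approximation that single-thread performance scales with both; the figures below are invented purely to illustrate it and aren't measurements of any real chip.

```python
# Illustrative numbers only: single-thread throughput ~ IPC x clock.
def relative_single_thread(ipc, clock_ghz):
    return ipc * clock_ghz

chip_a = relative_single_thread(ipc=1.00, clock_ghz=4.5)  # higher-clocked part
chip_b = relative_single_thread(ipc=0.97, clock_ghz=4.0)  # similar IPC, lower clock
print(f"deficit: {1 - chip_b / chip_a:.0%}")  # ~14%, mostly from the clock gap
```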