Do you have a personal preference when it comes to choosing between fun (according to your subjective opinion, of course), historical accuracy/plausibility (or whatever the equivalent would be for Stellaris), and balance, assuming it is impossible to do something in a manner that's compatible with all of them?
For me, I'm a firm believer in Fun > Balance > Accuracy.
Of course, ideally it's a middle ground between all of those, but at the end of the day we are making games and they are meant to be fun. Balance comes next, because something being too powerful makes it the go-to option and reduces the viable decisions you can make, which takes away from fun strategic decision making. Then historical accuracy, because something somewhat plausible, or a widely held modern pop-culture view of something, that is fun is generally more enjoyable than something very accurate but boring; at the same time, something purely fantasy and unbelievable takes you out of the immersion of the game, which is also a negative on fun.
I think we strike a pretty good middle ground for those things though.
How do you personally measure how much progress you are making on something, seeing as a lot of metrics (e.g. X lines of script) are a bit fuzzy when it comes to time/difficulty to implement them?
There are few foolproof metrics, but the things I take into account are generally quantity of things done, quality of implementation, time efficiency, and future-proofing.
It's all well and good to implement a bunch of new features very quickly, but if they end up buggy or not well tested then you are just making work for the future, which you'll likely end up dealing with yourself, which is an overall loss.
This also depends on the stage of development you are in. For experimental stuff you plan to rip out, time efficiency and quantity become more important; for core gameplay features, quality and future-proofing become more important. And towards the end of development, before you ship something, you want to be whacking out fixes for a lot of bugs even if they may not be the perfect solution, because that is better than the issue staying, and you can budget future tech-debt work to make them better. Of course you don't want a truly hacky fix either, as then it probably doesn't have enough quality to actually work right.
Pretty much the only outright bad measurement is lines of code written; I can artificially extend or shorten pretty much every piece of code I write if I need to.
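As a contrived sketch of why line counts mean so little (the pop/employment naming here is made up for illustration, not real game code), the exact same check can be written at very different lengths:

```cpp
#include <algorithm>
#include <vector>

// Verbose take: a dozen lines to answer "is any pop unemployed?".
bool AnyPopUnemployedVerbose(const std::vector<bool>& popEmployed) {
    bool foundUnemployed = false;
    for (bool employed : popEmployed) {
        if (!employed) {
            foundUnemployed = true;
            break;
        }
    }
    return foundUnemployed;
}

// Compact take: identical behavior in a single statement.
bool AnyPopUnemployedCompact(const std::vector<bool>& popEmployed) {
    return std::find(popEmployed.begin(), popEmployed.end(), false)
           != popEmployed.end();
}
```

Both versions do the same work; rewarding one over the other by line count measures typing, not progress.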
Assuming the original creator of something is still working on the same project, is it customary for them to iterate on or bugfix their own work (as they might know what they did/why they did something), customary for them not to do that (as they might be blind to their own mistakes), or decided on a case-by-case basis (e.g. based on who is available)?
I wouldn't say it's customary so much as just sensible: they'll know the system better than you, and it's more efficient for someone to explain the system to you at a high level before you dive in than it is for you to try and piece it together. They will also probably know the areas they were dissatisfied with, which can give you a head start on places to look at to improve.
This also relates to the previous question on metrics for doing well: if you write something that nobody else can understand, you make it harder for other people to ever work on it, leaving you as the only one who knows how it works, and leaving your team screwed if you are sick, on vacation, move teams, or quit, as that black box of knowledge goes with you. So making things that are not unnecessarily complex, hard to read, or littered with little gotchas is important.
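As a toy example of the kind of gotcha meant here (the Fleet type and all the names are invented purely for illustration), a "getter" that quietly rewrites state will eventually surprise whoever touches it next:

```cpp
#include <numeric>
#include <vector>

// Hypothetical type for illustration only.
struct Fleet {
    std::vector<int> shipPower;
    int cachedPower = 0;
};

// Gotcha version: the name promises a pure read, but it also
// rewrites the cache as a hidden side effect.
int GetFleetPowerGotcha(Fleet& fleet) {
    fleet.cachedPower = std::accumulate(fleet.shipPower.begin(),
                                        fleet.shipPower.end(), 0);
    return fleet.cachedPower;
}

// Clearer split: the write is an explicit, separately named step,
// and the read is const and side-effect free.
void RefreshFleetPower(Fleet& fleet) {
    fleet.cachedPower = std::accumulate(fleet.shipPower.begin(),
                                        fleet.shipPower.end(), 0);
}

int GetFleetPower(const Fleet& fleet) {
    return fleet.cachedPower;
}
```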
As is writing good documentation and unit tests for things if you get the chance, though that can of course be easier said than done.
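Even a bare-bones assert-style test goes a long way; a minimal sketch with a made-up function, not tied to any particular test framework:

```cpp
#include <cassert>

// Hypothetical function under test.
int ClampHappiness(int value) {
    if (value < 0) return 0;
    if (value > 100) return 100;
    return value;
}

int main() {
    // Each assert documents an expectation in executable form,
    // so the next person can change the code and know if they broke it.
    assert(ClampHappiness(-5) == 0);     // below range clamps to 0
    assert(ClampHappiness(42) == 42);    // in-range values pass through
    assert(ClampHappiness(250) == 100);  // above range clamps to 100
    return 0;
}
```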