Discussion (50 Comments)
While I agree in the abstract, the problem is that when you're well-established, in most areas, your research basically amounts to supervising PhD students and postdocs who are not well-established. And they're struggling to meet the requirements to finish their thesis, get a permanent position, etc. So if you encourage them to do slow science and publish less, there's a high risk that you're basically letting them down. Plus, to do research you're probably using some grant funding and guess what the funding agency expects...
Thus, most people never get to a point in their career where they can safely say "let's ignore incentives and just pursue this project slowly and carefully". There might be some exceptions. Probably in math, where research is often individual. And maybe in other areas if you can have a smallish side project with other professors that doesn't require much specific funding, or if you have a student who is finishing and has already secured a position in industry so their stakes aren't high. I've been in those situations sometimes, but it's the exception rather than the rule. The truth is that even senior professors seldom have the luxury of not being heavily pressured by incentives.
B) You posted almost the same comment with a link to your project 8 times.
The professor can always set his own terms, and it's up to the student whether to take him as an advisor. At both universities I attended, there were professors who were very particular about how much research they did and how much money they brought in (which could be zero), and any student who wanted them as an advisor needed to understand the risks involved.
It's a lot less pressure than industry once you have tenure.
At the same time, knowing someone who committed academic fraud during his PhD and was caught, I can say two things:
A lot of people do it when they simply don't need to. They're not trying to "survive in academia". They're trying to get to the top. The person in question was smart, bright, and did good research (at least excluding the stuff he made up). He could have gotten an academic position without committing fraud. And he could have had a great industry job without it too.
No matter - he simply switched to another top-tier university, got his PhD, and is now running a startup. Which brings me to the second point: the repercussions are minor even when you do get caught.
Was it made public?
So he switched universities.
But still, didn't he worry that he'd bump into his former professor at a conference and that he would tell his new advisor? I don't know if he made some deal with him ...
If it is your field, you don't need an intro, and don't want to hear whatever yarn they are spinning in the abstract/discussion. You jump straight to the figures / table to review the data yourself.
I wasn't thinking of this at all. Important to understand: the peer review process takes up only a minor part of a professor's mindshare. It's considered a chore. Much more important is to read lots of new papers (including pre-prints) for continual education, to know what's going on in your field and adjacent fields.
It's not a pretty system sometimes.
Edited to add: Conferences also require declaring that someone sub-reviewed the paper. The professor / PI mentions the PhD student's name in the review form for the paper. Of course, the professor also double-checks all the sub-reviews.
The PC chairs assign papers to members of the PC. Those reviewers are ultimately responsible for the review quality and, a more frequent problem for the conference, for ensuring the reviews are in on time. In principle, they can ask anyone to sub-review, but in practice it usually goes to grad students, postdocs, or graduate alumni (and since we have a relatively light review load per member, many people do all their reviews themselves). The reviewers arguably know more about the expertise of their grad students and postdocs than the chairs doing the assignments do.

Also, unlike a journal, where editors might ask anyone with particular expertise, we only assign reviews to PC members, and we do assign them: PC members only get to state their preferences on what they would like to review. The sub-review process ideally lets reviewers ask people to review who they know would be suited to a particular paper, but who might not be experienced enough to reasonably be on the PC itself with those responsibilities, and whom the chairs might not know much about. It then lets those reviewers look over the sub-reviewer's work directly, which might include mentoring them.

While we do anonymous reviews, identities are visible to chairs, and one thing I've noticed when chairing, for example, is that grad student sub-reviewers often do excellent, thorough reviews, but also often lack the confidence to be sufficiently critical when writing about the problems and weaknesses they identify, something that the reviewer can help with.
The review system (we use easychair) directly handles sub-reviewers, and our proceedings list all sub-reviewers (at least, those who actually submitted reviews). Good sub-reviewers can sometimes be reasonable candidates to ask to be on the PC the next year, and give a gentler, safer onramp: we're able to have a wider mix of junior and senior members when there are new postdocs (and I think in one case a grad student) who we already know do reliably good reviews and know our review process.
As an example, let's take soccer: All players will tackle if they think they can get away with it. Even Messi, Ronaldo, Mbappe do it. Those who are caught receive a red card and are sent off the field. Do red cards stop tackles? No. Players just try hard not to get caught.
I understand this is a cheeky section heading and the author is not really making this point, but this may be one of the dumbest popular phrases out there. You're effectively saying "Don't get upset at me for being an awful person, I probably wouldn't have succeeded if I'd been a good person." "The game," of course, is made up of players and if no one played that way there would be no game.
Of course the thing that makes the game rotten is incentives. The academic profession as a whole has decided to incentivize and reward this behavior.
Do serious workers tend to get out of the field, if the incentives are wrongheaded enough? Sure. Some. Does that fix the incentives or the outcomes within that field? No, not at all.
I suspect the way this usually gets started is similar to embezzlement schemes. “Oh I’ll just borrow a few dollars from the till and pay it back tomorrow” is akin to “The manuscript is due tonight so I’ll just touch up this microphotograph to look like the other one that had bad focus.”
That escalates into forging invoices on the one hand and completely fabricated data on the other. By that point they’re in too deep to stop until they get caught.
That's not obviously true at all.
The issue is that there is no incentive to do the additional work necessary to generate reproducible results because of the pressure to constantly generate sufficiently novel results to publish.
If you spend the additional time required to have fully reproducible results and your competition is not, you're probably going to lose the game (where the game is obtaining more funding).
Not generating reproducible results doesn't mean you're a fraud, but the absence of a requirement to generate them in order to publish means that it's easier for fraudsters to operate than it would be with that requirement.
You don't have to hate someone in order to, er, apply incentives against whatever it is they just did.
But I don’t hate you for this. None of these terrible moves you make are your fault. Just a reality of the world we live in. Hate the game, not the player.
But why are they imposing these structures? To try to weed out the cheating scum. Once you start walking down that path, you're signing up for a distortion of value.
You don't need to be a "cheating scum" to succeed, but there are not enough checks in place to prevent that from being a successful strategy for someone who wouldn't succeed otherwise.
The people who need to change the most are the nameless "they" who issue funding because they have the most control over these systems, along with the publishing cartel which has almost no redeeming value in today's environment.
It is a deflection of personal responsibility, full stop.
That's not what that phrase means in general, and it's normally not used to describe one's own behavior (when it is, I would say your definition is closer to correct because it's being used as an excuse for antisocial behavior).
The point is that the system's incentives are at a minimum misaligned with what would be considered "good" behavior and in the worst case actively encourage undesirable behavior.
It is often the case that people have no meaningful alternative to participating in these systems and have no control over the rules, and the behavior they induce is generally not bad enough to be seen as "awful", let alone bad enough to call the person themselves "awful".
Things have changed since, but in my time, if a journal required source code for publication, most of the professors in my department would not have published there.
Even when they do require it, one of the problems for journals that require source code for publication is that there is little support for making sure that code actually works reliably. Reviewers often aren't obligated to look at code packages, and when they are, might not be expected to actually get anything running; they might not even have the resources to do so. I have done reviews where I have tried to get code to run, and oftentimes I feel like the code needs work not because of any malice, but simply because making a one-time code package that works, and continues to work, for others, over time, without updates, can be quite hard, especially when odd dependencies are involved. It's also not necessarily something related to normal reviewing of scientific content, but more things like insufficient dependency pinning, accidentally hard-coded paths, environment assumptions that worked for the authors but not the reviewers, etc.
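To make the hard-coded-paths point concrete, here is a minimal sketch (all names hypothetical, not from any particular paper's artifact) of the two fixes that most often separate a code package that runs for reviewers from one that only ran on the author's machine:

```python
from pathlib import Path

# Fix 1: resolve bundled files relative to this script, not the author's
# machine. A hard-coded path like "/home/alice/project/data.csv" breaks
# for anyone who unpacks the package somewhere else.
DATA_DIR = Path(__file__).resolve().parent / "data"

def data_path(name: str) -> Path:
    """Return the path to a bundled data file, wherever the package lives."""
    return DATA_DIR / name

# Fix 2: pin dependencies exactly in requirements.txt so the environment
# can be reconstructed years later, e.g.:
#     numpy==1.26.4
#     scipy==1.11.4
# rather than an unpinned "numpy" that drifts with each new release.

if __name__ == "__main__":
    print(data_path("results.csv"))
</antml_code_interp_result>```

Neither fix involves the scientific content of the paper, which is the author's point: the failures are environmental, not malicious, and catching them is closer to build engineering than to refereeing.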
Dealing with that process might actually be something that a journal could do as part of a publication fee, the way that some journals currently do visual editing of figures.
I wouldn't say it's pleasant to read, but I didn't have any issue understanding it.