I'm not in academia, so I might be fully ignorant about how things operate, but if professors don't read the actual paper, how do they know if it's BS or not?
A few other commenters have talked about the paper review process.
I wasn't thinking of this at all. Important to understand: the peer review process takes up only a minor part of a professor's mindshare. It's considered a chore. Much more important is to read lots of new papers (including pre-prints) for continual education, to know what's going on in your field and adjacent fields.
Here's how it works in our group. The professor gives papers to the PhD students or PostDocs, who read the paper completely. I regularly 'sub-review', as it is called, meticulously looking for issues. I have heard that there are professors who review entire papers in 2-3 hours, since they have a lot (10+) of papers per conference to review without any compensation while they have their own research, teaching, and funding to juggle.
It's not a pretty system sometimes.
Edited to add: Conferences also require declaring that there was someone who sub-reviewed the paper. The professor / PI mentions the PhD student's name in the review form of the paper. Of course, the professor also double-checks all the sub-reviews.
The sub-review process, when it works well, is arguably a reasonable one. To give the example of how this works from the perspective of the program committee of a conference I'm involved in:
The PC chairs assign papers to members of the PC. Those reviewers are ultimately responsible for review quality and, a more frequent problem for the conference, for ensuring the reviews are in on time. In principle, they can ask anyone to sub-review, but in practice it usually goes to grad students, postdocs, or graduate alumni (and since we have a relatively light review load per member, many people do all reviews themselves).

The reviewers arguably know more about the expertise of their grad students and postdocs than the chairs doing the assignments do. Also, unlike a journal, where editors might ask anyone with particular expertise, we assign reviews only to PC members, and we do assign them: PC members only get to state their preferences on what they would like to review. The sub-review process ideally lets reviewers hand papers to people they know are well suited to a particular paper, but who might not yet be experienced enough to be on the PC itself with those responsibilities, and whom the chairs might not know much about. It then lets those reviewers look over the sub-reviewer's work directly, which might include mentoring them.

While we do anonymous reviews, identities are visible to the chairs, and one thing I've noticed when chairing, for example, is that grad student sub-reviewers often do excellent, thorough reviews, but also often lack the confidence to be sufficiently critical when writing about the problems and weaknesses they identify, something the responsible reviewer can help with.
The review system (we use easychair) directly handles sub-reviewers, and our proceedings list all sub-reviewers (at least, those who actually submitted reviews). Good sub-reviewers can sometimes be reasonable candidates to ask to be on the PC the next year, and give a gentler, safer onramp: we're able to have a wider mix of junior and senior members when there are new postdocs (and I think in one case a grad student) who we already know do reliably good reviews and know our review process.
This feels like a core failure mode: papers are optimized for skim-level persuasion because the system is too overloaded for deep evaluation at scale. Then a lot of the actual scrutiny gets pushed onto under-credited sub-review labour. Peer review is too important to stay this invisible and under-incentivized.
Curious: when you do this, do you understand the math/reasoning of the paper and just have Claude do the coding? Not saying that matters if you just care about the end result, but I'm curious how much using an agent affects your understanding of what the papers are proving.
I went in with limited understanding and gained some more as the agent worked through it. What can appear complex in the paper often turns out to be far simpler and more elegant once you see the code written.
Awesome work! What always prevents me from implementing more solvers is the amount of math required. While the implementation always seems simple, understanding the different optimization strategies for each solver gets confusing.
It's really impressive that the author was able to implement rendering papers and physics sim papers with such regularity. It really is a feat. Makes me curious to see what their background is.
Can you elaborate on what you mean? It could be a matter of perspective. For a stack of blocks, each 1 meter high, the stack can get quite tall, and your expectations of how it should look IRL might not be accurate, since you've likely never experienced a large tower of blocks being knocked over from that vantage point. Especially if the masses of the objects are strange (super light for their size, or super heavy).
I know in older games the recommendation was to keep gravity low (~6 m/s^2, iirc) to help with simulation stability and make things look better; that might contribute to your sense that things are floaty.
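To make the "floatiness" concrete, here's a minimal sketch (my own illustration, not from any specific game engine) comparing free-fall times under real gravity versus a reduced ~6 m/s^2 value, using the standard kinematics result t = sqrt(2h/g):

```python
import math

def fall_time(height_m: float, g: float) -> float:
    """Analytic time for an object to free-fall a given height: t = sqrt(2h/g)."""
    return math.sqrt(2.0 * height_m / g)

# A 5 m drop under real gravity vs. a reduced "game" gravity of ~6 m/s^2.
real = fall_time(5.0, 9.81)
reduced = fall_time(5.0, 6.0)
print(f"9.81 m/s^2: {real:.2f} s, 6.0 m/s^2: {reduced:.2f} s")
```

The drop takes roughly 28% longer under the reduced gravity, which is enough for collapsing stacks to read as slightly slow-motion even when everything else in the simulation is correct.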
I don't find the examples in the git repo to be especially floaty, but I work with a lot of simulators so I might just be used to it.
I was curious to see what someone else's perspective was on something I routinely engage with. I wasn't sure if it was someone trolling or genuinely upset.
I don't think people really make up domestic abuse charges with this much detail. His wife explains in the post specifically what causes him to get so angry that he hurts her.
I don't see her having much incentive to lie and make up these statements, and see no evidence that she did lie. Some women lie about domestic abuse, most don't.
If the wild allegations in the smear job were hers, she does not rank very high on credibility.
Going by what people say, it was not unusual at all to use false allegations of abuse (or adultery) in divorce proceedings at that time. Sometimes it was the only way.
Both those statements need to be proven for her, and I don't see any strong evidence for either.
And if someone was going to make false allegations of abuse, why include specifics about how interrupting his calculus and drums caused his anger? Why not just say he was abusive, or state a more common reason for abuse? To me, the specifics make her statement more credible. Combined with his predatory history regarding women[1], I view Feynman as a disturbed individual (but a genius nonetheless).
I find the allegation credible, as I don't see why someone in her position would lie, and especially give specific details on what sets Feynman off.
Also, unless I see some concrete data about the amount/percentage of women who lie in order to get a divorce, the comment you linked is pure conjecture. Nothing really to argue about since it's just the vague idea of what people think about that time.
> Both those statements need to be proven for her, and I don't see any strong evidence for either.
You can query the search engines yourself; it's a pretty standard and accepted thing now, based on analysis of the letter and the fact that the interview with the FBI officer happened in her home town, Boise, Idaho. The only personal connection Feynman had with Boise, Idaho is through his second wife, Mary Louise Bell.
The redacted FBI files still contain references to the informant as "she" and "her", and the accusations match the tone of his wife's divorce filings.
Regarding the specificity of the complaints: of course, she was not an idiot. These are filings in a divorce court; unless they are specific they would likely be thrown out. On top of that, divorce lawyers were overseeing the filing of these accusations, and it would be their job to make them specific.
Yes, circumstantial, but about as damning as you can get. A vindictive wife with a tendency to throw wild accusations ... not a particularly credible source, especially when compared with how Feynman's sister and other wives talk about him.
As for the Baffler article, the only concrete thing is his anecdote in Surely You're Joking, and that's well addressed in
I don't agree with the parent commenter's characterization of Karpathy, but these projects are just simple toy projects. They're educational material, not production-level software.