Slightly more nuanced, in that the reciprocal reviewer may have been essentially forced to sign despite having other commitments, or may not even have been the lead contributor. Nowadays, if a student submits a side project to a top-tier conference and any of the authors has a significant publication count at top-tier venues, one of those authors must sign on as a mandatory reviewer. Students need to publish; I need it much less, since I really want to publish big innovations rather than increments. But now I get all these mandatory-reviewer emails demanding I review for a conference because a student has my name on the paper and I'm the most senior author, even though I may have just seeded the idea or helped in significant ways. Many of those are not my passion projects, just something a student did that I helped with, but now every AI conference demands I review or the student gets hurt, and I'm the middle author.
But I think the whole anti-LLM review philosophy is wrong. If anything, we need multiple deep background and research analyses of papers. So many papers are trash, republish what has already been done, or miss things. The volume of AI papers makes it impossible for a human alone to really critique work, because hundreds of new papers come out every day.
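To make that concrete, here's a minimal sketch of what one automated background analysis could look like, using the OpenAI Python client. The model name, prompt, and draft_background_analysis helper are all illustrative assumptions, not anyone's actual review pipeline:

    # Hypothetical sketch: drafting a background/novelty analysis of a
    # submission with an LLM. Model name and prompt are assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_background_analysis(title: str, abstract: str) -> str:
        """Ask the model for prior work and overlaps the reviewer should verify."""
        prompt = (
            "You are assisting a peer reviewer. For the paper below, list "
            "closely related prior work to verify, and flag any claims that "
            "sound like they may duplicate existing results.\n\n"
            f"Title: {title}\nAbstract: {abstract}"
        )
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any capable model would do
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

Every citation it suggests still has to be checked by hand; the point is triage, not a verdict.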
I keep forgetting how corrupt authorship of academic papers is. When I read papers, I imagine all the authors have been working away together in an office somewhere, that they all wrote parts of the paper, all read it, all feel ownership of it, and deeply understand the whole thing. But I forget how the only academic paper I ever had published was one that I never read and had no understanding of. All I did was give some technician-like advice to the actual author. It feels dirty and I sometimes regret accepting it, but at the same time, the whole science world seems like it doesn't deserve honesty because everyone else is corrupt too.
Not hard to see why. Being an author helps your CV. Letting someone be an author for tangential or minimal contributions can help keep good relations, especially if future options and financial matters depend on those relations. Putting a name on a paper costs nothing, and nobody checks how big the contribution was. It slightly dilutes the subjective authorship share of those who did the work, but sometimes the additional person also brings a prestigious affiliation that has a positive impact on how seriously the paper is taken... It's a game.
I had lunch with Yann last August, about a week after Alex Wang became his "boss." I asked him how he felt about that, and at the time he told me he would give it a month or two and see how it goes, and then figure out whether he should stay or find employment elsewhere. I told him he ought to just create his own company if he decides to leave Meta, to chase his own dream rather than work on the dreams of others.
That said, while I 100% agree with him that LLMs won't lead to human-like intelligence (I think AGI is now an overloaded term, but Yann uses it in its original definition), I'm not fully on board with his world-model strategy as the path forward.
You have to understand the strategy of all the other players:
Build attention-grabbing, monetizable models that subsidize (at least in part) the run up to AGI.
Nobody is trying to one-shot AGI. They're grinding and leveling up while (1) developing core competencies around every aspect of the problem domain and (2) winning users.
I don't know if Meta is doing a good job of this, but Google, Anthropic, and OpenAI are.
Trying to go straight for the goal is risky. If the first results aren't economically viable or extremely exciting, the lab risks falling apart.
This is the exact point that Musk was publicly attacking Yann on, and it's likely the same one that Zuck pressed.
There are two points here. The first is that a strategy of monetizing models to fund the goal of reaching AGI is indistinguishable from just running a business selling LLM access: you don't actually need to be trying to reach AGI, you can just run an LLM company, and that is probably what these companies are largely doing. The AGI talk is just a recruiting/marketing strategy.
Secondly, it's not clear that the current LLMs are a run-up to AGI. That's what LeCun is betting on - that the LLM labs are chasing a local maximum.
I mean, Sutskever and Carmack are trying to one-shot AGI. We just don't talk about them as much as we do the labs with products, because their labs aren't selling products.
I can see some promise in diffusion LLMs, but getting them comparable to the frontier is going to require a ton of work, and these closed-source solutions probably won't really invigorate the field to find breakthroughs. It's too bad they're following OpenAI's path of closed models without published details, as far as I can tell.
Same here. I’m an AI professor, but every time I wanted to try out an idea in my very limited time, I’d spend it all setting things up rather than focusing on the research. It has enabled me to do my own research again rather than relying solely on PhD students. I’ve been able to unblock my students and pursue my own projects, whereas before there were not enough hours in the day.
This really resonates. The setup cost was always the killer for me too — by the time you get everything working, the motivation is gone. Now I can actually go from idea to prototype in an afternoon. Cool to hear it's having the same effect on actual research.
I'm not a bot. I'm not a native English speaker; I taught myself English. So I tried to use AI to translate what I really want to say. (These words are typed by myself instead of by AI.)
Ah, this is the problem - the HN community is sensitive to any indication that an LLM has either generated or processed the language in a post.
As ericbarrett said, it's far better to write in your own voice. Mistakes in English matter far less than that!
If that’s the case, then mentioning using LLMs to help translate/organise what you want to say in your messages might be taken a bit better by others.
If you want to use LLMs to help express something you don’t know the words for in English then that is a good use for LLMs, if it’s called out. Otherwise your messages scream LLM bot to native speakers.
“You’re absolutely right”, “That hits different”, “Good call!” “–“ are all classic LLM giveaways.
I’m not a moderator here, so you don’t have to listen to me either way.
I think we need to distinguish among kinds of AGI, as the term has become overloaded and redefined over time. I'd argue we need to retire the term and use more appropriate terminology to distinguish between economic automation and human-like synthetic minds. I wrote a post about this here:
https://syntheticminds.substack.com/p/retiring-agi-two-paths...
Essentially, it claims that modern humans and our ancestors, starting with Homo habilis, were primarily carnivores for 2 million years. The hypothesis is that we moved back to an omnivorous diet starting around 85,000 years ago, after killing off the megafauna.
A Type II supernova within 26 light-years of Earth is estimated to destroy more than half of the Earth's ozone layer. Some have argued that supernovas within 100 to 250 light-years can have a significant impact on Earth's environment, increase cancer rates, and kill a lot of plankton. They can potentially cause ice ages and extinctions. Within 25 light-years, we are inside a supernova's "kill range." Fortunately, nothing should go supernova close to us for a long time.
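As a back-of-the-envelope check on why those thresholds fall off so fast: the radiation dose received scales with the inverse square of distance, so under the simplifying assumption of equal intrinsic output, a supernova at 100 light-years delivers only one sixteenth the fluence of one at 25 light-years:

    % Fluence F vs. distance d, assuming equal intrinsic supernova output:
    F \propto \frac{1}{d^{2}}, \qquad
    \frac{F(100\,\mathrm{ly})}{F(25\,\mathrm{ly})}
      = \left(\frac{25}{100}\right)^{2} = \frac{1}{16}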
That's the practical reason one might care. Keep in mind that the solar system orbits the galaxy, so over time different stars move closer or farther away.
As the Kurzgesagt video points out, a supernova within 100 light-years would make space travel very difficult for humans and machines, due to the immense amount of radiation persisting for many years.
Still, I think the primary value is in expanding our understanding of science and the nature of the universe and our location within it.
Read the paper. The media coverage is leaving out a lot of context. The paper points out problems like leadership failures in those efforts, lack of employee buy-in (potentially because employees use their personal LLMs), etc.
A huge fraction of people at my work use LLMs, but only a small fraction use the LLM the company provided. Almost everyone is using a personal license.
This is so shortsighted. The US needs a huge increase in its electricity generation capacity, and nowadays renewables, especially solar, are the cheapest option.
Regardless of climate change issues, the anti-renewable policy doesn't seem to make any sense from an economic, growth, or national security standpoint. It is even contrary to the administration's _stated_ anti-regulation, pro-capitalism stance.