• 4 Posts
  • 95 Comments
Joined 11 months ago
Cake day: August 5th, 2023


  • Interesting take on LLMs; how are you so sure about that?

    I mean, I get it: current image-generation models seem clearly uncreative. But at least the unrestricted versions of Bing Chat/ChatGPT leave some room, to me, for the possibility of creativity/general intelligence in future, sufficiently large LLMs.

    So the question (again: to me) is not only “will LLMs scale to (human-level) general intelligence?”, but also “will we find something better than RLHF/LLMs/etc. before then?”.

    I’m not sure about either, but I’d assign roughly a 2/3 probability to the first, and, conditional on the first happening and AGI being in reach within the next 8 years, a comparatively small probability to the second.

  • Children will (on average) be net contributors/taxpayers in the future, so societies incentivize having children by taxing parents less. Children will also entirely make up the society of the future, so different groups in a society having children is probably a good idea for a more diverse future society. And since raising children is expensive, it is probably a good idea to enable less wealthy people to have children too, as you probably don’t want to simply exclude them.
  • As others pointed out, having the feeling of knowing (about) things without actually having experienced them yourself is a core feature of what one might call intelligence, and as such isn’t insane.

    I would argue instead that the problem isn’t arguing over things you haven’t experienced yourself, but rather people caring too much about their fixed opinion and not about actually trying to find the truth (e.g. through argument), as they might proclaim.

    (I am relatively certain of this point because the LessWrong community provides seemingly good counterexamples: people there often discuss topics they do not necessarily have experience with, but try to find the truth and therefore don’t hold a fixed opinion beforehand.)
  • I’d like to actually discuss the problems I perceive with Yudkowsky’s take for a moment, before everyone goes on telling each other how crap his opinion is.

    First, quantifying emotional states is hard, if not impossible at the moment. This could easily lead to misconceptions and misunderstandings, as it is not clear what being x% “better” actually means.

    Second, people probably don’t want to live in constant fear of getting dumped by their partners. I mean, I get it: if you are in a relationship where you would leave your partner for someone else, it’s definitely not a bad idea to be clear about that. But I don’t think that is the norm at all in relationships, “even” outside of marriage. So his tweet about marriage being an agreement to ignore other options is not wrong in itself, but he seems to miss that many relationships outside of marriage include this social contract as well.

    Especially in a monogamous relationship, this view does not make sense to me, as it’s just a possibly emotionally hurtful way to tell your partner about your fear of commitment.