25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)

  • 0 Posts
  • 13 Comments
Joined 11 months ago
Cake day: October 14th, 2024

  • I’m just saying like I oppose the death penalty, but there are certain cases where I’m not going to die on that particular hill. I don’t believe they should be killed, but the context of the moment is going to alienate more people than it convinces.

    Same thing here. I oppose identification laws but making that argument in defense of those two is going to make folks think it’s a fanatical position rather than a reasonable one.

    It’s far better to argue from a reasonable position and then extend that to other cases than just argue these places should be allowed to continue to weaponize anonymity.




  • Yeah that’s the problem with how they are marketing it. It’s a tool for expert use, not laymen.

    I don’t think the problem is ChatGPT itself — it just does what it does and folks get what they get, but it’s definitely a problem that people aren’t being informed about what it can and can’t do (see all the people asking it to count letters and those who think they’ve hacked the system prompt because the AI said they did).

    In this case, the user is asking ChatGPT to act as a friend and confidante, which is something it can’t do and a use case that’s impossible to detect. The user simply has to understand it lacks any qualities required for a relationship of any kind. Everything a user says is simply input to a mathematical model that wants to complete it with something a human might say.

    So it responds to a fictional scenario I might be writing for a book or game exactly the way it responds to a user looking for companionship. There is no way to tell the difference without genuine understanding rather than just token vector comparisons.

    It’s like fire. A user can buy and use a lighter, and fire can act like a friend when you’re cold or hungry, but it’ll burn you if you try hugging it.


  • I can’t tell if Altman is spouting marketing or really believes his own bullshit. AI is a toy and a tool, but it is not a serious product. All that shit about AI replacing everyone is not the case, and in any event he wants someone else to build it on top of ChatGPT so the liability is theirs.

    As for the logs I hadn’t heard that and would want to understand the provenance and whether they contained PII other than what the user shared. Whether they are kept secure or not, making them available to thousands of moderators is a privacy concern.




  • Human moderator? ChatGPT isn’t a social platform, I wouldn’t expect there to be any actual moderation. A human couldn’t really do anything besides shut down a user’s account. They probably wouldn’t even have access to any conversations or PII because that would be a privacy nightmare.

    Also, those moderation scores can be wildly inaccurate. I think people would quickly get frustrated using it when half the stuff they write gets flagged as hate speech: .56, violence: .43, self harm: .29

    Those numbers in the middle are really ambiguous in my experience.
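    To make that ambiguity concrete, here’s a minimal sketch (with made-up scores echoing the ones above — not real moderation API output) showing how the same mid-range scores flip between “clean” and “flagged” depending on the cutoff you pick:

    ```python
    # Hypothetical moderation scores for one message (illustrative values only)
    scores = {"hate_speech": 0.56, "violence": 0.43, "self_harm": 0.29}

    def flagged(scores, threshold):
        """Return the categories whose score meets or exceeds the threshold."""
        return [cat for cat, s in scores.items() if s >= threshold]

    print(flagged(scores, 0.5))   # only hate_speech crosses a 0.5 cutoff
    print(flagged(scores, 0.25))  # a stricter cutoff flags all three
    ```

    A 0.56 is barely over a 0.5 line and nowhere near a 0.8 one, which is exactly why scores in the middle are so hard to act on.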