• 0 Posts
  • 282 Comments
Joined 4 months ago
Cake day: February 13, 2025



  • Yes. Exactly. They’ll also deploy upgrades to themselves painlessly. Thankfully that’s never been a huge ongoing pain felt by everyone paying attention.

    (I couldn’t resist adding a “yes, and” to your point.)

    Edit: And the AI agents will back themselves up correctly, too! We trained them on the activities of all currently living IT engineers, and the average of our work always results in a successful backup…

    If that wasn’t true, we would be having a new ransomware crisis every month…

    I’m sure glad we live in one of the good timelines, and have plenty of clean correct code and configuration data to train our AI on!

    (This is, of course, sarcasm. Companies that shift to AI IT agents today can expect to very quickly reach today’s median IT outcome. There’s not enough popcorn in the world for what is coming.)


  • Stupid question but what is stopping the software engineers from poisoning the well?

    Great question. I agree with other responses - it happens, and there’s motive to hush it up, so we tend not to hear about it.

    It’s also just really hard to tell the difference after the fact between “Dave sabotaged us” and “no one knows how to do what Dave did”.

    But I’ll add - there’s currently little need or motive to sabotage AI implementations. Current generation AI is largely unable to deliver on what is promised, in a business sense. It does cool but useless things, like quickly generating low-maturity code, and writing a summary any seven-year-old could have written.

    Current generation AI adds very little business value while creating substantial risks. Never mind that no one knows how Dave worked; now no one knows how our AI works, and it’s so eager to please everyone that it lies at critical moments.

    Companies playing around with current generation AI to boost next quarter’s stock price will hit plenty of “find out” soon enough, with nothing beyond the natural consequences of ignoring their own engineers’ advice.

    All that to say - if we see what looks like sabotage, it may well just be the natural consequences of stupidity.






  • With the federal government gutting funding of its own agencies, we may see more of this.

    Federal laws are effective if they’re effectively enforced. If states lose confidence in federal enforcement, it makes sense that they will try to do their own thing, and see if the federal courts are understaffed and lethargic or able to act.

    And if the federal government succeeds in using AI instead of human staff, then all each state will need to do is pass the same law a few different times with slightly different wording to hit the right gap in the AI.

    There are interesting times ahead.


  • It’s a number on a public website. The guy googled it right after and found it. It’s simply in the training data; there is nothing “terrifying” about this imo.

    Right. There’s nothing terrifying about the technology.

    What is terrifying is how people treat it.

    LLMs will cough up anything they have learned to any user. But they do it while giving off all the social cues of an intelligent human who knows how to keep a secret.

    This often earns the computer trust that it doesn’t yet deserve.

    Examples like this story, which show how obviously misplaced that trust is, can be terrifying to people who fell for modern LLM intelligence signaling.

    Today, most chatbots don’t do any permanent learning during chat sessions, but that is gradually changing. This trend should be particularly terrifying to anyone who previously shared (or keeps habitually sharing) things with a chatbot that they probably shouldn’t.











