The tech giant is among companies pushing out AI tools while promising to build more tools to protect against their misuse

WP gift article expires in 14 days.

https://ghostarchive.org/archive/5UW77

  • wahming@monyet.cc · 2 years ago

    Seems a lot of people are misinterpreting this.

    The goal is not to protect the general public from misinformation. The goal is to prevent the pool of new training data from getting TOO contaminated with AI-generated images, which would make it worthless for training new AI.

    • jana@leminal.space · 2 years ago

      The article itself makes the connection:

      “As the 2024 presidential campaign ramps up, concern is quickly rising that such images might be used to spread false information.”

      Though, I guess shame on us for expecting better journalism these days.

  • restingboredface@wayfarershaven.eu · 2 years ago

    Why is the focus only on identifying AI-generated photos? Why not force a tag on all AI-generated content, period? That would help with a lot of applications.

  • jana@leminal.space · 2 years ago

    The solution is … embed a watermark when the image is generated? How will that help stop deliberate disinformation created with other tools?

    • Norah (pup/it/she)@lemmy.blahaj.zone · 2 years ago

      Oh, they’ll totally sell the ability to generate without the watermark. Because of course, corporations have never been responsible for spreading disinformation.

    • CanadaPlus@lemmy.sdf.org · 2 years ago

      I guess they could call it out better, even automatically, but someone further up is suggesting the real goal is to stop AI photos from appearing in future AI training sets, where they'd be counterproductive.