This has been bothering me lately because I expect things will only get worse: soon it will be impossible to tell whether a work was made by a person or generated by AI. There won’t be any clear labelling either; most likely everything will end up in one big landfill where nobody knows whether AI or people made it, and trusting corporations on this, as you know, is a bad idea; they are lying hypocrites.

In that case, are there any databases or online archives containing content created exclusively by humans? That is, books, films, TV series, cartoons, etc.?

  • SuluBeddu@feddit.it · 16 hours ago

    I recently made a pdf with some of my notes on hints of AI in images and music, but I’m not sure how to send files here

    It’s not easy ofc, and it will get harder with time, but I’m convinced we can tell if we train ourselves a bit, because there are clear differences between the creative processes of humans and machines, which will always result in different biases

    With time I think we’ll learn to only trust people we have some social connection with, so we know they are real and they don’t use AI (or they use it up to a level acceptable to us)

      • SuluBeddu@feddit.it · 8 hours ago

        Two that I noticed are:

        For drawings in the Ghibli style, you can see noise in areas that should all be the same colour. That’s because of how diffusion models work: it’s very hard for them to reproduce a complete lack of colour variation. In fact that noise will always exist, it’s just more noticeable in simple, flat styles.
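
        To make this concrete, here’s a rough sketch of how you could measure it (assuming Python with numpy and Pillow; the window size, the threshold and the filename are just placeholder guesses, and it’s only a heuristic, not a detector):

        ```python
        # Heuristic check for the "noise in flat areas" hint above.
        # Assumption: a hand-drawn, cel-shaded fill has near-zero variation
        # inside a small patch, while diffusion output keeps a little
        # residual noise. The thresholds here are arbitrary.
        import numpy as np
        from PIL import Image

        def flat_area_noise(path, win=16, flat_range=8.0):
            img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
            h, w, _ = img.shape
            noise = []
            for y in range(0, h - win, win):
                for x in range(0, w - win, win):
                    patch = img[y:y + win, x:x + win]
                    # "Flat" patch: every colour channel stays within a small
                    # range, i.e. it should be a single fill colour.
                    if (patch.max(axis=(0, 1)) - patch.min(axis=(0, 1))).max() < flat_range:
                        noise.append(patch.std())  # residual noise inside the patch
            return float(np.mean(noise)) if noise else None

        print(flat_area_noise("drawing.png"))  # hypothetical file
        ```

        Higher average noise in patches that should be flat is a hint, not proof: JPEG compression or scanning can add noise to a human drawing too.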

        For music, specifically with Suno, it tends to use similar-sounding instruments across different tracks of the same genre, and those sounds can change over the course of a track and never return to how they originally sounded (because the track is generated section by section from start to end, the transformer model feeds the previously generated sections back in as input for the new ones, amplifying any biases in the model).
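
        And a similar rough sketch for the music side (assuming Python with librosa; using average MFCCs per section as a stand-in for “how the instruments sound” is my own simplification, and the section length and filename are placeholders):

        ```python
        # Heuristic check for the "timbre drift" hint above: does the sound
        # of the track keep drifting away from its opening sections?
        import numpy as np
        import librosa

        def timbre_drift(path, section_seconds=20):
            y, sr = librosa.load(path, mono=True)
            section_len = int(section_seconds * sr)
            profiles = []
            for start in range(0, len(y) - section_len, section_len):
                chunk = y[start:start + section_len]
                mfcc = librosa.feature.mfcc(y=chunk, sr=sr, n_mfcc=13)
                profiles.append(mfcc.mean(axis=1))  # average timbre of this section
            if not profiles:
                return []
            first = profiles[0]
            # Distance of each section's timbre from the opening section.
            return [float(np.linalg.norm(p - first)) for p in profiles]

        print(timbre_drift("track.mp3"))  # hypothetical file
        ```

        If the distances keep growing and never come back toward the opening value, that matches the drift described above; in a human-produced track a repeated chorus usually comes back with the same instrument sound.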