When two people look at the same thing, they don't see the same thing. And that is something you can put to use.
Reading this morning how #AI, in the service of #art, confuses other AI. To protect art from all-too-cheap imitation. Learning-model mimicry. Humans are afraid.
Makes me think of a joke that isn't funny at all, and is one precisely because of that: "two angels sit on a fence, knitting skyscrapers."
Figure that out, AI! (Do you hear the scornful laughter behind the curtain too?)
***
Glaze
I'm trying something out: a morning write-up of the first interesting link of the day. Today: Glaze, an artificial confusion intelligence for protecting art from overly learning-hungry AI.
https://glaze.cs.uchicago.edu/faq.html
Frequently Asked Questions
First, here are a few key properties about Glaze that might help users understand how it works.
- Image specific: The cloaks needed to prevent AI from stealing the style are different for each image. Our cloaking tool, run locally on your computer, will "calculate" the cloak needed given the original image and the target style (e.g. Van Gogh) you specify.
- Effective against different AI models: Once you add a cloak to an image, the same cloak can prevent different AI models (e.g., Midjourney, Stable Diffusion, etc.) from stealing the style of the cloaked image. This is the property known as transferability. While it is difficult to predict performance on new or proprietary models, we have tested and validated the performance of our protection against multiple AI models.
- Robust against removal: These cloaks cannot be easily removed from the artwork (e.g., sharpening, blurring, denoising, downsampling, stripping of metadata, etc.).
- Stronger cloak leads to stronger protection: We can control how much the cloak modifies the original artwork, from introducing completely imperceptible changes to making slightly more visible modifications. Larger modifications provide stronger protection against AI’s ability to steal the style.
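To get a feel for what such a "cloak" might be, here is a rough sketch of the per-image perturbation idea as I understand it from the FAQ. This is not the actual Glaze code: the VGG16 network stands in for whatever style feature extractor Glaze really uses, a simple per-pixel `budget` stands in for its perceptual constraint, and `target_feat` is assumed to come from a style-transferred version of the artwork (the "Van Gogh" decoy style you pick).

```python
# Sketch of the "cloaking" idea, NOT the actual Glaze implementation.
# Assumptions: VGG16 as a stand-in style encoder, an L-infinity pixel budget
# instead of Glaze's perceptual bound, and a precomputed target_feat tensor
# taken from a style-transferred copy of the image.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cpu"
# Truncated VGG16 as a stand-in feature extractor (frozen).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def cloak(image_path, target_feat, budget=4 / 255, steps=200, lr=1e-2):
    """Optimize a small per-image perturbation ("cloak") that pulls the image's
    style features toward target_feat while staying within a pixel budget."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat = vgg(torch.clamp(x + delta, 0, 1))
        # Push the cloaked image's features toward the decoy style.
        loss = torch.nn.functional.mse_loss(feat, target_feat)
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Keep the change (almost) invisible.
            delta.clamp_(-budget, budget)
    return torch.clamp(x + delta, 0, 1)
```

The "stronger cloak leads to stronger protection" knob from the FAQ maps roughly to the `budget` parameter in this sketch: allow a larger perturbation and the pull toward the decoy style gets stronger, but the change to the artwork becomes more visible.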