In the rapidly evolving world of generative AI, where models like Midjourney and OpenAI's DALL-E 3 can translate text into intricate artwork, ethical concerns have arisen over the unauthorized use of artists' work to train these models. Many AI models are trained on datasets containing artwork collected without the artists' knowledge or consent. To address this, entrepreneurs and activists are building tools that let artists protect their work from being used in generative AI training.
Kin.art, a platform specializing in art commission management, has launched a free tool designed to prevent generative AI models from training on artwork without permission. Co-developed by Flor Ronsmans De Vry, co-founder of Kin.art, the tool uses image segmentation and tag randomization to disrupt the model training process. By making subtle changes to an image's pixels and altering its metadata tags, the tool aims to make it difficult for vendors to incorporate the artwork into their training datasets.
Image Credit: Kin.art
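Kin.art has not published its implementation, but the two ideas the article describes can be illustrated with a minimal sketch: nudging pixel values by an amount too small to notice visually, and swapping an image's descriptive tags for decoys so scraped captions no longer match the image. Everything below (function names, the `amplitude` parameter, the decoy vocabulary) is hypothetical and purely illustrative, not Kin.art's actual code.

```python
import random

def perturb_pixels(pixels, amplitude=2, seed=None):
    """Shift each RGB channel by a small random offset, clamped to 0-255.
    Visually imperceptible, but the raw bytes a scraper ingests change.
    (Illustrative only -- not Kin.art's actual method.)"""
    rng = random.Random(seed)
    return [
        tuple(max(0, min(255, channel + rng.randint(-amplitude, amplitude)))
              for channel in px)
        for px in pixels
    ]

def randomize_tags(tags, decoy_vocabulary, seed=None):
    """Replace each descriptive tag with a random decoy, so the
    text paired with the image during scraping is misleading."""
    rng = random.Random(seed)
    return [rng.choice(decoy_vocabulary) for _ in tags]

# Toy 2-pixel "image" as RGB tuples, plus its descriptive tags.
art = [(120, 45, 200), (10, 250, 33)]
tags = ["watercolor", "portrait", "sunset"]

protected = perturb_pixels(art, seed=42)
decoy_tags = randomize_tags(tags, ["abstract", "photo", "sketch"], seed=42)
```

In a real system the perturbation would be applied to full image arrays (e.g. via NumPy or Pillow) and tuned to survive recompression, but the principle is the same: the artwork a human sees is unchanged, while the image/caption pair a training pipeline harvests is corrupted.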
Ronsmans De Vry believes the art industry needs an ethical approach to AI training that respects artists' creative rights. The tool, he says, aims to create a landscape where traditional art and generative art can coexist harmoniously.
While there are existing solutions that attempt to mitigate damage after artwork has been included in datasets, Kin.art’s tool focuses on preventing unauthorized use from the outset. Ronsmans De Vry positions the tool as a philanthropic effort, stating that Kin.art plans to make the tool available to third parties in the future after testing its efficacy on its own platform.
Although the tool is free to use, artists must upload their artwork to Kin.art's portfolio platform, a strategic choice that may lead them to explore Kin.art's fee-based art commission-finding and -facilitating services. Despite the business angle, Ronsmans De Vry emphasizes the broader significance of defending platforms' data in the age of AI and protecting it against unlicensed use. The startup envisions offering the tool as a service so that other platforms can safeguard their data from unauthorized AI training.