
Fairly Trained Introduces Certification for Ethical AI Use of Copyrighted Data


In a significant move toward ethical practices in artificial intelligence (AI), Fairly Trained, a nonprofit founded by Ed Newton-Rex, formerly Stability AI's vice president of audio, has launched a certification program for AI models. The initiative aims to distinguish companies that obtain consent before using copyrighted material to train their AI models from those that do not, addressing concerns about copyright infringement.

Fairly Trained’s first certification, the Licensed Model certification, is awarded to companies that can demonstrate ethical data use: it is granted to generative AI models that do not use copyrighted works without a proper license. The organization explicitly excludes models that rely on the ‘fair use’ copyright exception, requiring instead that rights-holders give explicit consent for their work to be used in training.

Newton-Rex, who founded Fairly Trained in response to concerns about generative AI “exploiting creators,” stated that the organization has already certified nine leading generative AI companies. Among them are Beatoven.ai, Boomy, BRIA.ai, Endel, LifeScore, Rightsify, SOMMS.AI, Soundful, and Tuney, spanning diverse sectors such as image, music, and voice generation.

The certification initiative gains significance against the backdrop of increasing legal battles between creators and major AI companies, such as OpenAI and Meta, over the unauthorized use of copyrighted works in AI training. Fairly Trained seeks to address this issue by creating a transparent framework for companies that prioritize obtaining consent from data providers.

The Association of American Publishers (AAP) supports the effort, which aligns with Fairly Trained’s mission to promote a more consent-based approach to AI training. The launch of Fairly Trained also coincides with a surge in licensing deals between media organizations, including major players such as the Associated Press and Axel Springer, and AI companies.

The certification program, resembling an ‘organic’ or ‘fair trade’ label for AI, reflects a growing awareness of the importance of ethical AI practices. In a blog post, Fairly Trained emphasized its commitment to making it clear which companies adopt a consent-based approach to training, thereby treating creators more fairly.

The Licensed Model (L) Certification sets rigorous requirements for companies, organizations, or products offering generative AI models or services. To qualify, AI providers must meet the following criteria:

L Certification Requirements:

  1. Data Sources:
    • All training data used for the certified model(s) must fall into specific categories, including explicit provision under contractual agreements, availability under open licenses, global public domain status, or full ownership by the model developer.
    • Obtaining a license from an organization that licenses from creators, such as a record label or stock image library, is considered consent.
    • Any third-party or open models that are used, as well as any models generating synthetic training data, must also meet these requirements.
  2. Data Due Diligence:
    • AI providers must implement a robust due diligence process, conducting thorough checks into the rights position of the training data provider.
  3. Record Keeping:
    • A meticulous record-keeping process for the training data used in each model training run is mandatory (see the illustrative sketch after this list).
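
Fairly Trained does not prescribe a record format, but to make the record-keeping requirement concrete, the following is a minimal, hypothetical Python sketch of what a per-source provenance record might capture. Every field name here is our own illustration and is not part of the certification criteria.

    # Hypothetical provenance record for one training-data source.
    # The fields are illustrative only and are not specified by Fairly Trained.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class TrainingDataRecord:
        source_name: str           # e.g. a record label, stock library, or open dataset
        license_type: str          # "contractual", "open_license", "public_domain", or "owned"
        license_reference: str     # contract ID, license URL, or ownership note
        rights_check_date: date    # when due diligence on the provider's rights was done
        used_in_models: list[str]  # model versions trained on this source

    example = TrainingDataRecord(
        source_name="Example Stock Library",
        license_type="contractual",
        license_reference="Agreement #1234",
        rights_check_date=date(2024, 1, 15),
        used_in_models=["music-gen-v2"],
    )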

Certification Process:

  • Initiate the application process by completing a short online form.
  • Fairly Trained guides applicants through the comprehensive written submission process.
  • A submission fee is paid when the written submission is made; Fairly Trained then reviews the submission and may request further information.
  • Successful submissions require payment of the annual certification fee, completing the certification process.
  • Certification undergoes an annual reevaluation.

Certification Fees:

The pricing structure is based on the organization’s annual revenue.

  • Revenue under $100k: $150 submission fee, $500 annual certification fee.
  • Revenue $100k–$500k: $150 submission fee, $1,000 annual certification fee.
  • Revenue $500k–$5M: $250 submission fee, $2,000 annual certification fee.
  • Revenue $5M–$10M: $350 submission fee, $4,000 annual certification fee.
  • Revenue over $10M: $500 submission fee, $6,000 annual certification fee.
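
For readers who want to estimate costs, here is a minimal, unofficial Python sketch of the tier lookup. The thresholds and amounts mirror the schedule above; the data structure and function name are our own.

    # Unofficial helper: look up Fairly Trained submission and annual certification
    # fees (USD) from annual revenue. Tiers mirror the published schedule above.
    FEE_TIERS = [
        # (revenue upper bound, submission fee, annual certification fee)
        (100_000, 150, 500),
        (500_000, 150, 1_000),
        (5_000_000, 250, 2_000),
        (10_000_000, 350, 4_000),
    ]

    def certification_fees(annual_revenue: float) -> tuple[int, int]:
        """Return (submission_fee, annual_certification_fee) for a given revenue."""
        for upper_bound, submission_fee, annual_fee in FEE_TIERS:
            if annual_revenue < upper_bound:
                return submission_fee, annual_fee
        # Revenue at or above $10M falls into the top tier.
        return 500, 6_000

    print(certification_fees(250_000))  # -> (150, 1000)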

Fairly Trained’s Licensed Model certification not only sets a standard for ethical data use but also signals a commitment to transparency and accountability in the evolving landscape of AI development. As legislative discussions on AI transparency continue in Congress, initiatives like this one contribute to the broader conversation about responsible AI and point toward a more sustainable, respectful relationship between AI developers and content creators.
