
Actively looking at ways to block LLMs from scraping their content. It's a tough technical challenge. In this video, MarTech contributor Greg Krehbiel discusses ways publishers might try to block LLMs. He also makes a case for changing terms and conditions to prepare the ground for future lawsuits. As he seems to acknowledge, none of his suggestions is a slam dunk. For instance, is it practicable to stop Google crawling your site to grab content without also stopping it crawling your site to index it for search results? Also, lawsuits
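On the Google question specifically, there is at least a partial technical answer, though it relies entirely on crawlers honoring it voluntarily: Google publishes a separate robots.txt token, Google-Extended, that controls AI-training use of a site's content without affecting search indexing, and OpenAI's GPTBot token works similarly. A minimal sketch (the token names are the publicly documented ones at the time of writing):

```
# Let Google's search crawler index the site as usual
User-agent: Googlebot
Allow: /

# Opt out of Google's AI-training use (Gemini apps, Vertex AI)
# without affecting search indexing
User-agent: Google-Extended
Disallow: /

# Block OpenAI's training crawler
User-agent: GPTBot
Disallow: /
```

The obvious weakness, consistent with Krehbiel's skepticism, is that robots.txt is a request, not an enforcement mechanism: a crawler that ignores it faces no technical barrier.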

Are costly. But how about a regulatory fix? Do you remember the endless annoyance of telemarketing calls? The National Do Not Call Registry put a stop to that. Everyone who cared was able to register their number, and telemarketers could continue to call it only at the risk of the FTC imposing hefty fines. Registering domains with a national Do Not Scrape registry might be a heavier lift, but one can see in general terms how such a regulatory strategy might work. Would every infringement be detected? Surely not. But the same goes for


Monetize what they write. They, too, do not seek to have third parties profit from their work with no recompense for the creator. Everything I say here about written content applies equally to graphic, video, and any other creative content. We do have copyright laws, of course, that protect publishers and authors from direct theft. Those don't help much with genAI, because it crawls so many sources that the ultimate output may not closely resemble any one individual source (although that can happen). Right now publishers are