
PIETMAN ROOS AND DAAN STEENKAMP: Is AI copyright medieval or Marxist?


In the 2020 case of Bergh v the Agricultural Research Council (ARC), the Supreme Court of Appeal (SCA) found that since the coder, Bergh, worked at his own risk, without compensation and with minimal oversight from the ARC, it could not be said that the ARC had the necessary “control” for copyright in the software to vest in it.

This raises an important question: who should control the value derived from an AI algorithm that generates content, often free of charge and with minimal prompting? Should a subscription and an instruction give a user rights over the generated content?

This is not to say parties cannot explicitly contract to transfer copyright, but that is not a perfect solution either. When last did you read through a user agreement at all, let alone one for AI software? The risk is that a software provider could simply retain copyright indefinitely over what users create, leaving users at a loss.

It may not sound like much of a societal concern if funny AI-generated images do not belong to those who used AI software to make them, but if the application of AI becomes even half as pervasive as predicted, a lack of copyright could affect much of what we do.

If that sounds alarmist, consider the adoption of computers and the internet in the workplace in the span of less than a generation. Even the most hardline opponents of computers had to learn and use the technology or lose their jobs. The same can happen with AI.

In fact, the recent news that LinkedIn surreptitiously uses user-generated content and data to train its AI models suggests this process is already under way. It eerily corresponds to the warning from Greek economist and politician Yanis Varoufakis, who coined the term “technofeudalism” — a dystopian future in which the relationship between technology owners and users becomes more like that of feudal lord and peasant.

At the other end of the spectrum, copyright as a concept could become redundant if AI-generated content is not protected at all. Collective ownership of IP might democratise technology, ensure better alignment with the public interest and mitigate market abuse by monopolies. However, significant governance and incentive problems would remain.

Even though AIs scrape “original work” from non-AI sources, over time a feedback loop will develop in which the inputs to AI are preceding AI outputs. If AI-generated content is not protected, ultimately no IP will be protected. And if no IP is protected the free market is placed at risk, since the very notion of profit is based on know-how, or IP. At its core, the reason trade in goods and services attracts profit is that the IP needed to produce and sell has been priced in.

Take away IP, whether it is market insights, production design or indeed technical product differentiators, and you take away profitability and in turn private industry. Society is ultimately left with organs of state filling the gaps — “technomarxism”, if you will. If the past 100 years are any guide, complete state control does not lead to classless utopias, but merely to severe concentration of power.

From a liberal democratic perspective both extremes are unpalatable. The seemingly obvious path is to aim for a middle ground by protecting the IP of training content and AI users as the default option, as digital content becomes susceptible to being re-input to AI. But this is difficult to enforce, in the same way as holders of “first-generation” non-AI content struggle to protect their IP.

While it may seem patently unfair to first-generation IP holders whose content has already been scraped, a second solution would be to treat all web content as res nullius — belonging to no-one — by default. Web content would then be like wild animals: anyone wanting to protect content from scraping would need to enclose it using anti-scraping technologies.

The wild web would then be considered common property, and certain types of socially beneficial content, such as public data, should be allowed to graze freely. Of course, it is arguable that even such an approach to the data ecosystem may erode the incentive for content creators to share, especially with open-source projects.

It is a good idea for SA to define the IP of AI-generated content while there still is a chance to do so.

• Steenkamp is CEO at Codera Analytics and a research associate with the economics department at Stellenbosch University. Roos is an associate with Codera. 
