Is OpenAI fit to flag its own product’s plagiarism?

Six months after birth, a baby begins to crawl and tries to mimic its parents, with uncertain success.

By contrast, in its sixth month, OpenAI's large language model ChatGPT has reportedly passed medical, law, and business exams (albeit with a little human help). Enthusiastic advocates of the technology believe it will soon be able to write books, compose lyrics, churn out screenplays, and take over entire creative sectors.

With the late-March arrival of the more advanced GPT-4, which OpenAI calls its "most capable model", those in the business of rooting out plagiarism have their work cut out for them.

Romance: the last obstacle?

From a technical standpoint, one glaring obstacle that stops ChatGPT from putting authors out of business is its content policy, which (sometimes) prevents the chatbot from generating explicit content. This is a natural step for OpenAI to take, as explicit content can create safety issues, such as AI-generated child abuse material.

But it may create…
