OpenAI has been sitting on a tool that detects when individuals use ChatGPT to write essays or research papers. Despite rising concerns about AI-driven cheating, the tool remains under wraps, reports the Wall Street Journal.
Internal debates have kept the technology on the sidelines for over a year. One sticking point: a significant portion of ChatGPT's regular users say they would rethink their usage if anti-cheating features were introduced, sparking serious concerns inside the company about the repercussions of a release.
OpenAI’s anti-cheating technology leverages a powerful watermarking technique. It subtly alters the tokens—small text segments—produced by ChatGPT to create a pattern that indicates AI authorship.
According to internal documents, the watermarking method is 99.9% effective at identifying text generated by ChatGPT.
The watermark is invisible to the human eye, but specialized detection software can score how likely it is that a given document was produced by the AI.
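OpenAI hasn't said exactly how its watermark works. For a sense of the mechanics, though, academic research offers a plausible blueprint: the "green-list" scheme described by Kirchenbauer et al. (2023). The Python sketch below is illustrative only, not OpenAI's method, and every name and parameter in it is hypothetical. The generator quietly nudges the model toward a pseudorandom subset of tokens at each step; the detector, knowing only the seeding rule, counts how often a document lands in that subset.

```python
import hashlib
import random

# Illustrative sketch of a "green-list" text watermark in the style of
# Kirchenbauer et al. (2023). OpenAI has not published its actual method;
# the vocabulary, fraction, and bias values here are hypothetical.

VOCAB = list(range(50_000))  # stand-in token IDs for a real tokenizer
GREEN_FRACTION = 0.5         # share of the vocab marked "green" each step
BIAS = 2.0                   # score boost applied to green tokens

def green_list(prev_token: int) -> set[int]:
    # Seed a PRNG with the previous token so the detector can rebuild
    # the same green list later without access to the model itself.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def watermarked_pick(logits: dict[int, float], prev_token: int) -> int:
    # Generation side: boost green tokens before choosing the next one.
    # Greedy selection keeps the example short; real systems sample.
    greens = green_list(prev_token)
    boosted = {t: s + (BIAS if t in greens else 0.0)
               for t, s in logits.items()}
    return max(boosted, key=boosted.get)

def detect(tokens: list[int]) -> float:
    # Detection side: the fraction of tokens that fall on their green
    # list. Unwatermarked text hovers near GREEN_FRACTION; watermarked
    # text scores noticeably higher, which is what gets flagged.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)
```

The notable property, and presumably why such a scheme appeals to OpenAI, is that detection needs only the shared seeding rule, not the model itself, and the bias is subtle enough that readers can't perceive it, matching the "invisible to the human eye" description above.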
Critics argue that watermark-based detection could disproportionately flag non-native English speakers, who often lean on AI tools to polish their writing, leading to what many consider unfair results.
“The text watermarking method we are developing is technically promising but carries important risks, which we are balancing against,” a spokesperson for the company said.
OpenAI isn’t the only company tackling this issue. Google has built SynthID, a tool that watermarks text generated by its Gemini AI models, and it is also currently in beta testing.
For now, despite the method’s technical effectiveness, OpenAI remains cautious about releasing it.
What are your thoughts on OpenAI’s watermarking method and its implications for AI-generated content? We’d love to hear your perspective. Drop us a line below in the comments, or carry the discussion to our Twitter or Facebook.