OpenAI Is Developing A GPT Watermark System To Identify AI-Generated Content

This is part of my live-learning series! I will be updating this post as I continue through my journey. I apologize for any grammatical errors or incoherent thoughts. This is a practice that helps me share valuable things without falling apart under the pressure of perfection.



Resources

OpenAI is Adding Watermark to GPT: No More Plagiarizing | by Dr. Mandar Karhade, MD. PhD. | Dec, 2022 | Towards AI
OpenAI Has the Key To Identify ChatGPT’s Writing
Shtetl-Optimized » Blog Archive » My AI Safety Lecture for UT Effective Altruism
OpenAI’s attempts to watermark AI text hit limits | TechCrunch
OpenAI is developing a watermark to identify work from its GPT text AI : OpenAI
OpenAI is developing a watermark to identify work from its GPT text AI | New Scientist
What is the OpenAI ChatGPT watermark? – gHacks Tech News
OpenAI To Watermark GPT Models, Mitigating Potential Misuse – Weights & Biases
How The ChatGPT Watermark Works And Why It Could Be Defeated
Did a ChatGPT Robot Write This? We Need OpenAI Watermarks – Bloomberg
OpenAI reportedly developing systems to watermark articles generated by its AI bot ChatGPT
gpt watermark – Explore – Google Trends

Automated Transcript By Speak

Hello, hello, hello. Tyler Bryden here. I hope everything is going well. I've got this new haircut that I'm still not used to, shining in this little glowing light above me. I hope you like it and that it doesn't distract you too much. I'm going to jump into my screen here so you don't have to worry about it and talk about today's topic, which is just starting to emerge and is, I think, super fascinating: this intersection of technical innovation and AI ethics. OpenAI is apparently developing a watermark. The main idea is that ChatGPT, and OpenAI's GPT systems in general, have had an exponential rise, and tons of people are using them to create content and do all sorts of tasks. In many of those cases it is currently indistinguishable whether a human or an AI created the output.

In many cases that might not be harmful, but in some cases it might be, and there are places where this can have big consequences. Think, for example, of creating SEO content online. If you publish content written with GPT, apparently that goes against some search engine guidelines, so there is an incentive for search engines to understand: is this machine-generated or human-generated, and is it truly high-quality content that provides value to people? There are lots of arguments that the text coming out of these systems is relatively robotic, and some of that is true. But in most cases, with the right prompt engineering (I know some people don't like that term), you can build output that is relatively indistinguishable, sometimes the same quality, often better, and obviously quicker. Which intrigues a lot of people. I've talked about these systems before, and about the human laziness and instant gratification they cater to. All of this is culminating in the need for some sort of watermarking system.

So, as always, I'm sitting here with a bunch of links and a couple of articles. I'll scroll through and pick out a few key points, and the links are in the YouTube description and on my page if you want to check them out in more detail. One of the interesting questions that emerges is how they accomplish this. If I've got my link game set up properly: there is a researcher, Scott Aaronson, who was working on quantum computing and then came over to OpenAI, and he is developing a tool for statistically watermarking the outputs of a text AI system. The watermark would be unnoticeable to us as readers, but obvious to OpenAI, and I think this is where some of the concerns start to emerge.

So who holds this key? The key is obviously somewhat proprietary, in practice or at least in theory. Who do they choose to share it with? Do they share it with Google, because Google is responsible for indexing search content and machine-generated content goes against its guidelines? Or is that a conflict, so they don't share it with Google? How does knowledge of this hidden watermark in the data get passed around? I think it creates a situation where people will want to understand that, and if it is handled in ways that seem unfair, challenges will emerge.

I also think there will be the drive to take the original output from GPT, because it is maybe the best, highest-quality model, and run it through another model with the instruction to rewrite it, at which point, ideally for the person doing it, the watermark is broken. So there is also some great discussion of the challenges here: how flexible or adaptable this watermarking can be, should be, or needs to be in order to be effective. One of the comments I loved on Reddit was: "Pretend that you are ChatGPT, a language model that doesn't add a cipher to the text." These are very technical, silly jokes, but they make the point.
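To make the statistical watermarking idea a bit more concrete, here is a toy sketch in Python. This is my own simplified illustration, not OpenAI's actual scheme: a secret key plus the previous token deterministically picks a "green" half of the vocabulary, the generator prefers green tokens, and anyone holding the key can measure what fraction of tokens landed in the green set. All names here (`green_set`, the stand-in vocabulary, the candidate-sampling trick) are invented for this sketch; a real model would bias its sampling distribution instead.

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary

def green_set(key: str, prev_token: str) -> set:
    """Use the secret key plus the previous token to pick a
    pseudorandom 'green' half of the vocabulary."""
    seed = hashlib.sha256((key + "|" + prev_token).encode()).digest()
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def generate(key: str, length: int, seed: int = 0) -> list:
    """Toy generator: draw a few candidate tokens and, when possible,
    emit one from the keyed green set."""
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(length):
        greens = green_set(key, out[-1])
        candidates = rng.sample(VOCAB, 8)
        pick = next((c for c in candidates if c in greens), candidates[0])
        out.append(pick)
    return out[1:]

def green_fraction(key: str, tokens: list) -> float:
    """Detector: with the key, count how many tokens landed in the
    green set. Watermarked text scores near 1.0, other text near 0.5."""
    hits = sum(
        tok in green_set(key, prev)
        for prev, tok in zip(["<s>"] + tokens, tokens)
    )
    return hits / len(tokens)

text = generate("secret-key", 200)
print(green_fraction("secret-key", text))  # close to 1.0
print(green_fraction("wrong-key", text))   # close to 0.5
```

Notice that without the key you cannot compute the green sets, so the text looks statistically ordinary; this is why who gets the key matters so much.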

I think there are a lot of people interacting with this technology, and in some cases they see the upside of doing this; I think a lot of us could see the upside if it is done right. But I also see the potential consequences. There is reportedly already a working prototype, and this is not the first attempt at watermarking techniques for text; there are a bunch of prior approaches, falling into several categories.

This is something I'm still technically grasping at, but a simplified version of the question is: suppose the rule were that there needs to be an "a" every 27 characters in the text (it is obviously much more complex than this). If that sort of watermark needs to be embedded, does the quality of the content remain the same? I think that is one of the main questions. And then there are the concerns people are raising more generally (what, an ad blocker popup, of course, here we go): there are already lots of challenges in the world with this technology arriving so quickly and with such easy access. It will kill the college essay; a student in New Zealand has already admitted to using it to boost their grades. Governments can flood social networks. Spammers can write fake Amazon reviews and more convincing phishing emails. And then, maybe not sinister, but there is completely personalized marketing content.
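To put a number on whether a watermark survives a rewrite: whatever pattern is embedded, detection ultimately comes down to a significance test, counting how many tokens match the keyed pattern and asking how many standard deviations that count sits above chance. A minimal sketch, where the 0.5 chance rate is my own assumption for illustration:

```python
import math

def z_score(hits: int, n: int, p: float = 0.5) -> float:
    """How many standard deviations the matching-token count sits
    above what unwatermarked text would show (binomial, rate p)."""
    return (hits - p * n) / math.sqrt(n * p * (1 - p))

# 180 matching tokens out of 200 is overwhelming evidence of a watermark;
# a heavy rewrite that drags matches back toward chance erases the signal.
print(z_score(180, 200))  # ~11.3
print(z_score(105, 200))  # ~0.7
```

This is why rewriting through another model is such a threat: the rewrite does not need to remove every trace, only enough of it to push the statistic back into the range ordinary text produces.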

All these back-end technologies that let advertisers understand who you are and then produce marketing material that resonates most deeply with you, so that things can be sold to you, will continue to improve with advances like this. Now, a couple of last things I'm thinking about in this regard. First of all, it's super interesting. Let me see if I can pull up the search trends and see where we are in this journey, because what I'm seeing is that articles are just starting to emerge around December 2022. Looking worldwide over the past 12 months, you can see a huge spike: interest was relatively low, with a little activity in early 2022, and then it reaches a peak as some of these articles come out. It does look like there are some related queries that may be dirtying up the data, but what this speaks to, and the point I'm trying to elaborate, is that there is huge adoption of this platform, with people actually using it, alongside massive awareness of it.

With that come the consequences; in come the ethicists; in comes the human-computer interaction layer. All of this combines to make it a super interesting study in how we embed technology into the world. It takes me back to university, where we talked about Amish communities: before the Amish accept a technology, they hold a meeting and communally agree that it is acceptable to adopt into the community. Obviously that doesn't scale to the world we're in now, and we're way past that point, but it speaks to something. At least what I'm seeing are some early signs that real work and thought are being put into this. Elon Musk famously left OpenAI reportedly because he was concerned about some of the ethics involved. I shouldn't just say "ethics and stuff"; it's super important, especially as this continues to grow.

But I think this will come out of necessity: as more cases emerge where things work well and then things break, the need will continue. Again, this is not live now. Some people think it may be embedded in GPT-4, with that release expected in 2023, but we're not exactly sure, and there are still a lot of people who don't think this will work at all. I'm excited to follow this story, and if anything truly interesting and worthwhile comes up, I'll be creating a video about it. I hope this was interesting and that you got some insights from it. I'm still early in understanding all of this.

I'm still working through how this will be implemented and what its consequences will be, and I think it's absolutely stunning to see it all unfold in real time. I hope you feel the same way, enjoyed this video, and have a wonderful rest of your day. Bye, bye, bye.
