OpenAI, Cohere & AI21 Labs Collaborate on Best Practices for Large Language Models (LLMs)

This is part of my live-learning series! I will be updating this post as I continue through my journey. I apologize for any grammatical errors or incoherent thoughts. This is a practice to help me share things that are valuable without falling apart from the pressure of perfection.

Speak With Tyler Bryden
  • Twitter spaces conversation on properly deploying LLMs
  • Challenges in training data sets with bias, sexual content and hate speech online
  • Preventing negative uses of language generation tasks

#cohere #openai #ai21 #largelanguagemodels #llm #dalle #gpt #gpt2 #gpt3 #copyai #ethics #ai #artificialintelligence #naturallanguageprocessing #nlp


Best Practices for Deploying Language Models
AI21 Labs
Home | Cohere
This is the worst AI ever – YouTube
Create Marketing Copy In Seconds

YouTube Video


Automated Transcription

All right, hello, Tyler Bryden here. Let's jump right into it. I wanted to talk about a really interesting development from three companies who are pretty instrumental, leading companies in this idea of large language models. Probably the most familiar is OpenAI. OpenAI is behind GPT-3 and DALL·E, and for those of you who aren't aware of those platforms: basically, they scraped as much data as they could from the web, and then they are generating models that can create. We've seen lots of examples with DALL·E where you give it a text prompt and it generates an image, and these are becoming more and more beautiful. Artists are upset.

It's challenging our questions of what our art is. What is creativity? What is art? What is intellectual property? These companies are having an impact on the world today in a very significant way. This is still early, but it's going to continue to

have a significant impact. So Cohere, a great company here in Toronto: I think they're moving toward almost 100 employees and raised about $125 million, maybe even more than that, with some really talented people. They, OpenAI, and AI21 Labs collaborated together, and what they did is build out these best practices for large language model deployment. It talks about three principles: prohibit misuse, mitigate unintentional harm, and thoughtfully collaborate with stakeholders. I think this is an important collaboration that needs to happen when the reach of these systems can be so significant, and we're already seeing the applications of this. Famously, Elon Musk left OpenAI with concerns about the impact of this on the world, and there are examples.

One of them is a pretty stark one: GPT-4chan. There's a very interesting video about it. A model was trained on three years of 4chan's politically incorrect board and then used to populate and post on different forums, and it posted some pretty vile, horrible things. It did it in a way that some people maybe thought it was a machine, but a lot of people didn't; the posts were realistic enough that they sort of passed that test of us thinking they were written by a human, and because of the data it had trained on, it fit right in with the kind of content around it. So that's just one example of where we're seeing these large language models have an impact, and this was obviously a sample sort of test application, but the

far-reaching consequences are not lost on, I think, first, obviously myself, since I'm talking about it right now, but also on these companies who are building these systems and starting to understand their scope and scale. Additionally, one of the other sort of leaders in this space, and there are a bunch of them that all came around at once, is Copy.ai, which basically helps you write creative content, blog content, et cetera. So now you're interacting with ads that might have been created by a machine, and they're then being used to persuade. All these machines are now helping real people generate the copy that is persuading us to take actions online, so there are definitely ethical challenges to think about here. I was trying to find an image of this, but what we've also seen is

basically that with these models you can build a website, deploy a ton of content, and rank really quickly. As a reader you're landing on that content, and it's pretty high-quality content; it might be better than most of us can write, because of the model's understanding of the web and the Internet. Then you can start ranking on search engines very quickly, maybe add affiliate links, sell products, and do whatever you can to promote this funnel.

That then turns into a monetization option. The graph I was trying to find an image of was from a company that had used a tool like Copy.ai to generate their content, and then Google recognized this. So this is the battle that's happening: Google recognized that this was SEO content produced by these large language models and penalized the site, and it went from something like 3 million people in traffic to zero the next month. So there's also this battle that is now emerging between

these systems, like for example Google and the search engines, which are looking for high-quality, human-readable, human-created content, and the tools being deployed against them, which are screwing around with that system, screwing around with what the definition of that is. What is high-quality content? What is human-generated content? So there are lots of challenges emerging. There are also other companies using these models to generate game design, storytelling, and scripts; there are so many different applications of these models.

With a language model, all you're doing is maybe giving a text prompt, and it's generating an entire set of content that passes plagiarism checks and is generally considered completely original. But when deployed at scale, it was sort of detectable that there was something going on here, that this content was written by machines, and that's how these sites are being penalized. I would say that if you are connected to this with your personal accounts, you're taking a risk when you're playing around with these systems, especially in this sort of Wild West period, and that could have an impact on all your website properties or anything you're doing in terms of business and marketing. So that's my little bit of a rant. I'll hop back to what I was getting at here: these companies collaborated, and we don't know exactly how deep this goes. There's a joint statement, and obviously this is an important step. Should I open it up? OK, I did open it up, and it's early

into this process, right? It's signed, and really, what they're doing reminds me of a parallel from when I was contributing to and part of the psychedelic movement as it first started to see commercialization. And so this Twitter Space, which is what I was getting to, was hosted by Percy.

A bunch of people on the teams joined, and I've got their links, so I'll share those as resources so you can follow them on Twitter and check them out. You can see that a lot of the questions here are around ethics and bias in the data and how you can avoid the negative applications of this technology. These are tweets from beforehand, and I think the questions around this are really why the companies building these models are banding together to try to figure it out. I have no doubt that this is going to continue to be a challenge, and when I think back to the parallel of psychedelics.

It was like an analogue in some way, or almost like a one-to-one. It's not actually one-to-one, but close: with psychedelics, you are consenting to take them, so part of that is your own responsibility, and it's a personal experience that you're having. There are, relatively speaking, a lot of steps that take you to that experience, whereas with these large language models you could be interacting with the technology and really have no idea. During this conversation, that was one of the concepts that emerged, which is, you know, how do we

deal with the possible negative consequences of this? When we look at it: because these models are being trained on large datasets from across the web, they're filled with hate speech, sexual content, and sexual violence. And not even just that: they carry a Western perspective on things, because of the proliferation of the Internet in North America and other developed countries, so there are a lot of perspectives that are then missing from the Internet. The people doing the labeling to help build these initial datasets could have bias too. So when you have this much noise, this much data that you're ingesting, putting into these models, and then starting to generate things, there's a lot of risk along the way, and a lot of unknowns.
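As a toy illustration of why this filtering problem is so hard (this is my own sketch, not how OpenAI, Cohere, or AI21 actually clean their corpora, and `BLOCKLIST` is a hypothetical placeholder), even the crudest keyword filter shows how blunt an instrument this can be:

```python
# Illustrative only: a crude keyword-blocklist filter for training text.
# Real LLM data pipelines use trained classifiers and human review,
# which is exactly where labeler bias can creep in.

BLOCKLIST = {"slur1", "slur2"}  # hypothetical placeholder terms


def is_clean(document: str) -> bool:
    """Return True if the document contains no blocklisted terms."""
    words = {w.strip(".,!?").lower() for w in document.split()}
    return BLOCKLIST.isdisjoint(words)


def filter_corpus(docs):
    """Keep only documents that pass the blocklist check."""
    return [d for d in docs if is_clean(d)]
```

A blocklist like this throws away whole documents over one word and misses anything hateful that avoids the exact terms, which is why bias and noise still make it through at web scale.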

And I think we're very happy with the power of AI, but there's still a lot that we're just unaware of and don't understand in some of the output. I've played with GPT-3, I've played with Cohere, I've played with DALL·E, and you can't interact with those systems at even a little bit of scale without getting some output that is pretty wonky. So I do think, again, this was a necessary step for them to take. A bunch of people, Percy here, people who are founders, leaders, actual researchers, and developers, for example founders and cofounders of Cohere, were part of this conversation on Twitter. It was streamed, and they now have the recording, so I'll share that link. But I think it was just a really interesting conversation on how, as this technology continues to be adopted, it's used in

the best way. There are a couple of quick notes that I see here, or at least questions that were asked during this conversation. One of the big ones was: if you're a researcher and you're trying to understand how these systems work, how these algorithms work, how can you get access to them? One of the drives these companies have is trying to give people access so they can better understand the systems, which will then help the companies developing these systems do it better.

There's cost and problems like that, of course. And then, because these are private, well-funded companies, they have goals of profitability and of growing revenue. So how does that impact the use and the deployment of these models? At some point they need to scale and start selling this to companies so that they can generate the revenue they need to build investor returns and become sustainable. What compromises are made in that process? I think that's a big question being asked here.

I also noticed this idea: one of the comments was that overall, most of the people interacting with these systems are good. If some error pops up, they will report it; they'll ask if they can use the system in a certain way, et cetera. But that doesn't necessarily mean that everyone's a good actor; we know that from the world we're living in today. So there are other mechanisms being built in to limit abusive or negative applications of these technologies: rate limits, so too many calls per minute, too much interaction with the system.

That sort of tamps it down so that there can't be too big a consequence. But these are still the early stages; we haven't seen these systems play out at scale, and a lot of them are still in a beta environment where not everyone can get access. You have to be approved, and then you can use it for personal use; to go beyond that into commercial use, there's another whole approval mechanism. Those things are really good as we start to deploy.
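To make the rate-limit idea concrete: API throttles like the "too many calls per minute" limits mentioned above are often implemented as something like a token bucket. Here's a minimal sketch of the general technique (my own illustration, not any of these companies' actual implementation):

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter: allows roughly `rate` calls per
    second, with bursts up to `capacity`. Illustrative sketch only."""

    def __init__(self, rate: float, capacity: float, clock=time.monotonic):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.clock = clock        # injectable clock, useful for testing
        self.last = clock()

    def allow(self) -> bool:
        """Return True if a call is permitted right now, consuming a token."""
        now = self.clock()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The design choice here is that short bursts are fine (up to `capacity` calls at once), but sustained abuse above `rate` calls per second starts getting rejected, which is exactly the kind of dampening being described.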

Deploying these technologies, I think overall these are really worthwhile, interesting companies. I'm inspired by them. I think there's a new level of technology emerging in our understanding of artificial intelligence, the sort of vision of artificial intelligence and machine learning from 40 years ago, the sci-fi future of what is possible, and I think that's been really demonstrated by some of these DALL·E

images being generated. Google is now contributing to this, and Hugging Face; there are some great companies and organizations operating in this space, and I think there's tremendous potential here. But with tremendous potential comes tremendous risk, and with that comes tremendous responsibility. All of this is culminating in Cohere, OpenAI, and AI21 Labs joining together to at least start the process of figuring out how we best deploy these large language models, so I appreciate them doing this. I know that this is not easy; any time you're breaking ground in technology, and especially with this

level of data and this level of information, and then creating these capabilities, there will be challenges along the way. There will be things that are not great. For example, I think of Tesla and their deployment of self-driving vehicles: no matter what, there will be errors along the way, and, as someone said somewhat darkly, there will be deaths along the way as we move toward this. I think what everyone in this world is trying to do is mitigate risk.

I think these companies are trying to mitigate this risk too, and again, it was a very interesting conversation to at least start this process. I'm sure they've been having these conversations privately; this was a public one, and I hope they do more. There are lots of ways to get involved, with some emails that they shared here, and there's a Cohere Discord channel. I recommend following some of the people who were speakers in this. They are leaders in this space, the ones shaping it, so it's good to follow and interact. If you have thoughts, they are open to messages; they're trying to figure this out.

Overall, I appreciate the work that they're doing, and I'm very excited about the potential of large language models. I'm glad I was part of this conversation and glad to follow these companies on what they're doing. This has been Tyler Bryden; thank you very much for checking this out. If you liked it and you're watching on YouTube, like, comment, subscribe, all that kind of stuff. On LinkedIn, send me a note. I hope everything's going well, and I hope you have a great rest of the day. Thank you.


