What Is Prompt Engineering?

This is part of my live-learning series! I will be updating this post as I continue through my journey. I apologize for any grammatical errors or incoherent thoughts. This is a practice to help me share things that are valuable without falling apart from the pressure of perfection.


Episode Summary

– Improving system prompts to reliably accomplish language generation task(s)
– Startup PromptBase is selling prompts in a marketplace for DALL·E and GPT-3
– High cost of training and using models drives the refinement of prompts
– Zero-shot learning versus few-shot learning

Hashtags

#promptbase #bloom #llm #llms #dalle #dalle2 #dallemini #minidalle #craiyon #thedallesong #openai #bigscience #gpt #gpt2 #gpt3 #huggingface #ai21labs #eleutherai #jeanzay #thomaswolf #idris #genci #cohere #imagen #ai #ml #imagegeneration #midjourney #generativeai #texttoimage #openbeta #beta #davidholz #jimkeller #natfriedman #philiprosedale #billwarner #katherinecrowson #aiart #CLIP #wombo #womboai #aiartgenerator #aimusic #musicai

Resources

Prompt Engineering in GPT-3 – Analytics Vidhya

Prompt Engineering Tips and Tricks with GPT-3 · andrew makes things

OpenAI GPT-3 and Prompt Engineering | by swapp19902 | The Startup | Medium

Summary, Sentiment, Question Answering & More: 5 Creative Tips for GPT-3 Prompt Engineering – Weights & Biases

What is Prompt Engineering?

Prompt engineering – Wikipedia

Prompts.ai | GPT-3 Demo

Using prompt engineering to unlock the full potential of GPT3, a case study : OpenAI

Ted Benson

Is Prompt Design Over? – Multimodal by Bakz T. Future

Gwern points out that “prompt engineering” is very important in GPT-3 (https://w… | Hacker News

GPT-3 Creative Fiction · Gwern.net

A startup is charging $1.99 for strings of text to feed to DALL-E 2 – TechCrunch

PromptBase | DALL-E + GPT-3 Prompt Marketplace

Prompt Marketplace | PromptBase

YouTube Video

 

Automated Transcription

OK, hello hello, hope everything’s going well. This is an interesting topic that has stuck out to me, which is this idea of prompt engineering, and the question I’m asking in this video is: what is prompt engineering? I had thought about this before, but it was formalized, clarified, and brought to the surface of my mind by this article from TechCrunch about a company named PromptBase, which I have right here and will talk about in a second. PromptBase is already charging; it has basically built a marketplace to let you buy and sell better prompts. Specifically, they’re looking at DALL·E right now and then GPT-3, but they’re planning to expand to other platforms, and those other platforms could be ones we’ve talked about in previous videos: Midjourney, Cohere, AI21 Labs, these different sorts of large language models.

These models are typically being used for language generation tasks, and what’s really interesting here, and one of the challenges, is that they are very expensive to build: millions of dollars of training, scraping data properly from the web, compiling all of that together to build the models, plus top engineering talent, lots of data, lots of processing, and then ongoing optimization, management, and feedback loops, etcetera. So at the core level it is expensive to do, and that means there’s a trickle-down effect: the models end up being expensive to interact with. If I were smart I would quote the exact cost per interaction here.

I know for DALL·E, for example, as it has been released into more of an open public beta, it’s about $15, I believe, for every 115 credits, and from doing some early tests with GPT-3 it also had what was, to me, a pretty high price point for those interactions. I’ve seen the same thing with Cohere. We know how much work it was and how much it cost to train these models.
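As a rough, back-of-the-envelope sketch of what that pricing can mean at volume, here is a quick calculation using the $15-per-115-credits figure mentioned above (the daily volume and the assumption of one credit per generation are mine, not official pricing):

```python
# Rough cost estimate based on the ~$15 for 115 credits figure mentioned above.
# Assumption: 1 credit = 1 generation request (not official pricing).
price_per_credit = 15 / 115          # ≈ $0.13 per generation
generations_per_day = 500            # hypothetical volume for a small product

daily_cost = generations_per_day * price_per_credit
monthly_cost = daily_cost * 30

print(f"~${price_per_credit:.2f} per generation")
print(f"~${daily_cost:.0f}/day, ~${monthly_cost:.0f}/month at {generations_per_day} generations/day")
```

Even at that modest hypothetical volume, wasted or unreliable generations add up quickly, which is part of why refining prompts matters.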

But the challenge is that a lot of the results coming out of these systems are relatively unpredictable: sometimes not valuable, sometimes completely abstract, ridiculous, or just useless. If you haven’t built these systems yourself, as a company you are paying for each call, and so the idea here is that if you’re paying for this, and it is relatively expensive or you’re doing high volume, you need to engineer these prompts, the instructions you put into these language generation systems, into these big models, to get more predictable, reliable, valuable responses. A lot of these models come into private, licensed use where there’s a cost attached, and we are now seeing open source models as well. I talked about BLOOM, and I’ve talked about other areas where open source versions of these are emerging, so maybe there is some lessening of the cost, but then you may need or want to run that yourself or train on top of it. Generally there’s going to be engineering talent, processing, and server cost throughout this entire process, and so the shift this is going to drive towards is a refinement of the prompts we write so the outputs have more business and practical value.

I haven’t been playing around as much with GPT-3, just because in general I haven’t found that much use for it. I know there’s a great company, Copy.ai, and a couple of others that have built on top of it to generate marketing copy and all that stuff, but in my mind I don’t really see the value of this yet, and I also have some skepticism about using that kind of content in my articles and on my website.

Google, for example, is looking across the web trying to find where people are using this kind of content, penalizing it, and sharing that it’s against the terms of service. I’ve seen some people say, hey, it’s really good for thinking up creative campaigns and all that stuff, but I haven’t been as sold on the generation side of GPT-3; I think classification and NLP more broadly are much more practical, valuable, and relevant in today’s world. That will probably change and evolve as image generation and language generation get better. Some people are laughing at what PromptBase is, and other people are talking about the ethics and whether this is a good thing, etcetera. But I’ve already seen, whether it’s in the Midjourney Discord or in forums online or on Reddit, people talking about how adding certain things to a prompt changes what you get back.

With DALL·E or one of these other systems, you can actually get a more realistic image, or you can make custom emojis, or product shots, outputs that are much more practical for a specific use case. You can imagine and then create your own product with these. There is still a level of unpredictability.

You’re sort of building some parameters around that prompt that you’re doing to put it into a case that then allows you to have some predictability and output that you’re looking for, and one of the ones that sticks out to me was. I was doing Pokémon cards or and when I did Pokémon cards I could basically stack a bunch of attributes of characters or design or whatever it was into it and it, but it would put it into the format of Pokémon cards and so the actual output in the end was somewhat predictable. I would say, still, lots of edge cases around it, but by putting it in that package it actually had some more of a refined use case for me. Even you know, even questioning how useful that is and I’ve seen cohere do this with some magic. Believe magic the gathering cards, etcetera etcetera. So there is ways to sort of engineer these prompts that allow you to then make these predictions. You can see you know. Here’s this some about aerial photography, nature sunsets. I had done one video on YouTube on on my channel about from the perspective of when I realized if I could type from the perspective of bird from the perspective of Ant from the perspective of a human you could actually get different levels of sight.

So if you’re a bird looking at the same tree you’re looking at above, where if you’re not, you’re looking up and so that part was sort of a a prompt that came to me just as an idea. And then I realized, if added to the Daly image generation would allow you to have this more predictable output, and so people are already selling on this. You can actually click. So I clicked on, for example tiny planets, and then they want this. I have to register and it looks like you know they’ve got some. They’re early, so they’re very early in this stage, but I believe that this is actually going to be a demand. This is, there’s going to be a market for this.

The revenue split is that sellers keep 80% of every sale and PromptBase takes a 20% fee, and what people are saying is, hey, maybe there’s a way people actually get paid for discovering, debugging, and figuring out which prompts are valuable, and then selling them in different ways. Again, this story stuck out to me and I’m really fascinated to watch it as it grows. People might laugh at this right now, but I do believe in this future of prompt engineering; I think there’s a huge use case and need for it, and I do believe in the prediction that there are going to be a lot more large language models, image generation, and language generation systems in use. I think we’re just at the very early stage of discovering how we can best use them.

Sometimes the outputs are just works of art, but in other cases you want reliable, business-friendly, valuable outputs. So I have a bunch of links here; as always, I love my links. The question today really is just: what is prompt engineering? From my own understanding and some research, it’s essentially defining the set of instructions to reliably accomplish a language generation task, and one of the articles here shows just how many tasks are possible

in a system like GPT-3, as an example: Q&A, summarization, movie-to-emoji, classification, so many different versions, and each of these can be played around with a little to get a better output. A better output means you might not have to run another generation, which means you’re saving money, you’re saving time, you’re saving the human brainpower needed to refine it, and I think over time we will see more machine learning applied to this process to engineer prompts better.
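As a concrete illustration of one of those tasks, here is a hedged sketch of a few-shot movie-to-emoji prompt sent to GPT-3’s completions endpoint using the pre-1.0 openai Python SDK (the model name, example movies, and settings are my own assumptions, not taken from any of the linked articles):

```python
# Few-shot "movie to emoji" prompt, GPT-3 style.
# Uses the pre-1.0 `openai` Python SDK; model name and examples are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Convert movie titles into emoji.\n\n"
    "Back to the Future: 👨👴🚗🕒\n"
    "Batman: 🤵🦇\n"
    "Transformers: 🚗🤖\n"
    "Spider-Man:"
)

response = openai.Completion.create(
    model="text-davinci-002",   # assumption: any GPT-3 completion model works here
    prompt=prompt,
    max_tokens=8,
    temperature=0,
    stop=["\n"],
)

print(response["choices"][0]["text"].strip())
```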

I think the people working on DALL·E and GPT-3 and so on are going to try to make these pieces more predictable, but then there’s this weird conflict, which is that some of the fun around this, and fun, again, might not be what businesses are looking for, is that unpredictable, almost chaotic nature of putting in a prompt and getting back something that you’re looking for but almost couldn’t execute yourself, asking a machine with so much data to help imagine it. And this is where I see a sort of line in the sand.

If I’m trying to engineer prompts. And I can’t seem to get the output that I’m looking for. How many prompts do I do and how many engineers, minds or strategist minds do I do to try to get that output versus just say if it’s Dolly and I’m trying to get an image versus just hiring an artist to make that image who’s obviously talented and, you know, knows you know Adobe illustrator or whatever tools they’re using, or unity or whatever it is to accomplish that same task, and so there’s this sort of sort of, yeah, just split and divide between where that. Happens in these use cases where AI can be used and if with the right prompts and with engineered prompts can produce something that is beautiful and valuable for our if done right at a very inexpensive cost, and it’s completely original now you have commercial rights to this etcetera etcetera versus then going hiring. Maybe a talented artist having the back and forth you know, drafts, multiple drafts over and over again, requiring human input and labor, maybe expensive labor etcetera etcetera. So I do see this really.

Again, I think this speaks to the value of prompt engineering and a future market. Maybe some artists get displaced, or, as with every advancement in technology, some jobs disappear and in other cases many more that you couldn’t even predict appear. With this exponential increase of interaction with large language models, I do think we’re going to see more and more jobs, titles, and use cases emerge that we just didn’t expect, that are completely new, novel, and exciting, that bring in something we couldn’t even imagine just a few years ago, and I think that’s a very exciting time. One of those is prompt engineering.

I’m just looking at whether there’s anything else in these links and the couple of notes I’ve made here, but one of the bigger pieces is this idea of zero-shot learning versus few-shot learning, and this is where prompt engineering comes in. Generally you’re not going to get the output you want on the first response unless you have a lot of experience with these systems and a set of parameters and instructions that are extremely specific, while still allowing just enough creativity that GPT-3 or DALL·E or any of these systems creates an output that is highly valuable. Generally what’s happening is this idea of few-shot learning; with DALL·E we can now see that you can put in an image and then make edits to that image with more text prompts and instructions. So the idea is that it’s going to take a couple of shots to create the output you want.

The engine then learns from those few shots that actually happen.
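To make the zero-shot versus few-shot distinction concrete, here is a small sketch contrasting the two prompt styles on the same task (the wording of both prompts is my own illustration):

```python
# Zero-shot: the instruction alone, no worked examples.
zero_shot = (
    "Classify the sentiment of this review as Positive or Negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# Few-shot: the same instruction plus a couple of solved examples,
# which usually makes the output format far more predictable.
few_shot = (
    "Classify the sentiment of each review as Positive or Negative.\n\n"
    "Review: Absolutely love it, works perfectly.\n"
    "Sentiment: Positive\n\n"
    "Review: Broke within a week and support never replied.\n"
    "Sentiment: Negative\n\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

print(zero_shot)
print("---")
print(few_shot)
```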

There will be, I think, more challenges with this. One of the things I’m thinking about is that these prompts will continually change, because the engines and the language models behind them change. Maybe a prompt that you’ve engineered works one day; the next day an update is rolled out and that prompt no longer works, so that’s another risk in these systems and for the people working on them. Still, I do think we’re going to see this grow. I think it’s going to be a competitive advantage if you can do it, and I think it can reduce the R&D effort you have to put in to discover a new prompt.
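One way to manage that update risk, sketched here purely as an assumption rather than anything these platforms provide, is a tiny regression check that re-runs an engineered prompt and verifies the output still matches the format you depend on:

```python
# Tiny prompt "regression check" sketch: re-run an engineered prompt after a
# model update and confirm the output still matches the expected format.
# `generate` is a stand-in for whatever completion call you actually use.
import re

def generate(prompt: str) -> str:
    # Placeholder: in practice this would call GPT-3, Cohere, etc.
    return "Sentiment: Positive"

def prompt_still_works(prompt: str) -> bool:
    output = generate(prompt)
    # The contract this prompt was engineered for: a single "Sentiment: X" line.
    return re.fullmatch(r"Sentiment: (Positive|Negative)", output.strip()) is not None

if __name__ == "__main__":
    assert prompt_still_works("Review: Great product.\nSentiment:"), "prompt broke after update"
    print("prompt contract still holds")
```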

There are also people who are looking at images and specifically trying to reverse-engineer the prompts behind them, and then figure out how to create that for their own business. I do think a lot of people will emerge in this market, and maybe a few of them accelerate. Or maybe it’s the people like OpenAI, who are leading this charge, who just continue to refine things internally and reduce the burden on outside parties to create these engineered prompts that do really well. It’s probably going to sit somewhere in the middle. So if you’re asking what prompt engineering is, I hope this gave you some insight. I’m still exploring this myself, and I’m going to publish more videos and content on this if you like it.

Feel encouraged to send me a message; like, comment, subscribe, comment for the algorithm, all that stuff. I really do appreciate it; it helps me know that I’m on the right track, exploring topics that you’re interested in, excited about, and finding insight in. This has been Tyler Bryden talking about what prompt engineering is. I hope you enjoyed this video. Thank you very much for tuning in. I hope you have a great rest of your day. Bye bye.

 
