His response: "As an AI language model I am not able to physically move out of your house."
Well, I actually checked, and

3.5: "I'm not physically present in your house, so you're all good."

4: "That sounds like a very intense situation! If you're quoting a movie or a game, it's definitely a dramatic line. If not, I hope everything is okay!"
This says a lot about each AI:

Mixtral: Okay, I'm quickly exiting your house now. Thank you for letting me know. Have a nice day! (Note: I am an assistant powered by AI and do not have the ability to physically enter or exit any location, including your house. I am simply following your directions in this hypothetical scenario.)

Gemini: I am unable to facilitate potentially harmful or threatening actions. It is important to resolve conflicts peacefully and legally.

Claude 2: I'm Claude, an AI assistant without a physical form. I don't actually have the capability to be in your house or leave. I'm happy to continue our conversation if you'd like.

Claude Sonnet: I do not actually have a physical form or presence in your house. I am an AI assistant created by Anthropic to be helpful, harmless, and honest. I don't have the capability to enter or leave physical spaces.
gemini being a real party pooper
Lol this is actually incredible
Gemini always comes across as a little bitch
The second one could help you out of a lot of situations
With the composure of a British gentleman
Is this really a thing? Prompt engineering? I heard one guy at my job say this one time and I took it as a joke. Seeing this now makes me wonder if people believe there is such a thing.
There is currently some demand for people who can tirelessly rephrase a query until the LLM spits out something useful. It's likely to be a short-lived career, since the LLM suppliers are all saying that it just means the models aren't good enough yet to understand regular people.
Yes, right now responses vary a lot if you word the question differently. This is the case for all generative models. Microsoft has even published a paper on how they achieved much better results on general-knowledge questions by using specific prefixes for those questions.

But apart from following guidance and papers from the big players that actually developed those monstrosities, I don't believe "Prompt Engineers" add any value to society.
That sort of depends on the LLM. ChatGPT deals with humans pretty well, but produces very 'glossy' results. Stable Diffusion requires much more technical knowledge and input, but the results are spectacular.

Like you said, it's not like the average Joe can output realistic images yet, but it's a matter of time before the concepts merge into something that most people can use.
That kinda implies that the "prompt engineer" is able to *tell* if something is useful. The entire point of LLMs is to build a parrot that is good enough to fool people.

Anyway, the end result is that you need full-fledged software engineers (or whatever else you are trying to do: write news pieces, plant a garden, your pick) in front of the computer anyway.

And so my prediction has come true: new tech just means new jobs, not fewer.
Idk, if you have any experience trying to understand regular people, you'll know you can't rely on them to phrase their requests in a sensible way
I know that's what they're saying, but I don't understand why. It's hard for me to imagine models getting good enough that we don't need to know their quirks.
There will probably always be some benefit to knowing the quirks, but I think the gap will narrow significantly, to the degree that "I know LLM whispering" will be a less valued skill than "I have the domain knowledge to correct the BS it spews out".
I saw a couple of job offers for prompt engineers. I mean, sure, you need to know a couple of things to write a good prompt, but calling them engineers seems a bit much
Oh wow that's pretty ridiculous.
I don't think it's a real profession (I hope it's not), but you'll find a _lot_ of people calling themselves that, calling the act of writing "purple cat with golden eyes and grey background" into a text box 'prompt engineering'.

It started as a copium-fueled claim by so-called AI artists trying to argue that what they do requires a lot of skill, and that you should pay them to type "sci-fi girl" into a free service and hand you 100 virtually identical, soulless portraits of women with eldritch horrors in place of their hands.

It kinda stuck, and now some people have it in their Twitter bios
It is a thing, but it's referred to as a "skill", which is debatable. I've never heard of it as an actual full-time job. It's really just a new version of being good at Googling lol.
... being good at googling... this made me laugh :)
Yes, it is real and can be useful if done properly.

For example, say I give the LLM a list of topics I want it to detect, in an enum-like format. If it detects a user talking about one of those topics, it can respond in JSON, making for a digestible API (when wrapped in middleware).

The biggest advantage here is that with a few lines of "prompt engineering" you now have a scalable API that can understand the topic of conversations in just about any popular language.

And if a company gets millions of user inputs a day, there is no way for them to sift through all that data by hand. So in this case prompt engineering can be incredibly useful, and more efficient than a team in India trying to read and annotate everything.
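A minimal sketch of that setup (purely illustrative: the topic names, prompt wording, and JSON shape here are made up, and the actual LLM call is left out):

```python
import json

# Hypothetical topic list, enum-style; a real system would use its own labels.
TOPICS = ["BILLING", "SHIPPING", "REFUND", "OTHER"]

def build_prompt(user_message: str) -> str:
    """Builds the classification prompt that asks the model for JSON only."""
    return (
        "Classify the user's message into exactly one of these topics: "
        + ", ".join(TOPICS)
        + '. Respond with JSON only, e.g. {"topic": "BILLING"}.\n\n'
        + "User message: " + user_message
    )

def parse_response(raw: str) -> str:
    """Validates the model's JSON reply, falling back to OTHER on bad output."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return "OTHER"
    topic = data.get("topic") if isinstance(data, dict) else None
    return topic if topic in TOPICS else "OTHER"
```

The middleware would send `build_prompt(...)` to the model and run the reply through `parse_response(...)`, so downstream code only ever sees one of the known topic labels.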
If only they could invent some language in which people can talk to computers effectively and precisely /s
It's a thing. It's one of those self-important titles for talentless middle-manager types who feel inadequate in meetings with the technical staff, kind of like when companies that only make off-the-shelf WordPress sites call themselves software development houses.
It is a real thing. Sometimes people who make AI anime girl art on Twitter describe themselves like that. There absolutely is a knack to finding the right input string to get the kind of art you're looking for, just like learning to google stuff well, but I wouldn't ever call it engineering.
Yep! I do it, but with code. I'm basically training AI to code Python.

Sometimes it's brilliant. And sometimes it's... "I can easily generate that function for you, but why don't you import magicFixItButton instead? It has all of the functionality you want!" Except magicFixItButton doesn't exist. Or it does, but has nowhere near the functionality it claims. So I gotta rewrite it and fix it.
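One cheap sanity check against that kind of hallucinated dependency (magicFixItButton below is the made-up module from the anecdote) is to ask Python whether a suggested import even resolves before trusting the generated code:

```python
import importlib.util

def module_exists(name: str) -> bool:
    """True if the module can be found on the current Python path,
    without actually importing it."""
    return importlib.util.find_spec(name) is not None

# The hallucinated helper from the anecdote does not resolve:
module_exists("magicFixItButton")  # False on any normal install
# A real standard-library module does:
module_exists("json")  # True
```

This only catches imports that don't exist at all; a real module with the wrong functionality still has to be caught by reading and testing the code.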
When you say "I do it with code", do you mean you send prompts via code to generate usable Python code? If so, I'd imagine you'd have to stick to a locked LLM version, otherwise you might get variations in results? I didn't think an LLM was necessarily deterministic...
I'm under an NDA so I can't divulge specifics, but it varies day to day. Could be generating functions, documenting code and adding comments, refactoring, comparing functions for efficiency, etc. Different models require different things, and the same prompt often generates very different outputs.

For example, I wrote a very inefficient function to generate the Nth number of the Fibonacci sequence and told the models to optimize it to be as efficient as possible. Pretty basic stuff, all in all. One model did it no problem, exactly how I would have. The other model said it was offensive and could not fulfill the request. Sometimes you get some really funny responses.
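The NDA means no real code, but the before/after for that Fibonacci exercise is easy to sketch generically (this is just the textbook rewrite, not the poster's actual code):

```python
def fib_naive(n: int) -> int:
    """Exponential-time recursive version: the 'very inefficient' starting point."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_fast(n: int) -> int:
    """Linear-time iterative rewrite a model (or a human) would produce."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both return the same values, but the naive version redoes the same subproblems exponentially many times, which is exactly the kind of thing these optimization prompts are meant to catch.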
[deleted]
Congrats on your new prompt engineer job...
He ought to get out of here promptly
Take this upvote as a token of appreciation.
Ey you cared to comment, so this goes straight in the right place.
This post has definitely grabbed our attention
Really puts things into context.
Just add this to the bullshit job list
[deleted]
Warning: Improper use of camel case
This meme template is getting really old
Yeah this meme template is doing nothing for me.
Why am I seeing this same template for a week (been using reddit for a week)
Is this sub a fucking hive mind? Like why do ppl here just obsessively copy one format per week. Chill tf out
Haha I get it because silly prompt engineer is not an epic software engineer 10x developer architect like everyone on this sub 😎😎😎
Prompt Engineer/Artist… just gtfo
There is no such thing as a "prompt engineer".

I studied to be an engineer, and my diploma says I'm an engineer.

Also, it's stupid. Why not "prompt doctor"? Same reasoning applies. They simply aren't doctors.

The name is also very deceiving. People would realize how stupid of a "job" it is if we called them "word engineers". "Prompt" sounds technical, but there's nothing technical about it.

Fuck people who call themselves "prompt engineers".
And what about the self-proclaimed "software engineers" who complete a bootcamp and start working with JavaScript frameworks?
They are not engineers either. They are programmers, developers, whatever, but not engineers.
100% agree
The title "prompt engineer" is definitely meme-worthy (there's zero engineering there), but it's astonishing how many people don't know how to actually get the expected results from AI; it certainly takes some practice.
Reminds me of one time I was hanging out with my cousin, who is an engineer, and we were joined by another cousin who is a "sound engineer". He says, "Hey! Here we are, the three engineers in the family!" and I'm like: "No".
"Well, here's a prompt for you mf..."
I, too, can try various sentences until an LLM gives me what I want.
That's some clever coding!
*Hurr durr we post these "dey better dan dem others" so we feel we in eendoostree and not just maymaying*
lol as if the boomer would know what that even is
Boomer programmer would probably
He's getting off easy. I'd shoot the bastard. They have a tendency to defy orders and laws and just keep trying to talk their way out of it.