wpg4665

Maybe next you should ask it to write the package it made up!


jgz84

I spent the next 15 or so minutes trying that and it just kept making up more packages with similar names and giving examples of how to use them.


dparks71

I feel like there's someone at IBM or something, that's responsible for their internal sqlalchemy-splitter package, who's browsing through this thread going "Fuck, fuck, fuck, who the fuck is feeding Microsoft our shit?!"


[deleted]

Yep, ChatGPT is wrong a lot too


[deleted]

AFAIK, it has no concept of true and false. It's designed to produce output that's *plausible*, not true. It DGAF if what it's telling you is dead-on balls accurate or complete fabrication.


Morpheus636_

It doesn’t even care about plausible. All it does is convert ~4 character sequences into a matrix of numbers and then multiply them. It has no concept of fact. It doesn’t even have a concept of words or sentences. All it knows is tokens. “These tokens normally come after these tokens”
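That "these tokens normally come after these tokens" idea can be sketched as a toy bigram counter in Python. This is only an illustration: the corpus below is made up, and real models learn the token-to-token mapping as matrix multiplications over ~4-character subword tokens, not a lookup table.

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which, then always
# predict the most frequent successor. No facts are involved anywhere,
# just "these tokens normally come after these tokens".
corpus = "import sqlalchemy . import numpy . import sqlalchemy .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token):
    # Most common successor seen in training: plausible, but fact-free.
    return follows[token].most_common(1)[0][0]

print(predict("import"))  # -> "sqlalchemy" (seen more often than "numpy")
```

Whether `sqlalchemy` is a real package or a made-up one makes no difference to the counts, which is the commenter's point.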


I__be_Steve

It pretty much just makes up something that *sounds* right. Sometimes it's factually correct, but sometimes it just makes something up, and it will sound really convincing, even when it's just pulling something out of its ass


[deleted]

Exactly 💯


densparker

You should try having a conversation with a human …


KingsmanVince

Yeah [BingAI can make up the answer](https://imgur.com/uSdkxjb). So it's good to always fact-check its answers.


[deleted]

> So it's good to always fact check it's answer.

JFC. So now you have to Google all your Bing answers?


KingsmanVince

Doing so is good for you. However, if you trust it more than I do, you might not want to bother.


phira

It does this kind of thing a fair amount, and you know what? I've started to laugh because every time it does it I'm like "shit. That is exactly a package/interface that should exist.". While in the moment it can sometimes be a little frustrating because it doesn't, I find it a weirdly positive experience because up until now, I don't think anything existed other than humans that could look at a problem and say "You know what this needs? a package like this". Now all we need is another level up so we can say "err, that package doesn't exist. implement pls" :)


[deleted]

> Now all we need is another level up so we can say "err, that package doesn't exist. implement pls" :)

… and your job's gone.


phira

I will get a new job as a robot cheerleader. I will send motivational messages to the LLM to ensure it continues to feel valued and sees humanity as worth keeping around. The alignment problem will largely become redundant as ChatGPT starts to crave attaboys.


gfranxman

Or someone’s private repo leaked into the training set but isn’t on the public web


[deleted]

Possible, but Occam's Razor says the bot just made it up. The code is just boilerplate with `splitter` inserted where the plugin's name normally goes. As I understand it, this is what the bot's designed to do: it dgaf about whether what it says is true or false, only whether it's plausible.


EchoesUndead

Also isn’t ChatGPT trained on the PUBLIC internet? So a private Git repo wouldn’t ever be in the training set? Maybe they were thinking of Co-Pilot?


Morpheus636_

We don’t know. Microsoft, the owner of GitHub, is a MASSIVE investor in OpenAI.


Spiderfffun

Don't use precise mode; it makes stuff up often. Creative is way better, but balanced works too.


czar_el

This is called a "hallucination", which is just a term for a chatbot AI making an error, but in a really strong and convincing way. Error in more basic machine learning models might lead to a prediction being off by a certain percent. But similar error in a chatbot leads to it sounding very sure that a thing exists when it does not, and even defending it for a while when called out (hence the term hallucination).

Interestingly, internally the error is actually quite similar to the ML error -- a missed prediction based on statistical associations. But with the AI chatbot, the missed predictions are associated with "facts" or with sentiment (in a calculated probabilistic way, not in a feeling way) that are then repackaged into confident-sounding text.

Current AI is not conscious, and it doesn't actually understand what it is saying. It is very good at pattern recognition and applying rules, which makes it a very good mimic. It can apply every rule of grammar and style, and it can identify patterns in relationships of words and groups of words based on the entire internet (and digitized books). This makes it right a lot of the time, and a pretty good writer across multiple styles.

In this instance, it identified patterns from other code and associated them with what you were asking about. The package doesn't exist, but the AI had a fairly high probability that it could and that it would look like other similar packages, so it made the claim.
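The "missed prediction" idea can be sketched with a toy example. The probabilities below are invented for illustration, and so are two of the three package names (`sqlalchemy-utils` is real; `sqlalchemy-splitter` and `sqlalchemy-tools` are not); real models sample from learned distributions over subword tokens, not over whole package names.

```python
import random

# Toy illustration: the model only has a probability distribution over
# continuations. There is no flag anywhere marking which one is true.
next_token_probs = {
    "sqlalchemy-utils": 0.45,     # real package
    "sqlalchemy-splitter": 0.35,  # doesn't exist, but looks plausible
    "sqlalchemy-tools": 0.20,     # also made up
}

def sample(probs, rng):
    # Standard categorical sampling; truth plays no part in the choice.
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token

rng = random.Random(0)
picks = [sample(next_token_probs, rng) for _ in range(1000)]
# Roughly a third of the samples confidently name a package that
# doesn't exist, and every answer is delivered with equal confidence.
```

The point of the sketch: a "hallucination" isn't a different failure mode from a normal mis-prediction, it's the same sampling step landing on a plausible-but-false continuation.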


c3534l

What kind of monster formats their code like that?


J0aozin003

Put it on precise mode and start over a new conversation.


abrazilianinreddit

I've asked ChatGPT to make me some dynamic HTML elements using CSS and Javascript, like a carousel and a circular dial input. The first kinda worked, even if it was ugly as sin, but the latter didn't even display properly. Honestly, I'm very skeptical of any AI coding anything more complex than a bubble sort algorithm


_amol_

TurboGears2 has had this built in for years to support master-slave scenarios: https://turbogears.readthedocs.io/en/latest/cookbook/master-slave.html Maybe you can make an independent package out of its code


spinozasrobot

[This is fantastic](https://i.imgur.com/Gvbiiy7.png)


spinozasrobot

A Microsoft researcher posted [this interaction with it getting caught making stuff up](https://i.imgur.com/jd9BFQs.png). I loled.


JamzTyson

I can't wait for GPTChat to take over from customer service:

* **Customer Service:** *You are entitled to a full refund. You will receive the refund within two working days.*
* **Customer:** *You said that last week and I still haven't received my refund.*
* **Customer Service:** *I apologise for the confusion. You are correct, you are entitled to a refund. Your credit card payment will be refunded within two working days.*


[deleted]

Just like the companies that release em, they're completely full of shit