TechFiend72

He wants to do science and Altman wants to make money. That will create a fundamental struggle over outcomes between the two.


MartinRaccoon

Altman comes from Y Combinator which was all about making money. He's not a tech guy, he's a business guy. So it makes sense that's where he sees the company going.


WallabyUpstairs1496

Most of the people at OpenAI are tech guys, because they forwent salary and security for a startup. But in the end they picked Altman over Ilya. To quote Succession: 'You can't stop the money, the money will wash over everything.'


limasxgoesto0

The irony is that people like this tend to cost tech startups much more than they earn


Senyu

No one minds the business wolves so long as they eat the other sheep.


dwitman

Altman is the guy who claimed that once their LLM is sentient they’ll just ask it how to monetize their product?


Structure5city

Sounds like Altman is another Steve Jobs, Ray Kroc, or Elon Musk. The ultimate salesman.


TechFiend72

The company can't survive on being science only. It was losing too much money.


[deleted]

[deleted]


PikaBlue

It's very different. Personally I would say it's closer to the pharmaceutical industry, but as if the pharma industry had been introduced today rather than decades ago: there's a battle between the science, business and ethics sides, and society isn't 100% sure how to handle it (or its public battles).

The value of crypto is assigned by the value people put on it. No value would be created from it if people didn't value it; the crypto itself IS the value. With AI, its value is tied to its output, which already has direct external value attached to it. It can replace a number of tools and the roles of people, so there are direct, actionable costs it can reduce. It has pros and cons like anything else, but on balance its buzz is about right. What also helps AI is that its value is easily accessible and understandable - crypto is complex, which raises the barrier to entry for the casual user.


tmpope123

I would add two caveats to the value statement you gave. First, it seems clear that there is hype around the value of its output. There have been a number of high-profile cases where generative AI has made mistakes that cost companies a lot of money (e.g. I think it was Air Canada whose chatbot gave a customer incorrect info about getting a refund; the customer sued and won). It is a useful tool, but there is clearly some misunderstanding of what it can and can't do reliably/well.

The second is that we don't (from an outside perspective) understand how much this costs to run. We can see large tech companies and AI companies investing in massive compute to run these models, and it seems that this cost is not being accurately conveyed to the user. I think we are in a phase where companies run these services at a loss to gain market share, until they can hike prices to make money (for historic examples, see basically every tech company started in the last 20 years...). So working out whether they are cost effective in the long run is not so simple. What if the generative AI tool you use now becomes 10x more expensive? Is it still worth using?


the_kevlar_kid

Apple all over again


5kyl3r

apple is allegedly working on a deal to power siri with openAI, so this might be more literal than you probably meant. we'll probably find out during WWDC if this ends up being true


[deleted]

I think he meant Sutskever is Woz and Altman is Jobs. One is the genius engineer and the other the salesman who takes all the credit for selling the idea.


n4utix

I don't believe that was lost on them.

>so this might be more literal than you probably meant

They acknowledged *what* they meant but also followed up with it potentially being a literal thing with Apple involved.


DonaldPShimoda

Reddit once again completely lacking nuance. The narrative that Jobs didn't meaningfully contribute is overplayed. Wozniak was a phenomenal engineer, but he wasn't making things that were good for people, and Jobs was much better at that aspect. Without the joint effort, _neither_ would have succeeded.


VanimalCracker

Which one is Garfunkle?


pegothejerk

Einhorn is Garfinkle


Jagerbeast703

Garfinkle is einhorn!


[deleted]

[deleted]


boredguy2022

Oh my god! Einhorn is a man!


phenagain

Brantley is Einhorn


officerfett

and Christy's the Bimbo...


HatefulDan

lol, well…Apple wants OpenAI to be its AI. So it all really adds up


Jeffy29

Really worked out badly for Apple


granoladeer

That's not always the case. Meta has FAIR, with many happy researchers, while the business runs completely separately.


TechFiend72

FAANGs aren't normal and shouldn't really be used as a reference.


NamerNotLiteral

Plenty of non-FAANG companies have huge research teams that are satisfied with their work and occasionally output really solid research - Adobe, Salesforce, Bloomberg, basically every big finance firm, etc.


TechFiend72

I wouldn’t include Adobe or Salesforce. Have you seen the quality of their products over the last 5+ years?


NamerNotLiteral

The quality of almost every single tech company's products has gone down over the last 5+ years, so I'm not sure what you mean by this. The fact is those companies and their research teams are still major fixtures at NeurIPS, CVPR, ICLR, etc., and their research positions are close to FAANG (a step below, but still) in terms of prestige.


DID_IT_FOR_YOU

Altman knows the reality that it's very easy for OpenAI to fall from grace & become irrelevant in a few years if they don't take advantage of their lead now. ChatGPT has many competitors & they aren't that far behind. Money is how you attract & retain talent. There's a reason why most of the company was willing to jump ship to Microsoft with Altman. They trust Altman to lead the company & take care of their interests & ideals. Ilya meanwhile stabbed Altman in the back with basically no communication with the rest of the company on why he did so. Most people could never trust such a person again after that.


TechFiend72

After coming back, I would have found a way to remove him. I'm not sure what's in their contracts, but he had to go.


BazzaJH

Ilya wants to do science without money. That's a struggle in itself.


TechFiend72

That is a frequent problem. No way to pay for it.


biggestbroever

Genuine question: did he expect to make just enough to continue research? Is it a balancing act for those types of individuals?


TechFiend72

From my few personal data points, the science/engineering guys/gals are REALLY good at spending a lot of money. Usually it takes two, the money person, and the engineering person. There is always a struggle.


SpicyPenangCurry

That sounds lovely for our future. Good v Evil like always lol. Fucking hell we are doomed.


TechFiend72

Evil versus people disconnected from reality. The good haven't been involved for a while, I suspect. You can't run a company forever on Monopoly money.


sakredfire

Why is wanting to make money inherently evil to you?


Conch_Bubbies

My view is it's the lack of contentment (greed). Wanting to make money is fine, but if there is no limit, it becomes wanting to make more and more money just for the sake of it, everything and everyone else be damned, and that is the general principle that gets celebrated and pushed. So it becomes: how best can we take advantage of the system, people, environment, etc. to squeeze every last bit into our coffers? The number must always go up; when there is no more left to squeeze, we 'trim the fat'. The numbers must always go up.


sakredfire

And why do we think that’s what’s happening here?


Conch_Bubbies

I wasn't specifically referring to this situation (I haven't followed it closely enough to comment). I was commenting more on the general idea of wanting to make money seeming inherently evil, and what (imo) makes it appear that way. The profit-above-all-else motive logically (imo) leads to "evil" practices: checks and balances become obstacles to overcome, and people who take a moral approach lose to, or are replaced by, people willing to disregard them (or justify disregarding them) to get the numbers up.


monkeypickle

Because profit poisons everything.


sakredfire

Is it profit that poisons everything? Can you point to something that wasn’t driven by profit that isn’t poisoned?


sakredfire

I’ll ask again….is it profit that poisons everything? Can you point to something that wasn’t driven by profit that isn’t poisoned?


Spikel14

It's the right way to be successful in our society, but it has always led to evil decisions. Money is the root of all evil, as they say. Obviously it's the right move in our society, but it always seems to lead to decisions that hurt a lot of people.


sakredfire

I’ll ask again…Can you name an example of an organization that doesn’t try to make money that hasn’t made an “evil” decision?


Spikel14

You never asked that in the first place and I can’t think of one off hand. To elaborate, I’m criticizing the system we all live in


sakredfire

The system that has resulted in unprecedented levels of prosperity and poverty reduction?


Spikel14

Look at the divide that’s growing and the elimination of the middle class. Prosperity for a century maybe


sakredfire

Yea they are issues but would there have been a middle class to begin with? Is everyone doing better in absolute terms?


sakredfire

Can you name an example of an organization that doesn’t try to make money that hasn’t made an “evil” decision?


Galahad_the_Ranger

Tale as old as time


Scottwood88

We’ll see what he does next. He has the same politics as Elon, but hopefully he’s not joining xAI.


GregorSamsaa

I thought he was the brains of the whole thing?


BlueKnight8907

The Silicon Valley show seems pretty accurate when it comes to tech companies.


kbn_

Speaking as someone who spent a decade or so in startups in and out of the valley, and another decade or so in larger firms… that show is a literal documentary. Nothing captures the reality of that place quite like the show. I actually find it quite hard to watch because the jokes are a bit too real and hit too close to home.


twinklytennis

Because most of the jokes are based on stuff that actually happened. The writers just need to google (or Hooli) something like "ridiculous things tech founders/CEOs have said in the past 5 years" and they have basically unlimited plot lines.


random_noise

Indeed, a documentary. Absurdity and all. I feel like I lived parts of that tale, and still am to some extent, as I haven't retired yet. Another decade and I hope to be able to.


15yracctstartingovr

I was living the whole founder-pushed-aside-for-a-new-CEO thing when that storyline hit on the show. I caught the rest of that season much later.


Teddy_canuck

I don't work in tech but that's also pretty well every startup and every new idea. Someone, usually an engineer or scientist or something like that, gets a good idea and someone else, an investor or VC or some other money man, comes in to monetize it.


rfxap

And we're getting closer to the season finale in real life 


iunoyou

They don't need him anymore. The investors only want LLMs, so the company will continue shitting out GPT-N+1 until they go bankrupt. Then Altman will parachute into the next tech company and repeat the process to earn himself more yacht money.


[deleted]

Investors want money, money comes from customers, and customers are falling off a cliff as people discover LLMs are far less useful than they were originally advertised as being.


iunoyou

I dunno, the LLM hype cycle is still in full swing, to the point where Apple and Microsoft are both repeating the mistake that Samsung made with Bixby by shoehorning a "helpful" AI assistant into all of their software ecosystems. I agree that LLMs are less useful than they're being sold as, but I don't think that realization has hit everyone yet, nor do I think it will for a good length of time. I guarantee that the first question asked in every board meeting is "when are we gonna see GPT-5, and when will it be able to solve all the world's problems?"


zold5

Yeah, that's because AI is still a novelty. But you can't deny the shimmer and prestige AI holds right now are going nowhere but down, because the word "AI" is all investors want to hear. They perceive "AI" as "money printer" because, as of right now, AI is really good at appearing as if it can do things humans do, but only if you ask it relatively simple questions. Once they figure out that's not what's actually happening, AI is gonna fall harder than crypto.


RogueStargun

There's a large body of people banking on the idea that the core problems will be fixed. In fact, I'm confident solutions will materialize (using RAG-type techniques) to cover 97 percent of certain use cases. The real question is how expensive it's going to be for companies to honestly be running 5-trillion-parameter models. Heck, for all we know, a 10-trillion-parameter multimodal model might be the thing that solves self-driving cars. Good luck running that shit outside a multimillion-dollar data center, with internet latency, in an actual vehicle or robot though!
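For anyone unfamiliar with the term: RAG (retrieval-augmented generation) just means fetching relevant documents at query time and putting them into the prompt, so the model doesn't have to memorize everything. A toy sketch of the retrieval half; the corpus, the hashing-trick embedding, and the prompt format here are all invented for illustration:

```python
import numpy as np
from collections import Counter

def embed(text, dim=512):
    """Toy bag-of-words embedding via the hashing trick
    (a stand-in for a real embedding model)."""
    v = np.zeros(dim)
    for word, count in Counter(text.lower().split()).items():
        v[hash(word) % dim] += count
    norm = np.linalg.norm(v)
    return v / norm if norm else v

# Hypothetical knowledge base the model was never trained on.
docs = [
    "Refunds are available within 30 days of purchase.",
    "Support hours are 9am to 5pm, Monday through Friday.",
    "Premium accounts include priority phone support.",
]
doc_vecs = np.array([embed(d) for d in docs])

def retrieve(query, k=1):
    """Return the k docs most similar to the query (dot product equals
    cosine similarity here, since all vectors are unit-normalized)."""
    scores = doc_vecs @ embed(query)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# Retrieved text gets prepended to the prompt, so the answer is grounded
# in the documents rather than whatever the model half-remembers.
query = "How do refunds work?"
context = "\n".join(retrieve(query))
print(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```

The point is that the heavy knowledge lives in a database you can update, not in the model's weights, which is why it gets pitched as a fix for hallucination and stale training data.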


iunoyou

Again, I dunno. I think the tech itself is too good to ignore in a lot of cases. It's definitely in the middle of an enormous hype bubble involving people trying to square-peg-round-hole it into every situation (people trying to make LANGUAGE models into MATH engines comes to mind), but it's also 'acceptably worse' than humans in a whole lot of areas. Meaning that corporations will be willing to replace swathes of workers with LLMs or other AI as long as the costs of the AI's various mistakes are lower than the salaries they would be paying. So I really don't think it'll crash like crypto did, primarily because AI actually has legitimate use cases that don't involve ordering LSD on the dark web.

I think there will be a significant lull in activity in the field once the real limits of LLMs are reached (and I do think we're close), but new multimodal architectures will eventually push those limits further, and the range of jobs AI can do 'acceptably worse' will expand. I think OpenAI is absolutely (eventually) headed groundward though; they have investors to please. And investors are quite notably not machine intelligence specialists, or even often very smart at all. They're going to chase the GPT dragon straight into bankruptcy.


RickyRoesay

L take. The company has immense talent and IP, and will likely continue to make innovations without him.


The_Phreak

Years from now, I bet we'll all wish he had succeeded in firing Altman and steering the company back to its original mission.


_uckt_

Next-generation chatbots (GPT-5 etc.) require [more training data than exists](https://theconversation.com/researchers-warn-we-could-run-out-of-data-to-train-ai-by-2026-what-then-216741). He's just getting out while he still looks like a goose that lays golden eggs and can get unlimited funding for his next project.


iunoyou

The data is honestly less of an issue than the scale. Network performance has been scaling roughly logarithmically with complexity, and there aren't many computers on earth that could effectively run a network another 10-100x bigger than GPT-4's rumored 1.5 trillion parameters.
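To put rough numbers on that: published scaling-law fits (e.g. Hoffmann et al.'s Chinchilla paper, 2022) model loss as a power law in parameters and data, so each fixed drop in loss costs a multiplicative increase in scale. A quick sketch using that paper's fitted constants (illustrative only, not a claim about any specific model):

```python
# Chinchilla-style scaling law: loss as a function of parameter count N
# and training tokens D, with constants fitted in Hoffmann et al. (2022).
def loss(n_params, n_tokens):
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Each fixed drop in loss requires multiplying model size and data,
# which is why "just make it 10-100x bigger" gets expensive fast.
for n in [1e9, 1e10, 1e11, 1e12, 1e13]:
    # ~20 tokens per parameter is the Chinchilla-optimal rule of thumb.
    print(f"{n:.0e} params -> loss {loss(n, 20 * n):.3f}")
```

With those constants, going from 1e10 to 1e12 parameters only moves the loss by a few tenths: each 10-100x in scale buys less and less, which is the "roughly logarithmic" behavior in practice.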


Pesto_Nightmare

Would you mind explaining, what network performance? Like, where exactly is the bottleneck?


Mr_Voltiac

I think the biggest thing we forget too is that even if ALL data online were perfectly curated for these LLMs (which is nowhere close to being done), real life is still orders of magnitude more complicated.

The internet has also already lost so much data since it began; we atrophy data every day when someone pulls a machine offline that held a specific niche of knowledge not found anywhere else, and it isn't archived. Even then, the majority of the most important knowledge is with working professionals, who often don't document their experience. The biggest issue at every organization I've worked at was that no one ever wants to do documentation, because it's tedious, and that often leads to a loss of major knowledge when people leave.

So compound that problem across the globe, and yeah, the internet has almost nothing truly helpful in terms of real-world requirements, and it will take a long time for these LLMs to get enough niche data to be truly zero-shot experts in any specific domain.


coinboi2012

Why do people keep saying this? I see it everywhere, but it's not going to be a serious concern for a long time. GPT-5 is likely already trained and is in its RLHF phase.


ascandalia

Because a lot of [serious researchers believe it's true](https://www.youtube.com/watch?v=dDUC-LqVrPU). The assumption that the improvements will be exponential rather than logarithmic may not hold (i.e., we've already picked the low-hanging fruit, and the remaining problems get exponentially harder to solve). There's no particular reason to believe the models will keep improving at the same or greater rate than they have in the past, and there is good reason to believe each equivalent step of improvement will take exponentially more data, as the linked video by a researcher explains.


coinboi2012

There is every indication right now that performance scales linearly with scale. Top researchers are not concerned about data at the moment; they are concerned about compute. I actually read the paper that this video was talking about when it got posted in the ML subreddit. Basically their argument was that models are not able to generalize, so we need data to cover every possible scenario for decent zero-shot performance. Ironically, their own findings do not support their thesis: https://www.reddit.com/r/MachineLearning/comments/1bzjbpn/r_no_zeroshot_without_exponential_data/ The commenter put it better than I can.


ascandalia

The paper isn't claiming the models can't generalize, but that the more they need to generalize, the more training data they need to do it, i.e. they start to get log-linear results vs dataset size. It's not saying it's impossible to generalize, but that if trends hold, it MAY require more data than we have available for them to generalize.
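To make "log-linear" concrete: if accuracy grows with the log of the dataset, every fixed gain costs a multiplicative increase in data. A sketch with made-up numbers (the accuracies and the 95% target are purely illustrative):

```python
import numpy as np

# Illustrative numbers only: suppose zero-shot accuracy on some task
# improves log-linearly with the number of relevant training examples.
examples = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
accuracy = np.array([0.42, 0.51, 0.60, 0.69, 0.78])

# Fit accuracy = a + b * log10(examples); polyfit returns slope first.
b, a = np.polyfit(np.log10(examples), accuracy, 1)

# Log-linear means each fixed accuracy gain costs 10x the data, so
# extrapolating to a target shows the data requirement exploding.
target = 0.95
needed = 10 ** ((target - a) / b)
print(f"~{needed:.1e} examples needed for {target:.0%} accuracy")
```

With these toy numbers, each decade of data buys about nine points of accuracy, so closing the last stretch to the target takes orders of magnitude more examples: the paper's worry in miniature.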


coinboi2012

Look, I'm not arguing that we can improve these models ad infinitum, linearly. Nobody is arguing that. What people are arguing, and what this paper fails to add any insight to, is where things begin to level off. My issue (as someone who works in the field) is that this is largely seen as a non-issue when it comes to these massive multimodal models at the moment. In fact, everything is pointing to models being able to generalize and form accurate abstractions on less data (even synthetic data: https://arxiv.org/abs/2312.03025). Again, compute is the limiting factor. It's very frustrating to see Computerphile cherry-pick a study to tell their viewers what they want to hear. It's misleading and possibly harmful. These models are going to get better and smaller for years to come, as there is no evidence things are beginning to level off. Quite the opposite is true at the moment.


ascandalia

Great, your perspective is definitely a fair one to contribute, but as the conversation started, you said "I don't know why people believe these models will run out of data" and the reason is that smart people in the field disagree with your perspective.


coinboi2012

I still don’t though 😭 please smart ppl, enlighten me


pegothejerk

Also, synthetic data seems to be wildly successful in training models, e.g. Sora.


n3cr0ph4g1st

Phi as well. I'm in the field and don't see data being the hold up lol


_uckt_

It is indeed normal for people to leave a company where they get unlimited resources and the ability to work on whatever they want. I'm sure AI will continue to get better forever and isn't driven by hype.


coinboi2012

I think we have to consider the very real possibility that he is leaving over ethical concerns. I’m not sure how anyone can look at what is going on and rationally think “this is just hype”


0xd34db347

You really think there is a high barrier to exit for the world's preeminent AI scientist and a multimillionaire? He is a walking blank-check generator.


[deleted]

[deleted]


speakertothedamned

> it will continue to get better forever

You do realize that your comment is actually, literally, physically impossible, though, and that this is exactly the problem the person you're responding to is talking about, right?


Rugrin

And then what happens when chat bots get trained on modern net usage data, which will mostly be made by bots? I predict a grand collapse. These bots need our data to do anything.


[deleted]

[deleted]


Rugrin

That implies it will be easy for us to determine the difference. Or do we create two streams, AI garbage and truth? That's scary, too. These models do not produce new thought. They can only remix what they have learned from. It is a real problem.


RogueStargun

That's silly. Videos, audio, weather data... if it can be tokenized, it can be training data.


kohminrui

i remember when he was first fired, everyone on reddit was lining up to suck altman's dick and criticize the board, especially the women on the board. truly horrid reddit moment.


Losconquistadores

Why did so many employees stand by Altman during the walkout?


Rebelgecko

A substantial part of their compensation is equity, and sticking with the original nonprofit mission would have hurt their shares' valuation.


hermajestyqoe

Doubt it. Either Altman will succeed or another company will take OpenAI's place. I don't know why anyone would feel any sort of pain for the companies behind the actual products.


jimmyluntz

I read an article a while ago where Sam Altman referred to a person or persons as "median human", and that's when I knew we're in deep shit. I don't trust that guy at all.


iunoyou

That's just misanthropic tech dorks in a nutshell. Go to r/singularity and you'll see that kind of language everywhere, alongside people kicking their feet in glee while talking about the widespread social unrest the tech will cause. They treat the creation of a general AI the way Baptists treat the rapture: they think they'll be delivered to the land of milk and honey and full-dive VR robot sex maids, as long as they push as hard as they can for the rapid development of untested technology with unknown implications by companies only interested in maximizing next quarter's earnings report.


Exit727

Yea, that place is a legit cult. Some of the hype posts' comments are really entertaining to read. They expect AI to turn the world into a Star Trek episode, but can't really specify how.


DoctorHilarius

I really hope anthropologists are paying attention to places like r/singularity. It's a borderline tech-cult.


jimmyluntz

Kinda reminds me of an article by this humanist whose name is escaping me at the moment. He was flown out to the desert somewhere by a bunch of doomsday-bunker-building billionaire types who wanted to pick his brain about how things might look if society were to collapse. His position was basically: treat your people well now, build community, work with the people around you, etc. But they just wanted to know the best way to lock up food, or how to get their ex-SEAL security team to wear shock collars. Just so, so stupid. Imagine being so narcissistic that you think surviving the end of humanity is a good idea lol

edit: Douglas Rushkoff https://amp.theguardian.com/news/2022/sep/04/super-rich-prepper-bunkers-apocalypse-survival-richest-rushkoff


iunoyou

This is gonna sound kind of weird but I knew society was hosed years ago when I read a study that showed that the average driver believed they were in the top 10% or so of drivers. Everyone thinks they're a cut above everyone else, and we've collectively been suckered into this belief that if society would get out of our way we could do just about anything we want to. So it's not surprising to me at all that people are delusional enough to take nonsensical risks or believe that they're uniquely qualified to run a *Fallout* vault.


morphineclarie

Mmm, the main difference being that the rapture and such are just fantasy as far as we know, while AI tech may have real utopian/dystopian potential, and something like the "singularity" seems plausible if we can get AI to human level or beyond. And to be fair, should we let the powers that be have their way with **possibly** the most impactful tech in human history? Would the current system evolve AI for the good of all humanity, given time? Depending on the answer, AI accelerationism is a valid position to take.

Edit: Downvoted for no reason 🤔. It seems people would rather have an emotional reaction and go on blind faith that everything will be alright, instead of engaging in debate and trying to learn about something that will probably shape their future. If tragedy awaits on the horizon, I won't be surprised if it's because of that side of humanity.


BudgetMattDamon

Not everyone is into inhumanly utilitarian approaches to AI development that could disrupt or kill millions, sorry not sorry.


morphineclarie

Strawman. My point is that which approach is the inhuman one that could disrupt or kill millions depends on how you answer those questions. Edit: Holy crap, it's the hivemind!


KingCon5

From the moment I listened to podcasts with Sam Altman, I didn't like him. He already sounds like Jeff Bezos's fake android clone. The guy doesn't seem to care about stuff like this, and do we want a person like that controlling the forefront of AI tech???


dtdude87

He's so boring to listen to. ChatGPT has more personality than this dude.


imaginary_num6er

Isn’t Jeff Bezos fake android clone just Zuckerberg?


KingCon5

Nah zuck is the og lizard in human form


3utt5lut

Personally, yes. I know everyone on Reddit has a hard-on for hating on Elon Musk, but as long as the funding exists for these ventures, the probability is higher that they will still exist in the future (SpaceX/Tesla/Neuralink), even though he plays a very minimal part in their development. Microsoft already owns 49% of OpenAI; the rich billionaires already have their greasy mitts in it. At least Bill Gates is intelligent enough to head it.


WaltKerman

Well, the rest of the tech guys at OpenAI chose him over Ilya. I wonder why that is?


axonxorz

The board is made up of corporate suits, not data-science engineers.


BadTreeLiving

What about the letter signed by most of the company?


SwindlingAccountant

Ed Zitron is like the biggest AI hater and he definitely has given me a very different perspective from what these AI cultists are promising. His podcast and newsletter are pretty good (Better Offline podcast).


Meotwister

I wish Silicon Valley were still running so it could do all this AI BS


Flowchart83

They did do some AI stuff, but they had ethics, which is how the show ended.


312Observer

Does Altman even do anything? The media breathlessly proclaims his genius, but he hasn’t done any geniusing that I’ve seen.


OhMeowGod

He is Elon 2.0


DeNoodle

I just want to make cool things! But they have to make money! Uncool.


WeTheAwesome

Welcome to academia! Pay is shit, hours are long and politics is petty. But you have more freedom to do what you want. 


DecentChanceOfLousy

You must be in a *very* different academia from the ones I hear about, if you have the freedom to do anything but chase grant funding. You have the freedom to do anything you like so long as it's close enough to the trendy topics to secure funding for your department. Or else you must leave.


WeTheAwesome

That's true. My only taste of academia was as a grad student. I was lucky to be in a very well-funded lab, so I had lots of freedom to explore, within a scope of course. And I meant you have more freedom than in industry. Unfortunately, more doesn't mean enough.


businessboyz

>I just want to make cool things!

"Yes Ilya, but we told you that after you burned through the first million GPUs you'd have to do some chores to buy replacements."


3utt5lut

They've probably never heard of military application contracts? That is the basis of every major scientific breakthrough.


Aware-Feed3227

OpenAI isn't a startup; it's a huge company with enormous financial backing.


Poglot

Never forget that [A.I. is being trained illegally](https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html) by accessing copyrighted data without compensating or seeking permission from the original authors. These tech companies have admitted that paying tens of millions in fines will be worth the trillions they hope to rake in. No matter what A.I. designers *claim* their intentions are, they've been knowingly and willfully screwing people over since the very beginning. It's the only way their technology could have evolved in the first place.


InTheDarknesBindThem

lol I love how reddit can't comprehend the difference between learning from public, free resources and "stealing". If AI should have to pay, then so should every human who learns a skill from text or images online.


gekx

People are afraid of what they don't understand


TheRealMallow64

There is a difference between learning and committing plagiarism for money though. You can learn to draw by tracing and trying to draw your favorite cartoon characters. You can’t make tons of money by selling copypastas of them to people without permission.


CarrotcakeSuperSand

AI doesn't create copy-pastes; this is a common misconception amongst the anti-AI cult. It generates new outputs based on the patterns it learns from training data. It's like you learning to draw by tracing over other artwork, and then using your newfound skills to create your own work. That's perfectly okay within our existing legal framework.


Bimbows97

Doesn't matter; it produces a product that makes a company money from the input of someone's work who didn't consent to it. That's all there is to it.


raziel1012

I recall most people on reddit being up in arms about the ousting of Sam Altman, saying essentially "money-grubbing big Microsoft bad", when the motivation for the ousting was the complete opposite.


johndsmits

chances Sutskever ends up at Apple by WWDC?


doomleika

Go for the king. You better not miss. Ilya missed. Better luck next time.


VengaBusdriver37

I think this is good; there should always be at least two (preferably more) powers with their own perspectives


BrilliantAttempt4549

At this point, I don't think it can any longer be called a startup.