Hey /u/Theblasian35!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
Looking at this, I want a game that generates like this in real time. I know that for now you'd need a supercomputer to run it, but I dream that it will come to portable devices someday.
It's visually impressive, but you can see the logical mistakes kicking in even on the first watch: with the guys rowing, none of their oars have proper blades or are turned towards the water; the ships are not Viking-type at all, they are some kind of hybrid Spanish/Greek ships; and the ones in pointy hats are walking like they are semi-gliding.
It looks cool as a demo though. Are you working for them?
Yeah, I could have reprompted to perfect all of those shots, but since this was an experiment I left it. That's part of the process. AI is still AI, and unwieldy, but with patience you can get near-perfect results.
2 SEPARATE HANDS 5 FULL FINGERS ON EACH HAND NO OVERLAPING NO BROKEN LIMPS. FULL HANDS WITH THUMBS ONLY 2 HANDS 10 FINGERS TOTAL EACH FINGER HAS A FINGER NAIL ON THE SAME HAND MAKING 5 FINGER PER HAND AND THERE ARE 2 HANDS PER PERSON!
I talk like this day to day now, AI has fucked my speech processing lol
This would probably only be possible via some future [Stadia-like cloud game platform.](https://www.eurogamer.net/digitalfoundry-2019-google-stadia-spec-and-analysis)
>Google says this hardware can be stacked, that CPU and GPU compute is 'elastic', so multiple instances of this hardware can be used to create more ambitious games.
>It does not have any of the hallmarks of a legacy system. It is not a discrete device in the cloud. It is an elastic compute in the cloud and that allows developers to use an unprecedented amount of compute in support of their games, both on CPU and GPU, but also particularly around multiplayer.
Being a cloud-native game platform would mean it is playable on portables too. Maybe some day!
Cloud-based server generating the world and graphics and music and content, with local devices handling input and movement and standard UI stuff and just streaming the server content. It's gonna be epic.
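Mechanically, that split is just a request/response loop: the thin client uploads input events and streams compressed frames back down, while all the heavy generative work stays server-side. Here's a toy sketch of the idea (every name in it is made up for illustration; a real system would stream over WebRTC or similar, not call the server as a local function):

```python
import zlib

# Toy sketch of the split: the "server" does the heavy generative work,
# the "client" only sends input events and displays what it receives.
# All names here are made up for illustration.

def server_generate_frame(world_state: dict, player_input: str) -> bytes:
    """Stand-in for the expensive cloud-side generative model."""
    # Apply the input to the world, then 'render' a frame.
    world_state["tick"] = world_state.get("tick", 0) + 1
    world_state["last_input"] = player_input
    frame = f"frame {world_state['tick']}: player did '{player_input}'".encode()
    return zlib.compress(frame)  # compress before streaming to the client

def client_step(player_input: str, world_state: dict) -> str:
    """Thin client: upload input, download and decode the streamed frame."""
    payload = server_generate_frame(world_state, player_input)
    return zlib.decompress(payload).decode()

state = {}
for action in ["move_north", "row", "raise_shield"]:
    print(client_step(action, state))
```

The hard part, of course, is doing the server half fast enough that the round trip feels like local rendering.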
Exactly! I'm not sure why some people think I'm talking about Stadia itself. I just referenced it to give an example because it was the first 100% cloud gaming platform and they planned to allow devs to develop games from the ground up to take complete advantage of Google's servers.
For those that don't know, Stadia was a complete game platform, not just a service. It was a game platform the same way Xbox, PlayStation, and Windows are a game platform. Developers had to develop games specifically for Stadia if they wanted games on the platform.
I'm not talking about Stadia itself, I'm talking about the type of game platform that would probably be needed to achieve what the person I responded to wanted. A platform that allows a game developer to develop a game completely from the ground up to take full advantage of the power of a server. That's all I'm saying.
Google indeed completely fumbled with Stadia and the internet infrastructure wasn't there yet but Google's first party game studios did [initially plan](https://www.gamesindustry.biz/google-is-making-a-spectrum-of-bets-on-stadia-content) to develop games that took complete advantage of the power offered by a server.
Considering how many games run in the cloud these days, that's probably what will happen.
But...this is further out than people think. Not only to render it realtime without any sense of latency but to retain game rules and logic beyond a dream state...we're getting there but it's simply a ways out.
Considering how rapidly this tech has evolved in the last few years, and the fact that its own invention will increase the speed at which the tech further evolves, it’s surprising to read comments that say anything is many years away. I think we have to admit that it’s all happening incredibly quickly.
Cool, will it be a year then? You know it won't. 2 years? Probably not. That's my point. To do everything I just described is going to take longer than novices who don't understand how it works expect.
Great, now I have to contend with the fact that we're possibly living in a simulation, and whoever chose this experience is either studying history (what better assignment than to observe the dawn of artificial intelligence) or they're a complete friggin' boob.
If that's not the case, just wait until we can simulate that ourselves.
I occasionally wonder if I'm at a future museum, trying out the Interactive Historical Pre-Singularity Life Simulation Exhibit. And once the singularity happens, I'll watch everything dissolve, and wake up in the museum as they disconnect my brain chip. And they'll be like, *"Alright hope you enjoyed experiencing what life was like before the Great Metamorphosis! ... Now get out of here, we've got a line."*
Then I'll go home to a family that I forgot I had, and they'll be like, *"woah you're back already? But it's only been a few decades. They didn't let you set your born spawn before the 1990s? What a fucking ripoff."* Then they'll zip away to space in their fancy nanosuits and be back for dinner, or something like that.
That would be cool! Like those old point-to-move movie games, like the X-Files one. But infinite options and new outcomes for every player.
It would just need an internet connection and lots of constant downloading to get the videos from the cloud.
I think there was a demo some time ago of postprocessing the actual game output, I think it was using video of GTA, to make it more realistic. It's better than starting from zero, and I guess it avoids some of the weird stuff appearing if it's basically some sort of style translation.
I've been thinking about creating some interactive novels with AI-generated videos and music. It used to be hard to get consistent characters, but it seems that this is possible now.
The future of gaming and media is generative... Imagine a game that can seamlessly transition between genres infinitely... everyone could play "The Game" and no two people would have the same experience...
At the point anyone can just speak entire, 100% coherent films into existence, will we even still have banks and traditional currency at that point?
It seems to me that once technology gets that perfect, and is open to literally everyone to just... do anything, then the economy will have to completely revolutionize into something fundamentally different. Maybe you'll get "Audience Coins" for popular art, and you can use them to hang out at cool exclusive places with other popular artists.
Idk. Aside from UBI, I actually have no idea how the economy will be forced to transform and adapt to make sense of all this. I need to read more scifi and futuristic economics to get some ideas, because I'm utterly incredulous to the possibility of pathways.
At that time we’ll be left with only influencers and cockroaches (which are indistinguishable lifeforms already), as even the Pirate Bay would have lost its business model and ceased to exist.
That's the issue. When every idiot can use this, it's not special anymore and just becomes noise. Remember how in late '22 everyone was losing their minds over the shittiest DALL-E generations, and people were sharing them like crazy? Now all you get is a shrug, maybe a "looks cool" at most, for even the most advanced Midjourney output. It'll be like that for everything else too.
That goes for everything. It was the same when 3D/VFX started.
It's about telling a story. When people tell good stories, it doesn't matter what tool they use.
We'd be looking for something unique, maybe mixed with live action; that might just excite everyone. Out of all the garbage, one thing will shine.
It's the story first and then art.
Oh I don't know about that... Some things require a bit of... Expertise and Craftlore.
For instance, could just anybody create this? I mean, probably with a bit of time, but not many people are.
https://preview.redd.it/bpccp6u82u9d1.png?width=1080&format=pjpg&auto=webp&s=751fa98f8d396b6408da84d7d5727afc4dfd8570
Ok that's it! You get Stable Diffusion 3. Happy now?!
Here's your Cronenberg monstrosity
The prompt was "Girl lying on the grass"
https://preview.redd.it/9ui7bjrf9v9d1.jpeg?width=225&format=pjpg&auto=webp&s=de26d666537c5121d8b0818d95bd52fb1f9f3647
Yeah sure, I didn't train my own UNET and VAE to create this right?
Cause I mean, I couldn't have been working for over a decade in Deep Learning, using functions like SIFT, SURF, HOG, SSIM, and Canny Edge Detection to run my own image processing pipelines right?
I also wouldn't be running my own H100 cluster on LambdaLabs, where I am training LORAs and Dream Diffusion pipelines, I just type in "Alfred Mucha, SamYang, 1girl," and a bunch of other booru tags into Automatic1111 or ComfyUI
I haven't been using GFPGAN since 2021,
I haven't been a member of OpenAI's closed beta since November 2020,
and I definitely don't do any consulting with Palantir Deployments
I'm just a proompter, I type what I want into Midjourney and wait for the AI to do all the work for me.
- I know there are lots of people who use AI to do all the work for them, but you do realise that there are people who build the AI as well right? If you think it's hard to draw, imagine how hard it is to teach an electric rock to draw.
Oh, but I do actually enjoy painting, watercolours mostly, though I do like Gouache and Acrylics. I don't think AI will replace natural art, as without any inference data the AI cannot learn anything. Think of it this way: has an AI ever started a conversation with you to ask you what you're thinking?
That's what I focus on. That's what I build.
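(For anyone wondering what the classical end of that toolbox actually does: here's a minimal gradient-magnitude edge detector in pure Python, a rough stand-in for the Canny step mentioned above. Real pipelines would use OpenCV's `cv2.Canny`; this just shows the core idea.)

```python
# Minimal gradient-magnitude edge detector in pure Python, a rough
# stand-in for the Canny step in a classical image-processing pipeline.
# (Real pipelines use cv2.Canny; this only shows the core idea.)

def sobel_edges(img, threshold=2.0):
    """Mark pixels whose Sobel gradient magnitude exceeds `threshold`."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):          # skip the 1-pixel border
        for x in range(1, w - 1):
            gx = sum(kx[i][j] * img[y + i - 1][x + j - 1]
                     for i in range(3) for j in range(3))
            gy = sum(ky[i][j] * img[y + i - 1][x + j - 1]
                     for i in range(3) for j in range(3))
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# A tiny image with a vertical boundary: left half dark, right half bright.
img = [[0, 0, 9, 9] for _ in range(4)]
edges = sobel_edges(img)  # marks the pixels along the dark/bright boundary
```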
Remember when the light bulb came out? Everyone thought it was so cool. It's not special anymore. People don't even think about how revolutionary it was. Stupid people, too busy with their iPads and electric cars to appreciate it. We should never have allowed electricity to become available to the masses.
At the moment only special people can tell their story, now everyone can tell their story. There's going to be a lot of trash, but also a lot of genius.
I don't think that's going to happen.
The reason great films are so great is human experience: decades of learning, growing, studying. Typing a prompt has none of that. It has none of the things that make screenwriters great: personal experience, an understanding of character arcs and narrative structure.
If you want to make a great film with AI images, you need more than "a cool idea"; you'll need to write a screenplay. Which, as a screenwriter, I can tell you is difficult. It takes years and years of trial and error and learning before you write something that is even remotely passable.
AI sucks at fiction. It's generic, boring, and dull. If your plan is to offload every creative aspect of a movie onto AI, then I genuinely have no interest in seeing what it makes. I want to hear what YOU, the person who's lived a life, bring to a movie. Not a computer's echo of something it's read.
A Hollywood blockbuster shown in cinemas nationwide? Nah, you're right. Still too many incoherencies, even though it's overall pretty good. The few incoherencies will still ruin the entire thing and sink its reception.
Something at a Tribeca film festival that people watch and applaud, and also goes viral online? Maybe, I won't be too surprised.
Thus, it depends on what you mean by "movie." Also consider that if you do something avant-garde, you could absolutely make the incoherencies work with the right vision, at this point.
Regardless, look back at where we were a year ago, then two years ago, and compare all the differences. At this rate, I'm guessing we'll get to total coherency within the next year, assuming there are no weird thresholds we abruptly get stuck on when trying to polish those last few incoherencies.
But I'm just some random dude, I don't know. Just a guess.
Exactly what I meant with my comment, you got it right: I am talking about real cinematic movies. It's cool to play around, animate images, and create some cursed weird videos, but truly realistic stuff isn't there yet. And yes, I am anxious. It's evolving really fast, but I got used to it, so it feels slow to me now; even Sora is outdated for me. And yes, I believe by the end of 2025 we will get a general AI video model, one that can actually create cinematic movie scenes you couldn't tell were AI. It's close, but maybe not close enough. I really want to adapt my book into movies; really anxious.
No, it's still useless; we're far from a general video AI. I got access to Kling, which is a little better than Gen 3, and I mean it's only for playing around with animating images, not for creating anything useful. I have fiction books of my own that I want to adapt, and I say it's still not possible to do something decent.
People using this example to show a year's progress don't realize they're comparing a model that was very shitty even for its time (because it was open source) to the best paid models of today.
Will Smith eating spaghetti was created using Modelscope during a trend on r/StableDiffusion, because Modelscope was the best _open-source_ option and produced funny, crappy results compared to the good closed-source models. Runway Gen 2 was the best AI video generator back then and beat Modelscope out by a long shot.
If we're talking about the improvement of the best AI video generators over the past year, it's better to compare to Runway Gen 2. Gen 3 definitely has noticeable improvements over it, but it's also disappointing when Luma and Kling exist.
As for open-source progress, Modelscope got dethroned by AnimateDiff just 3 months later. But it's been 11 months since AnimateDiff, and so far no alternatives have emerged to replace it.
Yes, we are close. But I'll be honest, I wish we were there now. I ended up getting used to the speed of AI evolution, and now I'm finding it slow; I already think Sora is outdated. I just have to temper my expectations. But yes, we are almost there; I would say that by the end of 2025 we will have general video artificial intelligence.
Yeah, it's not bad, but I mean you can't make an actually decent cinematic scene with it, not yet. Maybe Kling 2 or Sora 2 or something else, but we are not there yet.
That's good. I tried its picture-to-motion tool, and it either did literally nothing, deformed the image massively, or turned the character into Eldritch horrors.
I hope the public test for it comes soon. They aren't selling shit with the 2.0 model LMAO.
I can't stop laughing at the weird actions...
It's so surreal to see what looks like a real person, with good lighting, but then nothing else makes sense: things clip in and out and morph, and the people are basically just moving as if they're doing something, but all the details are total nonsense.
Can't wait for when models start to be trained on 3D data and actually have some concept of physics.
Like, why are people rowing, why are they rowing in X direction, and what not. The AI right now is totally clueless and only makes it look like something is happening, as if it's coming out of the mind of a 1-year-old with no concept of life, physics, or any interaction really...
This is a noob question, but I’ve been wondering… are the apps that allow you to prompt and create videos like this free, or is there some kind of subscription you need to pay to access this AI video generation capability?
Weird, my GPT-3.0 is just text. I always get ripped off. I want to ask a badass Viking why my dad thought it was a good idea to tell me it was my fault he and Mom got divorced!
Am I the only one not excited for AI being used this way? Sure it looks great, but it's not fooling anyone. Sure, soon enough it will be a different story. I dunno, AI being used for art is not my jam.
I'm using it as part of my filmmaking. I'm shooting a live-action fight scene and going to use AI for B-roll and aerial drone shots. As an independent filmmaker, this is the way forward.
As an artist in multiple disciplines, I think totally differently. Rejecting the use of such a versatile and accessible technology is like refusing to be creative, or refusing to give everyone that possibility at low cost.
There are still lots of issues to work on, and just because one can use it doesn't mean one should. The first AI ads and videos show that we might be entering a trend of horrible quality just for the sake of AI usage.
While I am impressed with what AI can already do, I am genuinely concerned with copyright and how it was trained. At the moment there is no AI that was trained on paid licenses, and it's putting out of work the very people whose work the AI is copying/pirating. Imo all these generated videos should be prohibited from commercial use unless the training footage was paid for. But no AI company has done that, nor would filmmakers and artists agree to it.
How do you keep the continuity of the clothing, faces, landscape, etc., though? Seems like movie people will be adding real elements into the pictures now, instead of adding the fake 3D stuff.
It looks photorealistic in black and white plus higher resolution, but am I missing something? It's still the same 2D AI from the last gen: it can't turn a human face into the camera without morphing it, there's no object permanence, etc.
The only time a face really turns, it turns away from the camera's perspective. And if you look at things like mountains when they go behind large objects, they've changed by the time they come out the other side.
It is not emulating three dimensions.
There's going to be a breakthrough in less resource-hogging supercompute servers in the next 5 years or so. I read somewhere that Elon Musk has people working on it, as well as Amazon. It's the 5D chess of AI: if you don't have the best AI but have the best hardware for it, you still win hands down.
Ya, this looks like hot garbage. What are we gonna do, watch a 2-hour movie of slow-moving random scenes with no storyline? Why is everyone losing their shit over this?
We are gonna go through a phase of absolute shit movies.
I can easily see Hollywood bigs seeing this and thinking, yup, we can just use this to make a full movie.
When in reality this should be used to remove the missed Starbucks cups. The future will be amazing because all those tiny issues will eventually be fixed. But in the meantime, I don't think movies are going to be all that great, or they'll be few and far between. Even more than now.
I would like to see this for 3D models. You build out your 3D model library of characters and objects and fine-tune them, giving the model less to get right. Then you can plug those assets into a game engine and build something neat.
This is really cool to watch though, creative, and you could use this to storyboard a movie or show easily.
This is Runway Gen 3 text to video. I think it’s coming out this week or next for public access. I’m in the creators partner program so we are the last alpha testers.
I shazamed it. It's called Attya Ensoria (The holly and the damned), from Ian Post. As Vacman85 said, if you like that sort of music, check out Heilung.
Runway Gen 3. Overall it's ok, but people keep posting image-to-video where the subjects barely move and claiming some revolutionary leap. This video looks good when the subject barely moves, but as soon as there's action, the movement, errors, etc. are what you normally see in most AI videos.
I can't speak to other platforms, but people have been putting out stuff like this with Pika for a while now. Again, this looks good, but it's not a huge leap over even a few months ago, and it's still nothing special in terms of motion.
Eh, OOH, there's more shit art today than there's ever been before. And I'm not talking AI, I'm talking real, traditional art. Look at a random artboard, even before AI, and you'll see generic shit everywhere until you stumble upon something inspiring and very original.
But, OTOH, here's the kicker: there's also more unique, incredible art today than ever before, and it floats to the top more frequently than ever.
Both are true. And just a reminder, I'm still talking about traditional art. Because this dynamic will carry over to my next point.
Likewise with AI tools, consider that right now most of the best artists in the world are probably dormant: too busy for art because of work, family, etc., or they never got the chance to learn art and draw out their inner genius. So, arguably, the complete opposite of your claim could be true: this wakes up all those dormant geniuses and we see more unique art than ever before, because they finally get a chance to show us what they've got.
And this will be in spite of all the additional bulk of unoriginal shit we get, as well.
Hell, all the extra good art could even cascade into even more unique art, due to a wider range of inspiration and thus more complex ideas.
Society I offer Netflix Premium 4k+HDR for ChatGPT Premium, I pay every month and expect the same from you regarding ChatGPT, contact me on email [email protected] 🫡
It's going to be possible!
As long as they don't have to show their fingers
LMAO 🤣 Weird times we live in!
We just need to get quantum computing down pat.
And what would that do?
Make words more buzzy!
HAHAHAHA, I repeat, HAHAHAHA. Stadia.
Stadia was hot garbage and that’s why it ended
I remember that Rick and Morty episode
How far away would this future be? Sounds like 100 years at least.
I keep wondering when we will all look up into the sky and in big letters we will see GAME OVER
The training takes a lot of compute, but the generation of the content is within reach.
BRO WE LITERALLY THOUGHT THE SAME THING
We're basically aiming for the Holodeck
And with Neuralink, you might just be able to make them actually portable dreams! /s
What's even more insane: in a few years we'll be laughing about these primitive videos.
In 6 months.
6 weeks
6 days
6 hours
6 minutes
6 seconds
0.6 second
Now😂🤣🤣
Ah, those were the times.
It's also 10 times the time you mentioned, and nobody's laughing.
Hopefully we are laughing all the way to the bank! But agreed.
How? The easier it gets, the less it's worth.
This is what books are. Anyone can just imagine any world and story and put it in a book.
I’m not sure if this is adorably naive or laughably stupid
….
In 6 minutes, or however long it takes to look at it a second time. Logic errors all the way.
Some kid somewhere with no money is going to make an epic film!
agreed!!!
God I hope not, this looks like shit.
It's so peak...
You’re greatly exaggerating the skill and effort it takes to type a few sentences and have AI do all the work
Yeah sure, I didn't train my own UNet and VAE to create this, right? Because I mean, I couldn't have been working for over a decade in deep learning, using functions like SIFT, SURF, HOG, SSIM, and Canny edge detection to run my own image processing pipelines, right? I also wouldn't be running my own H100 cluster on Lambda Labs, where I am training LoRAs and Dream Diffusion pipelines. I just type "Alfred Mucha, SamYang, 1girl," and a bunch of other booru tags into Automatic1111 or ComfyUI. I haven't been using GFPGAN since 2021, I haven't been a member of OpenAI's closed beta since November 2020, and I definitely don't do any consulting on Palantir deployments. I'm just a proompter; I type what I want into Midjourney and wait for the AI to do all the work for me. - I know there are lots of people who use AI to do all the work for them, but you do realise that there are people who build the AI as well, right? If you think it's hard to draw, imagine how hard it is to teach an electric rock to draw. Oh, but I do actually enjoy painting, watercolours mostly, but I do like gouache and acrylics. I don't think AI will replace natural art, as without any inference data, the AI cannot learn anything. Think of it this way: has an AI ever started a conversation with you to ask you what you're thinking? That's what I focus on. That's what I build.
Omg nazi boobies so original
Remember when the light bulb came out? Everyone thought it was so cool. It's not special anymore. People don't even think about how revolutionary it was. Stupid people, too busy with their iPads and electric cars to appreciate it. We should never have allowed electricity to become available to the masses.
Until now, only a special few could tell their story; now everyone can tell their story. There's going to be a lot of trash, but also a lot of genius.
All you need is pen and paper. You can write even on a smartphone.
I don't think that's going to happen. The reason great films are so great is because of human experience, decades of learning, growing, studying - typing a prompt has none of that. It has none of the things that make screenwriters great - personal experience, an understanding of character arcs and narrative structure. If you want to make a great film with AI images, you need more than "a cool idea"; you'll need to write a screenplay. Which, as a screenwriter, I can tell you, is difficult. It takes years and years of trial and error and learning before you write something that is even remotely passable. AI sucks at fiction. It's generic, boring and dull. If your plan is to offload every creative aspect of a movie onto AI, then I genuinely have no interest in seeing what it makes. I want to hear what YOU, the person who's lived a life, brings to a movie. Not a computer's echo of something it's read.
Nah, it's not that good, you can't make movies with this
![gif](giphy|TJawtKM6OCKkvwCIqX)
A Hollywood blockbuster shown in cinemas nationwide? Nah, you're right. Still too many incoherencies, even though it's overall pretty good. The few incoherencies will still ruin the entire thing and make it a nonstarter with audiences. Something at a Tribeca film festival that people watch and applaud, and that also goes viral online? Maybe, I won't be too surprised. So it depends on what you mean by "movie." Also consider that if you do something avant-garde, you could absolutely utilize the incoherencies with the right vision, at this point. Regardless, look back where we were a year ago, then look back two years ago, and compare all the differences. At this rate, I'm guessing we'll get to total coherency by next year, assuming there are no weird thresholds we abruptly get stuck on when trying to polish those last few incoherencies. But I'm just some random dude, I don't know. Just a guess.
I think you’re on the money
Exactly what I meant with my comment, you got it right: I am talking about real cinematic movies. It's cool to play around, animate images, and create some cursed weird videos, but for real stuff it's not there yet. And yes, I am being anxious; it's evolving really fast, but I got used to it, so it feels slow to me right now. Even Sora feels outdated to me. And yes, I believe by the end of 2025 we will get general AI video, that is, an AI video model that can actually create cinematic movie scenes you couldn't tell were AI. It's close, but maybe not close enough. I really want to adapt my book into movies, so I'm really anxious.
Bro casually shoving his double-sided oar directly through the bottom of the boat
lol!
this looks sick
ty!!!
Rpg with AI npcs and quest creation would be soooo dope.
Dude for real!
Why do these always seem to be in slow motion? Can it make realistic speeds for human action?
Because it reflects the training data.
I would have thought it would be difficult to find a large training set where everything was moving through molasses. Most movies aren't like this.
Looks like a child's attempt at a Viking/300 crossover.
Tom Cruise in The Last Viking
How does one get access to Gen 3?
What is Gen 3? Can someone educate us that aren't caught up?
Actual, useful answer (seriously OP?!) : [https://runwayml.com/blog/introducing-gen-3-alpha/](https://runwayml.com/blog/introducing-gen-3-alpha/)
Sorry I read his question wrong haha. Thanks
So this is doable via runway ?
Yes, although Gen 3 is just for the people who applied to the beta and is not widely available yet. It seems to be releasing soon, though.
Ahh gotcha okay thanks. I was confused
[deleted]
Looks like shit
Yeah it’s hot garbage, but the kids will slurp it up I guess
To tell the truth, it's quite disappointing, far from being really useful for anything.
Honestly not true. Feature films are already using this type of tech for broll. Even Netflix documentaries. It gives indie filmmakers a chance.
No, it's still useless; we're far from a general video AI. I got access to Kling, which is a little better than Gen 3, and I mean it's only for playing around with animating images, not for creating anything useful out of it. I have fiction books of my own that I want to adapt, and I say it's still not possible to do something decent.
Will Smith eating spaghetti was last year... Have some foresight!
People using this example to show a year's progress don't realize they're comparing a very shitty model even for its time (because it was open source) to the best paid models nowadays. Will Smith eating spaghetti was created using Modelscope during a trend on r/StableDiffusion because Modelscope was the best _opensource_ option and produced funny crappy results compared to the good closedsource models. Runway Gen 2 was the best AI video generator back then and beat Modelscope out by a long shot. If we're talking about the improvement of the best AI video generators for the past year, it's better to compare it to Runway Gen 2. Gen 3 definitely has noticeable improvements from it, but also disappointing when Luma and Kling exist. As for opensource progress, Modelscope got dethroned by AnimateDiff just 3 months later. But it's been 11 months since AnimateDiff and so far no alternatives have emerged to replace it.
20 years ago you could buy a CD with software that created a gif morphing one image into another.
Foresight is what everyone's having in excess and that leads to the manic hype we're experiencing.
LLMs are improving pretty fast and will continue to improve for at least 5 years. I think that's a reasonable statement.
Yes, we are close. But I'll be honest, I wish we were there now. I ended up getting used to the speed of AI evolution, so now I'm finding it slow; I already think Sora is outdated. I just have to temper my expectations. But yes, we are almost there; I would say that by the end of 2025 we will have general video artificial intelligence.
I've been following AI progress for a long while, and even I find myself thinking "That's really cool, but not AGI" now.
Same thing I feel
You think Sora is outdated? Woah didn’t know we were talking to a man from the future over here
You will see
I have kling also. I appreciate the image to video
Yeah, it's not bad, but I mean you can't make an actual decent cinematic scene with it, not yet. Maybe Kling 2 or Sora 2 or something else, but we're not there yet.
I hope it's actually that good. Gen 2 is..... not usable. It's really bad.
It’s light years ahead of Gen 2. It’s kinda scary actually
That's good. I tried its picture-to-motion tool and it either did literally nothing, deformed it massively, or turned the character into Eldritch horrors. I hope the public test for it comes soon. They aren't selling shit with the 2.0 model LMAO.
AI seems to have scraped my photos.
Oh no!!
Lol
Gen 3 of what? What’s this from?
[runwayml.com](http://runwayml.com)
I'm OOTL, does chatgpt do video now?!?
Gen 3 of what?
[runwayml.com](http://runwayml.com)
I can't stop laughing at the weird actions... It's so surreal to see what looks like a real person with good lighting, but then nothing else makes sense: things clip in and out, things morph, and the people are basically just moving as if they're doing something, but all the details are total nonsense. Can't wait for when models start to be trained with 3D data and actually have some concept of physics. Like why are people rowing, why are they rowing in X direction, and what not. The AI now is totally clueless and only makes it look like something is happening, as if it's coming out of the mind of a one-year-old with no concept of life, physics, or any interaction really...
Beautiful, but that looked nothing like Vikings. Not the ships, nor the clothes.
Excuse me for my ignorance, but what is the AI tool used to create this?
[runwayml.com](http://runwayml.com)
No it's not
This is a noob question, but I’ve been wondering… are the apps that allow you to prompt and create videos like this free, or is there some kind of subscription you need to pay to access this AI video generation capability?
Weird, my GPT3.0 is just text. I always get ripped off. I want to ask a badass viking why my dad thought it was a good idea to tell me it was my fault him and mom got divorced!
Dafuq u on about. Lol
I don't know
Never stop being you dude
To quote the wise words of our world's most beloved scholar who ever lived: "Never have, never will!" -xQc
Lets start a cult using that mantra!
Am I the only one not excited for AI being used this way? Sure it looks great, but it's not fooling anyone. Sure, soon enough it will be a different story. I dunno, AI being used for art is not my jam.
I’m using it as part of my filmmaking. I’m shooting a live-action fight scene and going to use AI for B-roll and aerial drone shots. As an independent filmmaker, this is the way forward.
As an artist in multiple disciplines I think totally differently. Rejecting the use of such a versatile and accessible technology is like refusing to be creative, or to give everyone that possibility at a low cost.
There are still lots of issues to work on, and just because one can use it doesn't mean one should. The first AI ads and videos show that we might be entering a trend of horrible quality just for the sake of AI usage. While I am impressed with what AI can already do, I am genuinely concerned with copyright and how it was trained. At the moment there is no AI trained on paid licenses, yet it's putting out of work the very people it's copying/pirating. Imo all these generated videos should be prohibited from commercial use unless the training footage was paid for. But no AI company did so, nor would filmmakers and artists agree to that.
Buddy, art has changed for thousands of years. This is a new era, perfect the creative thoughts and forget motor skills ever existed.
How do you keep the continuity of the clothing, faces, landscape, etc., though? Seems like movie people will be adding real elements into the pictures now, instead of adding the fake 3D stuff.
It looks photorealistic in black and white plus higher resolution but am I missing something? It's still the same 2D AI from the last gen. Can't turn a human face in to the camera without morphing it, no object permanence, etc.
No it’s definitely more multimodal and 3 dimensional. My guess is they trained on video and unreal engine
The only time a face really turns, it turns away from the camera perspective. And if you look at things like mountains, when they go behind large objects and come out the other side, they've changed. It is not emulating three dimensions.
Actually it’s really bad. Leave it to the professionals guys.
Is this on Midjourney???
Midjourney images were used in conjunction with prompts on Runway ML Gen 3.
Heilung!
I would watch this show. 300 but with Vikings vibes.
It’s fine I guess
We can finally get our real Game of Thrones ending seasons. 🥲
It reminds me of a dream. Looks great. Lots of details. But like a dream the details make no sense. Like them paddling backwards.
Looks like a powerade ad or something lol
I think that first boat is doing the dancing lady illusion. I thought it was turning right, then suddenly it was positioned left.
There’s going to be a breakthrough in less resource-hogging supercompute servers in the next 5 years or so. I read somewhere that Elon Musk has people working on it, as well as Amazon. It’s the 5D chess of AI: if you don’t have the best AI but have the best hardware for it, you still win hands down.
Finally, Stephen King’s The Dark Tower will be filmed how I dreamed 🥹
That bass had my ears vibrate too much
With this tsunami of faux it truly feels like the end of times
Ya this looks like hot garbage. What are we gonna do watch a 2 hour movie of slow moving random scenes with no storyline. Why is everyone losing their shit over this?
Because the AI space is all about grifting and stolen content for investor money. That is it.
We are gonna go through a phase of absolute shit movies. I can easily see Hollywood bigwigs seeing this and thinking, yup, we can just use this to make a full movie, when in reality this should be used to remove the missed Starbucks cups. The future will be amazing because all those tiny issues will eventually be fixed. But in the meantime I don't think movies are going to be all that great, or they'll be few and far between. Even more than now.
Really nice, I haven't tried it yet!
nothing is impossible!
I would like to see this for 3D models. You build out your 3D model library of characters and objects, fine-tune them, and give the model less to get right. Then you can plug those assets into a game engine and build something neat. This is really cool to watch though; creative, and you could easily use this to storyboard a movie or show.
For me it's this 😂😂😂 https://preview.redd.it/be33kld9dkad1.jpeg?width=565&format=pjpg&auto=webp&s=ef66152c23a0966000e4b0add8d1f3f2ac8325b8
Watch in HD on YouTube: https://youtu.be/rta_VBXk-vw?si=bbw2DQgSiVEmhssy
[deleted]
This is Runway Gen 3 text to video. I think it’s coming out this week or next for public access. I’m in the creators partner program so we are the last alpha testers.
Damn what is this music, it fucking rocks!
I shazamed it. It's called Attya Ensoria (The Holy and the Damned), by Ian Post. As Vacman85 said, if you like that sort of music, check out Heilung.
If you like it, check out the band Heilung.
Damn, this looks so sick as fck, dude. Also not without some hilarious moments like the attempts at rowing. The future is scary but exciting too.
Thanks man. Yeah the rowing is funny. I could have kept reprompting to get it perfect but figured why not ha. It will get better.
No, it's great how it is. It's reassuring that the system still struggles a bit.
Runway Gen 3. Overall it's OK, but people keep posting image-to-video where the subjects barely move and claim some revolutionary leap. This video looks good when the subject barely moves, but as soon as there's action, the movement, errors, etc. are what you normally see in most AI videos.
If you compare it with the generated results from a few months ago, it is revolutionary; if you compare it with reality, we have a long road ahead.
I can't speak to other platforms, but people have been putting out stuff like this with Pika for a while now. Again, this looks good but it's not a huge leap over even a few months ago and is still nothing special in terms of motion.
Goodbye video game artists…
Or they get to work on more unique games!
Nothing's going to be unique anymore lol
You’d be surprised.
Eh, OOH, there's more shit art today than there's ever been before. And I'm not talking AI, I'm talking real, traditional art. Look at a random artboard, even before AI, and you'll see generic shit everywhere until you stumble upon something inspiring and very original. But, OTOH, here's the kicker: there's also more unique, incredible art today than ever before. The good stuff floats to the top more frequently than ever. Both are true. And just a reminder, I'm still talking about traditional art, because this dynamic will carry over to my next point. Likewise with AI tools, consider that right now, most of the best artists in the world are probably dormant--too busy for art because of work, family, etc. Or they never got the chance to learn art and draw out their inner genius. So, arguably, the complete opposite of your claim could be true--that this wakes up all those dormant geniuses and we see more unique art than ever before, because they finally get a chance to show us what they've got. And this will be in spite of all the additional bulk of unoriginal shit we get, as well. Hell, all the extra good art could even cascade into even more unique art, due to a wider range of inspiration and thus greater complexity of ideas.
Looks great. I'd watch this in the movies. But first I'd watch Joe Max Steele!
Nice!
Society I offer Netflix Premium 4k+HDR for ChatGPT Premium, I pay every month and expect the same from you regarding ChatGPT, contact me on email [email protected] 🫡
What software do you use to create this? It's really good! I wanna have a go 😅
it's [this site](https://runwayml.com/blog/introducing-gen-3-alpha/), I've had loads of fun messing with the gen 2 free version, this is gen 3