Animator vs The Machine

Storyboarding Reimagined: The New Way Or The Old? A Chat with Zeyd Taha Candan from Storyboarder.AI

Alex Season 1 Episode 10

Is storyboarding going to be changed forever? We're getting up close and personal with Zeyd Taha Candan from FYNAL, the brain behind the cutting-edge AI solution that is shaking up the filmmaking industry. Discover how Storyboarder.AI is making storyboarding more accessible to a wider audience. The program claims it can analyze scripts and generate custom images in a matter of moments. FYNAL's Storyboarder.AI could be a game-changer, and programs like these can be especially useful for independent filmmakers and those with restrictive budgets. Listen in as Zeyd spills the beans on harnessing open-source software and pre-defined prompts to bring your words to life without having to put lead to paper.

We'll also be digging deep into AI's impact on the creative process. Why does it matter what datasets AI models are trained on? Can artists contribute their own material? How can Shutterstock and similar platforms enable artists to opt in with their material? We weigh in on how these technological advancements can enhance, rather than hamper, the creative process. But we're not stopping there! We'll explore how Storyboarder.AI is proving to be a powerful tool in the job market, aiding storyboard artists in creating complex visuals quickly. Don't miss out on this enlightening conversation; let's untangle the intricacies of AI in filmmaking together. Is this the beginning of something new, or did we just choose the form of our destroyer? Find out in Animator vs the Machine!

Speaker 1:

All right, on today's podcast we have a very special guest. Before we get to that, I would like to explain how we got here. We got here because a friend of mine sent me an article from Cartoon Brew about two tech pros who were trying to make, basically, a one-man studio by using technology. In that article they talked about companies that are looking to use AI to make storyboards. So I found their webpage, emailed them and, surprise, surprise, I got an email back. So today we're talking to Zeyd from FYNAL.

Speaker 1:

Hi, yeah, thanks for having me. All right, so for the people that don't know what I'm talking about: what is your product and what does it do?

Speaker 2:

Yeah, so basically it's an AI solution to create a storyboard from a movie script, or any kind of script. Okay, and so how does it do this? Like, how do you work this?

Speaker 1:

Is this like Stable Diffusion, where you type in prompts, or is it more like clicking functions? How does it work?

Speaker 2:

So we've built a software-as-a-service platform. It's fully in your browser, and how it actually works is: let's say you have a movie script, you just wrote it, and you need some financing for it, or you want to prepare your team. Maybe you already have your team, but you are just in the pre-production phase of your project.

Speaker 2:

So you are planning on shooting, and you just take your script as it is. You don't need any markers or anything at all. You just take your script as it is and upload it to our tool, and our tool will automatically detect all the scenes inside your script and, inside each scene, all the shots. Then, for each shot inside your scenes, it will generate a custom image in a predefined art style and give you export functions for both the shot list and the storyboard. You can print both, or download them to refine your storyboards even further. We have some customers who are doing this already: exporting the plain images and then importing them into Photoshop, or whichever image manipulation software they're working with, to edit these further, and then they have a finished storyboard and shot list.
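
For the technically curious, here is a minimal Python sketch of the kind of pipeline Zeyd describes: detect scenes, detect shots inside each scene, render one frame per shot in a chosen art style, and flatten everything into a shot list. It is not FYNAL's actual code; every name in it is a hypothetical placeholder.

```python
# Minimal sketch of the script-to-storyboard flow described above.
# All names are placeholders, not Storyboarder.AI's real API.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Shot:
    description: str              # action text for this shot
    image: str | None = None      # path or handle of the generated frame

@dataclass
class Scene:
    heading: str                  # e.g. "INT. KITCHEN - NIGHT"
    shots: list[Shot] = field(default_factory=list)

def generate_frame(description: str, art_style: str) -> str:
    """Stand-in for the image-generation step (e.g. a diffusion-model call)."""
    return f"<frame: '{description}' rendered as {art_style}>"

def build_storyboard(scenes: list[Scene], art_style: str = "pencil sketch") -> None:
    """Render one frame per shot, scene by scene."""
    for scene in scenes:
        for shot in scene.shots:
            shot.image = generate_frame(shot.description, art_style)

def export_shot_list(scenes: list[Scene]) -> str:
    """Flatten scenes and shots into a printable shot list."""
    rows = []
    for s, scene in enumerate(scenes, 1):
        for t, shot in enumerate(scene.shots, 1):
            rows.append(f"{s}.{t} | {scene.heading} | {shot.description} | {shot.image}")
    return "\n".join(rows)

if __name__ == "__main__":
    scenes = [Scene("INT. KITCHEN - NIGHT",
                    [Shot("Anna pours coffee, hands shaking."),
                     Shot("Close on the cup as it overflows.")])]
    build_storyboard(scenes)
    print(export_shot_list(scenes))
```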

Speaker 1:

Okay, so then it sounds like this is for independent filmmakers who don't have the excess cash to hire a storyboard, a storyboard artist I meant to say. Is that correct? So who is this for?

Speaker 2:

So the idea for Storyboarder.AI emerged from the need to create storyboards for smaller projects. We ourselves are a film production company called FYNAL, in Dortmund, Germany, and we are producing commercial stuff but also some feature films. We already did, I think, three or four feature films in the last years, but most of the stuff we are doing is commercials. Some of our clients have a bigger budget and some have a smaller budget, and especially for the clients with smaller budgets, we don't have the capacity to create storyboards for each of these projects. For each project we obviously have a shot list, because we need that for pre-production, but a storyboard, which is a very useful tool on set and also in the pre-production phase, we don't have the time or the money to create.

Speaker 2:

So for the smaller projects we thought, hey, it would be cool to use AI to create storyboards for these projects as well. That's basically where the idea came from. Then we thought, yeah, it's helpful for us, we are a production company, but maybe it's also helpful for others. We started planning this whole thing out and launched at the end of August this year. Oh, cool.

Speaker 1:

I'm going to circle back to how it works, I forgot a question. Yeah, sure. You said it reads your script and analyzes it and gives you the shots. How is it analyzing your script? How's it doing that?

Speaker 2:

So there's some magic happening; it magically detects all the scenes? No. What we're using, since we don't have the capacity as a film production company, we don't have a big IT team or whatever, is accessible and open, or mostly open, software and tools which are available to everyone, and we are fine-tuning these models and programs to do what we are doing, essentially. And, yeah, text analytics: let's say you have a movie script which you upload. Since we are a film production company ourselves, we have a bunch of data on movie scripts and also on shot lists, and we know how these are structured and what specifics there are, let's say, in a shot list. We just train these models on our own material to detect scenes and shots better.
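
Part of what makes this tractable is that screenplays follow rigid formatting conventions: scene headings ("sluglines") start with INT. or EXT. and usually end with a time of day. The snippet below only illustrates that public convention; it is not the fine-tuned model described here.

```python
import re

# Standard screenplay sluglines look like "INT. KITCHEN - NIGHT" or "EXT. PARK - DAY".
SLUGLINE = re.compile(r"^(INT\.|EXT\.|INT\./EXT\.)\s+(.+?)(?:\s+-\s+(.+))?$")

def split_into_scenes(script_text: str) -> list[dict]:
    """Group action lines under the most recent scene heading."""
    scenes = []
    for raw in script_text.splitlines():
        line = raw.strip()
        match = SLUGLINE.match(line)
        if match:
            scenes.append({"heading": line,
                           "location": match.group(2),
                           "time": match.group(3) or "",
                           "lines": []})
        elif scenes and line:
            scenes[-1]["lines"].append(line)
    return scenes

script = """INT. KITCHEN - NIGHT
Anna pours coffee, hands shaking.

EXT. PARK - DAY
A dog chases a frisbee across the grass."""

for scene in split_into_scenes(script):
    print(scene["heading"], "->", len(scene["lines"]), "action line(s)")
```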

Speaker 1:

Okay, so I can hear the pitchforks of the animation community behind me right now. And when you said you can choose styles, how is it gathering those different styles? I'm just curious about that. Like, how is it collecting what it deems a style? Do you understand?

Speaker 2:

So we thought it would be useful to fine-tune different styles with our own stuff. But we are a film production company; we don't have that many storyboard artists in our company, with different styles, et cetera. So what we are basically using is open-source software for image generation, and we just pre-define the prompts so we can expect the outcome to be like the things we want. So we don't have just one specific style, which would be like one storyboard artist's style, but we have different styles which we pre-defined and are using as standards in our tool. Okay, does this answer your question?
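
In practice, "pre-defining the prompts" can be as simple as wrapping each shot description in a fixed style template before handing it to an open-source image model. Here is an illustrative sketch using the Hugging Face diffusers library and a public Stable Diffusion checkpoint; the style strings are invented, not Storyboarder.AI's actual presets, and the code assumes a CUDA GPU.

```python
# Illustrative only: fixed style templates wrapped around a shot description,
# then rendered with an open-source model. Needs the diffusers package and a GPU.
import torch
from diffusers import StableDiffusionPipeline

STYLE_PRESETS = {  # invented example presets
    "pencil": "rough pencil storyboard sketch, black and white, loose linework",
    "ink":    "clean ink storyboard panel, high contrast, minimal shading",
    "noir":   "film noir storyboard frame, dramatic lighting, heavy shadows",
}

def build_prompt(shot_description: str, style: str) -> str:
    """Prepend the chosen preset so every frame comes out in a consistent look."""
    return f"{STYLE_PRESETS[style]}, {shot_description}"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = build_prompt("a woman pours coffee in a dim kitchen, medium shot", "pencil")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("shot_001.png")
```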

Speaker 1:

Yeah, I think so. I can piece it together. So what are some limitations of the software right now? Because everything always gets updated in patches and gets better as it goes. So what are the limitations as of right now?

Speaker 2:

Yeah, so that's an interesting question, because when we started visualizing how this tool could look at the very beginning, already at the research step of this project, we realized that we would have some setbacks and some problems with this whole project. But the thinking behind all of what we've done until now, and also in the future, was and will be that AI technology doesn't seem to get any worse. It gets better from day to day, week to week, month to month, and we are always trying to update our tool. But one common issue we actually do have right now is, for example, consistency in faces.

Speaker 2:

It's not unsolvable, but it's a very tricky thing to accomplish. We try to pre-define some faces and then replicate these faces with detailed descriptions of how the face looks, etc. But there are different approaches to this. One approach would be to just describe in text form what the face looks like; or what we could do is have you upload the face. But then the problem is we need different angles of this face to train or fine-tune our AI model to recreate it. So, for example, this is a complicated problem which we are facing right now.
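
One way to picture the "detailed description" approach: keep a fixed textual description per character and inject it into every prompt that character appears in, so the model is at least nudged toward the same face each time. A tiny hypothetical sketch (the character sheet and helper are invented for illustration):

```python
# Invented character sheet reused across shots to nudge face consistency.
CHARACTER_SHEET = {
    "ANNA": "a woman in her 30s, short curly black hair, round glasses, scar on left eyebrow",
    "MARK": "a tall man in his 50s, grey beard, weathered face, wire-rimmed glasses",
}

def expand_characters(shot_description: str) -> str:
    """Replace bare character names with their fixed visual descriptions."""
    for name, look in CHARACTER_SHEET.items():
        shot_description = shot_description.replace(name, f"{name} ({look})")
    return shot_description

print(expand_characters("ANNA hands MARK the letter, close up on their faces"))
```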

Speaker 1:

Okay, yeah, because I was going to say, when I looked at the demo I saw there was a little drop-down menu with close shot, medium shot, extreme far shot, but I didn't see any for angles. So I guess you'd have to type that in; it would have to be in your script, like "I want a low angle," and then I guess you'd have to write "close up" for it to scan properly and give you an approximation. Is that correct? Yeah?

Speaker 2:

Yes, so actually we do have the option to select that. I think one is perspective and the other is shot size, and with shot size you can select close up, medium shot, full shot or whatever. And with perspective you can, for example, choose ground shot or high-angle shot or bird's-eye view or whatever, and you can just combine these things. But you could also write it directly into the description. We have a table, a shot list table, and everything will be prefilled while generating the shot list. So we are analyzing the uploaded screenplay or script and then the whole thing gets prefilled. And you as a filmmaker, let's say you want to change different things, you want a medium shot instead of the close up, you can just go into any of these cells you can see in the tool or the website and customize it like you want. Interesting.
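
Conceptually, the shot-size and perspective dropdowns are just two more fields on each shot-list row that get folded into the image prompt, and that the filmmaker can overwrite cell by cell before regenerating. A rough sketch under those assumptions (the enum values are guesses based on the options mentioned above):

```python
# Hypothetical shot-list row: prefilled by the script analysis, editable by the user.
from dataclasses import dataclass
from enum import Enum

class ShotSize(Enum):
    CLOSE_UP = "close up"
    MEDIUM = "medium shot"
    FULL = "full shot"

class Perspective(Enum):
    EYE_LEVEL = "eye level"
    LOW_ANGLE = "low angle"
    HIGH_ANGLE = "high angle"
    BIRDS_EYE = "bird's-eye view"

@dataclass
class ShotRow:
    description: str
    size: ShotSize = ShotSize.MEDIUM
    perspective: Perspective = Perspective.EYE_LEVEL

    def to_prompt(self) -> str:
        return f"{self.size.value}, {self.perspective.value}, {self.description}"

row = ShotRow("Anna pours coffee in a dim kitchen")
print(row.to_prompt())            # values prefilled from the script analysis
row.size = ShotSize.CLOSE_UP      # the filmmaker edits one cell...
print(row.to_prompt())            # ...and regenerates with the new framing
```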

Speaker 1:

So are there other competitors out there that are doing similar things to what you guys are doing? Or are you guys the only game in town right now?

Speaker 2:

So when we started research on this whole thing, we were the first ones. I can imagine that at that time some others were experimenting with it as well, but nothing public; we couldn't find anything on the internet at all, and we did quite a lot of research on this. Right now there are some competitors from different companies. I think there are two or three companies that are doing similar things, but all with a different approach. We also test all these products, et cetera, just to make sure that we have built the best out there. To give you an example of the differences between these tools: with one of them you can't upload anything at the start of a project; you just have to write an idea into a text box.

Speaker 2:

Let's say you have an idea for a short film or something. You just write your idea in there, it automatically generates your script, and then it generates your storyboard. What we are doing is a different approach. Our approach is more for professional film productions or professional screenwriters which already have a screenplay, and we have this step in between. Between idea and final storyboard, if you don't only have the idea but already have a finished script or screenplay, you can just take that, upload it, we analyze all the scenes and shots, and then we generate the storyboard. That's something that none of our competitors are handling right now.

Speaker 1:

So you answered my question. I was going to ask what makes yours stand out, and you already answered it. Perfect.

Speaker 2:

But, for example, there are also some competitors who have a completely different approach. I can't recall their name right now, but I think it's closed, so you can't even log in or anything on their website; you just have to contact them. What they're doing is they take material from your own storyboard artist and then train an image model on that material for you, so you can upload your own scripts for the next project and get the same results, or similar results, as your storyboard artist. So that's a completely different approach. Interesting.

Speaker 1:

Good. Now to get to the nitty-gritty, as I like to call it. It's no secret there is a lot of contention between artists and the AI models that are being created, because a lot of AI models are sampling artists' work and just taking what is considered their style. And there are companies out there that are, quote unquote, helping artists. Like there's that Glaze app that disrupts AI from using your work, or there's a new one, Nightshade, which quote unquote poisons the artwork so it can't be sampled. So if you're like, oh, there's a dog, what Nightshade does is corrupt the file so it doesn't produce a dog, it produces cats or porcupines, so it doesn't do prompts properly. Besides that, what do you think the social and moral responsibility should be? Should the companies that are making these AI softwares think about these things, or is the onus on the users of these products? What are your opinions?

Speaker 2:

Yes, so that's a very difficult question to answer, and I can just answer from my own perspective on this whole topic. So I developed Storyboarder.AI, but actually we didn't develop a whole model ourselves. We don't have any training material from outside our company; we just use some things to test, but we never use something to actually train, and we're using available, mostly open-source, software to generate the images. So I think it's a very important topic to look at what datasets these AI models are trained on, and the artists whose artworks were used to train these models: are they compensated, or are they even recognized? Do we even know who they are? Or is it just an anonymous bubble of data where we can't tell which artworks from which artists were used?

Speaker 2:

That's a very difficult thing to answer, in my opinion. Let's say it's a perfect world and we can just create something beautiful: I think the best way to do something like this would be to give artists the opportunity to opt in with their own material, with their own photography and their own text and their own everything, to opt in to AI models, pay them for using their data to train these models, and then use that. I think that would be the cleanest option. We would have solved this properly, because these artists would have opted in. And some companies are already doing things like that. For example, I think it was Shutterstock which is letting their artists opt in to training their own AI models, and I think that's beautiful. I think that's a really great way to solve this issue.

Speaker 2:

But since this is a very new technology, where many people don't know how it actually works, it's very difficult to negotiate that, I think. And for some models that already exist, Midjourney or Stable Diffusion or DALL-E or anything, I'm not sure if we know what datasets these models were actually trained on, and it's very hard to get back to the source. So how do we find out if these models were trained on the work of some artist who doesn't want that? I think that's a really difficult question. I don't know how we can solve this now, but I think if we were to train a new AI model, a good way would be to just let artists opt in with their material and pay them.

Speaker 1:

Right, I totally agree with you, because I've seen artists being like, oh, this AI ripped off my work, and I look at the comparative images and I'm like, oh, it looks similar. But I've also done it, everyone's played around with Stable Diffusion or DALL-E or whatever. When it generates something, you can't really tell what it is or what style it is. You're like, oh, okay. Unless you're being very specific, like "I want Jack Kirby, blah, blah, blah," then okay, now you're being specific enough. But if it's just general stuff, it's hard to tell sometimes where the style is coming from. I totally agree. I think your idea is the best approach: if you're building an AI model, you produce all your artwork in-house and let the AI base it off of that, or you pay artists and all that.

Speaker 1:

But this will be my wrap-up question. Very short interview, it's all good, I know you're busy. So AI to me is kind of like Mary Shelley's Frankenstein, where this doctor creates this impossible technology and then, once it's out in the world, he struggles to fix it and he can't, because it's out there. So there's no way AI is stopping. This version of AI, generative AI and deep learning, isn't going to stop. It's just going to keep going. So how can we ensure that the introduction of this technology enhances the creative process rather than restricts it? How do we stop everything from looking exactly the same, versus the creative freedom where everyone has a different style or a different way of doing things, versus an AI that's like, this is the way we're doing it, and everything looks the same?

Speaker 2:

I think it's not so much a question of whether we can prevent AI from producing the same image or the same style all the time, so making it deterministic or something. I think that's not the question we should be asking. The question we should be asking is more like: can we defend ourselves and our artistic voices against AI, can we stay on top of this, and can we preserve the human aspect in all of our artistic striving? And what AI can do right now, image generation but also text generation, doesn't have to be all the same. I think it was this morning I was looking at Stability AI, the company that developed Stable Diffusion. I think it's called Stable Audio or something, where you can just generate music, and it's pretty cool; they are cooperating with a music platform. And I was thinking, okay, but if two people are prompting it to generate a music track and they enter the same prompt, would the results be similar? In the FAQ, and we've also tried some things, they say that every time you regenerate a track, even though the prompt is the same, the output will be something different. And if you listen to it, it's totally different; it's the same style and similar in a sense, but it's totally different. Or let's say you just open ChatGPT and write something into ChatGPT, or even use the OpenAI API where you can adjust the temperature for deterministic or non-deterministic output, and you can vary between different outputs of these models where totally different things can happen. So, for example, if you want a totally creative output, you can get that, and if you want a totally fact-based and logical output, you can get that as well, and these are totally different outputs.
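
The temperature knob mentioned here is a real parameter on most text-generation APIs: values near zero make the sampling close to deterministic, higher values make it more varied. A small illustration with the OpenAI Python client (the model name is just an example; any chat model behaves the same way):

```python
# Same prompt at two temperatures: near-deterministic vs. deliberately varied.
# Requires the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
prompt = "Describe one storyboard frame: a woman pours coffee in a dim kitchen."

for temperature in (0.0, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # example model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"temperature={temperature}:\n{response.choices[0].message.content}\n")
```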

Speaker 2:

I don't think we are heading into a future where everything will look the same or be the same or similar. I think we will have infinitely different niches and aspects of everything, and I think that's beautiful. I think that's a beautiful thing to have: to be able to catch nuances in speech, for example, or in image generation to specify tiny details, to be able to capture these in text and in images and also in music and everything else. I think that's beautiful, and I think that's enhancing human creativity in a way that was not possible before. So if I, as a non-artist, want to generate an image, I can just do that and get inspired by the artworks that AI creates. And, like I said, I think the more important question is not whether everything will be the same; the more important question is whether we, as humans, as the human species, will be...

Speaker 2:

...able to prevent AI from kind of taking away our jobs, and will there be anything that forces us to be creative anymore? Let's say I'm going to work and I'm creative at my job; that's because I need the job, I go to the job and I need to do some work to get paid. But if that's not a thing anymore, will we, as humans, still be able to be creative, even though, let's say, we don't have to be creative for our job or for payment or whatever? And I think that's the more important question here: will we be able to...

Speaker 1:

Yeah, still be creative at the end of the day. People say it's ingrained in us, right? We're always there to tell stories and express ourselves. But, I don't know, part of me is like, yeah, AI can do it all. And part of me is also like, AI democratizes art, right? It allows everyone to do it. There's no longer an elite class of people where it's like, oh, you either have talent or you trained a lot of hours, or both; usually it's both. Now it's like, oh no, everyone can do it. If you have this vision in your head of, oh, I want this cool wizard doing, you know, pew-pew powers or whatever, you can do that, and that's kind of beautiful. But I also understand people like myself in the art industry, where it's like, oh, that's kind of scary, because if everyone can do it, why do they need to hire people like myself? But yeah, it's interesting.

Speaker 2:

I think the beautiful thing about this, and you put it into words very well, I think, is that it democratizes everything we do. Content creation in general, but also things like going through emails or something; at least going through them faster. So, I'm a film producer. I studied film production at a film school, etc.

Speaker 2:

And in film history lessons we learned that the switch from analog filmmaking to digital filmmaking democratized filmmaking as a whole, as a category of art, and I think that's beautiful, because many, many more people could take part. There were no gatekeepers anymore; you could just grab a DSLR or anything and start shooting a film, right, and you could be creative with those possibilities. And even now, with our smartphones, some of the cameras are better than professional cameras, and everyone can go out there and shoot the next blockbuster Hollywood movie. But the question is, are people doing that? No, they're not. The filmmakers are still the ones doing that. And I think something similar will happen here, where we do have access to these creative tools to generate text, music, films, storyboards, images, but the idea is the art, in a certain sense. If you have the idea and can produce it using these tools, then I think it's a perfect combination of both worlds.

Speaker 1:

Yeah, I totally agree. You brought up a good point: it used to be that there were gatekeepers, like, this is how we film, you need a film camera with film stock. A great example is when smartphones were introduced, right? You had all these kids who wanted to do stop motion, and now they could, using Lego or anything they had in their rooms, and some of those Lego videos are really, really good, and now that look is being used in commercials and everything.

Speaker 1:

It became an animation style, right? It's Lego pieces moving around like this, and they've made how many of those Lego movies that are technically 3D but done in the style of little kids playing around. Without these technological innovations, you wouldn't have new styles or new ways of doing things. So yeah, you are correct: I don't think art is going to go away. I think it's just going to change and innovate in different fashions, in my opinion. So, as we wrap up, is there anything you want to tell the phantom listeners out there, or artists? Anything we missed or anything you want to say?

Speaker 2:

So maybe a small personal thing, but I just want to add this. Yeah, go for it. So I actually read this article as well; Cartoon Brew was the page, I think. And I think it's a very detailed, nice article about the capabilities of AI, etc. Then I scrolled down on the page and went into the comments section, and I wish I hadn't done that. Because I'm a developer working in a film production company, with the goal of being able to, yeah, improve our own craft, the craft of filmmaking, so we can do this better.

Speaker 2:

And, like I said, the idea for Storyboarder.AI emerged from this, because we don't have the budget on each commercial to do storyboards, but at the same time we want to be as creative as possible for our own clients, for our own work and for our own artistic voices. So I was thinking about Storyboarder.AI this way, and still am. This is my belief in AI, and especially in Storyboarder.AI: to make it possible for other filmmakers as well, especially for freelancers who can't draw or don't have the budget on every project they do, to do storyboards. And then I go into the comments section on this article and it's like, yeah, two tech pros, I don't know, a company from Germany, they just want to take away our jobs and they're destroying everything and we should boycott them, etc. And I was like, oh no.

Speaker 1:

Yeah, what have I done?

Speaker 2:

Yeah, they got it completely wrong. It was never our intention to take away any jobs or anything. Like I said, we actually do have storyboard artists in our company who are working for us and with us, who are doing beautiful work on storyboards, which we also need for our projects, because we know if we need some customized, perfect storyboards, we just ask them; we don't ask the AI. So I appreciate the work of all storyboard artists around the world, but it was never our intention to take away any jobs, and this is also not what Storyboarder.AI is used for right now. It's more for projects that can't afford storyboard artists; those are the people using Storyboarder.AI.

Speaker 1:

Sure, and it's good to hear that it's not coming after your jobs. It's a tool, right? It's a toolset to allow people who don't have the skills to make art to make projects, versus, you know, "we're just going to destroy the industry," no more.

Speaker 2:

Actually, we have some storyboard artists as customers who are using the tool to quickly generate a canvas for their ideas, and then they just download the plain images from our tool, put them into Photoshop or whatever to edit them with their own styles, add some stuff, etc. So I think they're pretty happy too. So maybe some storyboard artists will give it a try as well.

Speaker 1:

So if people wanted to check out Storyboarder.AI, where would they go? Yeah, basically Storyboarder.AI. Okay, perfect, so the name says it itself. Perfect. Thanks, Zeyd, for talking to me today.

Speaker 2:

Thank you very much. Thanks.