Theme:
Everyday life is often gray, oppressive, and monotonous because of the given circumstances. Children and creative individuals escape in their thoughts into relaxing, joyful, life-affirming fantasy worlds. It's a getaway from reality, constraints, and conformity. Some discover themselves in the process, becoming less lonely and more empowered in real life.
"Onirica" is derived from the word "oneiric," which means relating to dreams or resembling a dream. It is often used to describe something dreamlike, fantastical, or tied to dreams and imagination. The German term "Fantasiewelten" can be translated as "dream worlds" or "fantasy realms," which aligns with the concept of "Onirica."
Making-of notes:
AI technology arrived fast in 2022, and for me personally it was clear that my next main project needed to revolve around this exciting technology. However, as it turns out, things are developing quickly and are getting tiresome. I remember a time when Bard, GPT-4 and Midjourney 5 were announced almost simultaneously. Keeping up with it all burned me out. You need a full-time job for this, and maybe even then you only specialize in one area. Also, when a lot of the smaller tasks are done by AI, you are left with all the creative work. Constantly being creative and original burns you out, while at the same time you sometimes battle the errors and bad images that AI gives you.
That's why I felt it was time to try my first collab between AMV editors once a raw cut was finished. I joined up with my co-worker Arcothy to tackle this challenge together. This was probably the best choice for this project. Together there was enough creative power to pull it off, and we complemented each other pretty nicely. The cut got better, the scene selection became more interesting, and there were great ideas on how to work AI into the pictures. Refining effects, masks and scenes became more important as well. We worked together via Adobe Cloud Team Projects, which was not as smooth as we first thought, but after a bumpy start it was alright.
So what was left was to get AI working in anime and video. Of course, there are DeForum animation videos, but they were not easy to control and will always have that certain look to them. Runway was at its Gen-1 stage, and the costs for AI video generation were simply too high and the results too gimmicky. What remained was still-image generation and integrating those images into video. Stable Diffusion with different AI models and LoRAs gave pleasing results, and I felt we had more control over things compared to Midjourney or Firefly. Arcothy made it work via EbSynth or Adobe Generative Fill, and I often took the time to give it a proper touch-up in After Effects or Premiere Pro.
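For anyone curious what that img2img step can look like in practice, here is a rough sketch using the diffusers library. This is a simplified illustration, not the exact pipeline or settings used for the video; the base model, the LoRA file and the prompt are placeholders:
# Simplified sketch of a single-frame img2img pass with a LoRA (diffusers library).
# Placeholder model, LoRA and prompt; not the exact settings used for the video.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("lora_dir", weight_name="style_lora.safetensors")  # placeholder style LoRA

frame = Image.open("keyframe.png").convert("RGB")  # one keyframe from the scene
styled = pipe(
    prompt="dreamlike pastel colours, soft light, detailed anime background",
    image=frame,
    strength=0.45,        # low strength keeps poses and composition intact
    guidance_scale=7.0,
).images[0]
styled.save("keyframe_styled.png")  # then used as the style keyframe for EbSynth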
What I really enjoyed was applying abstract colour via ControlNet to scenes. Some scenes turned out really well; others needed a lot of prework with masks, tracking or colour layers. The use of various overlay techniques and stock videos helped a lot with integrating the images! This and Adobe Generative Fill will definitely return in my future projects.
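The general idea behind these colour transformations is to condition Stable Diffusion on a structure map of the original frame, so the composition stays put while the palette changes. A rough sketch of such a ControlNet pass, again a simplified illustration with placeholder model names and prompt rather than the exact setup from the video:
# Rough sketch of a ControlNet-guided recolour of a single frame.
# Placeholder models and prompt; the real scenes also needed masks and tracking.
import torch
from PIL import Image
from controlnet_aux import CannyDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder base model
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("scene_frame.png").convert("RGB")
edges = CannyDetector()(frame)   # edge map preserves the original composition

recoloured = pipe(
    prompt="vivid abstract colours, glowing gradients, dreamlike palette",
    image=edges,                 # ControlNet condition
    num_inference_steps=25,
).images[0]
recoloured.save("scene_frame_recoloured.png")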
The crazy thing about this technology is that everything moves so fast. A few months later things would already look different, because the tech improved or new workflows came out. I hope this video will not look too dated. Only time will tell. AMVs can be truly timeless works, but this will probably not be one of them.
Thanks a lot to Arcothy for truly immersing himself in the project and making it possible by giving his best!
Here is the full AI effect breakdown of scenes:
(also available on Pastebin: https://pastebin.com/Hf08x2k5)
00:03:00
Frame interpolation (Flowframes)
00:51:00 - 00:56:00
Partial background video asset generation via DeForum
01:00:00
Asset generation via NijiJourney (Egg)
01:01:00 - 01:05:00
Partial background video asset generation via DeForum
01:14:00
Stable Diffusion with LORA and Adobe Generative Fill integrated via EbSynth
01:24:00
Asset generation via Adobe Firefly (Bubble)
01:27:00
IMG2IMG with Stable Diffusion with LORA and Adobe Generative Fill integrated via EbSynth
01:30:00
Background video asset generation via DeForum
01:39:00
IMG2IMG batch animation via Stable Diffusion (subtle changes in background)
01:48:00 - 01:54:00
Stable Diffusion with LORA and Adobe Generative Fill integrated via EbSynth
01:57:00
IMG2IMG with Stable Diffusion and Adobe Generative Fill
01:59:00
Background video asset generation via DeForum
02:02:00
Colour Transformation via ControlNet
02:04:00
IMG2IMG with Stable Diffusion with LORA and Adobe Generative Fill
02:09:00
IMG2IMG with Stable Diffusion with LORA and Adobe Generative Fill
02:10:00 - 02:41:00
Colour Transformation via ControlNet. Partially integrated via EbSynth
02:54:00
IMG2IMG with Stable Diffusion with LORA and Adobe Generative Fill
03:00:00
Adobe Generative Fill
03:06:00
IMG2IMG with Stable Diffusion with LORA and Adobe Generative Fill integrated via EbSynth
03:08:00
Background image asset generation via Stable Diffusion IMG2IMG (Inpaint)
03:16:00 - 03:20:00
Colour Transformation via ControlNet
03:26:00
Adobe Generative Fill (Flowers)
03:32:00
Background video asset generation via DeForum
03:34:00
Adobe Generative Fill (sides and top third of the image)
03:36:00
Adobe Generative Fill integrated via EbSynth (Hearts). Animated via After Effects
03:40:00 - 03:50:00
Colour Transformation via ControlNet
03:52:00
Adobe Generative Fill integrated via EbSynth (Orb basis, part of grass, background lights)
03:52:00 - 03:55:00
Colour Transformation via ControlNet
04:07:00
Adobe Generative Fill (the drawing)
04:12:00
Adobe Generative Fill (planets)
04:16:00
Adobe Generative Fill integrated via EbSynth
A shorter version of the song was made with the help of Premiere Pro Beta - Essential Sounds
Various scenes were upscaled or cleaned up with video AI
Information
Anime: Belle, Drifting Home, Weathering with You, Your Name, COLORs, SPY x FAMILY, Akebi-chan no Sailor Fuku, A Whisker Away, Bubble, Penguin Highway, Fireworks, 5 Centimeters per Second, Children of the Sea, Her Blue Sky, Howl's Moving Castle, The Place Promised in Our Early Days, Ef - A Tale of Two, Liz to Aoi Tori, Sora yori mo Tooi Basho, Vivy - Fluorite Eye's Song Sakuga Mad, Hibike! Euphonium, Hello World, Eve - YOKU, Eve - 蒼のワルツ
Music: Kid Francescoli - Moon (Custom Edit)
I don't like videos like this, and I only appreciate their value while actually watching them. While it's in front of my eyes, I understand why this video deserves a decent score. But as soon as such videos have played out their runtime, I forget about them. Here you have to appreciate the moment itself. And it's not something I'll remember with a desire to rewatch later. As an example I can also point to that long video that took first place at Big Contest 2016, I think. Or 2017. I'm not going to double-check. I watched it at some point, but it didn't stick in my memory. Yes, it's beautiful. Yes, most likely I would have voted for it back then because of the work put into it, since my personal likes and dislikes shouldn't affect an objective evaluation. But the videos below it on the list (counting from first place) I liked more than the winner. Especially that Frozen Heart one.
And all the janky, wonky, grotesque videos with funny freeze-frames and bad quality, those I adore, unlike polished ones like this. For example: https://youtu.be/kjMzzGDv8aI?si=JHPgL-2RJ3gkeURl . I've rewatched it no fewer than 18 times. And I'll rewatch it 118 more. Although the song played no small part in that.
The output of the neural network itself doesn't appeal to me in principle. Even art in this style made by actual humans has never appealed to me. And because of this inner rejection, my brain simply can't work out what I could make for this nomination that at least I myself would like. That's probably why I keep churning out my simple videos on my phone and rolling a "Kis-Kis" toffee around in my mouth. And it's quite enough for me that these videos are liked by some small number of people, but liked nonetheless.
Still, this clip can easily take its nomination, unless someone else comes up with something better.
The first thing worth noting is the construction: at first we see a gray world with no colors, and this world transforms into a colorful one. So the theme here is escapism. Sadly (for me) it doesn't develop into anything else and stays like this until the end of the video.
The second thing is how the scenes change. I really like that you do not change scenes on every beat, but on a subtle separator that signifies the end of something logical (and occasionally ignore it). This way, we can feel the beauty of the scenes.
Third is the scene selection. I find the scene selection moving from uniqueness toward genericness. Maybe you were forced to use those scenes to show how you altered them, but honestly I don't think the video needed that. Especially the part starting from 3:03, excluding a couple of fresh scenes from Belle, looks like timeline filler.
Fourth is the use of AI. Everything looks clean, there are no dodgy effects, and I appreciate that.
But at 0:04 the interpolation looks a bit "warpy"; I think you should have considered not opening the video with it.
Also, most effects are done the same way: choose a scene, send it through img2img, generate only one image that is significantly different but keeps the key poses, then mask the character and put the character on top of some transition from the original to the generated image (sometimes with tracking)... I honestly do not see a lot of AI here; I would be more interested in prompt interpolation and Deforum playing the main role in the scenes.
And lastly, the integration with the music. It is AWESOME. The transition at 1:36 alone shows the professionalism of the author. 0:50 is also very good.
Good luck in the contest.