03_First Projection Mapping Test

This week I finally started with the technical side of my projection mapping project. First, I borrowed a beamer from a friend, but that didn’t go too well. The quality wasn’t that great and I forgot to take the remote with me. Then I got one from our media center at uni, which was made for short distances, so it fit my setup way better. Still, getting it to work wasn’t as easy as I thought. I guess I made things more complicated for myself by just plugging it in and hoping it would just work instead of reading the manual first. Once I got the beamer working and connected it to my Mac, I watched a short introduction tutorial about how to use MadMapper. That really helped me get started. It’s important to make sure it’s not mirroring the laptop screen, but instead working as an extended display. In MadMapper, you also have to make sure to select the correct screen (the projector) and activate fullscreen mode for the output. This way, it’s still possible to control things on the laptop while projecting. Three key technical steps I learned for setting it up properly: 

  • Set the projector as an extended display, not mirrored
  • Match the resolution between MadMapper and the projector for the sharpest image
  • Use the correct shapes in MadMapper (like Ellipse, Quad, or Masks), depending on what object you’re projecting on
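
To double-check the second point before opening MadMapper, a few lines of Python can list every connected display and its resolution. This is just a side sketch of mine (it assumes the third-party screeninfo package, which is not part of the MadMapper workflow):

    # pip install screeninfo
    from screeninfo import get_monitors

    # Print every connected display so the projector's native resolution
    # can be matched in MadMapper's output settings.
    for m in get_monitors():
        role = "primary (laptop)" if m.is_primary else "extended (likely the projector)"
        print(f"{m.name}: {m.width}x{m.height} at ({m.x}, {m.y}) - {role}")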

After that was done, I moved on to experimenting with the software. At first, I didn’t upload any of my own files. I just played around with the materials that are already available in MadMapper. I projected some of the basic visuals directly onto my wall to get a feeling for how the software works. I spent some time trying out different shapes, effects, and settings to understand what everything does. To support that, I also watched a tutorial. It gave me a better overview of the platform and helped me understand how to create different scenes and manage the workflow. Later, I started getting a bit more creative. I projected some of the visuals onto my analog film photos that I had hanging on my wall. It was interesting to see how the light interacted with the pictures. I chose visuals that would highlight the details of the photos and kind of bring them to life. It actually looked really cool and added a new layer of depth to the images.

After that, I wanted to try something more organic, so I used my Monstera plant as a surface. It has these big leaves with lots of holes in them – not exactly the easiest shape to work with. First, I projected a still image onto it. I realized that starting with a static image made it much easier to get the mapping right. Once the shape was aligned, I switched to moving visuals. Because the surface was so irregular, the animation sometimes looked a bit distorted, but in a nice way. It felt more alive and playful than just projecting onto a flat surface.

Some of the main takeaways from the tutorial and my own tests were that I now understand how to set up scenes and cues in MadMapper, which will be really helpful when I want to switch between projections during a show or installation. I also learned how to import and organize media like videos and images, which made my workflow feel more structured and less chaotic. And I got a better idea of how to align projections to real-life objects, even tricky ones like plants, curved shapes, or detailed textures.
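
Since scenes and cues came up, one thing I noted for later: MadMapper can also be remote-controlled over OSC, so switching scenes during an installation could even be scripted. The snippet below is only a hedged sketch using the python-osc package; the port and cue addresses are placeholders, because MadMapper shows the real OSC address once you assign an OSC control to a scene or cue in its interface.

    # pip install python-osc
    import time
    from pythonosc.udp_client import SimpleUDPClient

    # Assumed host/port where MadMapper listens for OSC (enable OSC input in its preferences).
    client = SimpleUDPClient("127.0.0.1", 8010)

    # Hypothetical addresses -- replace with the ones MadMapper displays for your scenes.
    for cue in ["/cues/scene_1", "/cues/scene_2"]:
        client.send_message(cue, 1.0)   # "press" the control assigned to this cue
        time.sleep(10)                  # hold each scene for ten seconds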

All in all, I’m happy with my progress this week. I’m still figuring things out, but I’m slowly getting more comfortable with both the technical and creative sides of projection mapping.

From Sketch to Virtual Runway: 

How Hard Is It to Create Your Own VR Clothes?

Virtual fashion has grown explosively now that virtual reality (VR) worlds let people express their personal style. Whether it's for gaming avatar meetups or digital socializing, creating your own VR clothing is an adventurous creative path. But how hard is it actually to turn your virtual fashion dreams into reality? Let's break it down.

Understanding the Basics of VR Clothing

VR garments are digital pieces of clothing that attach to virtual avatars. They are designed in software that simulates virtual fabrics, textures, and motion. Because VR clothing isn't bound by physical limitations, designers can experiment with unconventional ideas and enjoy complete creative freedom.

That said, virtual fashion design still demands attention to form, avatar compatibility, natural movement, and technical execution. Poorly designed garments can clip through the avatar's body or move unnaturally, which breaks the VR experience.

Essential Tools You’ll Need:

  • 3D Design Software: Programs like Blender, Marvelous Designer, or Clo3D are popular for creating realistic clothing simulations. These platforms provide flexibility in shaping garments and adding details.
  • Texturing Tools: Substance Painter and Photoshop help add colors, patterns, and textures, enhancing the garment’s realism.
  • Rendering and Animation Tools: Software like Unity and Unreal Engine allows you to animate the clothing, simulate realistic physics, and visualize it in a VR setting.
  • VR Platforms: Platforms like Decentraland, Roblox, or Meta Horizon Worlds provide spaces to showcase and sell your designs.
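
To make "realistic clothing simulations" a bit more concrete, here is a minimal sketch of what a first cloth test could look like in Blender's scripting tab, draping a subdivided plane over a sphere that stands in for an avatar. All names and settings are illustrative, not a recommended setup.

    # Run inside Blender's Scripting workspace (uses Blender's bundled bpy module).
    import bpy

    # A plane standing in for a flat fabric panel, placed above the "body".
    bpy.ops.mesh.primitive_plane_add(size=1.5, location=(0, 0, 2))
    panel = bpy.context.active_object
    panel.name = "GarmentPanel"

    # Subdivide so the fabric has enough vertices to fold and drape.
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.subdivide(number_cuts=25)
    bpy.ops.object.mode_set(mode='OBJECT')

    # Add the cloth simulation to the panel.
    cloth = panel.modifiers.new(name="Cloth", type='CLOTH')
    cloth.settings.quality = 8          # more solver steps: more stable, but slower

    # A sphere standing in for the avatar's body, set to collide with the cloth.
    bpy.ops.mesh.primitive_uv_sphere_add(radius=0.7, location=(0, 0, 1))
    body = bpy.context.active_object
    body.name = "BodyStandIn"
    body.modifiers.new(name="Collision", type='COLLISION')
    # Press play in the timeline to watch the panel drape over the sphere.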

The Learning Curve

Creating VR clothing comes with its own challenges, especially for newcomers. The process rests on both creative design and technical skill: 3D modeling, UV mapping, rigging garments to avatars, and creating realistic textures all take a fair amount of practice before they feel manageable.

  • For Beginners: The initial learning curve might feel steep, particularly when learning to navigate 3D design software. However, countless online resources, tutorials, and communities can help guide you through the process.
  • For Intermediate Designers: If you have experience in graphic design or fashion design, the skills transfer well. Marvelous Designer, for instance, simulates real-world fabric behavior, making it intuitive for those with garment construction knowledge.
  • For Advanced Users: Professionals can experiment with complex materials, intricate textures, and dynamic simulations to push the boundaries of VR fashion.

Pro Tip: Start with simple projects like t-shirts or jackets to understand the basics before advancing to elaborate designs.

Designing Your First VR Garment

A typical design process goes through these phases:

  1. Conceptualization: Start with sketches or digital drafts of your clothing design, and think about how it will sit and move on the avatar's body.
  2. Modeling: Build the base shape in software such as Blender. Marvelous Designer works particularly well for simulating virtual fabric on avatar models.
  3. Texturing: Apply colors, patterns, and materials to finish the look; Substance Painter is a good choice for realistic material textures.
  4. Rigging: Attach the garment to the avatar's skeleton so it follows its motion naturally (see the small Blender sketch after this list).
  5. Testing: Put the garment on an avatar in VR and adjust wherever needed.
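
For step 4, the rigging part in Blender can be as short as the sketch below: it parents a garment mesh to an avatar's armature with automatic weights. The object names are hypothetical and only illustrate the idea; Marvelous Designer and the game engines have their own equivalents.

    # Run inside Blender's Scripting workspace.
    import bpy

    # Assumed object names -- adjust to whatever your scene actually contains.
    garment = bpy.data.objects["Garment"]
    rig = bpy.data.objects["AvatarRig"]

    bpy.ops.object.select_all(action='DESELECT')
    garment.select_set(True)
    rig.select_set(True)
    bpy.context.view_layer.objects.active = rig   # the active object becomes the parent

    # Parent with automatic weights so the garment follows the avatar's bones.
    bpy.ops.object.parent_set(type='ARMATURE_AUTO')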

Experimentation and Creativity

VR fashion lets creators transcend physical limitations and design pieces that defy classical physics. Designers come up with ideas such as responsive garments that change with the user's movements, glowing dresses, or outfits made of fluid metallic materials. You have full creative freedom to explore unrealistic and futuristic concepts without boundaries.

[Images: a person in a long dress; a person with a ponytail and a scarf around her neck]

Several platforms now offer virtual fashion marketplaces. If you commit to digital fashion, you can turn your designs into income through digital fashion items, NFTs, and game skins.

Creative Ideas to Try:

  • Gravity-defying capes
  • Interactive garments that change color
  • Transparent holographic outfits
  • Cyberpunk-inspired metallic suits

Challenges You Might Face

The freedom of virtual fashion is both its strength and its obstacle. Some common difficulties include:

  1. Technical hurdles: Getting avatar compatibility and rigging right can be tricky.
  2. File size: Large VR clothing files are harder to manage and can slow down loading times and overall performance.
  3. Fabric simulation: Achieving precise fabric behavior takes time and professional skill.
  4. Patience pays off: In most cases, continuous practice and persistence lead to major improvements, and designer communities often share their workflows and support each other.

Conclusion: Is It Easy or Hard?

Making VR clothing becomes easier once you gain experience and learn to design with the right tools for your creative ambitions. The technical complexity can feel daunting at first, but starting with simple projects, putting in practical work, and trying diverse styles helps both new and experienced users build skill and confidence.

VR fashion offers nearly unlimited opportunities to grow your creativity, build a virtual presence, and even enter the market. Every expert fashion designer once started as a beginner, so the most important step is simply to start.

The Challenges of Creating Virtual Reality Clothing: Where Fashion Meets Frustration

Designing virtual reality apparel confronts fashion designers with a number of hurdles that make the process more challenging than it might seem.

Virtual reality fashion is changing how we think about wearing clothes. Through virtual garments and avatars, users can express their creativity with more sustainable fashion options. At the same time, the nature of the virtual world creates plenty of technical obstacles. These are some of the biggest difficulties VR fashion developers face.

1. Achieving Realistic Fabric Simulation

Digital fabrics have none of the natural properties of physical textiles, so the way VR clothing moves depends entirely on simulated fabric behavior. Designers need to recreate the textures and movement of very different materials, from flowing silk to rigid leather. Getting this right demands a solid understanding of material behavior and a lot of processing power; if the simulation is off, garments immediately look wrong on a moving avatar.
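
To show why this is computationally heavy, here is a deliberately reduced mass-spring sketch in Python, the rough idea behind cloth solvers rather than what any commercial tool actually ships: every frame, each vertex is integrated and every spring is relaxed several times, so dense, realistic fabric gets expensive fast.

    import numpy as np

    def cloth_step(x, x_prev, springs, rest_len, dt=1.0 / 60.0, iterations=5):
        """One step of a toy mass-spring cloth: Verlet integration plus constraint relaxation.

        x, x_prev : (N, 3) arrays of current and previous vertex positions
        springs   : list of (i, j) vertex index pairs
        rest_len  : list of rest lengths, one per spring
        """
        gravity = np.array([0.0, 0.0, -9.81])
        # Verlet integration: velocity is implicit in (x - x_prev).
        x_new = x + (x - x_prev) + gravity * dt * dt

        # Pull each spring back toward its rest length a few times;
        # more iterations mean stiffer fabric but more work per frame.
        for _ in range(iterations):
            for (i, j), length in zip(springs, rest_len):
                delta = x_new[j] - x_new[i]
                dist = np.linalg.norm(delta) + 1e-9
                correction = 0.5 * (dist - length) / dist * delta
                x_new[i] += correction
                x_new[j] -= correction
        return x_new, x.copy()   # new positions and the new "previous" positions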

2. Balancing Aesthetics and Performance

Intricate fabric patterns and high-end textures demand a lot of processing power. Photorealistic clothing improves immersion but puts heavy strain on the hardware, so developers have to find a balance between performance and visual quality. Simplifying textures or reducing polygon counts protects the frame rate, but at the cost of realistic detail.
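
In Blender, for instance, that trade-off can be as blunt as a decimate pass on the garment before export. The 30% ratio below is an arbitrary illustration, not a recommended value:

    # Run inside Blender's Scripting workspace; assumes the garment mesh is the active object.
    import bpy

    obj = bpy.context.active_object
    dec = obj.modifiers.new(name="Decimate", type='DECIMATE')
    dec.ratio = 0.3                                   # keep roughly 30% of the faces
    bpy.ops.object.modifier_apply(modifier=dec.name)

    print(f"{obj.name}: {len(obj.data.polygons)} faces after decimation")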

3. Avatar Customization and Fit

Physical clothing comes in many sizes and can be adjusted to fit different bodies. Matching that flexibility in VR is still hard. Developers build automated systems that scale garments to avatars of different sizes, but these systems often run into technical issues, producing clothing that doesn't conform to the body or stretches in unnatural ways.

4. Physics-Driven Collisions and Clipping

A major difficulty in VR fashion is that clothing frequently intersects with the avatar or with other garments, known as "clipping." Developers build collision detection into their systems to keep garments from passing through each other, but it is still hard to make these interactions work flawlessly, especially when characters execute intricate moves. Poorly handled collisions break immersion and hurt the overall VR experience.
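
The underlying idea of a collision response is simple to sketch, even though real engines use capsules, full body meshes, and continuous detection rather than the single sphere assumed here:

    import numpy as np

    def push_out_of_sphere(cloth_verts, center, radius, margin=0.005):
        """Naive clipping fix: any cloth vertex that ended up inside a spherical
        'body' is pushed back onto its surface plus a small safety margin."""
        center = np.asarray(center, dtype=float)
        offsets = cloth_verts - center
        dists = np.linalg.norm(offsets, axis=1, keepdims=True)
        inside = dists < (radius + margin)
        on_surface = center + offsets / np.maximum(dists, 1e-9) * (radius + margin)
        return np.where(inside, on_surface, cloth_verts)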

5. Style Limitations and Creative Constraints

Design in VR may be borderless in theory, but the technology still imposes its own constraints. Designers often have to simplify complex elements that don't render or animate convincingly, and interactions such as letting users manipulate fabric are hard to implement without specialized hardware.

6. Sustainability vs. Commercial Viability

Virtual fashion has environmental benefits because it removes the need for physical products made from real resources. At the same time, high-quality digital garments take considerable resources and specialized teams to create and maintain, so sustainability and profitability are often in tension.

Final Thoughts

Creating VR clothing demands both technical ability and creative thinking. Many of its challenges are shrinking as real-time rendering, adaptive garment technology, and AI-driven physics continue to improve. As the metaverse grows, so will the opportunities for designers to push past the current limitations of virtual fashion.

For designers, developers, and fashion enthusiasts alike, understanding these challenges is the first step towards getting involved in digital outfit development.

Model Church and First Mapping Tests

After the parts for the model church had arrived, they were assembled into a scaled-down wooden model of a church. The wood turned out to be slightly fibrous with a yellowish tone, which raised the question of whether a coat of white paint would be beneficial, especially if the first mapping tests should prove unsuccessful.

For the tests, an NEC LT20 projector was borrowed from the media center. This portable projector only has a VGA input, so a USB-C adapter was needed to connect it to the laptop. Initial tests showed limited color reproduction, but the projector was considered sufficient for first projections onto the model.

The projector was connected to the laptop and the model church was positioned so that the highest possible pixel density was achieved without distorting the proportions of the projected image. First masks were then created in the software HeavyM, initially in a rudimentary way for the roofs, windows, and side sections. Different shaders were applied and varied in brightness and speed to evaluate the legibility and clarity of the projection on the model.

The photo and video recordings document that a satisfactory projection could be achieved despite the projector's limited performance. This suggests that the model is suitable as a basis for further tests. Different animation types and mapping methods could be examined systematically in future experiments.

The next step is the digital reconstruction of the church as a 3D model in order to specifically test 3D mapping techniques. In addition, audio-reactive animations are planned. In the long term, a more detailed model could be developed and more powerful projection technology could be used.

The current results already show that meaningful outcomes can be achieved even with a simple model and an outdated projector. The findings so far provide a solid basis for developing the investigation further.


Disclaimer on the use of artificial intelligence (AI):

This blog post was created with the help of artificial intelligence (ChatGPT). The AI was used for research, text correction, inspiration, and suggestions for improvement. All content was then independently evaluated, revised, and integrated into the post presented here.

01. Turnaround Insights

This semester, I want to focus on modeling a 3D character from 2D concept art. I specifically mention "from 2D concept art" because translating a flat design into a three-dimensional model presents unique challenges: proportions, perspective, and maintaining the stylistic choices of the design, which might not translate well into a three-dimensional space.

After abundant research (a dive into YouTube search for video tutorials), I found the following tutorials and insights useful: 

Creating a Character Turnaround from a Concept Piece – This one takes the simple route: first drawing half of the front view and duplicating it so the front is symmetrical, then copying it to create the back side of the character, after which the side view is made. While the art was solid, it did not give much impression of actual rotation in 3D space, which might not be an issue for experienced modellers (which I am not). The character design was also incredibly detailed, which of course poses its own challenges.

Another, more advanced tutorial for a simpler character concept (How I Make Character TURNAROUNDS and Sheets!) emphasizes the importance of keeping the process simple and well-structured, by thinking about the anatomy of the design and using guide lines to remain consistent across all the angles – front, back, profile and (!) ¾ view.

The most useful video I found, and the one I will primarily use as a reference for my process, was this one: Character Turnarounds: like a Pro! Photoshop Timeline

For the purpose of creating a full turnaround, the animator stresses the need to make 8 individual poses, one for every angle the character will be turning through (or 5 in case the design is symmetrical, in which case the remaining angle poses can be duplicated). This animator, interestingly, started with the ¾ pose and worked from there, which to me seems the most logical step. He states he did that because it is the main pose in most animated scenes, where characters have to both interact with each other and show the majority of their face to the audience. To me, however, it makes even more sense because the ¾ view is where you get the most context for the shape of the features and the angles and curves of the body. A front view is far too flat, and a side view, while providing information on which parts jut out and which are concave, loses information with regard to the overall design. After the ¾ is done, the neck is chosen as the pivotal axis on which the character revolves (two guides along both lines of the neck and one dead center), with additional guides at the outermost extremities – top of the head, feet, shoulders, waist, chin and mouth – which keeps the proportions in check. Interestingly, the pelvis tilt is different for the front and back ¾ views, which means that the two cannot simply be reversed, as could be done for the front view and the side view. Because of the way the pelvis tilts, it angles one way in the back ¾ view and the opposite way in the front ¾ view.
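
Going back to the pose count, the arithmetic behind "8 poses, or 5 if the design is symmetrical" is easy to make explicit (a throwaway sketch of mine, not something from the video):

    # The eight standard turnaround angles, 45 degrees apart.
    angles = [i * 45 for i in range(8)]              # [0, 45, 90, 135, 180, 225, 270, 315]

    # For a symmetrical design only the 0-180 degree half needs to be drawn...
    drawn = [a for a in angles if a <= 180]          # [0, 45, 90, 135, 180] -> 5 poses
    # ...and the remaining three are mirror images of angles already drawn.
    mirrored = [360 - a for a in angles if a > 180]  # 315/270/225 map back to 45/90/135

    print(len(drawn), "drawn +", len(mirrored), "mirrored =", len(drawn) + len(mirrored))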

The animator also stresses a key difference between designing for 3D and 2D. In 2D animation, artists often use "cheats" – like Mickey Mouse's shifting ears, which change position depending on the angle to maintain readability. When such a design is translated directly into 3D, the model often looks weird and unnatural. This can be circumvented by "cheating" the model (automorphing) depending on the angle it is being viewed from, as was done for these two models: https://x.com/CG_Orange_eng/status/1482422057933565953 and https://x.com/chompotron/status/1481553948721180677

But that would be a further blogpost all on its own. 

Now that I’ve gathered these insights, my next task is to select a 2D concept art of a character and create a turnaround sheet before moving into 3D modeling.

02_MadMapper vs. After Effects

After getting a first introduction to projection mapping in my last blog post, it's time to go further and explore different program options. Since I'm still figuring out the technical side of things, I decided to test the two software options that seem to make the most sense for my project: MadMapper and After Effects. As both of them provide different possibilities when it comes to animation and projection mapping, I wanted to give both a try. So I followed two beginner-friendly projection mapping tutorials: one for MadMapper and one for After Effects. My goal was not only to understand how these programs and tools work but also to see which one might be the better choice for the project I have planned. Right now I'm also dealing with the challenge of learning a few different platforms at once, so it sometimes feels like I'm jumping from one tool to another without really getting the chance to master any of them in depth. This makes it difficult to decide which platform to commit to for projection mapping, as I don't want to add another complicated software to my workflow if it doesn't help me in the future.

MadMapper

I started off with one of MadMapper's own tutorials, which introduced me to the basics of the software and explained how to set up a projection using simple shapes as visuals. What I did like was how intuitive the interface was. Everything seemed to make sense, which is great when you want to start learning new software. I played around with different shapes and movements, trying to understand how I could apply these later. But mostly it was important to me to just get a sense of the software and understand the basic workflow. When it comes to layering and fine-tuning the animations, however, I'm still a bit lost. Since MadMapper is mainly built for projection mapping, it makes sense that it focuses more on mapping visuals rather than creating complex animations from scratch. A big advantage of MadMapper is its real-time contour control, which allows for live adjustments during the production phase and not just before it. That is something After Effects doesn't really offer, as it mostly stacks layers to create detailed effects.

After Effects

I also wanted to do an After Effects tutorial that focuses specifically on projection mapping, as this is something I haven't looked at so far. I already have some basic knowledge of After Effects, so the workflow didn't feel completely new. The tutorial mostly covered simple animation techniques and how to export the visuals for projection mapping, which was the part that interested me the most. The biggest advantage I see in using After Effects is its flexibility. Even though After Effects is not really made for projection mapping, it allows for more detailed and layered animations, which could be nice if I decide to go for a more artistic approach to the flowers. At the same time, it also means that I would need another software to actually map the animations onto my objects, which again means familiarizing myself with yet another tool and adding another layer of complexity. Another important factor is price. Since I already have access to After Effects through my Adobe Creative Cloud subscription, there would be no additional cost for me. MadMapper, on the other hand, requires a one-time commercial license, which I would need to purchase to use it without watermarks or other restrictions.

Now that I’ve tested both, I have to decide which one makes more sense for my project. Right now, I feel like MadMapper is the better choice if I want a more direct way to work with projections, while After Effects would allow me to create more detailed visuals. The question is: do I want to focus on animation first and then figure out the mapping part, or should I go straight into projection mapping and accept some limitations in animation?

Concept Idea

Looking at another aspect besides the technical side, I also thought about the mood and concept as well as the aesthetic of my project. Since at the end of the project I want to project onto flowers, I have two main ideas. One would be to work with motion that brings the flowers to life, almost like they are moving or shifting beyond a still life. The other would be to approach it from a different perspective and visualise the process of photosynthesis more abstractly. I am still thinking about both concepts; I will go more into depth, maybe brainstorm more and create different animations to work with, but I also don't want to overcomplicate things, especially because this is my first attempt at projection mapping.

Challenges

One of the challenges I already thought about is balancing aesthetics and technical feasibility. I also have a bit of a low frustration threshold: I tend to learn fast, but if I get the sense that I'm not developing or keep running into the same issues, I get frustrated, and that leads to procrastination. While I would love to create something detailed and unique, I also have to be realistic about what's possible with my current skill level. I think a good way forward would be to start with simple shapes and flat surfaces for the next step in my project and then refine the concept once I have a better understanding of the tools.

First Tests: Adobe Firefly Video Model and Sora

Test Phase: Designing Visual and Animated Elements with AI

To find out how precise and capable current AI tools are in the creative design process, I tested two promising applications: the Adobe Firefly Video Model and Sora by OpenAI. Both were used while developing a poster for an event series, with the goal of producing a visually appealing base motif as well as a subtle animated version of it.

Starting Point
For the static design of the poster, the generative AI in Adobe Photoshop was used first. The goal was to create a background pattern that fits stylistically and harmoniously into the existing series of posters. It was important that the visual appearance, especially the color palette and graphic structure, stays consistent while still showing a pattern of its own.

The prompt used in Photoshop was:
„blaue Farben, feine Linien, Stil ähnlich, aber anderes Muster“ (blue colors, fine lines, similar style, but a different pattern)

After a few variations and adjustments, a result was generated that fits the existing design concept well, both aesthetically and contextually.

The next step was to animate the static motif subtly in order to create a lively but unobtrusive version for social media. The focus was on a gentle movement of the line structure that would give the poster additional visual depth without changing the character of the design.

Two AI video tools were tested to implement this animation:

  • Adobe Firefly Video Model
  • Sora by OpenAI

The following sections describe the respective workflow, the generated results, and a direct comparison of the two tools.

Adobe Firefly Video Model:

Here, the "image-to-video" tool was used. The background image was uploaded as the frame and the video format was set to portrait 9:16. No selection was made for camera or camera movement.

The prompt was: very slow movement; flowy liquid; lines glow in the dark; move very slow; slimy; flowy, liquid close up

The first generated result:

  • A good result overall
  • The lines move relatively fast but continuously
  • The points of light within the lines are not quite ideal
  • The lower right corner falls apart noticeably towards the end

Since I was not yet 100% happy, I generated another version with the same settings and the identical prompt, which ultimately became the final version of the poster:

  • Dynamic movement without any part "dropping out"
  • The lines glow as a whole and not only at certain points
  • Very satisfied with the result

At this point I was very satisfied, but from a designer's perspective it would still have been good to try one more version, possibly in a different style and with different movement. Unfortunately, after the second video the limit of free videos was reached.

Pro:
+ nice movement
+ good versions right away that met the visual standard
+ very easy to use

Con:
– limited to 5 seconds, which is a real difficulty when actually using the video
– the quality was not 100% convincing
– the free attempts ran out after 2 versions, with no option other than taking out a subscription

Sora by OpenAI

Thanks to my ChatGPT subscription, I was able to have Sora generate an AI video as a second version. Here too, the "image-to-video" tool was used. The background image was uploaded as the frame and the video was set to 1:1, 480p, 5 seconds, and one version. It would have been possible to increase the clip length to 10 seconds, but to avoid using too many credits during the first attempts I chose 5 seconds here as well. Sora also offers the option of uploading a storyboard. In general, this tool offers more options than Adobe Firefly.

The prompt was the same as for Adobe Firefly: very slow movement; flowy liquid; lines glow in the dark; move very slow; slimy; flowy, liquid close up

The result:

This, too, was a great result, with many options for refining it and achieving exactly what you want. This video "cost" 20 credits.

Pro:
+ longer than 5 seconds possible
+ many editing options such as Remix, Blend, or Loop (see image)


Con:
– visually not quite as accurate as Adobe Firefly; it seems as if Sora creates its own pattern rather than working directly with the uploaded image (this could definitely be changed and refined through further prompts and iterations)

Conclusion:

Both Adobe Firefly and Sora by OpenAI delivered visually impressive results in my tests. The generated content stands out for its remarkable image quality, creative execution, and surprisingly high precision in rendering the text prompts.

As mentioned before, both tools come with their individual strengths and weaknesses. Overall, both platforms offer exciting possibilities in AI-assisted visualization, and a final verdict depends heavily on the specific use case and individual requirements. In this case, the Adobe Firefly video was chosen because the result fits the mood and use case better. Nevertheless, I was very positively impressed by Sora and would definitely come back to it for future AI videos.

How AI Can Help Directors in the Treatment Process Without Changing the Creative Idea

In my first blog post, I refreshed my knowledge about the difference between a treatment and a script and how these are powerful and necessary tools for directors. Now, I want to dive deeper into how AI can enhance this process and other pre-production tasks, making workflows more efficient while preserving creative intent.

The rise of artificial intelligence (AI) in creative industries has sparked both excitement and concern. While some worry that AI might interfere with artistic decision-making, others recognize its potential to streamline production and help directors shape their vision faster and more effectively. In video production, AI can be a valuable tool in the treatment and scripting process, assisting directors without altering their original ideas. It can help by optimizing workflows, improving collaboration, and simplifying pre-production planning.

AI’s Role in the Treatment Process

As we already know, a treatment is the director's first opportunity to present their vision to clients, producers, and creative teams, and AI can assist in multiple ways:

1. Generating Mood Boards and Visual References

AI-powered platforms like Runway ML and MidJourney can generate images that align with the director’s vision. AI can suggest visual references that match the tone, color scheme, and aesthetics of the project, saving directors time searching for reference materials manually. However, some directors prefer tools like Frame Set or Shotdeck, which provide libraries of real film frames rather than AI-generated images, ensuring a more authentic and cinematic look.

2. Enhancing Concept Development

AI tools like ChatGPT can help structure a director’s ideas into a clear and engaging treatment. While the creative idea remains intact, AI can refine phrasing, eliminate redundancies, and improve overall flow. AI-driven insights can also suggest areas that may need more detail, making the treatment more cohesive and professional.

3. Speeding Up Formatting and Organization

Many directors, myself included, struggle with translating creative thoughts into structured documents. AI text generators can format treatments according to industry standards, ensuring consistency and clarity. They also assist with grammar, readability, and tone, reducing the time spent on revisions. But AI can do more than just refine phrasing—it can also help producers streamline the pre-production process. One of the most exciting areas where AI is making an impact is storyboarding.

Storyboarding with AI

During my research, I came across Previs Pro, an AI tool that allows directors to create rough animated sequences to visualize camera movements and scene pacing before production begins. Instead of manually sketching or hiring a storyboard artist, directors can input their script, and AI generates rough animatics that help visualize the flow of a scene.

Other tools like Boords and Storyboard That use text-to-sketch technology, enabling directors to generate quick storyboards without modeling 3D environments. This is a major advantage, as it allows for rapid iteration, making it faster to refine visual storytelling before production.

AI as a Collaborative Tool, Not a Replacement

The key to integrating AI into the treatment process is to use it as a collaborator rather than a replacement for human creativity. AI does not generate original artistic vision—it enhances workflows, eliminates repetitive tasks, and refines ideas that already exist. Directors remain the ultimate decision-makers, ensuring that the final product aligns with their creative intent.

Conclusion

AI is transforming the way directors approach the treatment and scripting phases of commercial video production. From generating visual references and formatting treatments to refining dialogue, automating shot lists, and assisting in pre-production logistics, AI offers practical tools that support—but do not override—the director’s creative vision. By leveraging AI effectively, directors can focus more on storytelling and artistic expression while benefiting from a more efficient and optimized pre-production process.

References

  • Field, S. (2005). Screenplay: The Foundations of Screenwriting. Dell Publishing.
  • Trottier, D. (2014). The Screenwriter’s Bible: A Complete Guide to Writing, Formatting, and Selling Your Script. Silman-James Press.
  • Rabiger, M. & Hurbis-Cherrier, M. (2020). Directing: Film Techniques and Aesthetics (6th ed.). Routledge.
  • Katz, S. D. (2019). Film Directing Shot by Shot: Visualizing from Concept to Screen. Michael Wiese Productions.
  • McKee, R. (1997). Story: Substance, Structure, Style and the Principles of Screenwriting. HarperCollins.
  • Runway ML. (2023). AI-Powered Creative Tools. Retrieved from https://runwayml.com
  • Final Draft. (2023). AI Story Mapping in Screenwriting. Retrieved from https://www.finaldraft.com
  • Boords. (2023). Storyboard Software for Filmmakers. Retrieved from https://boords.com


ChatGPT 4.0 was used as a grammar and spell-checking tool.

The Difference Between a Treatment and a Script in Commercial Video Production

In the world of commercial video production, the terms "treatment" and "script" are often used interchangeably by those unfamiliar with the parts of pre-production. For professionals in the industry, however, these documents serve distinct, essential functions. While both are crucial for ensuring the success of a project, they differ in purpose, structure, and impact on the final video. Understanding the difference between a treatment and a script is fundamental for directors, producers, and creatives who want to produce commercial or narrative videos.

What Is a Treatment?

A treatment is a conceptual document that outlines the creative vision for a video. It is typically created during the pre-production phase and serves as a pitch to clients, agencies, or production companies. The treatment provides a clear overview of the look, feel, and storytelling approach before a full script is developed.

A standard treatment includes:

  • Logline: A one- or two-sentence summary of the concept.
  • Synopsis: A more detailed narrative explaining the story, visuals, and themes.
  • Tone and Style: A description of the aesthetic, mood, and overall cinematic approach.
  • Visual References: Mood boards, color palettes, or sample images to illustrate the intended look.
  • Character Descriptions (if needed): Brief descriptions of key figures in the video.
  • Shot Ideas: Possible cinematographic approaches and framing suggestions.

Treatments vary in length, often depending on the director's style and the project's complexity. They are usually designed to be visually engaging, using imagery and design elements to convey the creative direction effectively.

And What Is a Script?

The script is a detailed document that outlines every scene, action, and dialogue in a video or film. Unlike a treatment, which focuses on concept and vision, the script serves as a precise blueprint for the production. It ensures that all elements of the shoot are planned in advance, minimizing confusion on set.

Key components of a script include:

  • Scene Headings: Indicate the location and time of day (e.g., INT. OFFICE – DAY).
  • Action Descriptions: Describe the movement and actions of characters or visual elements.
  • Dialogue: Spoken lines for actors, voice over, or narration.
  • Camera Directions (optional): Notes on specific shots, angles, or transitions.

Scripts follow standardized formatting to maintain consistency across the industry. They are crucial for directors, cinematographers, and editors to align on how the final video will unfold.
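
As a purely made-up illustration of those components (not taken from any real script), a few lines in standard format might look roughly like this:

    INT. OFFICE – DAY

    MAYA sits at a cluttered desk, scrolling through storyboards on a tablet.

                        MAYA
              (without looking up)
        We shoot the rooftop scene first, before
        we lose the light.

    CAMERA: Slow push-in on the tablet screen.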

Why Are These Steps Important?

Both the treatment and the script serve essential roles in ensuring a seamless production process. Their impact extends far beyond the pre-production phase and directly influences the efficiency and final quality of the video.

  1. Client and Stakeholder Approval

    – Treatments are often used in the pitching phase to secure client buy-in. They help non-technical stakeholders visualize the concept before significant resources are committed.

    – Scripts provide the final detailed plan, ensuring that all parties agree on the creative execution before production begins.
  2. Creative Alignment

    – A well-crafted treatment ensures that the entire creative team, from directors to designers, shares the same vision.

    – The script provides detailed instructions for actors, cinematographers, and editors, reducing misinterpretations of the vision.

  3. Impact on the Final Video

    – A strong treatment ensures that the final video aligns with the brand’s message and audience expectations.

    – A well-structured script guarantees smooth execution and post-production, leading to a polished, professional result.

The Impact of a Well-Executed Treatment and Script

Picture this: a global fashion brand, Carhartt WIP, commissions a production team to create a commercial that showcases its latest collection. The creative team begins by brainstorming the key themes: authenticity, urban culture, and timeless style. They want to capture the raw energy of streetwear in a way that connects with their target audience.

To set the creative direction, the director develops a treatment that paints a vivid picture of the campaign. The vision centers around a community of young people (skaters, basketball players, dancers) enjoying the time of their lives. The treatment includes references to raw textures, color palettes, and shots that enhance the brand's style. The client, inspired by the mood and aesthetic, approves the concept.

With the treatment in place, the team moves on to scriptwriting. Each scene is carefully outlined, from a skater grinding along a concrete ledge to a dancer performing in an abandoned warehouse. The script details every camera movement: close-ups of rugged denim, wide shots of models walking through misty streets, and dynamic handheld shots capturing the pace of people playing ball. Natural lighting, stylized edits, and a pulsating soundtrack are all integrated into the script's framework.

As production begins, the crew follows the script to execute the planned shots. The DoP ensures each frame aligns with the intended visual style, while the director guides the models and performers to bring out the raw, effortless energy associated with the brand. In post-production, editors match the pacing of the footage to the music and colorists enhance the film's industrial aesthetic. When the final cut is delivered, the commercial aligns seamlessly with the initial treatment, creating a compelling campaign that stays true to the brand's heritage.

Understanding the difference between a treatment and a script is essential for anyone working in video production. While a treatment establishes the creative vision and secures stakeholder approval, the script provides the detailed framework necessary for execution. Both documents are indispensable for ensuring that a project runs smoothly and delivers a satisfying final product. Mastering their use can significantly enhance the efficiency and quality of video production, making them powerful tools for industry professionals.

References

  • Field, S. (2005). Screenplay: The Foundations of Screenwriting. Dell Publishing.
  • Trottier, D. (2014). The Screenwriter’s Bible: A Complete Guide to Writing, Formatting, and Selling Your Script. Silman-James Press.
  • Rabiger, M. & Hurbis-Cherrier, M. (2020). Directing: Film Techniques and Aesthetics (6th ed.). Routledge.
  • Katz, S. D. (2019). Film Directing Shot by Shot: Visualizing from Concept to Screen. Michael Wiese Productions.


ChatGPT 4.0 was used as a grammar and spell-checking tool.

Finding The Story of my Semester Project

Last semester I used some of my blog entries to summarise and reflect on what I had learned from reading the book “Documentaries …and how to make them” (Glynne, 2007) by Andy Glynne. So far I have written about the pre-production of documentaries, what steps are necessary and how to take them. Now, to put what I have learned to practical use, I want to approach this semester’s project by going through all of those steps and stages, but on a smaller scale.

The Idea

I came across the idea for the semester project because I wanted to shine a light on one group or organisation that is unique to Graz. I then remembered having been to two of these car-free Fridays and how much I enjoyed the sense of community and togetherness they created. My motivation to showcase this particular movement stems, on one hand, from the desire to learn more about the possibility of car-free city centres, and on the other hand from the wish to show this community to others.
When talking to my colleagues about my idea, I realised that almost none of them had ever heard of Auto:Frei:Tag, even though they all live in Graz, are a similar age to me, and most of them I would describe as at least somewhat invested in climate action.
Thus my “Why” was developed: I want to raise awareness for the organisation and maybe inspire some of my peers to also take action. Because I only want to create a short video and don’t have a lot of time, I have decided to aim for a target group that is already somewhat interested in topics like climate change or car-free city centres and just needs some inspiration for possible outlets and ways to show their convictions, rather than trying to convince some 60-something-year-old who has always taken the car everywhere to completely change their lifestyle.

The Story

When looking for the story behind my idea, there are a few ways I can see it developing.
I was thinking I could choose one of the people behind Auto:Frei:Tag and follow their journey of organising one of these events; however, when talking to them I found out that it is hardly ever them who come up with the idea for an event. Instead, they provide support for others who want to make themselves heard.
Another option would be to find someone with a cause worth documenting and follow along with the planning and organisation of a car-free Friday, also highlighting the person and the cause behind it. During this process, it would also be possible to reach out to experts on city planning and traffic politics in Graz to get their opinion and expertise on the topic, adding facts and figures to my documentary.
My third option would be to treat the street itself as my main character, showing the stark contrast between everyday traffic, chaos and noise and the completely different mood when the same road is taken over by bikes, people, music and community. This version would be less about the specific cause behind one event and would instead highlight the opportunities that open up when cars are removed from the equation. For this idea, possible interviews could be a bit more free and abstract, aiming at capturing feelings rather than facts.

Possible obstacles or problems for either of the stories could be issues with permits, unhappy neighbours or the inevitable people in cars coming by and complaining about the street being closed. In a broader sense, obstacles could also lie in legislation, politics and influential citizens, which are the reasons why some efforts to further reduce traffic in the city might be hindered.

As for questions of accessibility, I have already talked to the people behind Auto:Frei:Tag and they have agreed to be filmed and documented, and I also hope that some of the visitors of the event will be willing to be on camera. For filming on a public street, I would still need to figure out whether a permit is needed and how to get one.

Conclusion

After writing all of this out, I feel the story is beginning to take shape, but there are still a few key decisions to be made in order to get to a more precise and well-formulated plan that could, in the end, theoretically be pitched to potential stakeholders. Within the next few blog posts I want to narrow down this idea and get it to where it needs to be to move on.

Literature