Beyond the Lens: The Next Frontier of Augmented Reality in Marketing

As audiences grow tired of conventional online ads, a fundamental shift is underway. AR is no longer just a source of playful effects; it now supports marketing built on presence, relevance and personal connection. The future of AR marketing is about changing how brands interact with the physical world, not merely layering digital elements on top of it.

Here, I discuss the most important developments ahead in AR marketing: ambient AR, emotionally responsive campaigns and real-time personalisation.

1. Ambient AR: Marketing That Works in the Background

Ambient AR refers to information or objects that appear in the physical environment around someone, depending on their location, the time of day or even the mood they display. With ambient AR, you receive information or experiences without having to aim your device at anything or open the camera.

Imagine walking into a store and your AR glasses highlighting the items most relevant to you, based on your past and present interests. Or a public sculpture that tells a brand's story as you walk past. Marr (2019) argues that the next big shift in marketing will come from technology that quietly adds convenience without demanding attention.

The expectation is that such campaigns will blend into our lives, helping us rather than interrupting us.

2. Emotionally Responsive AR: Marketing That Feels

Today's technology gives marketers more ways to sense how customers are feeling and to respond accordingly.

Marketing is being transformed by the combination of AR and affective computing, which can gauge emotion from facial expressions and other signals. With AR, brands can react instantly to what a user is feeling.

A fashion retailer can use AR mirrors to gauge whether a customer is frustrated or happy and react appropriately. If a person seems tense, a skincare brand might guide them through calming visualisations. Because the brand responds in the moment, marketing shifts from promotion to empathy.
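
A minimal sketch of how such an emotion-to-content mapping could be wired up, assuming a hypothetical `detect_emotion` service and made-up emotion labels and responses (this is an illustration, not any vendor's actual API):

```python
# Illustrative only: route a detected emotion to an AR experience.
# `detect_emotion` stands in for whatever affective-computing model or
# service a brand would actually use; labels and responses are invented.

AR_RESPONSES = {
    "stressed": "launch_calming_visualisation",
    "happy": "show_new_arrivals_carousel",
    "neutral": "show_default_brand_scene",
}

def detect_emotion(camera_frame) -> str:
    """Placeholder: a real system would run a facial-expression model here."""
    raise NotImplementedError

def choose_ar_experience(camera_frame) -> str:
    emotion = detect_emotion(camera_frame)
    # Fall back to the neutral scene when the detected emotion is unknown.
    return AR_RESPONSES.get(emotion, AR_RESPONSES["neutral"])
```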

Kotler, Kartajaya and Setiawan (2021) argue that the next phase of marketing calls for machines and humans to work together. Emotion-aware AR is built to do exactly that.

3. AR Spaces That Last: The Rise of Digital Twins

Businesses are beginning to design AR spaces that mirror real places and keep updating alongside the physical world. Platforms from Niantic and Snap let companies set up persistent, multi-user AR areas where customers interact with products and other content over time.

Imagine a sneaker brand whose virtual flagship store you enter through AR, where you join drop events, meet other fans in real time and share your brand avatar. These are not one-off campaigns; they are built to last.

Craig (2013) anticipated this, noting that AR will increasingly give us persistent environments that stay meaningful over time.

4. Hyper-Personalisation Through AI and Spatial Data

AI allows brands to analyse AR data and personalise their offerings for each individual. Based on someone's interactions and tastes, a tourism company could build an AR walking tour in real time. A fitness brand could suggest suitable exercises based on your schedule the moment you enter a gym (something that already exists as an app).
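
A toy sketch of the underlying idea: rank offers by how often a user has engaged with each category in their AR session data. The field names and sample data are hypothetical.

```python
# Hypothetical AR session log: categories a user lingered on or tapped.
from collections import Counter

interactions = ["sneakers", "sneakers", "jackets", "sneakers", "watches"]

catalogue = [
    {"name": "Trail Runner X", "category": "sneakers"},
    {"name": "City Parka", "category": "jackets"},
    {"name": "Chrono S", "category": "watches"},
]

def personalise_offers(interactions, catalogue):
    """Order catalogue items by how much attention their category received."""
    weights = Counter(interactions)
    return sorted(catalogue, key=lambda item: weights[item["category"]], reverse=True)

print(personalise_offers(interactions, catalogue))  # sneakers come first
```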

Pires and Stanton (2015) argue that real-time flexibility is crucial in today's marketing, and AR delivers it with precision.

5. Sustainability Storytelling Through AR

Many consumers expect brands to be more transparent and environmentally responsible. AR can deliver that information by visualising the story behind a product directly on its packaging or in stores.

For example, Rothy's has introduced AR experiences that explain how used plastic bottles are turned into shoes. When customers scan the product, the sustainability claims become immediately visible.

Deloitte (2024) highlights that AR creates transparency, turning routine CSR efforts into experiences that actively engage consumers.

6. WebAR and 5G Are Removing the Barriers

For years, AR marketing was held back by the need to download an app and by limited bandwidth. WebAR makes AR accessible directly in the browser, and with the rollout of 5G those barriers are largely gone.

Brands can now launch interactive campaigns with nothing more than a link or QR code. The low barrier to entry means more users, longer engagement and wider reach. When Starbucks switched to WebAR for its seasonal offers, it reported a 62% increase in customer engagement with the campaigns (Starbucks, 2024).
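
The entry point really can be that small: as a sketch, a campaign link can be turned into a printable QR code with a few lines of Python, assuming the third-party `qrcode` package (with Pillow) is installed; the URL is a placeholder.

```python
# pip install "qrcode[pil]"
import qrcode

# Placeholder URL pointing at a browser-based WebAR experience.
CAMPAIGN_URL = "https://example.com/webar/seasonal-campaign"

img = qrcode.make(CAMPAIGN_URL)   # build the QR code image
img.save("campaign_qr.png")       # print on packaging, posters, cups, etc.
```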

In Conclusion: From Something New to Something Essential

What is most exciting about AR in marketing is its potential rather than its current state. AR is no longer just a clever tool or a passing trend; it is on track to become a primary channel of brand communication. AR marketing will shift from advertising towards experiences so seamless that people barely notice them.

The brands that succeed in being less visible, yet always present, will shape the future of the market.

Reference List (Harvard Style)

Craig, A.B. (2013) Understanding Augmented Reality: Concepts and Applications. Waltham, MA: Morgan Kaufmann.

Deloitte. (2024) Augmented Reality: The new front line of digital marketing. Available at: https://www2.deloitte.com

Kotler, P., Kartajaya, H. and Setiawan, I. (2021) Marketing 5.0: Technology for Humanity. Hoboken, NJ: Wiley.

Marr, B. (2019) Tech Trends in Practice: The 25 Technologies That Are Driving the 4th Industrial Revolution. Hoboken, NJ: Wiley.

Pires, G.D. and Stanton, J. (2015) Interactive and Dynamic Marketing. London: Routledge.

Scholz, J. and Smith, A.N. (2022) Immersive Marketing: How Technology is Shaping the Future of Customer Experience. London: Routledge.

Starbucks. (2024) Seasonal AR campaigns: A case study. Available at: https://stories.starbucks.com


Grammar and formatting support provided by ChatGPT.

Personalized Shopping with AR

AR and Personalized Fashion Recommendations

For me, shopping online has always been easier than shopping in stores. Unfortunately, even now the classic approach relies on generic size charts and broad style categories that often fail to meet the specific needs of each customer. This is where AR comes in: it allows brands to offer more relevant fashion suggestions by taking into account a customer's body type, favorite pieces and previous orders. AR lets customers use their phone cameras or in-store screens to see, in near real time, how clothes will look on them.

The Zeekit app

The Zeekit app (acquired by Walmart) works with the shopper's body metrics. Combined with machine learning algorithms, these technologies let brands suggest styles that fit a customer's body shape much more effectively, creating a genuinely personalized experience. AR can also take colors (and matching patterns), the time of year and other customer preferences into account, so recommendations go beyond sizing to cover style and current market trends.

Moreover, large amounts of data such as social media habits and personal shopping histories can be analyzed to inform styling. Customers no longer simply pick from whatever happens to be on a rack or a website; they choose from a selection shaped by their individual characteristics. This level of customization can reduce returns and dissatisfaction while strengthening customer loyalty and brand adoption.
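
To make the idea concrete, here is a deliberately simple sketch of combining a body measurement with learned color preferences; the field names, sizing rule and data are assumptions for illustration, not Zeekit's or Walmart's actual logic.

```python
# Illustrative recommendation: filter by measured fit, then prefer colors
# the shopper has bought or liked before. All values are made up.

def recommend(items, bust_cm, preferred_colors):
    def fits(item):
        low, high = item["bust_range_cm"]
        return low <= bust_cm <= high

    sized = [item for item in items if fits(item)]
    return sorted(sized, key=lambda item: item["color"] in preferred_colors, reverse=True)

items = [
    {"name": "Wrap Dress", "bust_range_cm": (86, 94), "color": "green"},
    {"name": "Denim Jacket", "bust_range_cm": (90, 100), "color": "blue"},
]

print(recommend(items, bust_cm=92, preferred_colors={"blue"}))
```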

Custom-Tailored Virtual Outfits: The Role of AI and AR

Among all the features AR and AI bring to fashion, the most striking is the personalized virtual try-on. AI can learn individual style preferences from different sites (outlet sites, Yoox and others), from past searches and from what people share on social media. Combined with AR, this creates a seamless experience for fashion consumers.

For example, Amazon and H&M offer AR features in their smartphone apps that let customers put outfits together and try them on virtually. In the background, AI algorithms make recommendations based on a customer's past selections, which is still a relatively new technique in clothing retail. With these data points, AI delivers not only fashion relevance but a far more personal experience (Hoffman and Zhao, 2022, p. 56).

AR also increases diversity within fashion and extends the industry's reach. Clothing can be suggested to any consumer based on their shape, size or physical abilities, with AI algorithms taking each individual's body into account so the suggestions feel right to them. This supports the broader argument that technology is closing gaps in the fashion world and making truly individual shopping a reality worldwide.

The Future of the Hyper-Personalized Shopping Experience

AR and AI technologies are not standing still, so the next stage of personalized shopping may not be far away.

AI can anticipate suitable clothing based on the week's weather forecast, upcoming events on your calendar or even shifts in social media trends.

AI weather fashion combines data from weather apps and APIs, behavioral analytics and recommendation engines to offer real-time clothing suggestions. When these systems detect a cold front in your area, for example, they can highlight cozy knits, boots or scarves in your app or feed.
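
Stripped of the machine learning, the core of such a system is a mapping from forecast conditions to clothing categories. A toy rule-based sketch, with invented thresholds and a hard-coded forecast in place of a real weather API call:

```python
# Toy "weather fashion" rule set; a real system would fetch the forecast
# from a weather API and learn the thresholds instead of hard-coding them.

def suggest_clothing(temp_c: float, raining: bool) -> list[str]:
    if temp_c < 5:
        suggestions = ["cozy knits", "boots", "scarf"]
    elif temp_c < 15:
        suggestions = ["light jacket", "jeans"]
    else:
        suggestions = ["t-shirt", "sneakers"]
    if raining:
        suggestions.append("raincoat")
    return suggestions

print(suggest_clothing(temp_c=3, raining=True))
# ['cozy knits', 'boots', 'scarf', 'raincoat']
```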

Glance can already act as that kind of adviser. It uses contextual knowledge and artificial intelligence (AI) to provide customized outfit suggestions based on location, preferences and the current weather.

Furthermore, as more appealing wearable AR devices reach the market, the boundary between digital and physical shopping starts to dissolve. AR on wearables such as glasses lets customers engage with clothing in new ways: they could 'place' entire wardrobes with accessories in a store while browsing the assortment, or shop from their living rooms while still getting a sense of how the garments look and feel (Johnson, 2024). This blend of physical and digital experiences, known as 'phygital', is expected to reinvent consumer-brand touchpoints.

Hyper-personalization will also affect the sustainability of shopping. More precise AI- and AR-driven recommendations reduce returns, which are a major source of waste. Virtual showrooms also reduce the need to hold extra stock, allowing production to track consumer demand more closely (Evans, 2023).

Conclusion

The combination of AI and AR in fashion is already here. Whatever their body type or style, people can now get recommendations on what is fashionable for them this season and where to find it. Fashion and technology have merged, and that is changing the retail landscape.

References

Walmart, 2022. Walmart launches Zeekit virtual fitting room technology. Available at: https://corporate.walmart.com/news/2022/03/02/walmart-launches-zeekit-virtual-fitting-room-technology

Evans, L., 2023. Sustainable Fashion and Technology: The Role of AR and AI in Reducing Waste. Fashion Technology Review, 14(2), pp.30-41.

Hoffman, J. and Zhao, Y., 2022. Virtual Try-Ons: How AI and AR are Revolutionizing the Shopping Experience. Retail Science Quarterly, 9(4), pp.67-82.

Huang, W. and Liao, Z., 2023. Personalized Fashion with AR: Body Types and Style Preferences in the Digital Age. International Journal of Fashion Studies, 7(3), pp.112-126.

Johnson, M., 2024. The Phygital Future: How Wearable AR Will Transform Shopping. Retail Tech Today, 15(1), pp.54-63.

Glance, n.d. AI weather fashion shopping. Available at: https://glance.com/us/blogs/glanceai/ai-shopping/ai-weather-fashion-shopping

Photogrammetry in Polycam

To obtain a more precise 3D model of the church and to create a reliable reference for the work in Cinema 4D – in particular for better judging depth relationships and spatial dimensions – the church was photographed from around 50 different perspectives. These shots covered as many angles as possible to ensure a comprehensive picture of the object.

The photos were then uploaded to Polycam and processed using the Photogrammetry mode. The resulting 3D model – in GLB format – delivered surprisingly good results and reproduced the structure of the church in remarkable detail.
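
Before taking the GLB into Cinema 4D, it can help to sanity-check the export programmatically, for example to see how heavy the photogrammetry mesh is prior to remeshing. Below is a small sketch using the third-party `trimesh` library; the library choice and the file name are my own assumptions, not part of the Polycam workflow.

```python
# pip install trimesh
import trimesh

loaded = trimesh.load("church.glb")  # placeholder name for the Polycam export

# GLB files usually load as a Scene containing one or more meshes.
meshes = loaded.geometry.values() if isinstance(loaded, trimesh.Scene) else [loaded]

for mesh in meshes:
    print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
```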

In the next step, the model was refined in Cinema 4D. Using the Remesh function, the geometry was optimized and converted into an even, clean topology. The resulting model now serves as a precise digital replica of the original wooden model and allows the previously created facade projection to be matched exactly in Cinema 4D.

A further benefit: a slight perspective distortion that had been noticeable in the earlier mapping attempt could now be corrected on the basis of the new model.

Test render of the GLB from Polycam


Disclaimer on the use of artificial intelligence (AI):

This blog post was created with the assistance of artificial intelligence (ChatGPT). The AI was used for research, text correction, inspiration and/or suggestions for improvement. All content was subsequently evaluated, revised and integrated into the post presented here independently.

First Prototype: Replacing Real Footage with AI – Can You Tell the Difference?

Starting Point

For the first prototype of my semester project, I am using an existing video that was cut together from various drone flights over the past few years. The clip is deliberately fast-paced and combines different landscape shots in quick succession. The goal of the experiment is to replace part of this real footage with AI-generated images and then check whether the difference is immediately noticeable to the viewer.
One important constraint is that only landscape shots are used. Footage with people is deliberately avoided, as this considerably simplifies both the generation process and the later evaluation of the results.

The aim of this prototype is to probe the boundary between real drone footage and AI-generated footage. To do this, I replaced some of the drone scenes in the original video with sequences created by HailuoAI and Sora.

Prototype

In the first step, I deliberately swapped out individual drone shots for the generated B-roll clips, paying particular attention to comparability.
The analysis focuses on examining how clearly the AI images differ from the real footage, including in terms of viewers' subjective perception.

To check this, the next step will be a small survey. I will show selected excerpts from the video and ask participants to indicate which scenes they consider real and which they consider AI-generated.

And here is the current video with the AI parts

Question to ask yourself: can you clearly spot the AI parts?

Here is an excerpt from the original video:

Approach

At the beginning of my experiment, I wanted to use a prompt to create a video of a beautiful sunset. The first input was:
"Drone flight, sunset, above the clouds, beautiful and cinematic lighting, slight ascent."

The result, however, only partially matched my expectations. A sunset was generated, but in most clips the drone itself was visible in the frame, which contradicted the intended aesthetic.

To fix the problem, I adjusted the prompt and added the instruction that the drone should not be visible:
"Drone flight (drone not in frame), sunset, above the clouds, beautiful and cinematic lighting, slight ascent of the drone in the frame."

Despite this more precise wording, the result still fell short of expectations. The drone continued to appear in the generated videos, in fact very prominently in the frame.

A third attempt followed with a slightly simplified wording:
"Drone flight (drone not in frame), sunset, above the clouds, beautiful and cinematic lighting."

But this attempt did not produce the desired result either. The AI did not interpret the instructions consistently, so image elements kept appearing that did not match the idea of a clean, "drone-free" flight across the sky.

After several unsuccessful prompt variants, I opted for an alternative approach: instead of working with text instructions alone, I uploaded my own source image. For this, I chose the first frame of a suitable drone video in each case.

HailuoAI offers the option of generating a short clip based on an uploaded image. In addition, you can add instructions for the desired camera movement. I used this feature deliberately to recreate the dynamics of the original shots, for example a gentle ascent or a slight pan, to reinforce the impression of a real drone flight.

Overall, this method worked considerably better than pure prompt input. The results felt more coherent and came closer to the original vision.
Of course, there were still minor errors and inconsistencies that could not be avoided entirely. A "best of" of the failed attempts.

Comparison: Sora by OpenAI and HailuoAI

Initially, I planned to create the AI-generated B-roll with Sora by OpenAI. With its text-to-video technology, Sora promised high-quality results and at first seemed a promising choice. In practical use, however, several difficulties emerged. During the generation attempts, error messages repeatedly appeared that interrupted the process or prevented it entirely. There were also very long waiting times, and the platform often gave no clear indication of how long generation was likely to take.
These recurring problems eventually led me to look more seriously at alternatives.

After extensive research (more on this in the 4th blog post), I decided to try HailuoAI. A decisive advantage of HailuoAI was its flexible pricing model. Users receive 1,100 credits when they create a free account, and generating a video costs 30 credits.

What’s the Best Way to Storyboard a Commercial?

Storyboarding is the magic that makes a commercial actually happen. Before spending real money on a camera crew, actors, props, and locations, you need a solid plan. A storyboard lays out the commercial shot by shot so everyone knows what's supposed to happen before the first person sets foot on set. When it comes to putting a storyboard together, there are a few main ways to do it: sketching, previs, or searching for similar frames online. Picking the right method can seriously change how smoothly your whole project runs.

Sketching is probably the most classic way to storyboard. It’s quick, cheap, and all you need is a pen and paper. Especially early on, sketching is super helpful because you can brainstorm different ideas without overthinking it. You can map out tons of different options for a scene without getting stuck on the details.

But sketching isn't always the most accurate way to show your ideas, especially if you're like me and aren't super confident in your drawing skills. If the sketches are too rough or messy, there's definitely a risk that other people won't really get what you're trying to say. But honestly, that's kind of fine when you're just getting started. Sketching keeps everything loose and flexible, which is exactly what you need at the beginning. I still hate it though.

Previs has gotten way easier lately, especially for commercial work. You don’t need expensive software anymore — just grab your phone and shoot rough videos or stills. Shooting previs on your phone lets you block out real scenes with real people and props, which gives you a much better sense of how timing, movement, and camera angles will actually feel. Plus, making quick edits from your phone clips can show you problems with pacing or weird transitions before you even get to the set.

It’s honestly the fastest way to figure out if your idea is going to work once you actually start shooting. The only downside is that most of the time, you do have to leave the house. If you’re still collecting ideas or trying to figure out the rough storyline, it’s probably smarter to stick to sketching at first. Even if you can’t draw well, you know what your own sketches mean — and when it’s time to show someone else your vision, you can shoot a rough previs or, if you’re feeling lazy and don’t want to go outside, just search the web for frames.

Searching for similar frames is another solid option, especially when you’re pitching your idea. You can pull images from movies, ads, or photography and build a quick mood board that shows the vibe, style, and energy you’re going for. Actually the last spec ad we shot was 90% planned just by pulling frames from Pinterest and Frameset. It worked perfectly. Clients especially love this because they can instantly see what you’re aiming for, without you needing to explain it for half an hour.

The only real downside to this method is that if your idea is super original, it might take forever to find the right frames. You can easily spend hours searching and still not find something that matches exactly. Plus, this method doesn’t solve how the shots connect or flow together — it’s more about the look, not the structure — so you’ll still need a real storyboard or previs later if you want a full plan.

On real projects, the best storyboards usually end up being a mix of all three techniques. Sketch first to throw down ideas fast. Gather reference frames to lock in the style and mood. Then shoot quick previs videos to make sure the scenes actually work. Especially in commercial work, where budgets are tight and timelines are even tighter, using all three methods together can save you a ton of stress, money, and last-minute disasters.

At the end of the day, the best storyboard is the one that makes your idea clear — whether you sketch it badly on paper, film it on your phone, or build a vibe board from random internet screenshots. Whatever gets your team (and your client) on the same page is the way to go.

Does Less Pre-Production Open Doors for Creativity?

I recently thought a lot about an experience from our last spec ad shoot. We didn’t do a lot of traditional pre-production. We mainly searched for some cool shots and visuals we liked but skipped detailed storyboarding. During the two shooting days, many ideas just came up on the spot. This made me wonder: does doing less pre-production open doors for more creativity?

Obviously, pre-production is a super important part of filmmaking because it helps avoid problems and makes sure everything runs smoothly. But too much planning can sometimes kill creativity. People tend to be more creative when they have the freedom to explore and take risks.

In our case, the loose structure helped a lot. We were flexible and open-minded, and new ideas just kept coming. Creativity often happens “in the moment,” especially when people are improvising together. Being able to adjust and try new things without being tied to a strict plan made a big difference.

Psychology research suggests that people who are given fewer rules during a creative task often come up with more original ideas. So having just a rough plan for a film shoot might actually help new, better ideas happen on set.

A lot of the shots above just "happened" during our shoot and still tell our initial story, but none of them were planned.

Of course, skipping pre-production completely can be dangerous, especially in commercial filmmaking where time and money are tight. So it’s about finding the right balance. Creativity tends to peak when there is enough structure to give clear goals but also enough freedom to experiment. In film, this means having a general idea of what you want but staying flexible.

Thinking back to our spec ad, the best shots came from moments we hadn’t planned. Maybe it was a sudden change in light or a spontaneous move by the talents. Random, lucky moments like these can really boost creativity — if you’re open to them.

Still, it wouldn’t have worked without pre-production. It gave us a direction, helped with logistics, and got everyone on the same page. But it didn’t have to be super detailed. Plans should be flexible and able to change quickly, especially in fast-moving environments like film sets.

From my still limited experience as a director, a "light" version of pre-production has two big advantages: it lets everyone on set bring in fresh ideas, and it helps the project adjust to new opportunities. But for this to work, you need to trust your crew and be ready to let go of some control. That's sometimes really hard for me, but giving people space and trusting them is key for creative teamwork.

In the end, doing less pre-production doesn't mean being unprepared. It can actually be a smart move to leave space for real creativity to happen. It completely depends on the project: are there a lot of locations? How many shooting days are there? How big is the crew? These are all questions you need to ask yourself before deciding to work with a smaller pre-production plan. The bigger the crew and the more locations, the harder it will be to go without a detailed storyboard. But still, our spec ad showed me that letting things evolve naturally on set can lead to surprising results. Finding the balance between preparation and flexibility, when you get it right, seems to be a secret weapon for creative success in commercial filmmaking.

Camera Match by Ethan Ou

Following the preparation of both the source and target ColorChecker datasets, the subsequent step involves generating a color transform through mathematical alignment. For this purpose, the tool Camera Match developed by Ethan Ou provides an effective and streamlined solution. This Python-based application enables the creation of LUTs by computationally matching the color responses of the source dataset (e.g., digital camera footage such as ARRI Alexa imagery) to a target dataset (e.g., film scans or alternative camera profiles).

Camera Match is accessible both as a downloadable script and via a browser-based interface (Camera Match GitHub Repository). The basic workflow for LUT generation using the browser interface is as follows:

  1. Initialization:
    Execute the script by pressing the “Play” button, which installs all necessary Python libraries automatically.
  2. Source Data Upload:
    Load the source dataset (e.g., Alexa ColorChecker measurements) into the interface.
  3. Target Data Upload:
    Upload the corresponding target dataset representing the desired film look or alternative camera profile.
  4. LUT Generation:
    Initiate the LUT creation process by selecting the Radial Basis Function (RBF) algorithm as the matching function. The RBF method provides smooth and continuous color transitions, making it suitable for high-fidelity color transformations (a conceptual sketch of this step follows the list below).
  5. LUT Export:
    Save the generated LUT to a local directory for further use.
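
To make the RBF step more tangible, here is a purely conceptual sketch using SciPy's RBFInterpolator rather than Camera Match's own API (which I am deliberately not reproducing here); the patch arrays, smoothing value, LUT size and file name are placeholder assumptions.

```python
# Conceptual illustration of RBF-based matching and .cube export.
# source_patches / target_patches stand for the two ColorChecker datasets
# (N x 3 arrays of RGB values normalised to 0-1); random data is used here.

import numpy as np
from scipy.interpolate import RBFInterpolator

source_patches = np.random.rand(24, 3)   # placeholder: digital-camera chart samples
target_patches = np.random.rand(24, 3)   # placeholder: film-scan chart samples

rbf = RBFInterpolator(source_patches, target_patches, smoothing=1e-3)

# Sample the fitted transform on a regular RGB grid and write a basic .cube LUT.
SIZE = 33
grid = np.linspace(0.0, 1.0, SIZE)
points = np.array([[r, g, b] for b in grid for g in grid for r in grid])  # red varies fastest
mapped = np.clip(rbf(points), 0.0, 1.0)

with open("match.cube", "w") as f:
    f.write(f"LUT_3D_SIZE {SIZE}\n")
    for r, g, b in mapped:
        f.write(f"{r:.6f} {g:.6f} {b:.6f}\n")
```

Camera Match bundles this kind of fitting, sampling and export into a single step, which is why the browser interface only needs the two datasets and an algorithm choice.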

Once created, the LUT can be implemented in post-production applications such as DaVinci Resolve, Lattice, or any system capable of ingesting standard LUT formats. The process is highly efficient, offering a rapid turnaround from dataset preparation to deployable LUT creation.

While this approach enables the user to quickly produce functional LUTs, it is important to acknowledge that the quality of the input datasets—particularly the preparation of the ColorChecker charts—significantly influences the final result. In subsequent discussions, we will explore more advanced methodologies for chart preparation, focusing on best practices for achieving scene-referred workflows compatible with color-managed environments such as DaVinci Wide Gamut Intermediate and ACES.

Although this preparation phase remains time-consuming, it is a critical component for those seeking the highest levels of color accuracy and transform reliability.

Reference:

DeMystify Colorgrading. (n.d.). Film Profile Journey: 20 – More Automation, Less Tedious Manual Work. Retrieved April 28, 2025, from https://www.demystify-color.com/post/film-profile-journey-20-more-automation-less-tedious-manual-work

Creating a LUT from a Film Profile

Developing a LUT tailored specifically to the needs of a project may initially seem complex, but the process is more straightforward than it appears. In essence, one builds a specific look within DaVinci Resolve and subsequently renders this look into a LUT file. The technical steps for generating and exporting the LUT will be discussed in detail later in this series of posts, as they are relatively direct once the foundational elements are established.

In order to approach the creation of a true Show LUT, we must move beyond subjective grading and work systematically by profiling real analog film stocks. Specifically, we will be extracting data from ColorChecker charts photographed on film and generating a modified version aligned with our own creative preferences.

It is important to note that film profiling itself is an expansive discipline, comprising numerous methodologies and technical variations. A full exploration of these methods would require not merely additional blog entries, but likely an entire master’s thesis in its own right. To streamline the process for practical application, this discussion will focus exclusively on the automated film profiling workflow presented by Nico Fink in his Film Profiling Course.

Extraction of ColorChecker Values

The first essential step is the acquisition of RGB data from both the reference charts and the film-exposed charts. To facilitate this process efficiently, we use the command-line tool “Get ColorChecker Values”.

This tool automates what would otherwise be an extremely laborious manual task: sampling and recording the 24 patches of a ColorChecker chart across multiple exposures. Rather than hovering over each patch, sampling color values individually, and manually entering data, the tool extracts and compiles the colorimetric information automatically into a structured CSV file.
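
To illustrate what the tool automates, here is a rough sketch of grid-based patch sampling with OpenCV; this is not the actual "Get ColorChecker Values" script, and it assumes the chart image has already been straightened and cropped tightly to the 6 x 4 patch grid (as described in the next section). File names are placeholders.

```python
# Illustrative patch sampler: average the centre of each of the 24 patches
# of a straightened, tightly cropped ColorChecker image and write a CSV.

import csv
import cv2          # pip install opencv-python
import numpy as np

COLS, ROWS = 6, 4   # classic 24-patch ColorChecker layout

def sample_chart(path):
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    img = img.astype(np.float32) / 255.0          # assumes an 8-bit image
    h, w, _ = img.shape
    ph, pw = h // ROWS, w // COLS
    values = []
    for row in range(ROWS):
        for col in range(COLS):
            # Sample the central 50% of each patch to avoid patch borders.
            y0, x0 = row * ph + ph // 4, col * pw + pw // 4
            patch = img[y0:y0 + ph // 2, x0:x0 + pw // 2]
            values.append(patch.reshape(-1, 3).mean(axis=0))
    return values

with open("chart_values.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["patch", "R", "G", "B"])
    for i, (r, g, b) in enumerate(sample_chart("alexa_chart_01.tif"), start=1):
        writer.writerow([i, f"{r:.5f}", f"{g:.5f}", f"{b:.5f}"])
```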

The script and relevant resources can be found here:

Dataset Preparation

Prior to running the script, the film scans and the reference (digital) captures must be organized carefully:

  • Create a new directory and place the respective color chart images inside.
  • Ensure that each corresponding film and Alexa (digital reference) image shares the exact same filename and appears in the same sequence within the directory. This is crucial for proper alignment of datasets.
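
Because the matching-filename requirement is easy to get wrong, it is worth verifying the two directories before anything else runs. The small check below is my own addition rather than part of the original workflow, and the paths are placeholders.

```python
# Sanity check: both dataset directories must contain the same filenames.
import os

film_dir = "charts/film"      # placeholder paths
alexa_dir = "charts/alexa"

film_files = sorted(os.listdir(film_dir))
alexa_files = sorted(os.listdir(alexa_dir))

if film_files != alexa_files:
    mismatched = sorted(set(film_files) ^ set(alexa_files))
    raise SystemExit(f"Datasets are misaligned, check these files: {mismatched}")

print(f"{len(film_files)} image pairs aligned correctly.")
```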

Before exporting the images to the directory, additional preprocessing in Adobe Photoshop is required:

  • Straighten the charts to correct any perspective distortion.
  • Crop the charts tightly around the patches.
  • Apply a slight Gaussian blur to reduce fine-grain noise or scanning artifacts.

The resulting images should resemble standardized color chart references suitable for automated analysis.
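
If you would rather script the blur step than run it in Photoshop, a batch version with Pillow might look like the sketch below. This is an assumed alternative rather than part of the original workflow; straightening and cropping still happen beforehand, and the directory names and radius are placeholders.

```python
# Batch Gaussian blur with Pillow (pip install Pillow).
from pathlib import Path
from PIL import Image, ImageFilter

SRC = Path("charts/cropped")   # straightened, cropped charts go in here
DST = Path("charts/blurred")
DST.mkdir(parents=True, exist_ok=True)

for path in sorted(SRC.glob("*.tif")):
    img = Image.open(path)
    # A small radius suppresses grain and scanning artifacts without
    # softening the patch edges too much.
    img.filter(ImageFilter.GaussianBlur(radius=2)).save(DST / path.name)
```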

Running the Script

After preparing the datasets, the next step involves executing the script through the Terminal application on the computer:

  1. Open Terminal.
  2. Navigate to the directory containing your charts using the ls (list directories) and cd (change directory) commands.
    Example:
  3. Execute the script using the appropriate command-line syntax. You will specify the input images and define an output name for the resulting CSV file. (Detailed command examples will be provided later in this series.)

Upon successful execution, the script generates a comprehensive sheet of numerical RGB values for each chart. This replaces what would otherwise be a time-consuming manual process.

It is essential to repeat this process separately for both your film-based charts and your digital reference charts, thereby creating two distinct datasets. These datasets form the empirical basis for the subsequent LUT construction, wherein the desired film look will be derived and mathematically mapped.

Summary

By automating the extraction of ColorChecker values, we establish a foundation of objective data that can be used to model the film response curve. This not only accelerates the LUT creation workflow but also enhances accuracy and repeatability, which are critical for professional color pipelines.

The next phase will involve analyzing and comparing these datasets in order to generate a transform that authentically replicates the desired film look while allowing for tailored creative adjustments.

Reference

DeMystify Colorgrading. (n.d.). Film Profile Journey: 20 – More Automation, Less Tedious Manual Work. Retrieved April 28, 2025, from https://www.demystify-color.com/post/film-profile-journey-20-more-automation-less-tedious-manual-work

Importance of Diverse Source Footage

In the process of designing a show LUT, one cannot rely solely on a narrow selection of material. The end goal is to build a transform that behaves consistently, no matter the variations in camera sensor, lighting condition, or scene composition. A LUT tested on limited footage is unlikely to generalize well across the complexities of an actual production environment.

Footage must therefore be drawn from a wide array of scenarios: sunlit exteriors, dim interiors, high-contrast night scenes, and environments illuminated by mixed light sources. Moreover, it is necessary to incorporate material from multiple camera manufacturers—each bringing its own interpretation of color science and sensor response to the equation. Without such diversity, the LUT may perform well under certain conditions but break down unpredictably when the variables change.

This is not merely a technical requirement; it is a philosophical one. A LUT must serve the story without introducing artifacts that pull the viewer out of the experience. As Poynton (2012) points out, wide testing ensures that color transforms survive the real-world unpredictability that defines filmmaking.

The Role of Controlled References

Including color charts within the test material is equally critical. These references, such as the X-Rite ColorChecker or similar calibration tools, provide fixed targets against which LUT behavior can be measured. They offer a set of known quantities—neutral grays, primary colors—that allow the colorist to observe exactly how the LUT manipulates standard values.

This step moves the process from subjective taste toward empirical validation. Without color charts, evaluations become reliant on intuition alone, which may fail to detect subtle but cumulative errors over the course of a feature-length project.
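
As a toy illustration of how chart references enable this kind of empirical check, the snippet below flags color casts on the neutral patches after the LUT has been applied; the sampled values and the 0.02 tolerance are invented for the example.

```python
import numpy as np

# RGB of the six neutral ColorChecker patches, sampled from the LUT output
# (placeholder values for illustration).
neutral_after_lut = np.array([
    [0.92, 0.91, 0.93],
    [0.78, 0.78, 0.79],
    [0.62, 0.63, 0.62],
    [0.48, 0.47, 0.49],
    [0.33, 0.33, 0.34],
    [0.19, 0.18, 0.20],
])

# On a neutral patch, R, G and B should remain (nearly) equal.
spread = neutral_after_lut.max(axis=1) - neutral_after_lut.min(axis=1)
for i, s in enumerate(spread, start=1):
    verdict = "OK" if s < 0.02 else "possible color cast"
    print(f"neutral patch {i}: max channel spread {s:.3f} -> {verdict}")
```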

The ICC (2022) highlights that such controlled references are essential to maintaining fidelity not just within a shot but across the complex interrelation of shots, scenes, and acts. When the same reference yields different results across multiple lighting conditions or cameras, one can be confident that the problem lies within the transform, not within the footage.

Necessity of a Neutral Evaluation Pipeline

A show LUT can only be meaningfully assessed if all other variables are controlled. This principle demands that the footage be stripped of in-camera looks, hidden LUTs, and uncontrolled image processing prior to evaluation. Only through this neutralization can the true effect of the show LUT be isolated.

Otherwise, as Arney (2021) warns, observed issues may stem not from the LUT but from a polluted pipeline, leading to incorrect conclusions and wasted revision cycles. It becomes impossible to know whether a magenta shift, for instance, is caused by the LUT itself or an unnoticed RAW processing setting.

Neutralization, therefore, is a prerequisite—not a preference. It guarantees that the feedback loop between observation and correction is valid, allowing genuine issues to be identified and addressed with confidence.

Designing Test Structures with Purpose

Simply gathering footage is not enough; it must be organized and sequenced thoughtfully. Test timelines must be constructed to reveal different failure modes: rapid shifts from bright exteriors to dim interiors, extreme saturation to near-monochrome, natural daylight to heavy artificial lighting. Each transition becomes a test of the LUT's resilience.

Furthermore, footage should be juxtaposed to maximize stress on the transform. For instance, placing a RED clip next to an ARRI clip, or alternating between footage with and without deep shadows, forces the LUT to reveal its behavior under changing conditions.
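
Such a stress-test order can be generated rather than assembled by hand. The sketch below simply interleaves clips from two hypothetical camera and lighting groups so that every cut crosses a hard transition; the clip names are made up.

```python
# Interleave clips from two groups so each cut is a worst-case transition.
from itertools import chain, zip_longest

arri_clips = ["arri_day_ext", "arri_tungsten_int", "arri_night"]
red_clips = ["red_day_ext", "red_mixed_light", "red_low_key"]

timeline = [clip for clip in chain.from_iterable(zip_longest(arri_clips, red_clips)) if clip]
print(timeline)
# ['arri_day_ext', 'red_day_ext', 'arri_tungsten_int', 'red_mixed_light', ...]
```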

In this way, the colorist is not waiting for issues to arise by accident but actively provoking them. As van Hurkman (2014) suggests, the integrity of a color transform is proven not in ideal conditions but when subjected to extremes.

Conclusion

The creation of a show LUT is ultimately a scientific inquiry wrapped in artistic purpose. By committing to footage diversity, employing objective reference points, maintaining pipeline neutrality, and designing tests that actively seek out failure, the colorist ensures that the final transform will not merely look good on a single shot but will endure the realities of production.

A show LUT, if built properly, becomes invisible—supporting story, mood, and emotion without drawing attention to itself. Achieving this level of reliability requires more than technical skill; it demands a methodological rigor rooted in the understanding that visual storytelling is at once a technical craft and an expressive art.

References

  • Van Hurkman, A. (2014). Color Correction Handbook: Professional Techniques for Video and Cinema. 2nd ed., Peachpit Press.
  • ICC (International Color Consortium). (2022). Introduction to the ICC Profile Format and Color Management Workflows. ICC White Paper Series.
  • Poynton, C. (2012). Digital Video and HD: Algorithms and Interfaces. 2nd ed., Morgan Kaufmann.
  • Arney, D. (2021). Practical Color Management in Film and Video Postproduction. Postproduction Journal, Vol. 19.

AI-Assisted B-Roll Creation: Basics and Tools at a Glance

What is B-roll?

In video production, B-roll refers to all footage used in addition to the main material (A-roll). While the A-roll shows, for example, an interview or a presenter, the B-roll adds supplementary visual impressions: detail shots, landscapes, work processes or illustrative images – essentially anything that visually supports and clarifies what is being said in the A-roll.

B-roll serves several important purposes:

Visual variety: monotonous framings are broken up by more varied imagery.

Narrative support: complex content can be made easier to understand through images and/or illustrations.

Emotional depth: mood and atmosphere can be deliberately reinforced.

Covering mistakes: editing errors, gaps in content, or the usual slips and pauses for thought can be cut out and masked with B-roll.

Especially on social media, YouTube and in marketing videos, B-roll is an important tool for keeping viewers' attention high.

Traditionally, however, B-roll has also meant considerable production effort and/or cost: separate shoots or extra time on set to produce enough material, expensive stock video licenses or laborious archive research were often necessary. AI technologies can now be a practical and affordable alternative.

AI and B-Roll: Why It Makes Sense

With the help of AI, B-roll can be compiled automatically from archives (e.g. via a search query in ChatGPT), generated for specific topics (via image-to-video models) or even created entirely from scratch (e.g. via text-to-video models).

Especially for small teams, content creators or low-budget projects, AI-supported solutions are a way to produce high-quality supplementary material faster and more cheaply, and thus to make better videos. Considering viewers' limited attention spans in particular, well-matched B-roll is hugely important.

(Sources: https://www.yourfilm.com.au/blog/understanding-the-importance-of-b-roll-footage-in-video-production/#:~:text=Think%20of%20b%2Droll%20as,and%20variety%20to%20your%20story.

https://alecfurrier.medium.com/generative-ai-video-generation-technologies-infrastructure-and-future-outlook-ad2e28afae8c

https://filmustage.com/blog/the-future-of-ai-in-video-production-innovations-and-impacts/#:~:text=AI%20in%20video%20editing%20software,inspiring%20stories%20without%20extra%20work. )

Two Tools at a Glance

OpusClip

OpusClip is an AI-powered video editing tool that specializes in automatically turning long-form videos into short, social-media-ready clips. Particularly noteworthy is the platform's ability to identify key statements and visual highlights in the source material on its own and to generate stand-alone short videos from them.

It works through a combination of text analysis and image interpretation. OpusClip analyzes the audio track of the uploaded video, identifies key sentences, strongly emphasized statements or emotionally important moments, and suggests matching cut points. It also takes visual cues such as gestures, facial expressions or changes in the scene into account in order to determine suitable start and end points for the clips.

For B-roll creation, OpusClip is relevant in that it can automatically improve transitions and cutaways. During clip creation, elements such as zooms, automatic image adjustments or text overlays are used to increase the visual dynamics. In newer versions, OpusClip even offers direct integration of short B-roll sequences, such as nature shots or urban scenes, to break up monotonous passages.

The platform is aimed mainly at content creators, marketers and companies that want to quickly prepare video content for platforms such as TikTok, Instagram Reels or YouTube Shorts. A particular advantage is the enormous time saving, since the entire analysis, editing and, in part, B-roll process is automated. OpusClip also supports adapting clips to different formats (16:9, 9:16, 1:1), which matters for multi-platform strategies.

In short: OpusClip is a powerful tool for fast content repurposing. The AI not only helps with shortening and structuring, but can also add visual variety through simple B-roll integration. The focus here is less on high-quality, bespoke B-roll and more on efficiency and immediate publication.

(Source: https://youtu.be/4mCU6HtvoAI?si=-Y60nYEQRMxDnviB
https://youtu.be/tVIFWx6KVzU?si=rLSd0Lrv2NE8OcO8 )

HailuoAI

HailuoAI takes a different approach: the platform specializes in generating short, stand-alone video sequences that work very well as B-roll or visual accents. Users enter topics or keywords (prompts), and the AI then creates matching clips on its own, based on existing stock databases and AI-generated animation.

Unlike classic stock platforms, the material in HailuoAI is adjusted dynamically: color, style, speed and transitions can be varied according to the user's wishes. The user interface deserves special mention: after entering a prompt, you get a clear list of all generated videos, including a tidy preview. Users can rate, save or further process the clips. The prompt that was used is also displayed transparently, which helps with later organization or optimization.

Another advantage is the accessible pricing model. Even with a free account, a large number of generations are available (via a credit system) before a subscription becomes necessary at all. This makes it possible to test the tool's quality extensively without any immediate commitment.
Technically, HailuoAI works mainly with synthetic footage and stock-based elements. The platform is particularly strong at atmospheric B-roll: skies, mountains, seascapes, urban silhouettes or generic nature shots can be produced very quickly and in acceptable quality.

A minor drawback is that cooldown periods kick in after several video generations. Sometimes you have to wait up to 20 minutes before you can create new clips. Nevertheless, the process remains intuitive and user-friendly overall.

In short: HailuoAI is a flexible tool for creating B-roll sequences from topic prompts. Compared with OpusClip, it is less about editing existing material and more about creating new visual content – ideal for atmospheric additions and creative work.

(Source: https://youtu.be/CqWulzM-EMw?si=LCYfXD_AWSKGNbKY https://youtu.be/DuRHup2QxtI?si=7AZkooXp5_gotnLH https://hailuoai.video )