Master's Thesis Topic and Considerations in Detail

While writing my exposé, I took a closer look at the aspects I want to address in my master's thesis, and I would like to share them in a somewhat more detailed version here on the blog.

Working title: Beyond Characters: Alternative Narrative Strategies in Motion Design
Subtitle:
"Character – Why Brand and Explainer Videos Need New Narrative Strategies"

1. Problem (Starting Point, Problem Description)

Character-based storytelling dominates motion design in the industry, especially in brand communication and explainer videos for small and medium-sized companies – even though communication goals are becoming increasingly complex, abstract, or system-based. Many companies nevertheless keep asking for classic explainer films and videos with figures, mascots, and the like. Video content there is usually conceived either as live-action film or as a character-based explainer format. As a result, many clients do not know that brand messages can also be conveyed through a strong motion identity, through motion graphics, typography, or abstract visual systems that work entirely without figures or "mascots".
My assumption is that many medium-sized companies lack knowledge of the various approaches to animated brand communication, and that "character-based storytelling" is often the only thing they can imagine.

This limited perception creates several practical problems: videos that fail to truly reach their audience, dramaturgically weak or predictable narrative structures, generic visual aesthetics (e.g. interchangeable flat 2D or trendy character styles), and a significant reduction in the perceived possibilities of animation in branding. In some cases animation is even ruled out as a medium altogether because characters are perceived as a poor fit for the brand – even though a non-character solution would be ideal.

Whether my assumption holds, and what the actual reasons are for companies relying on character animation so often, will be explored further in this thesis. Habit, a lack of knowledge about alternative motion approaches, or trends could all be contributing factors. A central part of this thesis is therefore to better understand these decision-making patterns, to examine the problems of character-based approaches, and to find out in which cases non-character strategies offer more fitting and more effective alternatives.

2. State of Research

The areas I want to research here are the following:

  • the historical use of characters in advertising & animation
  • psychological explanations (anthropomorphism, identification)
  • marketing mechanisms (brand recall, emotionalisation)
  • critical perspectives (e.g. "mascot fatigue", criticism of generic 3D/flat styles)
  • research on non-character storytelling, motion grammar, form-based narratives

There is extensive research on visual storytelling, perception, and animation in general. Authors from various disciplines examine how visual forms are interpreted, how image sequences create meaning, and what role software and digital media play in contemporary visual communication. Blog posts on individual sources will surely follow over the coming weeks.

Psychological and cognitive studies show that people attribute intentionality and emotion even to simple moving shapes – narrative and emotional impact is therefore by no means limited to figures. Research on branding and motion identity, in turn, examines how movement can function as part of a visual brand identity.

However, there are research gaps that I want to address:

  • A systematic analysis of when character-based narratives are weak or counterproductive in branding and explainer-video contexts.
  • Existing literature rarely compares character-based narration directly with non-character systems such as typographic, form-based, or rhythmic communication.
  • Concepts such as "motion grammar" and form-based storytelling are rarely operationalised for practice and hardly ever tested in a branding context.

This thesis builds on the existing research but places a clear focus on alternative narrative structures in non-character motion design, examining their potential as well as their limits in direct comparison with character-based approaches.

Authors and papers I have found so far, or that I still want to research in more detail:

Arnheim, R. (1974). Art and visual perception: A psychology of the creative eye. University of California Press.

Blazer, L. (2020). Animated storytelling: Simple steps for creating animation and motion graphics (2nd ed.). Peachpit Press. https://permalink.obvsg.at/AC16663659

Kim, J., & Lee, S. (2021). Motion as identity: Exploring dynamic branding in digital media. In Proceedings of the 6th International Conference on Arts, Design and Contemporary Education (ICADCE 2020) (pp. 678–683). Atlantis Press. https://doi.org/10.2991/assehr.k.210106.132

Manovich, L. (2013). Software takes command: Extending the language of new media. Bloomsbury. https://search-fhj.obvsg.at/permalink/f/1a6sh9s/FHJ_alma5114293650004526

Sweet, F. (1999). Frog: Form follows emotion. Thames and Hudson. https://permalink.obvsg.at/AC03426968

Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. The American Journal of Psychology, 57(2), 243–259.

Bloom, P., & Veres, C. (1999). The perceived intentionality of groups. Cognition. https://doi.org/10.1016/S0010-0277(99)00014-1

Further:

  1. Scott McCloud (visual narration)
  2. Donald Norman (emotional design)
  3. Barbara Tversky (cognition / diagrammatic thinking)
  4. Motion design theory (Lupton, Betancourt, etc.)


3. Research Question

Main question:
How can brand and explainer videos in motion design communicate more clearly, more effectively, and in a more brand-specific way using non-character narrative strategies than with classic character-based animation?

Sub-questions:

  • In which historical and practical contexts did character-based animation become dominant?
  • In which use cases does this narrative form show weaknesses today (e.g. redundancy, "character fatigue", poor brand fit)?
  • Which alternative narrative structures exist in non-character motion design, and how can they be described systematically?
  • How does the same message come across when it is told once with characters and once without – in terms of clarity, engagement, and brand fit?
  • What role do clients' expectations, habits, and perceptions of trends play in their preference for character animation?

I want to touch on "the rise of character animation" and "when/where/how character animation works well", but focus on identifying the weaknesses of character animation and, above all, on finding alternative structures (research on non-character storytelling, motion grammar, form-based narratives) – and on determining which of these approaches work well when, how, and where.


4. Hypothesis / Objectives

Thoughts: I already know that there are areas in which character-based narrative structures still work very well. I want to find alternatives for those cases in which character animation is weak and test the effectiveness of non-character approaches there. In the end I want to produce at least one video in which non-character animation presents or promotes a product, service, etc., and possibly contrast it with a character-based counterexample.

Hypothesis:
Although character-based storytelling is historically and psychologically deeply rooted, it is not the most effective narrative method for contemporary brand communication and explainer videos in every case. In many scenarios, non-character strategies – based on typography, form, rhythm, motion systems, and abstract visual metaphors – can communicate more clearly, more flexibly, and in a more brand-specific way.

Goals:

  • Identifying cases in which character animation is narratively or aesthetically weak.
  • Researching and systematising alternative non-character narrative strategies (e.g. motion grammar, form-based storytelling, system-based communication).
  • Producing an animated comparison pair:
    1. a character-based version
    2. a non-character motion graphics version of the same message
  • (Optional) A comparative study of how test participants perceive the two variants.
  • Developing a practice-oriented framework or tool for communicating these findings to clients.


5. Theoretical Framework

The thesis draws on the following theoretical approaches:

  • narratology & visual storytelling
  • motion grammar / movement as a carrier of meaning
  • Gestalt principles & cognitive visualisation research
  • brand identity, dynamic branding & motion identity
  • anthropomorphism & intentionality (selectively, for contextualisation)

My own position is design-theoretical with a strongly practical orientation: character animation is a valuable tool, but it is currently overrepresented. The thesis argues for a broader understanding of storytelling in motion design, one in which non-character systems are recognised as equally valid – and often superior – options.


6. Method

The thesis uses a mixed-methods approach combining theory, analysis, and practice:

  1. Literature review
  2. Case studies & comparative analysis (character vs. non-character)
  3. Expert interviews (motion designers, brand strategists)
  4. Design experiment – two versions of the same message
  5. (Optional) User testing on clarity, engagement, emotion, and brand fit
  6. Synthesis & framework development

In detail:

1.  Literature review

  • Systematic review of literature on visual storytelling, animation, motion design, motion grammar, perception, and brand identity.
  • Identification of existing models that relate motion and narrative structure to communication goals.

2.  Case studies / analytical comparison

  • Selection of existing brand and explainer videos that use character-based storytelling.
  • Selection (or identification) of non-character motion design examples (typography-driven, form-based, system/identity-based).
  • Qualitative analysis focusing on clarity of message, aesthetic specificity, narrative structure and perceived brand fit.

3.  Expert interviews

  • Semi-structured interviews with motion designers, creative directors or brand strategists.
  • Topics: reasons for choosing character vs. non-character approaches, perceived strengths and weaknesses, client expectations and real-world constraints.
  • Evaluation via thematic coding and synthesis of recurring patterns.

4.  Design experiment / practice-based research

  • Concept development for a short explainer or brand-related message (e.g. introducing a service or product).
  • Creation of two animated versions:
    a) a character-based narrative solution,
    b) a non-character motion graphics solution (e.g. using typography, forms, rhythm, motion systems and abstract metaphor).
  • Potentially also further non-character variations (e.g. typography-only, form-only), depending on scope.
  • Alternative: use an existing character-based narrative and translate it into a non-character motion graphics solution.

5.  (Optional) User testing / evaluation

  • Recruitment of test participants from relevant or mixed target groups.
  • Presentation of the different video versions under comparable conditions.
  • Data collection via questionnaires and/or short interviews focusing on:
    • perceived clarity and understanding of the message
    • attention and engagement
    • emotional response
    • perceived fit with a hypothetical or existing brand
  • Qualitative and, where applicable, simple quantitative evaluation (e.g. rating scales).

6.  Synthesis and framework development

  • Integration of insights from literature, case analysis, expert interviews and user testing.
  • Formulation of a set of principles or a framework describing when and how non-character narrative strategies are particularly effective.
  • Translation of these findings into a practical “tool” or guideline that can be used in discussions with clients. 


7. Material

Available materials:

  • specialist literature
  • examples from brand and explainer video practice
  • animation software
  • potential interview partners

Still to be collected:

  • a case-study corpus
  • material on motion identity & motion grammar
  • participants for testing
  • interview & test data


8. Idea for the Workpiece

A series of experimental animations is planned:

  • a character-based version
  • one or more non-character versions of the same message
  • optional: a translation of an existing character video into an abstract motion solution

The workpiece serves both as an object of investigation and as a later tool for communicating with clients.

9. Preliminary Structure

1.     Introduction

– Background, motivation and relevance of the topic
– Research gap and objectives
– Structure of the thesis

2.     Problem and context: Character dominance in motion design

– Character-based explainer films and brand videos
– Client expectations and common industry practices
– Initial observations from practice

3.     Historical and theoretical foundations

– Short history of characters in animation and advertising
– Psychological foundations: anthropomorphism, identification, perceived intentionality
– Basics of visual storytelling and narrative in design

4.     Brand communication, motion identity and design systems

– Brand identity basics
– Dynamic branding and motion identity
– Motion as a component of visual brand systems

5.     Limits and weaknesses of character-based narratives in contemporary contexts

– Aesthetic redundancy and “character fatigue”
– Generic styles (e.g. flat 2D, corporate mascot trends)
– Mismatches between character styles and brand identity
– Cases where animation is not used because characters are perceived as unsuitable

6.     Alternative narrative systems in non-character storytelling
(This should be the central and most detailed chapter; the exact content is still open.)

–      Concept of non-character communication in motion design
–      Motion grammar: timing, rhythm, transitions and system behaviour as meaning
–      Typographic storytelling and kinetic typography
–      Form-based and abstract narratives (shape, composition, colour, scale, rhythm)
–      System-based and grid-based motion identities
–      Emotional expression through form and movement without figurative characters
–      …
–      Synthesis: a preliminary framework of non-character narrative strategies

7.     Methodology

– Research design and rationale
– Literature review approach
– Case study selection and analytical criteria
– Expert interviews (design, sample, procedure)
– Design experiment and user testing (setup, indicators, limitations)

8.      Analysis and results

– Insights from case studies
– Summary of expert interviews
– Results of user testing (character vs. non-character versions)

9.      Development and discussion of the framework / tool

– Integration of findings into a practical model
– Description of the experimental workpiece
– Implications for practice and client communication

10.  Conclusion and outlook

– Summary of key findings
– Limitations of the study
– Outlook for future research and practice in motion design

IMPULSE #4: World Usability Congress 2025

Spending two days at the World Usability Congress in Graz made me focus on the UX aspects of my thesis. The talks I followed were mostly about UX KPIs, usability testing, and accessibility, and I kept translating everything into my own topic: AR and IoT in retail. Instead of just thinking about how my future system could look, I started thinking much more concretely about how to measure it, test it, and make sure it works for real people, not only in prototypes.

KPIs – Learning To Define What “Better” Means

One of the clearest lessons was how seriously UX teams treat KPIs. In my notes I wrote that valuable improvements are often only 10 to 15 percent per quarter, and that this is already considered success. That sounds small, but the important part is that these improvements are defined and measured. The typical UX KPIs that kept coming up were conversion rate, task completion time, System Usability Scale score, Net Promoter Score and error rate.

For my thesis this means I cannot just write “AR wayfinding will improve the shopping experience”. I need to specify what that improvement looks like. For example: people find a product faster, they ask staff for help less often, they feel more confident about their choices. The practical action I took from the congress is: for each feature I design, I will write down one or two concrete metrics and how I would measure them in a real store test. That turns my concepts into something that can be evaluated instead of just admired.
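To make "measurable" concrete: the System Usability Scale mentioned above is scored with a fixed formula (odd-numbered items contribute score − 1, even-numbered items 5 − score, and the sum is multiplied by 2.5). A minimal Python sketch of that scoring, with hypothetical responses:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100).

    `responses` are the ten SUS item answers on a 1-5 Likert
    scale, in questionnaire order (item 1 first).
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered (positively worded) items: score - 1
        # Even-numbered (negatively worded) items: 5 - score
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A hypothetical participant answering every item neutrally (3):
print(sus_score([3] * 10))  # 50.0
```

Running such a calculation per participant, before and after a design change, is what turns "AR wayfinding is better" into a number that can actually move by those 10 to 15 percent per quarter.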

Accessibility As A Built In Check, Not An Extra

The accessibility track was also directly relevant. In my notes I wrote down a "quick checklist" that one speaker shared: check page layout and content, contrast and colours, zoom, alerts and error messages, images and icons, videos, no flashing animation, and audio-only content. It is simple, but precisely because it is simple, it is realistic to apply often.

For my AR and IoT ideas, this becomes a routine step. Whenever I sketch a screen or overlay, I can quickly run through that checklist and think about how my work affects accessibility for end users. Are colours readable on top of a busy store background? Can text be enlarged? Is there a non-visual way to access key information? Combined with the talks about accessibility at a corporate level and inclusive design for neurodivergent people, this pushed me to treat accessibility as a default requirement. The concrete action is to document accessibility considerations for every main feature in my thesis, instead of adding a separate chapter at the end.

What I Take Back Into My Thesis

After World Usability Congress, my AR and IoT retail project feels less like a collection of futuristic ideas and more like something that could be developed and tested step by step. The congress gave me two practical habits. First, always define UX KPIs before I design a solution, so "better" is not vague. Second, run an accessibility quick check on every main screen or interaction and think about different types of users from the start.

This fits nicely with my other blog reflections. The museum visit gave me ideas about where AR and IoT could be applied. The festival made me think about wayfinding and smart environments. World Usability Congress added the missing layer: methods to prove that these ideas actually help people and do not silently exclude anyone.

Links
Official conference homepage:
World Usability Congress – Home

2025 agenda with talks and speakers:
World Usability Congress 2025 – Agenda
AI Disclaimer
This blog post was polished with the assistance of AI.

IMPULSE #3: Meta Connect 2025 AR Moving From Headsets To Everyday Life

Watching Meta Connect 2025 felt like seeing my thesis topic walk on stage. The focus was not on big VR helmets anymore but on glasses that look close to normal and are meant to be worn in everyday life. The main highlight was the new Meta Ray-Ban Display, a pair of smart glasses with a small full color display in one lens and a lot of AI power built in. They are controlled with a neural wristband that reads tiny finger movements, so you can click or scroll with almost invisible gestures.

When I started on this topic, I theorized about how the technology was going to look and had my own hopes and assumptions. A few years ago, AR meant heavy hardware that you would never wear into a supermarket or furniture store. Now the vision is a pair of sunglasses that weighs about as much as regular glasses, can show simple overlays in your field of view, and is designed to be worn on the street, in a shop, or on the couch. The technology is still expensive and early, but watching the keynote made the direction very clear: smaller, lighter, more normal looking, and more tightly connected with AI.

It can be compared to the evolution of phones, and of technology in general: our everyday devices moved from heavy and bulky to light and portable. There were also speculations along the way that such devices were not needed – that we could do without phones because we have laptops – but technology advances and we keep finding new ways to interact with the world through it.

What I Learned About AR From The Event

The first learning is about form factor. The Ray-Ban Display does not try to turn your whole field of view into a digital world. It uses a compact display area to show only what is necessary: navigation hints, messages, short answers from Meta AI or the title of a song that is playing. Instead of replacing reality, it adds a thin layer on top of it.

The second learning is about interaction. The neural wristband is a good reminder that people do not want to wave their arms in public to control AR. In real environments like a festival, a museum or a supermarket, subtle gestures or simple taps are much more realistic.

The third learning is the merge of AI and AR. The glasses are clearly designed as AI first devices. They can answer questions, translate speech, caption what you hear and see, and then present this information visually inside the lens.

Technology Getting Smaller And More Accessible

Another strong theme at Meta Connect was how quickly the hardware is trying to become socially acceptable. Earlier devices were clearly gadgets. These glasses try to be fashion first, tech second. They look like familiar Ray-Ban frames instead of a prototype. The same is true for battery life and comfort. The promise is that you can wear them for several hours without feeling like you are in a lab experiment.

Why Meta Connect Matters For My Thesis

Meta Connect 2025 confirmed that my scenarios for AR in retail are not just science fiction. The building blocks are emerging in real products: lightweight glasses, AI assistants, subtle input methods and simple overlays instead of full virtual worlds. For my master’s thesis this is both motivating and grounding. It tells me that the interesting design work is no longer about asking if AR will be possible in stores, but about shaping how it should behave so that it actually helps people shop, learn and navigate without stealing the spotlight.

Technology should become smaller, calmer and closer to everyday objects, so it can quietly support what people already want to do in physical spaces. Not to replace those spaces, but to make moving through them a little clearer, smarter and more human.

Links

Official Meta recap of the Connect 2025 keynote (Ray-Ban Display, Neural Band, etc.):
Meta Connect 2025 – AI Glasses And Ray-Ban Display

Meta product page for Ray-Ban Meta smart glasses (for specs and positioning):
Ray-Ban Meta Smart Glasses – Meta

General info / news listing around Meta smart glasses and AI wearables:
Meta – Newsroom / Ray-Ban Meta Announcements

AI Disclaimer
This blog post was polished with the assistance of AI.

IMPULSE #2: A Night of Techno – Losing Yourself And Finding Your Way

The night I saw Charlotte de Witte at Signal Festival was pure overload. Heavy bass, dense crowd, strobing lights, smoke, multiple bars and stages, lockers, queues, a constant flow of people in every direction. As an experience it was amazing, but as a designer (and, admittedly, a workaholic) I could not stop analysing how the whole event could be optimized and how that could feed into my thesis.

Observations: Immersion Versus Orientation

One of my strongest observations was how different immersion and orientation felt. Immersion was perfect. When I was in front of the main stage, I did not need any interface. The sound and visuals were enough. Orientation was a different story. Moving away from the stage meant guessing, especially once you had a few drinks. Where is the nearest bar that is not overcrowded? Which corridor leads to the toilets? How do I get back to my locker without opening the venue map again and again? The more time passed, the more people were intoxicated, and the weaker everyone's internal navigation became.

At some point I lost my friends in the crowd and we had the usual routine: messages that did not go through, vague descriptions like “I am near the left bar” that are useless in a dark hall, and the classic feeling of spending twenty minutes trying to reconnect. When you are sober this is still slightly annoying. Once you are drunk, it becomes hard work.

Understanding: How AR And IoT Could Be A Soft Safety Net

This is where I started to imagine an IoT-based guidance system with AR as the interface. IoT beacons or other positioning technology could be distributed across the venue: every bar, locker zone, toilet block, and entrance could have its own tiny digital footprint. If visitors opt in, AR glasses could use this network to understand three basic things in real time: where they are, where their friends are, and where key services are located.

In practice, that could look very simple. An AR arrow could hover in my view and gently lead me to my locker, even if I barely remember which area I used. A small indicator could show which direction my friends are in and roughly how far away – and it could also notify me if a friend needs help, since safety issues with strangers approaching and harassing people do happen. If I want a drink, the system could show the nearest bar and tell me where I can go to smoke. If there is an emergency or I need to leave quickly, the AR layer could highlight the closest safe exit instead of forcing me to rely on my memory in a confused state.
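As a rough sketch of how such a beacon network could answer "what is closest": one common (if imprecise) way to turn a Bluetooth beacon's signal strength into a distance estimate is the log-distance path-loss model, d ≈ 10^((TxPower − RSSI) / (10·n)). The beacon IDs, labels, and readings below are all hypothetical:

```python
def estimate_distance(rssi, tx_power=-59, n=2.0):
    """Rough distance in metres from an RSSI reading, using the
    log-distance path-loss model. tx_power is the calibrated RSSI
    at 1 m; n is the path-loss exponent (~2 in free space, higher
    in a crowded, smoky hall)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def nearest_service(readings, beacons):
    """readings: {beacon_id: rssi}; beacons: {beacon_id: label}.
    Returns the label and estimated distance of the closest beacon."""
    best = min(readings, key=lambda b: estimate_distance(readings[b]))
    return beacons[best], estimate_distance(readings[best])

# Hypothetical scan result inside the venue:
beacons = {"b1": "Bar (north)", "b2": "Lockers", "b3": "Exit A"}
readings = {"b1": -72, "b2": -60, "b3": -85}
print(nearest_service(readings, beacons))  # nearest beacon + rough metres
```

Real systems would smooth the noisy RSSI values over time and fuse several beacons, but even this crude ranking is enough to point an AR arrow in roughly the right direction.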

Main Concept: Festivals As Prototypes For Smart Guidance

The main concept that came out of Signal Festival for me is the idea of a soft, ambient guidance system built on AR and IoT. The festival does not need more screens. It needs invisible structure that supports people at the right moment. A network of small, low-power devices in the space can give the system awareness of positions and states, which elevates the user experience. AR then becomes a thin, context-aware layer on top of that awareness. It answers very simple questions: where am I, where is what I need, and how do I get back?

This is closely related to my retail research. A music festival is like an extreme version of a shopping mall. Both are large, noisy, crowded environments where people try to reach specific goals while managing limited energy and attention. If a guidance system can help a drunk visitor find the right bar, locker or friend in a dark venue, it can certainly help a tired shopper find the right aisle or click and collect point in a busy store.

Links
Event page for Signal Festival Weekend 2 at Pyramide:
Signal Festival – PYRAMIDE TAKEOVER WE2 (O-Klub)

Techno event listing with headliners and description:
Signal Festival Pyramide WE2 – Event Overview (technomusicworld.com)

Local article about Signal Festival in the glass pyramid:
Signal Festival in der Pyramide Vösendorf – Heute.at

AI Disclaimer
This blog post was polished with the assistance of AI.

IMPULSE #1: Kunsthistorisches Museum Wien: Analog Space, Digital Ideas

Visiting the Kunsthistorisches Museum Wien felt almost the opposite of my thesis topic. It’s a very “analog” space: heavy architecture, old masters, quiet rooms, and almost no visible technology. Apart from the optional audio guide device, there are no screens, no projections, no interactive installations. You move from room to room, read the small wall texts and simply look.

That contrast is exactly what made the visit so valuable for me as an interaction design student. I wasn’t impressed by high-tech features. I was impressed by how much potential there is for technology to quietly support the experience without taking attention away from the art itself. The museum became a kind of mental sandbox where I could imagine how AR and IoT might be implemented in a very delicate context: history, culture, and learning.

Observations: A Classical Museum with a Small Digital Layer

My main observation was how traditional the user journey still is. You enter, pick a wing, and mostly navigate by room numbers, map and intuition. The only digital touchpoint I used was the handheld audio guide. Even that already shows the basics of what I work with in my thesis: an extra information layer on top of the physical space. You enter a painting number, press play, and suddenly you get context, story and meaning instead of just title, date and artist.

But the interaction is linear and passive. You always get the same story, no matter who you are, how much you already know, or what caught your eye. There is no way for the system to “notice” that you are fascinated by one detail and want to go deeper, or that you are in a hurry and only want a short summary. It made me see very clearly where today’s museum tech stops and where AR and IoT could start.

Understanding: Technology Should Support the Artwork, Not Compete with It

Standing in front of paintings, I tried to imagine AR in the room. The danger is obvious: if we fill the space with too many digital elements, the painting becomes a background for the interface. That’s exactly what I do not want, and it connects strongly to my thesis: technology must serve the human and the content, not distract from it.

So my understanding is that any AR or IoT system in a museum like this would have to be extremely calm, subtle and respectful. The artwork stays the main actor. AR is just a transparent layer that appears only when the visitor asks for it. IoT devices like small beacons near the frame could be completely invisible, only there to let the system know where you are and what you’re looking at. The goal is not to “modernise” the museum for its own sake, but to deepen the connection between visitor and artwork.

Main Concept: A Future AR & IoT Guidance Layer for Museums

The main concept that came out of this visit is to treat the museum as a potential case study for the same principles I explore in smart retail: guided navigation, contextual information, and personalised journeys, all powered by AR and IoT.

I imagined wearing AR glasses instead of holding an audio guide. When I look at a painting for more than a few seconds, a small icon could appear next to it in my field of view. If I confirm, the system overlays very minimal hints: a highlight around a specific detail, a short caption, or the option to see a brief animation explaining the story behind the scene. If I want more, I can dig deeper – maybe see a reconstruction of how the painting originally looked, or how it was restored. If I don't, nothing changes; I just keep looking with my own eyes.

The same system could also redesign the wayfinding experience. Instead of a fixed predefined tour, AR could show me a route that matches my interests and time: “Show me five highlights from the Renaissance in 45 minutes,” or “Guide me only to works that relate to mythology.” IoT sensors in rooms could provide live information about crowding, so the path avoids the most packed galleries and keeps the experience more relaxed.
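A crowd-aware route like this could, in principle, be computed with a standard shortest-path search whose edge costs are scaled by live crowding data from the room sensors. A minimal Python sketch; the floor plan, walk times, and crowding factors are invented for illustration:

```python
import heapq

def quietest_route(graph, crowding, start, goal):
    """Dijkstra over gallery rooms. Each edge's cost is its walk
    time (seconds) scaled by the destination room's live crowding
    factor (1.0 = empty, higher = busier).
    graph: {room: [(neighbor, seconds), ...]}"""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, room = heapq.heappop(queue)
        if d > dist.get(room, float("inf")):
            continue  # stale queue entry
        if room == goal:
            break
        for nxt, seconds in graph.get(room, []):
            nd = d + seconds * crowding.get(nxt, 1.0)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = room
                heapq.heappush(queue, (nd, nxt))
    # Walk back from the goal to reconstruct the path
    path, room = [goal], goal
    while room != start:
        room = prev[room]
        path.append(room)
    return list(reversed(path))

# Hypothetical floor plan: two corridors lead to the Renaissance wing
graph = {
    "entrance": [("hall_a", 60), ("hall_b", 80)],
    "hall_a": [("renaissance", 60)],
    "hall_b": [("renaissance", 30)],
}
# Live IoT crowding data: hall_a is packed right now
crowding = {"hall_a": 3.0, "hall_b": 1.0, "renaissance": 1.0}
print(quietest_route(graph, crowding, "entrance", "renaissance"))
# -> ['entrance', 'hall_b', 'renaissance']
```

The point is not the algorithm itself but the design decision it encodes: the "best" path is no longer the shortest one, it is the one that keeps the visit relaxed.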

What mattered most for me in this museum visit was not what technology was already installed, but the mental exercise of placing my thesis ideas into this setting. It helped me see that the principles I am developing for AR and IoT in retail could have a wider use than originally intended: subtle guidance, context-aware information, and respect for the physical environment also make sense in a cultural space.

Links

Official museum site
Kunsthistorisches Museum Wien – Official Website (khm.at)

Visitor overview and highlights in English
Kunsthistorisches Museum – Overview & Highlights (visitingvienna.com)

Background and history of the building
Kunsthistorisches Museum Wien – Wikipedia

AI Disclaimer
This blog post was polished with the assistance of AI.

IMPULSE #2 — Museum CoSA Graz

(Museum Visit – High-Fidelity / Interactive Learning)

If the Schlossberg Museum and Graz Museum showed me how visible framing can still communicate, then the CoSA (Center of Science Activities) showed me something completely different:
What happens when the frame doesn’t just present the content, but replaces it?

CoSA is not a museum in any traditional sense. It’s more like a playground disguised as an exhibition. A high-fidelity, immersive environment designed for kids, teens, and curious adults who want to touch, play, try, fail, experiment. Everything is screaming interaction. Lights, buttons, projections, puzzles, sounds, even the architecture itself feels like part of the performance. And somehow, in the middle of all this spectacle, I found myself thinking about my thesis again. Especially the question of whether art needs a frame to communicate or whether, in spaces like CoSA, the frame becomes so thick that the content becomes secondary.

The Superpower of High-Fidelity Framing

Everything is polished, exaggerated, designed for engagement. There’s no moment of “Is this intentional?”; it obviously is. Even the walls communicate. Even the floor feels curated. In some rooms, you’re invited to look at a dead cat while a movie plays in front of it. In others, you’re challenged to play doctor to an ill man or child, to build a car yourself and drive it, or to experiment with force, sound, and perspective. It’s all very game-like. And because it’s game-like, it also shifts how people behave. At Schlossberg Museum, people slowed down, read text, observed.
At CoSA, people jump in. There’s no hesitation, because the space gives permission. It guides you. It demands participation. And that’s exactly where it becomes relevant for my research:

High-fidelity framing dictates behaviour.

When people know the rules, they relax. When people know they are supposed to interact, they interact. When people know the space will guide them, they let go. This is almost the opposite of my everyday installations, where uncertainty is the whole point.

The Contrast: What My Research Isn’t About (but Helps Clarify)

One thing I noticed at CoSA: nothing here could ever be mistaken for an everyday installation. The framing is too strong, too theatrical. There’s no ambiguity. The frame is not just present; it’s hyper-present. And that helps me understand my thesis by contrast. If I want to explore how art communicates without a frame, then CoSA shows me the extreme of what happens with a frame. Here the meaning comes from the design, not from the object. The space tells you what to do, how to behave, and how to interpret what you see.

My photos of accidental compositions function in the opposite way. They rely on your curiosity, your willingness to look, your active interpretation. CoSA relies on instructions. So a strange question formed in my head:

Can art without a frame only function if people are trained by spaces like CoSA to trust their instincts, or does it make them too dependent on explanation?

I don’t know the answer yet.
But I love that this place forces me to ask the question.

How Children React vs. Adults

Children don’t need frames the way adults do. Kids immediately start touching, playing, pushing, exploring. They don’t care what things “mean,” only what they “do.” They don’t ask for permission; they assume everything is meant to be interacted with. Adults, however, hesitate. They wait for someone else to engage first. They need the frame to feel safe. This ties directly back to my earlier experiments with staging reactions to the celery stalk. Maybe adults look for social proof because they learned it in high-fidelity contexts like CoSA, museums, galleries, spaces that tell them what is allowed. Kids, meanwhile, operate naturally in low-fidelity environments. They accept randomness without fear. Maybe art without a frame communicates more easily with children than with adults. Maybe adults have to unlearn framing before they can perceive openly again.

What CoSA Taught Me About My MA Question

My thesis question still feels fresh, shifting, not quite ready. But this visit helped me refine something important:

For art to communicate without a frame, the viewer must bring their own interpretive tools. High-fidelity spaces like CoSA give you the tools, but they also take away the freedom.

CoSA is wonderful. It’s smart, engaging, well-designed. But it also shows what happens when context becomes so strong that the content becomes inseparable from it. If everyday installations are the whisper, CoSA is the megaphone. And somewhere between whisper and megaphone lies the answer to my thesis.

Links

https://www.museum-joanneum.at/cosa-graz/unser-programm/ausstellungen/event/flip-im-cosa
https://www.museum-joanneum.at/cosa-graz/unser-programm/ausstellungen/event/der-schein-truegt
https://www.museum-joanneum.at/cosa-graz

AI Disclaimer

This blog post was written with the assistance of AI.

IMPULSE #1 Schlossberg Museum / Graz Museum (Museum Visit)

When I walked into the Schlossberg Museum, I wasn’t expecting anything. It’s just part of a course. I assumed it would be a classic museum visit: walking through rooms, reading plaques, observing objects arranged in rehearsed formations. But the longer I stayed, the more I realized that this museum, in its own quiet way, is a fascinating study of how staged environments communicate and how they sometimes don’t.
My master thesis still circles around the question:
“What does it take for art to communicate without a frame?”
And oddly enough, this museum (a highly framed environment) helped me understand the opposite: What happens when the frame is visibly present, and how that visible framing sometimes works, sometimes fails, and sometimes becomes the entire message.

Staged Installations Without Pretending Not to Be Staged
What I realized was how intentionally “set up” everything looked. The Schlossberg Museum uses low-fidelity installations, meaning the staging is visible, almost transparent. You’re never tricked into believing that you entered an immersive world. You know that things are placed here for you.
And yet, people interact with these low-fidelity setups in surprisingly attentive ways.
Why?
Because the museum doesn’t try to hide its own construction. There’s a kind of honesty in that. It reminded me of my celery experiments, the difference between placing something deliberately yet pretending it’s accidental versus owning the arrangement. The Schlossberg Museum doesn’t pretend. The frame is obvious. The stage is visible. And weirdly enough, that visibility communicates.

How People Behave Around Framed Meaning
One of the most interesting things during my visit wasn’t the exhibition itself but the people inside it. I observed how visitors (including my friends) behaved:
• They slowed down near installations that had lighting around them.
• They spent more time near objects that had a certain spatial importance (center of the room, elevated platform, glass vitrine).
• They trusted anything behind a glass box more than anything placed openly.
• And they ignored objects that lacked a clear contextual cue, even when those objects were historically interesting.

So what does that say about meaning?
People read context faster than they read content.
They decide something is important before they understand why it is important.
This fits perfectly into my MA question.
Maybe art communicates without a frame only when people are trained to trust their own perception more than the environment around them. But museums do the opposite: they reinforce the frame as the reliable source of truth.

Low-Fidelity ≠ Low Communication
What stayed with me most were the humble, almost simple arrangements. Placed with intention, but without spectacle.
It reminded me of my everyday installations, accidental compositions I find on the street, a banana peel on a pizza carton, a toy scooter locked among adult bikes. Those moments also communicate something, despite lacking a label, despite lacking institutional permission.
At the Graz Museum, the objects have permission, yet they feel almost as unassuming as the found installations I’ve been documenting.
This made me wonder:
• Does an object need a high-fidelity frame to speak clearly?
• Or is a minimal frame enough, as long as viewers trust the context?
• And crucially: what happens when you take away the frame entirely?
The museum helped me see that “communicating without a frame” isn’t just about removing borders, it’s about cultivating perception.


Links
https://www.grazmuseum.at/graz-museum-schlossberg/
https://www.grazmuseum.at/ausstellung/demokratie-heast/
https://www.grazmuseum.at

AI Disclaimer
This blog post was polished with the assistance of AI.

LS Impulse #4 TED Talk – A Brief History of Rhyme

For this impulse, I watched the TED talk A Brief History of Rhyme by Baba Brinkman, a rap artist known for creating concept albums based on unexpected themes such as The Canterbury Tales or Charles Darwin’s theory of evolution. His approach blends performance, historical research, and linguistic analysis, making the talk an unusual mix of literature lecture, hip-hop seminar, and even a small comedy show. He then proceeded to explain this unusual approach:

Brinkman began by explaining the evolution of rhyme from its simplest forms, for example the classic “car, far, star” or “house”/“mouse” type of end rhyme, towards more complex structures such as mosaic and multi-syllable rhymes. What I found fascinating was how he connected contemporary rap techniques to much older literary traditions. He did a lot of research and pointed out that The Canterbury Tales already experimented with rhythmic and rhymed structures, and that 17th-century works like Hudibras used extended multisyllabic rhymes that would later influence comedic verse. Even Don Juan from 1819 contains rhyme patterns that, according to Brinkman, resemble what we today associate with classic hip-hop rhyme schemes: “Oh ye lords of ladies intellectual; / Inform us truly, have they not henpeck’d you all.”

One of his key points was that multisyllabic rhyme traditionally appeared in humorous contexts. Historically, these rhyme patterns were used to create irony or satire rather than emotional depth. The only exception Brinkman found was a moment in Lord of the Rings where such rhyme structures appear in a serious, almost solemn tone, a rare example where polysyllabic rhyme escapes its comic roots. He argued that modern rap has pushed this evolution further, showing that complex rhyme structures can carry serious emotional meaning. Tracks like “I Ain’t No Joke” by Rakim demonstrate that rappers use rhyme not only for performance but for vulnerability and identity, even though they often feel the need to defend the genre against accusations of “not being serious.”

Brinkman also contrasted rap with contemporary poetry. While poets have largely moved away from rhyme in favour of free verse and open expression, hip-hop has kept rhyme alive by constantly reinventing its structure. According to Brinkman, rap is one of the last art forms where formal rhyme is still being innovated. The talk concluded with Brinkman performing a freestyle using increasingly complex multisyllabic rhymes based on the phrase “broken glass,” which made the linguistic theory suddenly very concrete and audible.

Ok but what does this have to do with communication design?

This talk sparked a new line of thinking for me: how does rhyme function visually? If rhyme in language is based on repetition, rhythm, and pattern recognition, could similar mechanisms exist in visual communication? And if so, how complex can these visual “rhymes” become before they lose recognisability? Brinkman’s distinction between simple end rhymes and mosaic/multisyllabic rhymes made me wonder whether design also has equivalents: from clean, obvious visual parallels to more layered, subtle echoes in form, colour, structure, or spatial rhythm.

For communication design, this raises questions about how humans perceive repetition, pattern, and variation and how these can influence emotional response or memorability. The talk made me realise that rhyme is fundamentally a cognitive tool that guides attention, builds expectation, and creates satisfaction when the pattern resolves. This is therefore extremely relevant for visual research.

Relevance for my potential Master’s thesis

I have already been thinking about researching how rhyme structures influence the recognition of visuals and this talk strengthened that idea. Brinkman’s historical framing showed that rhymes communicate not only through sound, but through structure. This makes it even more interesting to explore whether “visual rhymes” could work in a similar way:
– Are simple repetitions (the visual equivalent of “car–far–star”) more memorable?
– Can complex, multi-layered visual parallels function like multisyllabic rhymes?
– Could this influence how people engage with activist or feminist visual communication?

For a Master’s topic that connects design, maybe activism, and perception, exploring rhyme as a cross-modal phenomenon, from sound to image, could be an interesting direction, and I feel like it could be fun to research.

Links

Ted Talk https://www.youtube.com/watch?v=8t4F83aHAXU

Baba Brinkman https://bababrinkman.com/

IMPULSE #4: Lunch with Prof. Baumann (with some good Kebap!)

This impulse is a bit different from the others because it is not a book or a talk, but a lunch meeting with Prof. Konrad Baumann that helped me put much sharper edges around my thesis idea. The conversation was essentially my first “real” check-in with someone I would like to supervise my thesis, and it forced me to articulate my motivations and what I actually want to achieve with “effective ethical design” and digital footprints. Instead of staying in my own head, I had to explain why this topic matters to me and where I see it sitting inside UX practice and the wider industry. That alone made this meeting feel like an important impulse.

We started by reconnecting threads from a previous class discussion, where we had talked about our interests in the UX field and the kinds of industry problems we care about. For me, those questions brought back the same themes: ethical design, dark patterns, privacy, and how users are often left in the dark about their data trails. This lunch was like a continuation of that exercise, but one-on-one and more honest. Saying my thesis topic out loud and contextualising it in front of someone with experience in this area made my intentions feel more “real”, and it also exposed where my thinking was still a bit vague or too broad.

I really liked how he brought up concrete cases and pointed me toward resources, including earlier advice I had heard about noyb (“none of your business”), a privacy organisation that regularly takes companies to court over data protection violations. These cases are basically “real-life stories” of where digital products and services crossed lines in how they handled user data. That was a helpful reminder that my thesis is not just theoretical; it sits in a landscape where regulators, NGOs, and companies are already fighting over what is acceptable, from tracking to dark patterns to consent models.

Afterwards, Prof. Baumann shared an interesting ORF article that discusses current tensions and developments around privacy and digital rights in Austria and Europe. Even without quoting it directly, the article makes it clear how much is at stake: from weak enforcement to high-profile cases against platforms and tech companies, it shows that “privacy by design” is not just a slogan but something that either happens in concrete interfaces or does not. For my thesis, this is a useful anchor, because it links my academic work to a living context of laws being tested, companies being challenged, and users being affected.

What I take from this impulse is both emotional and structural. Emotionally, it reassures me that I am not chasing a “nice sounding topic” but something that sits at the intersection of UX, law, and real harms users are experiencing. Structurally, it pushes me to frame my thesis more clearly around a few core questions: How can interaction design make digital footprints visible and manageable in everyday interfaces? How can ethical constraints and legal requirements be translated into practical patterns instead of abstract guidelines? And how can designers avoid repeating the kinds of behaviours that end up in complaints, lawsuits, or investigative articles about privacy abuses?

For my next steps, this meeting gives me three concrete moves. First, to keep mapping real cases (like those collected by noyb and highlighted in media coverage) as examples of what “unethical design” looks like in practice, and why better interaction patterns are needed. Second, to use those cases as boundary markers when I prototype: if a pattern smells like something that has already led to a complaint or enforcement, it is a red flag. Third, to stay in close conversation with Prof. Baumann as a supervisor, so that my thesis stays grounded in both design practice and the evolving legal and ethical landscape.

Link to the ORF article Prof. Baumann shared (in German), which anchors this impulse in current debates about privacy and data protection:
https://orf.at/stories/3410746/

For broader context on enforcement and complaints concerning privacy violations in Europe, especially involving companies like Clearview AI, this overview from Reuters and noyb helps show how data misuse is being challenged at a legal level:
https://www.reuters.com/sustainability/society-equity/clearview-ai-faces-criminal-complaint-austria-suspected-privacy-violations
https://noyb.eu/en/criminal-complaint-against-facial-recognition-company-clearview-ai

Finally, this Austrian consumer-focused article on dark patterns and manipulative web design provides a very concrete list of deceptive practices and explains how new regulations like the Digital Services Act aim to limit them, which connects directly back to my thesis interest in ethical interfaces and user autonomy:
https://www.konsumentenfragen.at/konsumentenfragen/Kommunikation_und_Medien/Kommunikation_und_Medien_1/Vorsicht-vor-Dark-Patterns-im-Internet.html

Disclaimer: This blog post was developed with AI assistance (Perplexity) to help with structuring and phrasing my reflections.

IMPULSE #2: Design Patterns for AI Interfaces

With more tools adopting AI (generative text, code assistants, smart search, content creation), there’s a rush to “add AI” to every product. Without good UI/UX, many of these additions end up confusing or frustrating users. The patterns from this talk offer a more sustainable, user-centric approach to AI integration. As a UI/UX designer working with a product team exploring AI features, these insights help avoid common pitfalls during research and practice.

The traditional chatbot (a blank text box, open prompt) is often insufficient; it places too much burden on the user to guess what to ask for, how to phrase it, what input format works. Instead, AI UIs should provide structure — templates, guided inputs, preset actions — that shape user intent and make the AI’s capabilities and limitations clear.

Structured Input & Output UX

  • Input UX: Rather than free-form prompts, designers can use structured templates, presets, or guided flows so users don’t need to “guess” how to phrase their request. This improves usability and broadens the accessibility of AI tools to non-expert users.
  • Output UX: AI responses — often long, verbose, or ambiguous — should be presented in a digestible way. Use of rich formatting (e.g. collapsible reasoning traces, style lenses, ranking, color-coding) helps users find value quickly.
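As a sketch of what “structured input” could look like one level below the UI, here is a hypothetical summarisation request whose tone and length come from presets instead of free text. All names and values are my own illustrative assumptions, not taken from the talk.

```python
from dataclasses import dataclass

TONES = ("neutral", "friendly", "formal")            # preset choices instead of free text
LENGTHS = {"short": 50, "medium": 150, "long": 400}  # word budget per length preset

@dataclass
class SummarizeRequest:
    text: str
    tone: str = "neutral"
    length: str = "short"

    def to_prompt(self) -> str:
        # The UI only exposes the presets; the full prompt is assembled here,
        # so users never have to guess how to phrase their request.
        if self.tone not in TONES:
            raise ValueError(f"tone must be one of {TONES}")
        max_words = LENGTHS[self.length]
        return (f"Summarize the following text in a {self.tone} tone, "
                f"using at most {max_words} words:\n\n{self.text}")
```

The constrained fields do double duty: they lower the burden on the user and they make the AI’s capabilities and limitations visible, because the presets show exactly what the feature can vary.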

Why These Patterns Matter and What They Solve

Lowering friction and cognitive load: Many people don’t know how to “talk to AI.” Structured inputs/templates reduce the intimidation and guesswork.

Making AI more reliable and trustworthy: By clarifying what AI can (and can’t) do, and giving users control (via refinements, options, transparency), designers can avoid “hallucinations,” miscommunication, and user frustration.

Delivering value quickly and predictably: Well-designed AI interfaces help users get useful results with minimal effort — increasing adoption and satisfaction.

Supporting diverse user types: Not everyone is a “power user.” Good patterns make AI accessible to novices while still serving experienced users.

First results from AI often need tuning. Good AI interfaces let users refine — through follow-up prompts, filter buttons, adjustment sliders (e.g. “temperature” or style), or iterative flows — to get closer to what they need. This is more powerful than expecting a single perfect answer.
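A tiny illustration of the refinement idea, assuming a hypothetical “creativity” slider and a follow-up prompt that carries context forward; the mapping and all names are my own assumptions, not from the talk.

```python
def slider_to_temperature(slider: int, lo: float = 0.0, hi: float = 1.2) -> float:
    """Map a 0-100 UI 'creativity' slider linearly onto a model temperature range."""
    slider = max(0, min(100, slider))  # clamp out-of-range UI values
    return round(lo + (hi - lo) * slider / 100, 2)

def refine_prompt(previous_answer: str, instruction: str) -> str:
    """Build a follow-up prompt that keeps the earlier answer as context."""
    return f"Previous answer:\n{previous_answer}\n\nPlease revise it: {instruction}"
```

The slider hides a raw model parameter behind a label users understand, and the follow-up prompt makes iteration the default interaction instead of expecting one perfect answer.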

Rather than isolating AI in a separate “assistant” screen, embed AI features where they feel natural: side-panels, overlays, inline suggestions, context-aware widgets — wherever they support the user’s task flow. This makes AI feel like a seamless extension, not a tacked-on add-on.

Links

Design Patterns for AI Interfaces by Vitaly Friedman – Smashing Magazine