IMPULSE #8: Thesis Discussions

This impulse came from three separate mentoring conversations about my master’s thesis: first with Ursula Lagger, then with Horst Hörtner, and finally with Martin Kaltenbrunner. All three liked the core idea of using AR and IoT to enhance retail experiences, but each of them pushed me in a different direction. Together, these talks turned my project from a vague vision into something that needs concrete methods, business relevance and technical depth.

Focusing The Thesis With Ursula Lagger

My first conversation was with Ursula Lagger about my master’s exposé. It was less about judging the idea and more about shaping it into a strong research plan. She encouraged me to keep the main concept, but to put much more emphasis on how I am going to test it with users. That meant not just saying “I will do user studies”, but being specific: who are the participants, what scenarios will I test, which tasks will they perform, and how exactly will I collect and evaluate their feedback?

She also stressed that the written proposal should already show this depth. Instead of broad, generic goals, she wanted to see clearly defined outcomes and methods. That feedback was very practical: it pushed me to rewrite sections of the proposal from high-level ambition into detailed steps. For example, instead of “evaluate AR navigation in a store”, I now think in terms of concrete studies like “observe how long users take to find an item with and without AR guidance” or “measure perceived stress in crowded environments”.
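To make the first of those studies concrete for myself, I sketched how logged search times could be summarised per condition. This is only a minimal sketch with made-up placeholder numbers, not the actual study instrument:

    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    // One recorded trial: the condition ("AR" or "baseline") and the seconds
    // a participant needed to find the target item. Values are placeholders.
    struct Trial {
        std::string condition;
        double searchTimeSeconds;
    };

    int main() {
        std::vector<Trial> trials = {
            {"baseline", 74.2}, {"baseline", 91.0}, {"baseline", 63.5},
            {"AR",       22.8}, {"AR",       31.4}, {"AR",       19.9},
        };

        // Accumulate total time and trial count per condition.
        std::map<std::string, std::pair<double, int>> stats;
        for (const Trial& t : trials) {
            stats[t.condition].first  += t.searchTimeSeconds;
            stats[t.condition].second += 1;
        }

        // Report the mean search time per condition.
        for (const auto& [condition, sumCount] : stats) {
            std::cout << condition << ": "
                      << sumCount.first / sumCount.second
                      << " s mean search time\n";
        }
    }

Even a toy summary like this forces me to decide in advance what I log per trial, which is exactly the specificity she asked for.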

Business And Social Perspective With Horst Hörtner

The conversation with Horst Hörtner brought in a different layer. He was positive about the topic and said it fits well with current technological developments, but he also pointed out that some of my scenarios are ahead of what is easily deployable today. Rather than seeing that as a problem, he framed it as a chance to think strategically.

From a business perspective, he recommended focusing on locations where the investment in AR and IoT can realistically pay off. That means contexts with higher margins or clear efficiency gains, where companies can justify installing and maintaining such systems. He also encouraged me to aim for something that benefits not only businesses but people in general. I now try to frame each concept both in terms of value for businesses and in terms of concrete benefits for humanity, not just for “tech fans”.

Technical And Methodological Depth With Martin Kaltenbrunner

With Martin Kaltenbrunner the discussion went into the technical and methodological details. He also liked the idea, but he was skeptical, pointing out how quickly trends come and go. He suggested looking at already existing products, some of which we may already carry in our phones. His main question, though, was how exactly this research will play out in practice: will there be physical prototypes, how will people interact with them, and which tools and environments will I use?

He asked for more depth in the user research plan: which classes or groups could participate in early tests, what kind of app or prototype I will build first, in which settings the studies will take place, and how many iterations I am planning. This made me realise that I need a clearer roadmap from first low-fidelity mockups to more realistic prototypes. He also suggested concrete technical options, like building simple interactive shelves or objects with Arduino and available hardware, instead of keeping everything purely conceptual. That was encouraging, because it connected my ideas to components that are actually available in our labs.
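To get a feeling for how small such a prototype could start, I drafted a first shelf sketch. This is only a rough illustration under my own assumptions (an Arduino with a photoresistor under one product slot and an LED; pin numbers and the threshold are untested placeholders), not a design Martin suggested in detail:

    #include <Arduino.h>

    const int SENSOR_PIN = A0;   // photoresistor under the product slot
    const int LED_PIN    = 13;   // feedback LED on the shelf edge
    const int THRESHOLD  = 600;  // placeholder: light level when slot is empty

    void setup() {
        pinMode(LED_PIN, OUTPUT);
        Serial.begin(9600);
    }

    void loop() {
        // When the product is lifted, more light reaches the sensor.
        int light = analogRead(SENSOR_PIN);
        bool productLifted = light > THRESHOLD;

        // Light the LED and log the event so an observer can timestamp it.
        digitalWrite(LED_PIN, productLifted ? HIGH : LOW);
        if (productLifted) {
            Serial.println("product lifted");
        }
        delay(100);  // simple polling interval
    }

Even this much would already let me observe real pick-up events in a study instead of simulating them.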

AI Disclaimer
This blog post was polished with the assistance of AI.

IMPULSE #7: Life Story

A few days ago I had a very simple mission. Go to the store, buy a few things, get out. Instead it turned into an unplanned usability study. I needed cornstarch. That is not exactly an exotic product, so I walked to the baking section, then to the sauces, then to the international food aisle, then back to baking. I walked the same path again and again and still could not find it. At some point I just stood there in the middle of the aisle and realised that I was living inside my own thesis problem.

I knew the store had cornstarch; it is a common product and I had bought it there before, but my internal map completely failed. Shelf labels were tiny and placed at odd positions. My working memory was full of other items from my shopping list. After about twenty minutes of wandering, I finally found it in the very middle of a shelf, in a spot that should have been easy to notice, yet I had walked past it without seeing it at all. That moment was the first impulse. If I had my imagined AR glasses, connected to the store’s inventory, this would have been a two-second problem.

The story did not end there. When I finally picked up the cornstarch, there were two brands. The packaging looked almost identical. I could not see at a glance what the difference was, apart from a small price variation and some vague marketing text. I stood there comparing ingredients, Googling on my phone, opening product pages and reviews, trying to understand which one to choose. That felt like a second micro usability test. Finding the product is one task, choosing between options is another. Both were slower and more frustrating than they needed to be.

Later I told this story to friends and a few people immediately answered with similar experiences. They knew the store had a product, but could not locate it. Or they found something, then spent ten minutes trying to compare slightly different versions without any help. Some of them are very tech-comfortable, so this is not a “user error”. It is a mix of confusing layout, poor signage and the cognitive load of making small decisions in a crowded, noisy environment.

This small field visit also changed how I think about evaluation. It is easy to say “AR will save time in the supermarket”. Now I have a real reference situation where I can ask people how long they typically search for items, how often they feel lost, and how they currently make product choices. I can imagine measuring the difference between the current experience and a guided AR version in a prototype study. The frustration I felt in front of that shelf is exactly the kind of pain point that can justify the complexity of an AR and IoT system.

In the end, this was just a normal shopping trip, but it gave me a very strong validation that my topic is grounded in everyday life. People are already hacking the system with their phones and Google. My research question is how to turn that into a seamless, spatially aware experience that lives in the environment itself instead of on a small screen.

AI Disclaimer
This blog post was polished with the assistance of AI.

IMPULSE #6: Book “Practical Augmented Reality”

I expected the book Practical Augmented Reality: A Guide to the Technologies, Applications, and Human Factors for AR and VR to be a technical overview. Instead, it turned into a kind of design manual for my master’s thesis on leveraging AR and IoT to improve the shopping experience with context-aware AR glasses. The book helped me connect big technological concepts to very concrete design decisions for my own project.

Seeing AR as “aligned, contextual and intelligent”

Early in the book, Aukstakalnis defines augmented reality as not just overlaying random graphics on the real world, but aligning information with the environment in a spatially contextual and intelligent way. This sounds simple, but it actually shifted how I thought about my shopping glasses. It is not enough to place floating labels next to products. The system needs to understand where I am, what shelf I am looking at, and which task I am trying to complete, then lock information to those objects. This definition pushed me to think more seriously about IoT integration and precise tracking, so that a price, rating, or nutrition label is always attached to the right item in space.

Designing from the human senses outward

The structure of the book also influenced how I plan my thesis. Aukstakalnis starts with the mechanics of sight, hearing and touch, and only then moves on to displays, audio systems, haptics and sensors. That “inside out” perspective reminded me that my AR glasses concept should begin from human perception, not from whatever hardware is trendy. Reading about depth cues, eye convergence and accommodation, and how easily they can be disturbed by poorly designed displays, made me much more careful about how much information I want to show and at what distances.

For my thesis this means keeping overlays light, avoiding clutter in the central field of view, and respecting comfortable reading distances. It also supports my idea of using short, glanceable cards in the periphery instead of stacking lots of text in front of the user’s eyes.

Translating cross domain case studies into retail

The applications section of the book covers fields like architecture, education, medicine, aerospace and telerobotics. None of them are about grocery shopping, but a common pattern appears: AR and VR are most powerful when they help people understand complex spatial information, rehearse tasks safely, or make better decisions with contextual data. I realised that retail has the same ingredients. Shelves, wayfinding and product comparisons are all spatial problems with hidden data behind them.

This insight strengthened the core vision of my thesis. My AR and IoT concept is not just about showing coupons in the air. It is about turning the store into an understandable information space, where digital layers explain what is currently invisible: where a product is, how fresh it is, how it fits personal constraints like allergies or budget, and how it compares to alternatives.

Impact on my thesis work

Overall, Practical Augmented Reality gave me three concrete things for my master’s project. First, a precise vocabulary and mental model for AR systems, which helped me write a clearer research question and background section. Second, a checklist of human factor issues that I now plan to address through prototype constraints and user testing. Third, a library of real world examples that prove similar technologies already deliver value in other domains, which I can reference when I argue why AR glasses for shopping are realistic in the near future.

Reading the book was less about copying solutions and more about understanding the hidden structure behind successful AR systems. That structure now guides how I want to combine AR, AI and IoT in an everyday retail scenario without forgetting the humans wearing the glasses.

AI Disclaimer
This blog post was polished with the assistance of AI.

IMPULSE #5: Preparation For A PhD

This impulse is a bit unusual compared to a museum or a festival, because it did not happen in one specific room. It happened at my desk, in front of piles of PDFs. I had to start preparing my PhD proposal even before finishing my master’s thesis, mainly because of time pressure and my personal situation with the army. That pressure turned into a very intense, focused research sprint. I spent several evenings reading and analysing work on AR, AI and IoT to frame a possible PhD topic that extends my master’s project instead of repeating it.

The three main sources that shaped this impulse were the paper “IoT + AR: pervasive and augmented environments for ‘Digi-log’ shopping experience” by Dongsik Jo and Gerard Jounghyun Kim, the CHI paper “UI Mobility Control in XR: Switching UI Positionings between Static, Dynamic, and Self Entities” by Siyou Pei and colleagues, and the book “Practical Augmented Reality” by Steve Aukstakalnis. Together they created a kind of mini-course for me: one about the future of physical retail, one about interaction patterns in XR, and one about the broader technology and human factors behind all of this.

Observations: From “Cool Idea” To Structured Research Questions

Reading Jo and Kim’s “Digi-log shopping” paper was the moment where my retail ideas suddenly felt less like a personal fantasy and more like part of an actual research landscape. Their concept of blending digital overlays with the physical store confirmed that the direction of my thesis is relevant, but it also showed what has already been tried: navigation, in-store recommendations, context-aware content. While I was reading, I kept noting down where my own IKEA and grocery scenarios overlap and where they differ. That helped me see that my contribution should not just be “AR in shopping”, but more specifically about interaction patterns and how to keep users in control in these pervasive systems.

The UI mobility paper pushed me even harder in that direction. It analyses how interface elements can be anchored in XR: fixed to the world, attached to the body, or moving with the user. I realised that many of my early sketches for AR glasses assumed a single style of UI placement without questioning it. The paper gave me vocabulary and structure to ask concrete questions: when should a navigation cue be world-locked, when should it follow the head, and when should it sit on the wrist? This was very useful both for tightening my master’s concept and for defining a sharper PhD angle around “interaction patterns for context-aware AR glasses”.
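To internalise that taxonomy, I sketched how a system might represent the three anchoring choices. This is purely my own illustration with a simplified pose type and invented element names, not code from the paper:

    #include <iostream>
    #include <string>
    #include <vector>

    // Simplified 3D pose of a UI element (position only, for illustration).
    struct Pose { float x, y, z; };

    // The three anchoring styles: fixed in the world, following the head,
    // or attached to the user's body (e.g. the wrist).
    enum class Anchor { World, Head, Body };

    struct UiElement {
        std::string name;
        Anchor anchor;
        Pose offset;  // offset relative to the chosen anchor frame
    };

    // Resolve where an element should appear this frame, given the current
    // head and body poses (placeholder values below).
    Pose resolve(const UiElement& e, const Pose& head, const Pose& body) {
        switch (e.anchor) {
            case Anchor::World: return e.offset;  // stays fixed in the room
            case Anchor::Head:
                return {head.x + e.offset.x, head.y + e.offset.y, head.z + e.offset.z};
            case Anchor::Body:
                return {body.x + e.offset.x, body.y + e.offset.y, body.z + e.offset.z};
        }
        return e.offset;
    }

    int main() {
        std::vector<UiElement> ui = {
            {"aisle marker",  Anchor::World, {2.0f, 1.6f, 5.0f}},
            {"notification",  Anchor::Head,  {0.0f, -0.2f, 1.0f}},
            {"shopping list", Anchor::Body,  {0.3f, -0.5f, 0.2f}},
        };
        Pose head{0.0f, 1.7f, 0.0f}, body{0.0f, 1.0f, 0.0f};
        for (const auto& e : ui) {
            Pose p = resolve(e, head, body);
            std::cout << e.name << " -> (" << p.x << ", " << p.y << ", " << p.z << ")\n";
        }
    }

Writing it down this way made the design question tangible: the hard part is not rendering the element, it is deciding per element and per situation which anchor is the right one.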

Main Concept: PhD Preparation As Shared Fuel For My Master’s Thesis And Future Work

The biggest impact of this impulse is that PhD preparation stopped feeling like a separate project. The literature review I did for the proposal feeds directly back into my master’s thesis. It gave me language, references and frameworks that I can already use now: “digi-log experiences” for describing hybrid retail journeys, XR UI mobility for structuring my interaction designs, and a more precise understanding of AR hardware constraints for my scenarios.

So this impulse was not a public event, but it was a very strong push for my Design & Research. Writing the PhD proposal turned my scattered interests in AR, AI and IoT into a more coherent research trajectory. It made me read deeper, think more critically about gaps in existing work, and see my master’s thesis as the first chapter of a longer exploration instead of a one-off project.

“IoT + AR: pervasive and augmented environments for ‘Digi-log’ shopping experience” by Dongsik Jo and Gerard Jounghyun Kim – an HCI paper on blending AR and IoT in retail environments. (PDF via https://d-nb.info/1177365146/34)

“UI Mobility Control in XR: Switching UI Positionings between Static, Dynamic, and Self Entities” by Siyou Pei et al. – a CHI 2024 paper on how XR interfaces move and anchor in space. (Project page: https://duruofei.com/projects/fingerswitch/)

“Practical Augmented Reality: A Guide to the Technologies, Applications, and Human Factors for AR and VR” by Steve Aukstakalnis – a comprehensive AR / VR textbook. (Publisher page: https://eu.pearson.com/practical-augmented-reality-a-guide-to-the-technologies-applications-and-human-factors-for-ar-and-vr/9780134094359)

AI Disclaimer
This blog post was polished with the assistance of AI.

IMPULSE #4: World Usability Congress 2025

Spending two days at the World Usability Congress in Graz made me focus on the UX side of my thesis. The talks I followed were mostly about UX KPIs, usability testing and accessibility, and I kept translating everything into my own topic: AR and IoT in retail. Instead of just thinking about how my future system could look, I started to think in a much more concrete way about how to measure it, test it and make sure it works for real people, not only in prototypes.

KPIs – Learning To Define What “Better” Means

One of the clearest lessons was how seriously UX teams treat KPIs. In my notes I wrote that valuable improvements are often only 10 to 15 percent per quarter, and that this is already considered success. That sounds small, but the important part is that these improvements are defined and measured. The typical UX KPIs that kept coming up were conversion rate, task completion time, System Usability Scale score, Net Promoter Score and error rate.

For my thesis this means I cannot just write “AR wayfinding will improve the shopping experience”. I need to specify what that improvement looks like. For example: people find a product faster, they ask staff for help less often, they feel more confident about their choices. The practical action I took from the congress is: for each feature I design, I will write down one or two concrete metrics and how I would measure them in a real store test. That turns my concepts into something that can be evaluated instead of just admired.
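The System Usability Scale is a good example of how concrete such a metric can be. As a note to myself, this is the standard SUS scoring rule in a minimal sketch; the ten responses are made-up placeholder values from one hypothetical participant:

    #include <array>
    #include <iostream>

    // Standard SUS scoring: ten items rated 1-5. Odd-numbered items
    // contribute (response - 1), even-numbered items (5 - response);
    // the sum is multiplied by 2.5, giving a score from 0 to 100.
    double susScore(const std::array<int, 10>& responses) {
        int sum = 0;
        for (int i = 0; i < 10; ++i) {
            sum += (i % 2 == 0) ? responses[i] - 1    // items 1, 3, 5, 7, 9
                                : 5 - responses[i];   // items 2, 4, 6, 8, 10
        }
        return sum * 2.5;
    }

    int main() {
        std::array<int, 10> responses = {4, 2, 5, 1, 4, 2, 5, 2, 4, 1};
        std::cout << "SUS score: " << susScore(responses) << "\n";  // prints 85
    }

Having the scoring rule written out keeps me honest: a statement like “the AR version scored 85 on SUS” is measurable in a way that “users liked it” never is.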

Accessibility As A Built In Check, Not An Extra

The accessibility track was also directly relevant. In my notes I wrote down a “quick checklist” that one speaker shared: check page layout and content, contrast and colours, zoom, alerts and error messages, images and icons, videos, no flashing animation, and audio-only content. It is simple, but exactly because it is simple it is realistic to apply often.

For my AR and IoT ideas, this becomes a routine step. Whenever I sketch a screen or overlay, I can quickly run through that checklist and ask how my work affects accessibility for end users: are colours readable on top of a busy store background? Can text be enlarged? Is there a non-visual way to access key information? Combined with talks about accessibility on a corporate level and inclusive design for neurodivergent people, it pushed me to treat accessibility as a default requirement. The concrete action is to document accessibility considerations in my thesis for every main feature, instead of adding a separate chapter at the end.

What I Take Back Into My Thesis

After the World Usability Congress, my AR and IoT retail project feels less like a collection of futuristic ideas and more like something that could be developed and tested step by step. The congress gave me two practical habits. First, always define UX KPIs before I design a solution, so “better” is not vague. Second, run an accessibility quick check on every main screen or interaction and think about different types of users from the start.

This fits nicely with my other blog reflections. The museum visit gave me ideas about where AR and IoT could be applied. The festival made me think about wayfinding and smart environments. World Usability Congress added the missing layer: methods to prove that these ideas actually help people and do not silently exclude anyone.

Links
Official conference homepage
World Usability Congress – Home

2025 agenda with talks and speakers
World Usability Congress 2025 – Agenda

AI Disclaimer
This blog post was polished with the assistance of AI.

IMPULSE #3: Meta Connect 2025 – AR Moving From Headsets To Everyday Life

Watching Meta Connect 2025 felt like seeing my thesis topic walk on stage. The focus was not on big VR helmets anymore but on glasses that look close to normal and are meant to be worn in everyday life. The main highlight was the new Meta Ray-Ban Display, a pair of smart glasses with a small full color display in one lens and a lot of AI power built in. They are controlled with a neural wristband that reads tiny finger movements, so you can click or scroll with almost invisible gestures.

When I started this topic, I theorized about how the technology was going to look, and I had my hopes and assumptions. A few years ago AR meant heavy hardware that you would never wear into a supermarket or furniture store. Now the vision is a pair of sunglasses that weigh about as much as regular glasses, can show simple overlays in your field of view and are designed to be worn on the street, in a shop or on the couch. The technology is still expensive and early, but watching the keynote made it very clear that the direction is: smaller, lighter, more normal looking, and more tightly connected with AI.

It can be compared to the evolution of phones and of technology in general: our everyday devices moved from heavy and bulky to light and portable. There was scepticism along the way too, with people saying we did not need phones because we had laptops, but technology advances and we keep finding new ways to interact with the world.

What I Learned About AR From The Event

The first learning is about form factor. The Ray-Ban Display does not try to turn your whole field of view into a digital world. It uses a compact display area to show only what is necessary: navigation hints, messages, short answers from Meta AI or the title of a song that is playing. Instead of replacing reality, it adds a thin layer on top of it.

The second learning is about interaction. The neural wristband is a good reminder that people do not want to wave their arms in public to control AR. In real environments like a festival, a museum or a supermarket, subtle gestures or simple taps are much more realistic.

The third learning is the merge of AI and AR. The glasses are clearly designed as AI first devices. They can answer questions, translate speech, caption what you hear and see, and then present this information visually inside the lens.

Technology Getting Smaller And More Accessible

Another strong theme in Meta Connect is how quickly the hardware is trying to become socially acceptable. Earlier devices were clearly gadgets. These glasses try to be fashion first, tech second. They look like familiar Ray-Ban frames instead of a prototype. The same is true for battery life and comfort. The promise is that you can wear them for several hours without feeling like you are in a lab experiment.

Why Meta Connect Matters For My Thesis

Meta Connect 2025 confirmed that my scenarios for AR in retail are not just science fiction. The building blocks are emerging in real products: lightweight glasses, AI assistants, subtle input methods and simple overlays instead of full virtual worlds. For my master’s thesis this is both motivating and grounding. It tells me that the interesting design work is no longer about asking if AR will be possible in stores, but about shaping how it should behave so that it actually helps people shop, learn and navigate without stealing the spotlight.

Technology should become smaller, calmer and closer to everyday objects, so it can quietly support what people already want to do in physical spaces. Not to replace those spaces, but to make moving through them a little clearer, smarter and more human.

Links

Official Meta recap of the Connect 2025 keynote (Ray-Ban Display, Neural Band etc.)
Meta Connect 2025 – AI Glasses And Ray-Ban Display

Meta product page for Ray-Ban Meta smart glasses (for specs and positioning)
Ray-Ban Meta Smart Glasses – Meta

General info / news listing around Meta smart glasses and AI wearables
Meta – Newsroom / Ray-Ban Meta Announcements

AI Disclaimer
This blog post was polished with the assistance of AI.

IMPULSE #2: A Night of Techno – Losing Yourself And Finding Your Way

The night I saw Charlotte de Witte at Signal Festival was pure overload: heavy bass, a dense crowd, strobing lights, smoke, multiple bars and stages, lockers, queues, a constant flow of people in every direction. As an experience it was amazing. But I am a designer, and the thing I always catch myself doing is asking how the whole event could be optimized, and how that connects to my thesis.

Observations: Immersion Versus Orientation

One of my strongest observations was how different immersion and orientation felt. Immersion was perfect. When I was in front of the main stage, I did not need any interface; the sound and visuals were enough. Orientation was a different story. Moving away from the stage meant guessing, especially once you had a drink or two. Where is the nearest bar that is not overcrowded? Which corridor leads to the toilets? How do I get back to my locker without opening the venue map again and again? The more time passed, the more people were intoxicated, and the weaker everyone’s internal navigation became.

At some point I lost my friends in the crowd and we had the usual routine: messages that did not go through, vague descriptions like “I am near the left bar” that are useless in a dark hall, and the classic feeling of spending twenty minutes trying to reconnect. When you are sober this is still slightly annoying. Once you are drunk, it becomes hard work.

Understanding: How AR And IoT Could Be A Soft Safety Net

This is where I started to imagine an IoT-based guidance system with AR as the interface. IoT beacons or other positioning technology could be distributed across the venue: every bar, locker zone, toilet block and entrance could have its own tiny digital footprint. If visitors opt in, AR glasses could use this network to understand three basic things in real time: where they are, where their friends are, and where key services are located.

In practice, that could look very simple. An AR arrow could hover in my view and gently lead me to my locker, even if I barely remember which area I used. A small indicator could show me which direction my friends are in and roughly how far away, and it could notify me if a friend needs help, since safety issues can come up when strangers approach and harass people. If I want a drink, the system could show the nearest bar, or point me to the smoking area. If there is an emergency or I need to leave quickly, the AR layer could highlight the closest safe exit instead of forcing me to rely on my memory in a confused state.
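To understand how realistic the positioning part is, I looked at how distance is commonly estimated from a beacon’s signal strength, using the standard log-distance path-loss model. The sketch below is only a toy version with placeholder beacon values, and it ignores all the noise a crowded, body-filled venue would add:

    #include <cmath>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Beacon {
        std::string label;  // e.g. "left bar", "locker zone A"
        int txPower;        // calibrated RSSI in dBm at 1 m distance
        int rssi;           // currently measured RSSI in dBm
    };

    // Log-distance path-loss model: d = 10 ^ ((txPower - rssi) / (10 * n)),
    // where n is the path-loss exponent (about 2 in free space, higher
    // indoors; 2.5 here is a rough placeholder for a crowded hall).
    double estimateDistance(const Beacon& b, double n = 2.5) {
        return std::pow(10.0, (b.txPower - b.rssi) / (10.0 * n));
    }

    int main() {
        // Placeholder readings from three venue beacons.
        std::vector<Beacon> beacons = {
            {"left bar",      -59, -75},
            {"locker zone A", -59, -68},
            {"toilet block",  -59, -90},
        };

        // A guidance layer could simply steer toward the nearest match.
        for (const auto& b : beacons) {
            std::cout << b.label << ": ~" << estimateDistance(b) << " m\n";
        }
    }

In reality RSSI jumps around badly in a dense crowd, which is exactly why the AR layer should show soft directional hints rather than pretend to know exact positions.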

Main Concept: Festivals As Prototypes For Smart Guidance

The main concept that came out of Signal Festival for me is the idea of a soft, ambient guidance system built on AR and IoT. The festival does not need more screens; it needs invisible structure that supports people at the right moment. A network of small, low-power devices in the space can give the system awareness of positions and states, and AR then becomes a thin, context-aware layer on top of that awareness, elevating the user experience. It answers very simple questions: where am I, where is what I need, and how do I get back?

This is closely related to my retail research. A music festival is like an extreme version of a shopping mall. Both are large, noisy, crowded environments where people try to reach specific goals while managing limited energy and attention. If a guidance system can help a drunk visitor find the right bar, locker or friend in a dark venue, it can certainly help a tired shopper find the right aisle or click and collect point in a busy store.

Links
Event page for Signal Festival Weekend 2 at Pyramide
Signal Festival – PYRAMIDE TAKEOVER WE2 (O-Klub)

Techno event listing with headliners and description
Signal Festival Pyramide WE2 – Event Overview (technomusicworld.com)

Local article about Signal Festival in the glass pyramid
Signal Festival in der Pyramide Vösendorf – Heute.at

AI Disclaimer
This blog post was polished with the assistance of AI.

IMPULSE #1: Kunsthistorisches Museum Wien – Analog Space, Digital Ideas

Visiting the Kunsthistorisches Museum Wien felt almost the opposite of my thesis topic. It’s a very “analog” space: heavy architecture, old masters, quiet rooms, and almost no visible technology. Apart from the optional audio guide device, there are no screens, no projections, no interactive installations. You move from room to room, read the small wall texts and simply look.

That contrast is exactly what made the visit so valuable for me as an interaction design student. I wasn’t impressed by high-tech features. I was impressed by how much potential there is for technology to quietly support the experience without taking attention away from the art itself. The museum became a kind of mental sandbox where I could imagine how AR and IoT might be implemented in a very delicate context: history, culture, and learning.

Observations: A Classical Museum with a Small Digital Layer

My main observation was how traditional the user journey still is. You enter, pick a wing, and mostly navigate by room numbers, map and intuition. The only digital touchpoint I used was the handheld audio guide. Even that already shows the basics of what I work with in my thesis: an extra information layer on top of the physical space. You enter a painting number, press play, and suddenly you get context, story and meaning instead of just title, date and artist.

But the interaction is linear and passive. You always get the same story, no matter who you are, how much you already know, or what caught your eye. There is no way for the system to “notice” that you are fascinated by one detail and want to go deeper, or that you are in a hurry and only want a short summary. It made me see very clearly where today’s museum tech stops and where AR and IoT could start.

Understanding: Technology Should Support the Artwork, Not Compete with It

Standing in front of paintings, I tried to imagine AR in the room. The danger is obvious: if we fill the space with too many digital elements, the painting becomes a background for the interface. That’s exactly what I do not want, and it connects strongly to my thesis: technology must serve the human and the content, not distract from it.

So my understanding is that any AR or IoT system in a museum like this would have to be extremely calm, subtle and respectful. The artwork stays the main actor. AR is just a transparent layer that appears only when the visitor asks for it. IoT devices like small beacons near the frame could be completely invisible, only there to let the system know where you are and what you’re looking at. The goal is not to “modernise” the museum for its own sake, but to deepen the connection between visitor and artwork.

Main Concept: A Future AR & IoT Guidance Layer for Museums

The main concept that came out of this visit is to treat the museum as a potential case study for the same principles I explore in smart retail: guided navigation, contextual information, and personalised journeys, all powered by AR and IoT.

I imagined wearing AR glasses instead of holding an audio guide. When I look at a painting for more than a few seconds, a small icon could appear next to it in my field of view. If I confirm, the system overlays very minimal hints: a highlight around a specific detail, a short caption, or the option to see a brief animation explaining the story behind the scene. If I want more, I can dig deeper, maybe seeing a reconstruction of how the painting originally looked, or how it was restored. If I don’t, nothing changes; I just keep looking with my own eyes.
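The dwell-based trigger in that scenario can be prototyped as logic even without real glasses. This is a minimal sketch of my imagined behaviour, assuming a hypothetical gaze-tracking loop; the three-second threshold and the painting name are invented for illustration:

    #include <iostream>
    #include <optional>
    #include <string>

    // Minimal dwell trigger: show the info icon only after the visitor
    // has looked at the same painting for a few seconds.
    class DwellTrigger {
        std::optional<std::string> current_;  // painting currently gazed at
        double dwellSeconds_ = 0.0;
        const double threshold_;

    public:
        explicit DwellTrigger(double thresholdSeconds)
            : threshold_(thresholdSeconds) {}

        // Called every frame with the gazed-at painting (or nothing) and
        // the time since the last frame; true means "show the icon".
        bool update(const std::optional<std::string>& gazed, double dt) {
            if (gazed != current_) {   // gaze moved on: reset the timer
                current_ = gazed;
                dwellSeconds_ = 0.0;
            } else if (gazed) {
                dwellSeconds_ += dt;
            }
            return current_ && dwellSeconds_ >= threshold_;
        }
    };

    int main() {
        DwellTrigger trigger(3.0);  // 3 s dwell, a placeholder value
        // Simulate ~4 s of steady gaze at one painting, 10 frames per second.
        for (int frame = 0; frame < 40; ++frame) {
            if (trigger.update(std::string("Tower of Babel"), 0.1)) {
                std::cout << "show info icon at frame " << frame << "\n";
                break;
            }
        }
    }

What I like about this pattern is that doing nothing remains the default: the overlay appears only after sustained attention, which matches the calm and respectful behaviour I described above.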

The same system could also redesign the wayfinding experience. Instead of a fixed predefined tour, AR could show me a route that matches my interests and time: “Show me five highlights from the Renaissance in 45 minutes,” or “Guide me only to works that relate to mythology.” IoT sensors in rooms could provide live information about crowding, so the path avoids the most packed galleries and keeps the experience more relaxed.

What mattered most for me in this museum visit was not what technology was already installed, but the mental exercise of placing my thesis ideas into this setting. It helped me see that the principles I am developing for AR and IoT have wider use cases than the intended retail one: subtle guidance, context-aware information, and respect for the physical environment also make sense in a cultural space.

Links

Official museum site
Kunsthistorisches Museum Wien – Official Website (khm.at)

Visitor overview and highlights in English
Kunsthistorisches Museum – Overview & Highlights (visitingvienna.com)

Background and history of the building
Kunsthistorisches Museum Wien – Wikipedia

AI Disclaimer
This blog post was polished with the assistance of AI.

AUGMENTING SHOPPING REALITIES: STUDIES ON AUGMENTED REALITY (AR) IN RETAIL: Review

Thesis: Augmenting Shopping Realities: Studies on Augmented Reality (AR) in Retail

Author: Camen Teh

Degree/Institution/Date: PhD, University of Nottingham, February 2023.

How is the artifact documented?

There are basically two “work pieces.” First, the hierarchical value map from Study 1, which connects AR attributes, consequences, and values. It’s documented with an implication matrix, centrality and abstractness indices, and a final map; you can see which items ranked most important, like product evaluation, amplified product information, and product knowledge. 

Second, the web-based AR product presentation used in the field experiment. The design varies the visual style (cartoonised vs realistic) and whether simple control buttons are present. When students scan a code on the product, one of the versions plays. The figures make the manipulations and the trigger flow clear.

Where and how can it be accessed?

I discovered the research while scrolling through HCI and AR publications from various universities and stumbled upon this thesis. It is accessible on the University of Nottingham web page; I was not able to find it on other platforms.

Do theory and implementation align?

Yes. Study 1 shows that shoppers care about information that builds knowledge and supports evaluation; Study 2 then tries concrete design moves (visual style + simple controls) that could nudge curiosity and controllability, and checks whether that helps people make sense of a “weird” product and move toward buying. The pipeline is coherent.

Is the documentation clear and comprehensible?

Yes. The thesis shows step-by-step procedures, visuals for manipulations, and full instrument lists; reliability/validity and manipulation checks are reported, enabling reproducibility and critique.  

Does the artifact meet master’s level quality standards?

This is a PhD thesis, so the bar is higher than for a master’s project. Nevertheless, the research constitutes substantive work pieces with clear research aims, implementation details, and evaluation, meeting and in places exceeding typical master’s expectations. The absence of a public build or repository is a practical limitation rather than a quality flaw.


Systematic Evaluation (CMS Criteria)

  • Overall presentation quality: Clean structure, figures and tables used purposefully. Reads polished.
  • Degree of innovation: Neat combo: means-end mapping of retail AR, then a live store test of visual style and simple controls. The finding that a stylized look can help sense-making for unfamiliar products is genuinely interesting.
  • Independence: The author builds or modifies stimuli, runs a campus-store experiment with random assignment, and reports checks and stats. Feels hands-on.
  • Organization and structure: The thesis is easy to follow: it opens with an introduction and an overarching AR literature review, then presents Study 1, a short bridge chapter that links Study 1 to Study 2, and a full chapter on the field experiment, before closing with implications and limitations. The table of contents and chapter headers make this flow clear.  
  • Communication: The author explains the manipulations plainly and even shows them with simple figures, and the measurement section reports reliability and validity in a straightforward way. 
  • Scope: Study 1 goes deep with 45 interviews, which is plenty to build the value map, and Study 2 is sizable for a field setting with 197 student participants. It tracks both purchase intention and an objective purchase measure via pre-orders, so the behavioral side isn’t just hypothetical.
  • Accuracy and attention to detail: The author explains what was tested and shows that the setup worked as intended. Most of the questionnaires feel solid, and while one of them is a bit shaky, it doesn’t break the study. Overall, the write-up is careful, tidy, and easy to follow.
  • Literature: The work includes a focused AR-in-retail review in Appendix A with a transparent selection funnel that narrows to 53 journal papers, and the measurements used in the experiment are adapted from prior validated scales and documented in the item tables. It reads grounded rather than hand-wavy.  

Overall Assessment (strengths & weaknesses)

Overall, this is a well put together thesis that treats AR in retail as a tool for better decisions rather than a flashy add-on. It moves cleanly from ideas to practice: first mapping what shoppers actually need from AR, then testing simple design choices in a real store. The write-up is clear, the artifacts are documented inside the thesis, and the practical message is easy to use: give people decision-relevant, useful information, let them control the presentation a little, and don’t assume photorealism is always the best choice for unfamiliar products.

There are a few issues. The live AR build isn’t shared as a public demo, and the field test sits in a single, student-heavy setting, so we should be careful about claiming it works everywhere. Still, the work is coherent, transparent, and genuinely helpful for anyone designing AR in shops. For a PhD, it comfortably meets the standard and, in places, goes beyond it.