Impulse #2: Computer Vision in UI/UX

After diving into Picard’s vision of emotionally intelligent systems, I have now found a more technical and practical perspective on how computer vision is already reshaping UI testing. The research paper Computer Vision for UI Testing: Leveraging Image Recognition and AI to Validate Elements and Layouts explores the automated detection of UI problems using image recognition techniques, something highly relevant for improving UX/UI workflows today.

Img: Unveiling the Impact of Computer Vision on UI Testing. Pathak, Kapoor

Using Computer Vision to Validate Visual UI Quality

The authors explain that traditional UI testing still relies heavily on manual inspection or DOM-based element identification, which can be slow, brittle and prone to human error. In contrast, computer vision can directly analyze rendered screens: detecting missing buttons, misaligned text, broken layouts, or unwanted shifts across different devices and screen sizes. This makes visual testing more reliable and scalable, especially for modern responsive interfaces where designs constantly change during development.

One key contribution of the paper is the use of deep learning models such as YOLO, Faster R-CNN, and MobileNet SSD for object detection of UI elements. These models not only recognize what is displayed on the screen but also verify whether the UI looks as intended, something code-based tools often miss when designs shift or UI elements become temporarily hidden under overlays. By incorporating techniques like OCR for text validation and structural similarity (SSIM) for layout comparison, the testing process becomes more precise in catching subtle visual inconsistencies that affect the user experience.
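To get a feel for how this works in practice, here is a minimal sketch (my own illustration, not the authors’ implementation) of SSIM-based layout comparison and OCR text validation in Python. The file names, the similarity threshold, and the expected button label are assumptions for demonstration purposes.

```python
# Minimal sketch: compare a baseline screenshot against the current render with SSIM,
# and check rendered text via OCR. Paths, threshold and label are illustrative only.
import cv2
import pytesseract
from skimage.metrics import structural_similarity as ssim

def layout_matches(baseline_path: str, current_path: str, threshold: float = 0.98) -> bool:
    """Return True if the current screenshot is structurally similar to the baseline."""
    baseline = cv2.imread(baseline_path, cv2.IMREAD_GRAYSCALE)
    current = cv2.imread(current_path, cv2.IMREAD_GRAYSCALE)
    # Bring both screenshots to the same size before comparing them.
    current = cv2.resize(current, (baseline.shape[1], baseline.shape[0]))
    score, _ = ssim(baseline, current, full=True)
    return score >= threshold

def visible_text(screenshot_path: str) -> str:
    """Extract the rendered text via OCR, e.g. to confirm a button label is present."""
    return pytesseract.image_to_string(cv2.imread(screenshot_path))

if __name__ == "__main__":
    if not layout_matches("login_baseline.png", "login_current.png"):
        print("Visual regression detected")
    if "Sign in" not in visible_text("login_current.png"):
        print("Expected button label is missing")
```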

Conclusion

This opens a potential master thesis direction where computer vision not only checks whether UI elements are visually correct but also evaluates user affect during interaction, identifying frustration, confusion, or cognitive overload as measurable usability friction. Such a thesis could bridge technical UI defect detection with affective UX evaluation, moving beyond “does the UI render correctly?” toward “does the UI emotionally support its users?”. By combining emotion recognition models with CV-based layout analysis, you could develop an adaptive UX testing system that highlights not only where usability issues occur but also why they matter to the user.
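Just to make the idea tangible, here is a purely hypothetical sketch of how the two signals could be merged into a single metric per screen; all names, scores, and weights are invented placeholders and not part of the paper.

```python
# Hypothetical sketch: merge a CV-based layout defect score and an affect estimate
# into one "usability friction" value. All names and weights are placeholders.
from dataclasses import dataclass

@dataclass
class FrictionReport:
    screen_id: str
    layout_defect_score: float  # 0.0 = pixel-perfect, 1.0 = severely broken
    frustration_score: float    # 0.0 = relaxed user, 1.0 = highly frustrated
    friction: float             # combined usability friction

def usability_friction(screen_id: str, layout_defect_score: float,
                       frustration_score: float,
                       w_layout: float = 0.5, w_affect: float = 0.5) -> FrictionReport:
    """Weighted combination of visual defects and user affect; the weights would
    need to be validated empirically in the thesis."""
    friction = w_layout * layout_defect_score + w_affect * frustration_score
    return FrictionReport(screen_id, layout_defect_score, frustration_score, friction)

# Example: a screen that renders almost correctly but visibly frustrates the user
print(usability_friction("checkout", layout_defect_score=0.1, frustration_score=0.8))
```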

Source: https://www.techrxiv.org/users/898550/articles/1282199-computer-vision-for-ui-testing-leveraging-image-recognition-and-ai-to-validate-elements-and-layouts

Prototyping VI: Image Extender – Image sonification tool for immersive perception of sounds from images and new creation possibilities

New features in the object recognition and a test run with images:

Since the initial freesound.org and Gemini API setup, I have added several improvements. You can now choose between different object recognition models and adjust settings such as the number of detected objects and the minimum confidence threshold.

GUI for the settings of the model
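To illustrate what these settings do behind the GUI, here is a small sketch of how the chosen values could be applied to the raw detection output; the Detection tuple and the model identifier are assumptions and do not reflect the actual Gemini response format.

```python
# Sketch of applying the GUI settings (max objects, confidence threshold) to raw
# detections. The Detection tuple and model name are assumptions for illustration.
from dataclasses import dataclass
from typing import List, Tuple

Detection = Tuple[str, float]  # (tag, confidence)

@dataclass
class DetectionSettings:
    model_name: str = "gemini-1.5-flash"  # placeholder identifier
    max_objects: int = 5
    min_confidence: float = 0.6

def apply_settings(detections: List[Detection], settings: DetectionSettings) -> List[Detection]:
    """Keep only sufficiently confident tags and cap the number of returned objects."""
    confident = [d for d in detections if d[1] >= settings.min_confidence]
    confident.sort(key=lambda d: d[1], reverse=True)
    return confident[: settings.max_objects]

# Example: raw tags from the model, filtered by the settings chosen in the GUI
raw = [("bird", 0.92), ("tree", 0.75), ("cloud", 0.41), ("car", 0.63)]
print(apply_settings(raw, DetectionSettings(max_objects=3)))
# -> [('bird', 0.92), ('tree', 0.75), ('car', 0.63)]
```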

I also created a detailed testing matrix, using a wide range of images to evaluate detection accuracy. Based on these results, I might switch models later on: the Gemini API seems to offer only a fairly basic pool of tags and is not equally well trained in every category.

Test of images for the object recognition

It is still reliable for basic tags like “bird”, “car”, or “tree”. For these tags it also doesn’t really matter if there is a lot of shadow, if only half of the object is visible, or even if the image is blurry. But because of the lack of more specific tags, I will look into models or APIs that offer more fine-grained recognition.
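For reference, this is roughly how I tabulate the testing matrix: each test image is paired with a category and an expected tag, and per-category hit rates show where the recognition falls short. The detect_tags function below is only a stub standing in for the actual model call, and the image paths are made up.

```python
# Sketch of a per-category accuracy table for the testing matrix.
# detect_tags() is a stub for the real recognition call; paths are made up.
from collections import defaultdict
from typing import Dict, List, Tuple

def detect_tags(image_path: str) -> List[str]:
    """Stub: would call the object recognition model and return its tags."""
    return ["bird", "tree"]  # dummy result so the sketch runs

# Hypothetical test matrix: image path -> (category, expected tag)
TEST_IMAGES: Dict[str, Tuple[str, str]] = {
    "img/blackbird_shadow.jpg": ("animals", "bird"),
    "img/car_half_visible.jpg": ("vehicles", "car"),
    "img/oak_blurry.jpg": ("plants", "tree"),
}

def category_hit_rates() -> Dict[str, float]:
    """Share of test images per category where the expected tag was detected."""
    hits, totals = defaultdict(int), defaultdict(int)
    for path, (category, expected) in TEST_IMAGES.items():
        totals[category] += 1
        if expected in detect_tags(path):
            hits[category] += 1
    return {category: hits[category] / totals[category] for category in totals}

print(category_hit_rates())  # e.g. {'animals': 1.0, 'vehicles': 0.0, 'plants': 1.0}
```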

Coming up: I’ll be working on whether to auto-play or download the selected audio files, as well as layering sounds, adjusting volumes, and experimenting with EQ and filtering, all to make the playback more natural and immersive. Also, I will think about categorization and moving the tags into a layer system. Besides that, I am going to check other object recognition models, but I might stick to the Gemini API for prototyping a bit more and change the model later.
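As a first idea for the playback part, layering and filtering two downloaded Freesound clips could look something like this with pydub; the file names, gain values, and cutoff frequency are just placeholders for experimentation.

```python
# Sketch of layering two Freesound clips into one mix with pydub.
# File names, gain values and the low-pass cutoff are illustrative placeholders.
from pydub import AudioSegment

def build_mix(foreground_path: str, background_path: str, out_path: str) -> None:
    foreground = AudioSegment.from_file(foreground_path)   # e.g. a bird call
    background = AudioSegment.from_file(background_path)   # e.g. wind or traffic

    background = background - 8                    # lower the background by 8 dB
    background = background.low_pass_filter(4000)  # soften harsh highs (simple EQ)
    foreground = foreground.fade_in(500)           # avoid an abrupt start

    mix = background.overlay(foreground, position=1000)  # foreground starts at 1 s
    mix.export(out_path, format="wav")

build_mix("bird.wav", "wind.wav", "scene_mix.wav")
```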