Impulse #2: Computer Vision in UI/UX

After diving into Picard’s vision of emotionally intelligent systems, I found a more technical and practical perspective on how computer vision is already reshaping UI testing. The research paper “Computer Vision for UI Testing: Leveraging Image Recognition and AI to Validate Elements and Layouts” explores the automated detection of UI problems using image recognition techniques, something highly relevant for improving UX/UI workflows today.

Img: Unveiling the Impact of Computer Vision on UI Testing. Pathak, Kapoor

Using Computer Vision to Validate Visual UI Quality

The authors explain that traditional UI testing still relies heavily on manual inspection or DOM-based element identification, which can be slow, brittle and prone to human error. In contrast, computer vision can directly analyze rendered screens: detecting missing buttons, misaligned text, broken layouts, or unwanted shifts across different devices and screen sizes. This makes visual testing more reliable and scalable, especially for modern responsive interfaces where designs constantly change during development.

One key contribution of the paper is the use of deep learning models such as YOLO, Faster R-CNN, and MobileNet SSD for object detection of UI elements. These models not only recognize what is displayed on the screen but also verify whether the UI looks as intended, something code-based tools often miss when designs shift or UI elements become temporarily hidden under overlays. By incorporating techniques like OCR for text validation and structural similarity (SSIM) for layout comparison, the testing process becomes more precise at catching subtle visual inconsistencies that affect the user experience.
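To make that concrete for myself, here is a minimal sketch of the SSIM-style layout comparison the paper describes, written with scikit-image. The file names and the 0.97 threshold are my own assumptions, not values from the paper:

```python
# Minimal sketch of SSIM-based visual regression, in the spirit of the
# paper's layout-comparison step. File names and the 0.97 threshold are
# illustrative assumptions. Both screenshots must share one resolution.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

def load_gray(path: str) -> np.ndarray:
    """Load a screenshot and convert it to grayscale for SSIM."""
    return np.array(Image.open(path).convert("L"))

baseline = load_gray("baseline_home_screen.png")
current = load_gray("current_home_screen.png")

# SSIM returns a global score (1.0 = identical) plus a per-pixel
# similarity map that highlights where the layouts diverge.
score, diff_map = structural_similarity(baseline, current, full=True)

if score < 0.97:  # assumed tolerance for "visually equivalent"
    # Low-similarity regions point at shifted, missing, or broken elements.
    changed = np.argwhere(diff_map < 0.5)
    print(f"Visual regression suspected (SSIM={score:.3f}), "
          f"{len(changed)} low-similarity pixels")
else:
    print(f"Layouts match (SSIM={score:.3f})")
```

In a real pipeline, an OCR pass (e.g. over the low-similarity regions) would then validate the text content, which is the second technique the paper pairs with SSIM.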

Conclusion

This opens a potential master’s thesis direction in which computer vision not only checks whether UI elements are visually correct but also evaluates user affect during interaction, identifying frustration, confusion, or cognitive overload as measurable usability friction. Such a thesis could bridge technical UI defect detection with affective UX evaluation, moving beyond “does the UI render correctly?” toward “does the UI emotionally support its users?”. By combining emotion recognition models with CV-based layout analysis, one could develop an adaptive UX testing system that highlights not only where usability issues occur but also why they matter to the user.

Source: https://www.techrxiv.org/users/898550/articles/1282199-computer-vision-for-ui-testing-leveraging-image-recognition-and-ai-to-validate-elements-and-layouts

Business Management x Customer Experience x Data Visualisation: 1st prototype

In the current digital era, Customer Experience has evolved with multichannel support, chatbots, self-service, and virtual assistants to reduce customer effort, driving the development of Customer Relationship Management tools. However, while significant investments have focused on empowering AI-assisted agents, the role of managers has been largely underserved.

I’m designing a solution to empower the “Augmented Manager”, equipping leaders with advanced tools and analytics to optimize the performance of onsite and remote teams in real time and deliver outstanding results in an increasingly complex, tech-driven customer experience ecosystem.

Beside.you is a Software-as-a-Service (SaaS) solution developed to simplify decision-making and boost efficiency for managers.

By solving the biggest business challenges with intuitive functionalities that simplify Steering, Performance Management, and resource Growth, it is shaping a future where all business tools work seamlessly together, unlocking unmatched operational excellence for organizations everywhere.

This small demo showcases parts of the product experience on offer, beginning with Steering at its macro and micro levels:

Testing & Iterating using Lean UX: ‘Macro’ Steering Dashboard

As part of my SaaS project, one of the user interfaces that had to be designed was the Steering dashboard. It took a lot of back and forth, feedback, iterations, and meetings to arrive at a good-enough version in terms of functionality and business value. As a product designer using Lean UX as a methodology, at some stage I had to run an iteration test.

Macro and Steering are the fundamentals this interface consists of; they serve the business internally as much as they serve it externally with clients and prospects.

Macro: refers to the holistic overview of an insights dashboard; it’s part of the macro-micro dynamic I’m using as a systems-thinking approach for the whole project.

Steering: provides a real-time view of every KPI needed to make better decisions and steer the business towards a better course.

This testing/iteration process shows the first level of a user interface that required few user interviews, little user feedback, and zero iterations, but it served as a good foundation and a stepping stone to a later version that will pave the way to the “final” iteration.

The first version was even called “Adherence Planning”, which was an inaccurate term for what we were trying to achieve, and one that described only a single aspect of the whole dashboard.

It had the basic insights we wanted to show, but little beyond that: it was limited, inefficient, and not value-driven on a business level.

So after much stakeholder feedback, we iterated that version into a far more effective one that visualised the data in a much more functional manner. I did that by designing the workforce-performance gauge chart, rearranging the cards to match the data-monitoring priorities, tweaking the charts and their colors on the UI level for more effective anomaly identification, streamlining the data visualisation with proper alert identification, and finally making the three-dot “kebab” menu the drawer where every additional setting is hidden. Here is where I arrived:

The Steering Dashboard is now much more structured and serves the decision-making business goal at a decent level: every KPI can be checked and monitored in real time, along with the business performance in terms of profit and loss through the “Workforce performance” gauge.
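To illustrate the kind of alert logic sitting behind that gauge and the color tweaks, here is a rough sketch; the band thresholds are hypothetical, not the product’s real numbers:

```python
# Hypothetical sketch of the alert banding behind a workforce-performance
# gauge: map a KPI value to a status color so anomalies stand out at a
# glance. The thresholds are illustrative, not the product's real ones.

def gauge_status(performance_pct: float) -> tuple[str, str]:
    """Classify a workforce-performance percentage into an alert band."""
    if performance_pct < 60:
        return "critical", "red"      # loss territory: raise an alert
    if performance_pct < 85:
        return "warning", "amber"     # drifting: worth monitoring
    return "healthy", "green"         # profit territory

for kpi in (54.0, 78.5, 92.3):
    status, color = gauge_status(kpi)
    print(f"Workforce performance {kpi:.1f}% -> {status} ({color})")
```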

What I learned

A disruptive data visualisation product can never be truly done, but through proper UI/UX design, testing, iterating, and arriving at the optimal solutions, it can reach a level of efficient structure that propels it towards business goals that sat much further down the product-experience roadmap. And all of the above can only be achieved through collaboration and proper communication within a team.

All in all, I’m just getting started, and I’m thrilled to keep working on refining both my skills and everything I get involved in.

17 – Clickable Prototype v1

After all the sketches, user flows, and planning, I finally pulled everything into a quick clickable prototype (Figma is awesome for this, btw). It’s still an early version, but it gives a solid feel of how the app might look and behave. I wanted to see how the Home, Activity, and Settings tabs work together and how smooth the experience feels when clicking through it all.

Here’s a short walkthrough video showing the prototype in action:

Working on this helped me catch a few small details I hadn’t noticed before, like the pacing between steps and where extra feedback could better guide the user. Overall, seeing it come to life, even in a simple form, was a great way to confirm whether the structure works.

Next, I’ll refine the flow, tidy up interactions, and start testing how others respond. It’s exciting to finally transition from an idea to something tangible you can click through.

16 – Pulling It All Together

After spending time designing each part of the app on its own, I knew the next step was to figure out how it all fits together. It’s one thing to have a solid Home tab, a clear Activity tab, and a flexible Settings area. But the real challenge is making the tool feel like one connected experience instead of just three separate features sitting side by side.

So I started mapping the full user journey, from the moment someone opens the app for the first time to the moment they take their first action. The goal was to make sure every screen, every tap, and every option felt like part of a bigger flow.

It starts with Home. This is where the user gets a quick update on their privacy status and can tap one button to begin scanning. Once the scan is done, they’re either shown a clean summary that says everything looks good, or they’re nudged to go check out their results in the Activity tab.

That handoff between Home and Activity became really important. It needed to feel natural, not like you’re being dropped into another part of the app. So I kept asking myself questions like, “What happens after a scan?” and “What does the user want to do next?” The answer is usually some version of “check what was found” or “see if anything needs action.”

Once they land in Activity, the results are organized clearly. Old scans are listed with summaries, and new findings are labeled in a way that stands out without being too loud. From there, users can open a scan, review the exposed data, and decide what to do. They might request a removal, ignore it, or save it for later.

Then there’s Settings, which sits quietly in the background but plays a big role in shaping how the app works. Before a user ever hits “Scan Now,” the tool has already been set up to know what data to look for and where to search. That part happens quietly but meaningfully. And at any point, the user can return to the Settings tab to update what they’re tracking or change how often they want to scan.

Full App Flow

The more I worked on this flow, the more I realized how important rhythm is. The app should never feel like it’s asking too much at once. It should guide, not demand. There’s a gentle back-and-forth between checking your privacy, understanding your exposure, and deciding what to do about it. That rhythm is what makes the whole thing feel usable.

At this point, the main structure is starting to come together. There are still things to work out, like onboarding, empty states, and what the app says when no data is found. But now that the core journey is mapped, I feel more confident about shaping the rest of the experience.

15 – Defining What Gets Scanned

After sketching out how users would scan their data and review the results, I knew it was time to focus on something deeper. If someone’s trusting this tool to find their personal data online, they should be able to control exactly what it’s looking for and how it behaves. That’s where the Settings tab comes in, specifically, the part that lets people manage the data points the app scans for.

This is more than just a list of preferences. It’s the part of the app that decides how useful the tool really is. If it can’t scan for the right things or look in the right places, then it doesn’t matter how nice the interface looks. So I started thinking through the user journey here. What does it feel like to set this up for the first time? How easy is it to update your info later? What happens when someone wants to remove or change something?

I broke it down into a few simple flows. When someone taps into this section, they see a list of data types like full names, email addresses, phone numbers, home addresses, usernames, and social media handles. Each one has a toggle, so they can decide which categories they want the app to track. Tapping into a category opens a list of the actual data points; under “email addresses,” for example, you’d see each address the app is tracking.

Users can add new entries, remove old ones, or give them a label like “Work” or “Personal” to keep things organized. It should feel simple, like updating a contacts list.

User flow of the entire settings tab
Zooming into the Scan Preferences

Another part of this section covers where the app should scan. Some people might want full control, while others may prefer a more hands-off setup. So I imagined a second area where users can select the types of platforms the app should search, like:

  • Public data brokers
  • Social media sites
  • Search engines
  • Forums or blogs
  • Data breach records

By default, the app could suggest a recommended setup, but users who want to go deeper can switch things on or off based on what they care about.

I also wanted to give users a quick summary before they leave this section. Something that says, “You’re scanning for 6 data points across 4 categories.” Just a simple, reassuring message that confirms everything’s set up the way they want. From there, they can either save changes or jump straight into a new scan.
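As a thought experiment, here is a minimal sketch of how the preferences behind that summary line could be modeled; every name and default in it is my own assumption, not the actual app’s data model:

```python
# Hypothetical sketch of the Settings model: toggleable categories of
# tracked data points, and the reassuring summary line built from them.
# All names and defaults are my own assumptions.
from dataclasses import dataclass, field

@dataclass
class DataPoint:
    value: str            # e.g. an email address or phone number
    label: str = ""       # optional "Work" / "Personal" label

@dataclass
class Category:
    name: str                             # e.g. "Email addresses"
    enabled: bool = True                  # the per-category toggle
    entries: list[DataPoint] = field(default_factory=list)

def summary(categories: list[Category]) -> str:
    active = [c for c in categories if c.enabled and c.entries]
    points = sum(len(c.entries) for c in active)
    return f"You're scanning for {points} data points across {len(active)} categories."

prefs = [
    Category("Email addresses", entries=[DataPoint("a@b.com", "Work")]),
    Category("Phone numbers", entries=[DataPoint("+1 555 0100", "Personal")]),
    Category("Usernames", enabled=False),  # toggled off by the user
]
print(summary(prefs))  # -> "You're scanning for 2 data points across 2 categories."
```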

This part of the tool gives people full control over what they’re sharing with the app and what the app is doing for them. It also needs to feel like something they can come back to anytime. Maybe they changed their email or want to track a new phone number. It should be easy to update without starting from scratch.

14 – What the Activity Tab Unlocks

Once I felt like the Home tab had a solid direction, I shifted my focus to the Activity tab. This is the part of the app that lets users look back and understand what the tool has found over time. If the Home tab is about quick action, the Activity tab is about reflection and detail. It’s where things get a bit more layered.

I started by asking a few questions. After a scan is done, what would someone want to do next? What would they expect to see if they tapped into their past results? The obvious answer was, they’d want to understand where their data showed up, how serious it is, and what actions they can take. So that became my starting point for the user flow.

The journey into the Activity tab begins with a list of past scans. Each entry shows the date, how many exposures were found, and a quick status, like “3 removals in progress” or “Last checked 4 days ago.” This lets the user get a feel for their privacy over time. From there, tapping into any scan opens a detailed breakdown.

Inside that scan detail view, I imagined a set of cards or sections for each exposure. Each card would show where the data was found, maybe on a marketing site, a data broker list, or a forum. It would also show what kind of data was found, like a phone number or full name, and whether the app could help remove it. There would be a clear action button like “Request Removal” or “Ignore for Now,” giving the user simple choices without pressure.
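To pin the idea down, here is a tiny sketch of what one of those cards could look like as data, with its two low-pressure actions; the field names and statuses are hypothetical:

```python
# Hypothetical sketch of one exposure "card" in the scan detail view and
# the simple choices it offers. Names and statuses are assumptions.
from dataclasses import dataclass

@dataclass
class Exposure:
    source: str        # where the data was found, e.g. a data-broker list
    data_type: str     # what was found, e.g. "phone number"
    removable: bool    # whether the app can help remove it
    status: str = "found"   # found -> removal_requested / ignored / saved

    def request_removal(self) -> None:
        if self.removable:
            self.status = "removal_requested"

    def ignore_for_now(self) -> None:
        self.status = "ignored"

card = Exposure(source="peoplefinder-example.com", data_type="full name", removable=True)
card.request_removal()
print(card.status)  # -> "removal_requested"
```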

User flow of the activity tab

Another part I thought about was how to show overall progress. Maybe there’s a visual indicator on the main Activity screen that shows how your privacy is improving over time. Something like a simple line graph or a color-coded “privacy score” that updates as you take action. I don’t want it to feel gamified, but it should feel encouraging. Like you’re making progress, not just looking at problems.

One small but important touch I sketched out was what happens when there are new exposures. Maybe we highlight them with a subtle label like “New since last scan” or bump them to the top of the list. This way the user’s attention naturally goes to the most important updates.
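The labeling itself could be as simple as diffing two scans. Here is a sketch, under my own simplifying assumption that an exposure is identified by its source and data type:

```python
# Sketch of the "New since last scan" labeling: diff the current scan's
# findings against the previous one. Identifying an exposure by a
# (source, data_type) pair is my own simplifying assumption.

def flag_new(previous: set[tuple[str, str]],
             current: set[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return current findings absent from the previous scan, so the UI
    can label them and bump them to the top of the list."""
    return sorted(current - previous)

last_scan = {("broker-a.example", "phone number")}
this_scan = {("broker-a.example", "phone number"),
             ("forum-b.example", "email address")}
for source, kind in flag_new(last_scan, this_scan):
    print(f"New since last scan: {kind} on {source}")
```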

This part of the app is where people go to feel more in control. It’s not just a log of past activity. I wanted it to feel full of helpful options without overwhelming anyone.

13 – Home Tab: How Should It Work?

After figuring out the broader structure of the tool, the next step was to zoom in and really understand what should happen on the Home tab. This is where everything begins. It’s the screen someone sees the moment they open the app, so it needs to be clear, simple, and useful right away.

I started thinking through the experience from a user’s point of view. What would they be trying to do here? Most likely, they just want to know how exposed their personal data is and what they can do about it. They’re not coming in to explore every setting or dig through past reports. They want a quick answer to a big question: “Am I okay online?”

So I mapped out the user flow for this part. It starts with a clean welcome screen that gives a clear privacy status. This might say something like “You have 3 data exposures found” or “You’re all clear.” Just enough to give the user a sense of where things stand. From there, the most important action is the Scan Now button. This is the main thing the app offers, and it needs to be obvious and easy to tap.

Once the user hits that button, the app begins scanning for their data across different online sources. I imagined a simple progress indicator, maybe a friendly loading animation or a visual scan bar. No need for too many details yet. Just a sense that the app is working quietly in the background to find their information.

After the scan is complete, the user is taken to a short summary. This is where the tone really matters. It shouldn’t feel scary or overwhelming. It should feel clear and in control. Something like
“We found 4 pieces of your personal data online. Tap to review and take action.”
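That tone logic is easy to express. Here is a sketch of a function that turns a scan result into the copy above; the empty-state wording is my own placeholder:

```python
# Sketch of the Home tab's status copy: one function that turns a scan
# result into the calm, non-alarming summary described above. The found/
# all-clear wording mirrors this post; the empty state is a placeholder.

def home_status(exposures_found: int | None) -> str:
    if exposures_found is None:
        # Empty state: the user has never scanned before.
        return "Run your first scan to see where your data appears online."
    if exposures_found == 0:
        return "You're all clear."
    return (f"We found {exposures_found} pieces of your personal data online. "
            "Tap to review and take action.")

print(home_status(None))
print(home_status(0))
print(home_status(4))
```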

Home tab user flow
User flow to perform a scan

I also had to think about smaller touches. What if the user has never scanned before? Do we show an empty state with a short message that explains the tool? What about returning users? Should they see their last scan result or a prompt to scan again?

These are the kinds of small questions that start to stack up once you begin thinking through a full user journey. The challenge is to give people just the right amount of information without making things feel too heavy.

At this stage, I’m keeping things flexible. The layout will probably change as I move on, but the flow feels right. Welcome the user, show them where things stand, let them take action quickly, and offer a calm, clear summary when the scan is done.

What I Learned: Best Practices for Designing an Intuitive Mobile Dashboard

A mobile dashboard may not offer the full functionality of its desktop counterpart, but it can still provide users with a scannable view of top-line data and statistics to let them make informed decisions. It can also give managers and executives the necessary tooling to quickly approve orders, contracts, and procedure and policy documents.

From my own experience and from other proficient UX designers, I drew some of the best practices and key aspects that designers can use to design better mobile dashboards:

Smooth Navigation

Let’s start with navigation as this is how users will get acquainted with your dashboard and find the information they’re looking for. If the navigation for your mobile dashboard is clumsy or disjointed, or your search bar or navigation menu isn’t well suited to touch-based interactions, you are likely to turn off users.

Visual prioritization is key. Responsive mobile dashboards should communicate information quickly and prioritize it in a clear visual hierarchy. Another dashboard design best practice is using the principle of progressive disclosure to reveal information only when the user needs it.

Consider space, button styles, and the user’s first impression when converting a desktop design into its mobile counterpart. (Coinbase)

This mobile dashboard effectively uses space to prioritize the most essential options. Crucial buttons from the left-hand panel on desktop become the bottom navigation bar on mobile, a standard for mobile menus as the position falls into the “Z” page scanning pattern where users’ attention tends to land.

Responsive Tables and Charts

Designing responsive mobile tables and charts can be a challenge, but in my experience, the customer satisfaction it provides is worth it.

Generally, I advocate for responsive web designs that send a single code set to all devices but use fluid grids and media queries to change the appearance of elements based on a device’s size and orientation. This common and effective method is used in mobile dashboards to collapse table row headers into column headers in a set of stacked, standalone cards that can be scrolled through vertically. The approach offers an elegant mobile presentation that avoids squishing cells, while allowing the user to quickly peruse large amounts of data.

Button Design

Arguably, one of the biggest challenges in creating a responsive mobile dashboard is sizing and arranging buttons. Why? Because, unlike on desktop, you touch them, rather than select them with a cursor or keyboard command. They need to be big enough and spaced out enough to be tapped comfortably, and, due to the limited screen space, some will have to be collapsed into sub-menus or even hidden.

Kebab menus save space on mobile screens by enabling users to access hidden buttons.

Standard button design principles should apply to your mobile designs. I tend to divide button design into two main principles: You should present buttons in a range of styles (sizes, colors, and shapes) that denote their relative importance through visual cues; and the text label or icon associated with a button should connote its semantic meaning and intended function—for instance, whether the button affirms an action, selects a tool, navigates to a new page, or cancels an action.

But you may have important buttons that can’t be hidden. As an alternative to the kebab menu, you could simply increase the button size to meet the mobile guidelines and then stack them vertically. Another alternative would be to leave the most crucial buttons at full size and make secondary buttons smaller by replacing the text label with an icon. When considering which technique to use, decide which buttons are most essential on each page.
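To make the sizing rule concrete, here is a rough sketch of an automated tap-target check over button bounding boxes; the 48px minimum and 8px gap echo common mobile guidelines, and everything else is an assumption:

```python
# Hypothetical sketch of a tap-target check: given button bounding boxes
# (from a design spec, or detected from a screenshot), flag any that fall
# below a minimum touch size or sit too close together. The 48px minimum
# and 8px gap echo common mobile guidelines; both values are assumptions.

MIN_SIZE = 48   # minimum tappable width/height in px
MIN_GAP = 8     # minimum spacing between adjacent targets in px

def check_buttons(boxes: list[tuple[int, int, int, int]]) -> list[str]:
    """boxes are (x, y, width, height); returns human-readable warnings."""
    warnings = []
    for i, (x, y, w, h) in enumerate(boxes):
        if w < MIN_SIZE or h < MIN_SIZE:
            warnings.append(f"button {i}: {w}x{h}px is below {MIN_SIZE}px minimum")
    # Naive gap check between consecutive buttons that share a row.
    for (x1, y1, w1, _), (x2, y2, _, _) in zip(boxes, boxes[1:]):
        if y1 == y2 and (x2 - (x1 + w1)) < MIN_GAP:
            warnings.append(f"gap of {x2 - (x1 + w1)}px is below {MIN_GAP}px minimum")
    return warnings

print(check_buttons([(0, 0, 44, 44), (48, 0, 60, 48)]))
```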

Conclusion

When the project scope and budget allow, it’s most efficient to consider how a responsive mobile dashboard will look and function as you’re building out the desktop version. This will save time and development costs later in the product life cycle, and it will also help ensure brand consistency across devices. It’s also paramount to recruit a skilled developer to assess the feasibility of various design approaches, and, where needed, define CSS rules for reconfiguring tables and charts. Above all, try to faithfully recreate as much of the desktop experience as you can: You may have to eliminate some features and functionality, but it’s important not to dumb down the design and to follow the UX design process.

Mobile dashboard UI shouldn’t be an afterthought. If designed with care and foresight, mobile dashboards can provide significant value to users.

WebExpo Conference Talk #2 – Digital Intimacy: Feeling Human in an Artificial World

I have identified “Digital Intimacy: Feeling Human in an Artificial World” as the second talk I want to discuss here because I previously worked on two projects during my bachelor’s degree that dealt with the same topic and similar questions to the ones Lutz Schmitt presented at the Expo. In one of my projects about long-distance relationships in particular, my team and I asked ourselves how we could create a sense of closeness through media and technology. By closeness we especially meant emotional intimacy, built through rituals, shared experiences, and time spent doing things together; but we also asked ourselves whether we should mimic physical intimacy and proximity in some way and, more importantly, how to do that with technology.



Lutz Schmitt’s talk investigates how feelings of closeness and connection can be created in digital and artificial contexts (through robots, AI-driven systems, or designed experiences). He explores whether digital interactions can offer a genuine sense of intimacy and how we can distinguish meaningful connection from simulation. He brings up key questions: Can people form real emotional bonds with non-human objects? What role do trust and vulnerability play in creating such connections? And what ethical responsibilities arise when we design digital interactions?


From a UX and interaction design perspective, this talk is very relevant. In both projects I worked on, we looked into creating interfaces that go beyond typical communication tools, ones that encourage presence and emotional involvement. For example, instead of simply allowing users to send messages, we explored designing rituals, synchronized activities, and interfaces that created a sense of “co-being” rather than just back-and-forth communication. These approaches align with Schmitt’s idea that intimacy is not just about frequency of contact but about the quality of interaction and its emotional context.

He also challenges the trend of creating frictionless, overly polished digital experiences. In reality, human relationships are full of imperfection and effort. Transferring that to UI/UX means intentionally designing for slowness and emotional nuance, something we often avoid in tech but that is deeply ingrained in us and an inherent part of the human experience. For example, what if the interface were affected by emotional tone? Or what if moments of silence or waiting became part of the interaction, signaling care or presence instead of emptiness?

Another really interesting and relevant aspect he brought up in his talk was the consideration of privacy. Privacy is much harder to maintain once a technological component or product is introduced into a situation, since it’s almost impossible not to have a third party involved. This raises the ethical question of how to responsibly handle the very private data that is collected. As someone who designs these kinds of products, this is something I hadn’t given much thought to before but really need to take into consideration.

In conclusion, the talk reminded me that designing for emotional intimacy is not just a question of which technology to use but a much deeper emotional and ethical problem, one that requires understanding the essence of human intimacy and how technology can support it instead of substituting or mimicking it. It’s a complex but deeply relevant area for interaction design, one that requires sensitivity, creativity, and critical thinking.