In my testing I want to focus on color schemes, refining call-to-action buttons (CTAs), and enhancing mobile responsiveness, so that the results show me where to concentrate my neurodesign approach. When choosing a color scheme, for example, it is important to think about how all design elements interact. Are they making users squint or, worse, leave? In an A/B test, I want to try different color combinations that still align with the individual branding but are easier on the eyes. The dream result? A color palette that promotes clarity and comfort and keeps users on the website longer.
CTAs are like road signs for your users. If your buttons are too small, poorly worded, or blend into the background, users may never “see” them. In my testing I want to experiment with button color, size, and the text itself. Maybe wording like “Get Started” works better than “Submit,” or a bright, standout color makes the button easier to spot. A well-designed CTA can act like a neon arrow pointing the way to conversion. For me it is important to integrate the psychological aspects we explored in the previous semester to take the design to the next level: improving design not only through aesthetics and composition but also through a more humane approach.
One of the last parts of my testing would be enhancing mobile responsiveness. With everyone glued to their phones, a website that doesn’t work well on mobile is practically a crime. Maybe you’ve seen those sites where you have to pinch and zoom to read anything. An A/B test where one version of the site is optimized for mobile (with bigger buttons, shorter text, and a cleaner layout) could show that mobile-first design improves engagement, session times, and overall satisfaction.
A/B testing is an ongoing journey of refinement and improvement. It’s not a one-time magic bullet. Once you’ve conducted a test and seen which version performs better, it’s crucial to act on that data. Iterating based on feedback is the next logical step for me. For example, if users showed a preference for a simplified navigation menu, it would be beneficial to make that the default across the entire website.
Another thing to keep in mind is staying updated with trends. The digital world evolves quickly, and so do user preferences. Even if I’ve nailed cognitive ease for the current audience, the way people interact with websites may change. Testing new design elements regularly should help me stay ahead of the curve.
Let’s break this down into actionable steps:
- Defining my hypothesis: Identifying a pain point in a website design. I will do the testing in combination with our Branding Exercise in this course, so the testing serves a concrete purpose. Here the hypothesis would center on the rebranding of the webpage for “Oma’s Teekanne”.
- Creating my A/B versions: Designing two versions of the page. Version A is the control, the existing “bad” design; Version B is my experiment with changes to navigation or layout, based on my hypothesis. In this case, Version B might feature a cleaner, simplified navigation bar.
- Splitting the audience: Using a testing tool such as Google Optimize to divide the audience. Half of the users will see Version A, and the other half will see Version B. These tools randomize the assignment automatically, which keeps the comparison unbiased and the results statistically sound (a minimal assignment sketch follows this list).
- Tracking key metrics: Metrics are the bread and butter of A/B testing. For cognitive ease, I want to measure things like the following (see the metrics sketch after this list):
- Bounce rate: Do fewer people leave immediately?
- Session duration: Are users staying longer?
- Click-through rate: Are people interacting with calls to action more?
- Conversions: Are users completing desired actions like signing up or making a purchase?
- Analyzing the results: After gathering enough data, it’s time to look at which version performed better. If Version B (the simplified design) results in users staying on the page longer, clicking more, and converting better, I’ve successfully enhanced cognitive ease (a simple significance check is sketched after this list).
- Iterating and improving: A/B testing doesn’t end with one successful experiment. The goal would be to use the results to inform future tests and continue optimizing for cognitive ease. The underlying aim is to reduce cognitive load for users, so I will keep looking for areas of friction and testing new ways to make the user experience as smooth as possible.
- Communicating findings: Once you have results, it’s important to share them with the customer or stakeholders in a clear and actionable way. Visual aids like graphs or charts help convey how Version B improved metrics compared to Version A.
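
To make the splitting step a bit more tangible, here is a minimal sketch of how a 50/50 assignment could work, assuming visitors can be identified by a user or cookie ID. The function name `assign_variant` and the experiment label “oma-teekanne-nav” are hypothetical; a tool like Google Optimize handles this for you, but the underlying idea is the same: hash the ID so a returning visitor always sees the same version.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "oma-teekanne-nav") -> str:
    """Deterministically assign a visitor to Version A or B.

    Hashing the user ID (instead of flipping a coin on every visit) keeps the
    assignment stable, so a returning visitor always sees the same version.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # map the hash to a bucket 0..99
    return "B" if bucket < 50 else "A"      # 50/50 split

# The same visitor lands in the same bucket every time
print(assign_variant("visitor-1234"))
print(assign_variant("visitor-1234"))   # identical result on a repeat visit
```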
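For the metrics themselves, the following sketch shows how bounce rate, average session duration, CTA click-through, and conversion rate could be computed from logged sessions. The `Session` fields are illustrative assumptions, not the schema of any particular analytics tool.

```python
from dataclasses import dataclass

@dataclass
class Session:
    variant: str        # "A" or "B"
    duration_s: float   # session length in seconds
    pageviews: int      # pages viewed during the session
    clicked_cta: bool   # did the user click the call to action?
    converted: bool     # did the user complete the goal (e.g. a sign-up)?

def summarize(sessions: list[Session], variant: str) -> dict:
    """Aggregate the key cognitive-ease metrics for one variant."""
    group = [s for s in sessions if s.variant == variant]
    n = len(group)
    if n == 0:
        return {"sessions": 0}
    return {
        "sessions": n,
        "bounce_rate": sum(s.pageviews <= 1 for s in group) / n,  # left after one page
        "avg_duration_s": sum(s.duration_s for s in group) / n,
        "cta_click_rate": sum(s.clicked_cta for s in group) / n,
        "conversion_rate": sum(s.converted for s in group) / n,
    }

# Tiny made-up example
sessions = [
    Session("A", 42.0, 1, False, False),
    Session("A", 180.0, 4, True, True),
    Session("B", 95.0, 3, True, False),
    Session("B", 210.0, 5, True, True),
]
print(summarize(sessions, "A"))
print(summarize(sessions, "B"))
```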
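Finally, for the analysis step, a minimal significance check on the conversion metric. This sketch uses a one-sided two-proportion z-test; the visitor and conversion counts are invented purely for illustration.

```python
from math import sqrt
from statistics import NormalDist

def conversion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-proportion z-test: is Version B's conversion rate significantly higher?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))    # standard error under H0
    z = (p_b - p_a) / se
    p_value = 1 - NormalDist().cdf(z)                         # one-sided: B better than A
    return z, p_value

# Hypothetical numbers: 5,000 visitors per version
z, p = conversion_z_test(conv_a=180, n_a=5000, conv_b=235, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")   # p < 0.05 suggests the uplift is unlikely to be chance
```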