

Neurodesign: UX and attention, color, memory and cognition

Andres Zapata from idfive dives into how neurodesign can inform UX and design. Learn more in this partner article for the 2022 HighEdWeb Annual Conference.

NOTE: As part of our sponsor agreements with Annual Conference sponsors, we occasionally post information provided by our sponsor-partners on their behalf. This is one such article.

This article originally appeared on UX Planet.

Exactly how the mind works is still largely a mystery. And as with most mysteries, there are clues that shed light on its inner workings. How we think involves a swarm of highly interrelated mental processes, including attention, memory, and reasoning.

Neurodesign is an emerging field of design practice that accounts for how our brains are wired in order to create designs that promote simplicity, evoke joy, and drive action. Since humans are visually dominant, how we perceive and attend to visual stimuli should be of particular interest to user experience designers. This is why neurodesign is important. To understand neurodesign, we must first understand a couple of major things about the brain.

Let’s peel the neurodesign onion.

Working Memory

A central and important cognitive process is memory. Without working memory in particular, we wouldn’t be able to do most of the things that make us human, and we’d live in a perpetual present. We wouldn’t be able to watch TV, read books, or browse the web, because we couldn’t string together context and thread together meaning.

Working memory is an extremely limited resource (Ma et al., 2014) in which we store and manipulate sensory information that expires after a few seconds (Baddeley et al., 2009). Because working memory is such a scarce and perishable cognitive resource, and given the overwhelming avalanche of stimuli we are constantly subject to, designers need to understand how the brain monitors the environment and the factors that affect working memory.

We can’t remember what we don’t know about. And to know or learn something, we must first register (sense) and process stimuli (perceive). Our senses are under constant assault from the world around us. It’s estimated that the optic nerve receives between 10^7 and 10^10 bits of information per second (Itti & Koch, 2001).

We sense considerably more information than the brain can process. This is why the brain has to actively filter the environment to help us attend to what’s important and ignore the rest.

Visual Dominance

Since humans are visually dominant, how we perceive and attend to visual stimuli should be of particular interest to user experience designers. One of the most critical roles of selective visual attention is to quickly direct our gaze at objects of interest. The ability to do this quickly and meaningfully in a cluttered visual scene could be the difference between being flattened by a bus and reaching the safety of the sidewalk – or, less dramatically, between noticing a button and not.

A two-component framework, the dual processing theory, explains how the brain manages and deploys attention. The dual processing theory suggests that there are two processing systems in our brains. The theory was popularized by Daniel Kahneman as “intuition” (System 1) and “reasoning” (System 2) (Kahneman, 2013). This framework suggests that we selectively direct attention to our environment using both bottom-up, image-based saliency cues and top-down, task-dependent cues (Itti & Koch, 2001).

While some attributes in the visual world, such as a blinking or bright light, automatically attract attention and are experienced as “visually salient,” directing attention to other locations or objects, such as deliberately scanning for a yellow target, requires voluntary effort. Even though both mechanisms can operate simultaneously, bottom-up attention happens automatically and nearly instantaneously (25 to 50 milliseconds per item) while top-down attention is directed, more powerful, and as slow as eye movement (200 milliseconds or more) (Itti & Koch, 2001).
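A quick back-of-the-envelope calculation shows what those per-item speeds mean in practice. The sketch below (in Python) uses the rough per-item figures from Itti and Koch (2001); the number of competing elements is an invented example value.

```python
items_on_screen = 10  # hypothetical number of competing elements

# Bottom-up: roughly 25-50 ms per item, automatic; use the slower bound.
bottom_up_ms = 50 * items_on_screen
# Top-down: roughly 200 ms or more per item, deliberate serial scanning.
top_down_ms = 200 * items_on_screen

print(f"salient target: ~{bottom_up_ms} ms")     # ~500 ms
print(f"deliberate search: ~{top_down_ms}+ ms")  # ~2000 ms
```

In this rough model, an element that wins the bottom-up competition is found about four times faster than one the user has to hunt for deliberately.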

Humans and a handful of other primates are the only mammals able to discriminate between red and green. This vision feature evolved between 30 and 40 million years ago, likely to tell edible fruits and young leaves apart from their natural background (Frey et al., 2008). Trichromatic vision evolved for good reason: survival. We learn through experience and form expectations that a specific fruit can be found in a particular location, and we direct our attention in those directions (top-down attention). We also passively, effortlessly, and automatically scan for brightness, color, and movement (bottom-up attention). These two attentional models work together to manage what stimuli filter through to the brain.

In a cluttered visual environment, designers can take advantage of the brain’s constant and effortless monitoring to automatically direct attention to specific visual targets. Particular color, saturation, and brightness combinations, for example, are more likely to be noticed by bottom-up monitoring than other color configurations. In fact, Koch and Ullman (1985) and Itti and Koch (2001) hatched and refined the saliency map – an algorithm that analyzes how luminance, hue, orientation, and motion work together in a context to actualize bottom-up attention. This model has demonstrated that even though attention is not mandatory for early vision, attention can modulate early visual processing in a top-down manner, which further underscores the collaboration between bottom-up and top-down attention (Itti & Koch, 2001).
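To make the saliency map concrete, here is a minimal sketch of bottom-up saliency in Python, assuming NumPy and SciPy are available. It illustrates the general idea rather than Itti and Koch’s actual implementation: real models use multi-scale image pyramids and more elaborate normalization, and the opponent-channel definitions and blur radii below are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(channel, center_sigma=1.0, surround_sigma=8.0):
    """Difference-of-Gaussians: responds where a region differs from its surround."""
    return np.abs(gaussian_filter(channel, center_sigma)
                  - gaussian_filter(channel, surround_sigma))

def saliency_map(rgb):
    """Crude bottom-up saliency from intensity and color-opponent contrast.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns an (H, W) map scaled to [0, 1]; peaks mark likely first fixations.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    red_green = r - g                # red-green opponent channel
    blue_yellow = b - (r + g) / 2.0  # blue-yellow opponent channel

    combined = sum(center_surround(c) for c in (intensity, red_green, blue_yellow))
    return (combined - combined.min()) / (combined.max() - combined.min() + 1e-9)
```

Run over a screenshot of a page, the brightest peaks in the returned map are rough predictions of where bottom-up attention lands first – one inexpensive way to check whether a call to action actually wins the salience competition.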

A 2008 study used the saliency map to demonstrate that color does in fact play a major role in bottom-up attention. The researchers found that subjects fixate on different locations in the same image depending on whether it is shown in color or grayscale. What’s more, the effect was more pronounced for images of naturally occurring scenes, such as rainforests, than for computer-generated images, such as fractals (Frey et al., 2008).

Color and Arousal

The color red, in particular, has extraordinary psychological qualities. A 2015 study found that the color red captured and held attention in both positive and negative contexts, but not in a neutral one. The study concludes that red not only guides attention in emotional circumstances but also facilitates the linked motor response. In other words, in an emotional context, the color red has an impact on both attention and motor behavior (Kuniecki et al., 2015).

Study after study confirms that there is an undeniable link between color and emotion. For example, longer-wavelength light (red) is known to provoke higher arousal than shorter-wavelength light (blue or green). The hue of a color, however, is only one of the three dimensions that compose color, and it is the combination of hue, brightness, and saturation – not just hue – that affects the observer’s emotional state (Wilms & Oberfeld, 2018). It’s true that when controlling for saturation and brightness, red will evoke the most arousal. However, a bright and highly saturated blue, for example, can yield higher arousal than a dim, desaturated red.
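As a toy illustration of that three-dimensional interplay, the sketch below scores a color’s predicted arousal from its hue, saturation, and brightness. The weights are invented for demonstration – they are not the fitted coefficients from Wilms and Oberfeld (2018) – and the point is only that a vivid blue can out-score a dim red once all three dimensions are in play.

```python
import colorsys

def predicted_arousal(r, g, b):
    """Crude arousal score for an RGB color (components in [0, 1])."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Hue distance from red (h = 0), wrapped around the hue circle,
    # so that red scores highest on the hue term.
    hue_term = 1.0 - min(h, 1.0 - h)
    # Illustrative weights only: saturation and brightness matter
    # at least as much as hue.
    return 0.4 * s + 0.4 * v + 0.2 * hue_term

print(predicted_arousal(0.1, 0.2, 0.95))  # bright, saturated blue: ~0.87
print(predicted_arousal(0.4, 0.3, 0.3))   # dim, desaturated red:   ~0.46
```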

Since color increases arousal and arousal enhances memory, it’s likely that color can improve memory. A 2006 study found that color improves the recognition of natural scenes by about 5%. This finding signals that color plays a significant role both during encoding and during the recognition matching process (Spence et al., 2006). More research needs to be conducted, but it would be interesting to know whether this memory boost holds for scenes other than natural ones, and whether higher arousal leads to better recall. Even so, knowing that brighter and more saturated images produce higher arousal, and that color images of nature are more memorable, gives designers a good reason to include highly saturated, bright color images of nature when they want to affect both arousal and recall.

To further complicate matters, not all eyes see the same colors the same way. Even people who have typical photoreceptors can see the same thing differently. About 60% of men have one particular type of red cone, while the other 40% have another type. This means that more than half of men perceive the color red differently from the other half. Meanwhile, most women have both types of red cone and are able to sense a richer image – not just for red, but for all the colors that red interacts with (Winderickx et al., 1992).

Much like hue, saturation, and brightness, motion also has the ability to pull our attention. Clearly, color and motion can be manipulated to pull someone’s attention. But can the brain process stimuli from the environment that we don’t even notice or can’t describe? Yes. We can extract everything from low-level Gestalt grouping information to semantic meaning from unattended and unreported stimuli. What’s more, this unattended and unreported information can influence performance and prime judgments (Wood & Simons, 2019).

Inattentional blindness, change blindness, and motion-induced blindness are similar in that people don’t report noticing stimuli and can’t discriminate the attributes of the unnoticed stimuli. The failure to notice, together with a failure to identify features, signals the absence of awareness. But not having an explicit memory doesn’t necessarily mean that the stimuli went unprocessed or that they never reached conscious awareness (Wood & Simons, 2019).

How the brain groups stimuli largely occurs automatically. By grouping sensory stimuli into forms and objects, the brain is able to make sense of a lot of information very quickly. Understanding how this works can offer designers invaluable insights into creating compositions that are easier and faster to process. In the early 20th century, Max Wertheimer, an Austro-Hungarian-born psychologist, and his colleagues Wolfgang Köhler and Kurt Koffka believed that perception involved more than simply combining sensory stimuli. This hunch gave rise to Gestalt psychology. “Gestalt” means “form” or “pattern” in German, but in this context it is used to mean that the whole is different from the sum of its parts. Gestaltists believe the brain predictably creates a perception that is bigger than the sum of its sensory inputs. Gestalt psychologists translated these predictable patterns into principles that we use to organize sensory stimuli (Rock & Palmer, 1990).

Enter Gestalt

Researchers have shown just how powerful Gestalt principles are in processing the whole environment by demonstrating that people can attend to global or grouped stimuli even when top-down attention is directed toward a part of the whole. What’s more astonishing is that these researchers also found that new global objects can capture attention away from the local level, but the opposite is not true (Rauschenberger & Yantis, 2001). This finding suggests that the automatic attentional processes Gestalt principles describe are incredibly powerful even though they are unintentional – in some cases, more powerful than intentional attentional processes.

One powerful Gestalt principle is the figure-ground relationship. In basic terms, the principle suggests that we tend to separate what we see into a foreground and a background – figure and ground. The figure is the object or person in focus in the visual field, and the ground is the background. How we interpret sensory information depends on what we see as figure and what we see as ground (Rock & Palmer, 1990).

Organizing sensory stimuli into meaningful perception is a Gestalt principle known as proximity or emergence. As designers, simply by grouping details together and separating them from elements that sit farther apart, we can create meaning without labels or other design artifacts that add to the user’s cognitive load. In the same way, all the other Gestalt principles can be used alone or together to quickly, effortlessly, and automatically tap into users’ bottom-up attentional processes. For example, a design can employ the principle of common region (connecting information with a background color or an outline), the principle of figure-ground, and the principle of proximity to help the mind “see” a group of related information very quickly – such as distinguishing the shipping address from the billing address on a checkout form, as sketched below.
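Here is a minimal sketch of the proximity principle treated as computation: form fields become points, and any chain of fields closer together than a threshold fuses into one perceptual group, much as the eye fuses nearby elements into a unit. The field names, coordinates, and threshold are invented for illustration.

```python
from math import dist

fields = {
    # name: (x, y) position in the layout, arbitrary units
    "shipping_street": (0, 0),
    "shipping_city":   (0, 30),
    "shipping_zip":    (0, 60),
    "billing_street":  (0, 160),  # a large gap separates the billing block
    "billing_city":    (0, 190),
    "billing_zip":     (0, 220),
}

def group_by_proximity(points, threshold=50):
    """Greedy single-link clustering: chains of close items form one group."""
    groups = []
    for name, p in points.items():
        for group in groups:
            if any(dist(p, points[other]) <= threshold for other in group):
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

print(group_by_proximity(fields))
# [['shipping_street', 'shipping_city', 'shipping_zip'],
#  ['billing_street', 'billing_city', 'billing_zip']]
```

In this toy model, the 100-unit gap between the two address blocks is the only thing separating them into two groups – no labels, borders, or headings required.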

All of this means that our ability to discern figures and shapes occurs quickly, effortlessly, and automatically. For designers, Gestalt principles offer an opportunity to manage cognitive load, drive action, and engineer emotion in our designs. However, because of our many biases, we tend to assume that our understanding squares up accurately with the real world, and this is not always the case. What we perceive are educated guesses, formed at lightning speed from sensory information. These hypotheses are shaped by a number of factors, including our personalities, experiences, and expectations. So what happens when designers have the wrong experiential and expectational framework for their users?

The brain is an amazing evolutionary miracle. In a world where we are continuously peppered with more stimuli than we can handle, designers should understand how the brain works so that this knowledge can be appropriately and ethically leveraged to support people’s objectives and outcomes.

Citations

Baddeley, A., Eysenck, M. W., & Anderson, M. C. (2009). Memory. Psychology Press.

Frey, H. P., Honey, C., & König, P. (2008). What’s color got to do with it? The influence of color on visual attention in different categories. Journal of Vision, 8(14), 6.

Itti, L., & Koch, C. (2001). Computational modelling of visual attention. Nature Reviews Neuroscience, 2(3), 194–203.

Kahneman, D. (2013). Thinking, Fast and Slow (First Edition). Farrar, Straus and Giroux.

Koch, C., & Ullman, S. (1985). Shifts in selective visual attention: Towards the underlying neural circuitry. Human Neurobiology, 4(4), 219–227.

Kuniecki, M., Pilarczyk, J., & Wichary, S. (2015). The color red attracts attention in an emotional context. An ERP study. Frontiers in Human Neuroscience, 9.

Ma, W. J., Husain, M., & Bays, P. M. (2014). Changing concepts of working memory. Nature Neuroscience, 17(3), 347–356.

Rauschenberger, R., & Yantis, S. (2001). Attentional capture by globally defined objects. Perception & Psychophysics, 63(7), 1250–1261.

Rock, I., & Palmer, S. (1990). The legacy of Gestalt psychology. Scientific American, 263(6), 84–90.

Spence, I., Wong, P., Rusan, M., & Rastegar, N. (2006). How color enhances visual memory for natural scenes. Psychological Science, 17(1), 1–6.

Wilms, L., & Oberfeld, D. (2018). Color and emotion: Effects of hue, saturation, and brightness. Psychological Research, 82(5), 896–914.

Winderickx, J., Lindsey, D. T., Sanocki, E., Teller, D. Y., Motulsky, A. G., & Deeb, S. S. (1992). Polymorphism in red photopigment underlies variation in colour matching. Nature, 356(6368), 431–433.

Wood, K., & Simons, D. J. (2019). Processing without noticing in inattentional blindness: A replication of Moore and Egeth (1997) and Mack and Rock (1998). Attention, Perception, & Psychophysics, 81(1), 1–11.