Reflection Four - Remaining UX Articles

Keegan Ennis

HEART Framework and Google


HEART analyzes the following aspects of UX: happiness, engagement, adoption, retention, and task success. The article depicts particular strategies for gauging the success a design may have: happiness can be determined from user surveys; engagement, adoption, and retention can be measured with general analytics tracking things like clicks and views; and task success can be gauged with user testing.
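To make the "general analytics" idea concrete, here is a minimal sketch of how two HEART metrics (adoption and retention) might be computed from an event log. The event structure, the seven-day retention window, and all names here are my own illustrative assumptions, not anything from the article or Google's actual instrumentation.

```python
# Hypothetical sketch: simple HEART-style metrics from an event log.
# Event schema and the 7-day retention threshold are assumptions.
from datetime import date

events = [
    {"user": "a", "day": date(2024, 1, 1), "action": "signup"},
    {"user": "a", "day": date(2024, 1, 8), "action": "click"},
    {"user": "b", "day": date(2024, 1, 1), "action": "signup"},
    {"user": "b", "day": date(2024, 1, 2), "action": "click"},
]

def adoption(events):
    """Adoption: number of distinct users who signed up."""
    return len({e["user"] for e in events if e["action"] == "signup"})

def retention(events, min_gap_days=7):
    """Retention: fraction of signed-up users active again
    at least `min_gap_days` after their signup."""
    signup = {e["user"]: e["day"] for e in events if e["action"] == "signup"}
    returned = {
        e["user"]
        for e in events
        if e["action"] != "signup"
        and e["user"] in signup
        and (e["day"] - signup[e["user"]]).days >= min_gap_days
    }
    return len(returned) / len(signup)

print(adoption(events))   # 2
print(retention(events))  # 0.5 (user "a" returned after 7 days; "b" did not)
```

Happiness and task success would not come from a log like this at all; per the article, those rely on surveys and user testing respectively.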

The article follows up this framework with a few strategies employed by Google to improve user engagement, adoption, and retention. Cross-platform/product promotion is used to improve engagement, while user onboarding and new feature releases are examples of how Google improves user adoption.

"Google uses many means of communicatiFron. They combine email, feedback forms, tooltips, and modals to increase the HEART of their products and drive more revenue to the 4th most valuable company in the world."

Follow-Up Question(s):

How do the different frameworks compare? For example, Design Ladder versus HEART - in what scenario would one be more successful than the other?

How does human perception shift with the introduction of new technologies?

39 Studies About Human Perception in 30 Minutes

Unsurprisingly, this article had an immense amount of content to share, all presented in a very concise manner. That being said, I'm not about to segment this reflection into 39 individual paragraphs, one for each part of the article.

The article does a fantastic job of presenting some of the more subtle aspects of how a user might perceive a design. For instance, it describes the use of reference points: imaginary figures created in our heads to represent elements that can be safely generalized. In the example given by the article, users viewing a graph would often use a 45-degree line as a reference point for any line drawn.

From here, the article continues to describe the general perceptions humans make, reinforced by the 39 studies consolidated into this single article. A lot of content is covered, from the way basic shapes are perceived to how a 3D pie chart is interpreted as opposed to a 2D one. The article also stresses practical points, such as using bar and pie charts for displaying proportions.

Follow Up Question(s):

Understanding Your User's Mental Model

This article once again outlines the importance of using sketches to properly convey and receive valuable information that might not be reached otherwise. One example cited the use of 'dogbone' map patterns, in which individuals viewed their environments as a group of individual locations (all within close proximity to one another) connected to another environment by a commuting route. This simple sketch (pictured below) allowed designers to move away from the assumption that users perceived their environments as a single bubble.


This article clearly demonstrates the value of using sketches to better understand your user. This includes having users sketch for you how they perceive particular relationships (in the above example, spatial relationships).

Follow-Up Question(s):

How artistically capable do you need to be in order to effectively convey your intent or understanding of a given topic/relationship?

How Netflix Does Its A/B Testing

This article, while interesting, never really escapes the first general point it makes: essentially, that 'Netflix experiments.' In practice, this means Netflix will serve varying artwork for a single show to see how users engage with those new visuals, then use that data to present media in a more personalized way.

That's really about it. The article goes on to outline the importance of using A/B testing to improve both retention and revenue, of relying on observations of what people do rather than what they say they do, and of experimenting. That's ultimately what it comes back to: just experimenting.
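The "what people do, not what they say" point can be sketched in a few lines: bucket each user deterministically into an artwork variant, then compare observed engagement under each. Everything here (variant names, the hashing scheme, the numbers) is an illustrative assumption of mine, not Netflix's actual system.

```python
# Illustrative A/B sketch in the spirit the article describes.
# Variant names, hashing scheme, and numbers are all assumptions.
import hashlib

VARIANTS = ["artwork_a", "artwork_b"]

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user so they always see the same artwork."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

def engagement_rate(impressions: int, plays: int) -> float:
    """Observed behavior (plays per impression), not self-reported preference."""
    return plays / impressions if impressions else 0.0

# Compare what users actually did under each variant:
results = {
    "artwork_a": engagement_rate(impressions=1000, plays=120),
    "artwork_b": engagement_rate(impressions=1000, plays=150),
}
winner = max(results, key=results.get)
print(winner)  # artwork_b
```

Hashing on the user ID (rather than assigning randomly per page load) keeps each user's experience consistent for the life of the experiment, which is what makes the per-variant engagement comparison meaningful.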

Follow-Up Question(s):

Is there an end to dynamically adjusting for user experience? Can too much variance eventually undo the previous improvements made to the experience?