Random notes from class, research, and life

2020/04/05, self

Self-knowledge can be developed in various ways, for example, by presenting oneself or by understanding oneself. As for presenting oneself, Goffman's "The Presentation of Self in Everyday Life" [1] used the imagery of the theatre to portray how people display themselves in public; with regard to understanding oneself, Foucault's "The Care of the Self" [2] and "Technologies of the Self" [3] described how people can examine the relationship between their thoughts and the internal/external world to better care for themselves.

The practices of developing self-knowledge with technologies are vast. In terms of presenting oneself, it is quite popular for people to use strategies for self-disclosure on social media, e.g., impression management to shape one's own image [5]; in terms of understanding oneself, self-surveillance with tracking technologies and self-evaluation with quantification systems are prevalent ways for users to monitor themselves, such as monitoring, recording, and analyzing one's heart rate over a period of time [4] or evaluating one's performance with numerical data in games [6].

  1. Erving Goffman. 1999. The presentation of self in everyday life. Peter Smith Pub, Inc.

  2. Michel Foucault. 1988. The History of Sexuality, Vol. 3: The Care of the Self (First Vint ed.). Vintage.

  3. Michel Foucault. 1998. Technologies of the self. In Ethics: Subjectivity and Truth (Essential Works of Foucault, 1954-1984, Vol. 1), Paul Rabinow (ed.). The New Press, New York, 223–252.

  4. Pam Briggs, Elizabeth Churchill, Mark Levine, James Nicholson, Gary W Pritchard, and Patrick Olivier. 2016. Everyday Surveillance. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems: 3566–3573.

  5. Yuan Wang, Yukun Li, Xinning Gui, Yubo Kou, and Fenglian Liu. 2019. Culturally-Embedded Visual Literacy: A Study of Impression Management via Emoticon, Emoji, Sticker, and Meme on Social Media in China. Proceedings of the ACM on Human-Computer Interaction 3, CSCW: 1–24.

  6. Yubo Kou and Xinning Gui. 2018. Entangled with numbers: Quantified Self and Others in a Team-Based Online Game. Proceedings of the ACM on Human-Computer Interaction 2, CSCW: 1–25.

2020/04/05, usability testing

Usability testing is a type of product testing that evaluates a product by observing and interpreting how representative users use it. A common scenario is that 1) a user goes through several tasks under the observation and facilitation of a facilitator, and 2) researchers interpret the observations and make evaluations accordingly.

There are three pillars of usability testing: typical users, who represent the common features of the users who will use the product; appropriate tasks, which both reflect the research goals and give users proper instructions to perform; and skilled facilitators, who can properly lead and carefully observe the users doing the tasks.

A facilitator's job is not exactly the same as an interviewer's. In usability tests, the facilitator's job is to observe users and help them complete tasks, so observing and facilitating matter more than asking probing questions; letting users explore and find their own solutions to the tasks is the primary goal. When users are too confused to proceed, there are three helpful techniques: 1) echo, repeating exactly the words from the user but in a questioning tone; 2) boomerang, returning a question back to the user when s/he asks one; and 3) Columbo, asking partial questions that give the user hints without revealing too many details.

An instructive video series on usability testing by the NN Group (Nielsen Norman Group):

2020/04/02, algorithmic bias and fairness

Algorithmic bias refers to systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. Generally, algorithmic bias stems from data insufficiency and algorithm defects. Specifically, it results from five main causes:

  1. (INCORRECT DESCRIPTION) The training data reflects bias, e.g., if the training data is from modern times, it can hardly generate results that correctly reflect the past.

  2. (INACCURATE PREDICTION) The training data contains unbalanced classes, e.g., if the training data contains many more white people's faces, the model may generate more accurate predictions for white people than for other races.

  3. (UNMATCHED VALUE) Quantified data in AI does not capture the value correctly, e.g., love cannot be exactly quantified as numerical data.

  4. (AMPLIFIED ASPECT) The training data is enriched by amplified data from positive feedback loops, e.g., if an AI system targets a particular population for testing and gets positive feedback, the system is more likely to keep testing that population and getting positive feedback, eventually leaving other populations untested.

  5. (MANIPULATION) The training data is attacked intentionally and becomes contaminated, e.g., if people keep inputting bad words to an AI conversation system that learns from the data, the system could also use these bad words.
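Cause 2 (INACCURATE PREDICTION) can be sketched in a few lines. With a hypothetical, class-imbalanced training set (all groups, labels, and counts below are made up for illustration), even a trivial majority-vote model scores high overall accuracy while failing the minority group entirely:

```python
# Hypothetical imbalanced data: 90 samples from group A (label 1),
# only 10 from group B (label 0).
data = [("A", 1)] * 90 + [("B", 0)] * 10

# A naive model that always predicts the majority label seen in training.
labels = [y for _, y in data]
majority = max(set(labels), key=labels.count)

def predict(group):
    return majority  # ignores the input entirely

# Per-group and overall accuracy on the same (hypothetical) distribution.
acc_a = sum(predict(g) == y for g, y in data if g == "A") / 90
acc_b = sum(predict(g) == y for g, y in data if g == "B") / 10
overall = sum(predict(g) == y for g, y in data) / 100

print(overall, acc_a, acc_b)  # 0.9 1.0 0.0
```

The overall accuracy of 90% hides the fact that the model is wrong for every member of group B, which is why per-group evaluation matters when auditing for bias.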

A nice explanation video of algorithmic bias and fairness:

2020/04/02, social support and explainable AI

Social support is a type of support provided by people from one's social network. When someone faces a crisis and cannot overcome it alone, social support is an important resource for help. There are four major types of social support: 1) emotional support (providing empathy, concern, affection, love, trust, acceptance, intimacy, encouragement, or caring); 2) tangible support (providing financial assistance, material goods, or services); 3) informational support (providing advice, guidance, suggestions, or useful information); 4) companionship support (providing a sense of social belonging).

Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by human users. However, in many cases XAI design falls short because it relies on: 1) unvalidated guidelines for design and evaluation based on the authors' own experiences, with little further justification; 2) empirically derived taxonomies of explanation needs elicited from user surveys; 3) psychological constructs from formal theories, applied to guide explanation facilities via literature review.

A nice explanation video of social support:

2020/04/01, panopticism

Panopticism, proposed by Michel Foucault, describes a type of surveillance. In the theory, surveillance performs like the watchtower inside a panopticon: all prisoners are aware of the existence of the tower and its function (i.e., watching prisoners), but they do not know when guards will actually be there watching. To avoid being caught by the guards (or surveillance), the prisoners have to assume that there is always a guard watching them and thus behave themselves all the time. Therefore, the prisoners become guards of themselves even without the presence of real guards.

The applications of panopticism in surveillance equipped with modern technology are varied, e.g., a restrictive but vague online censorship system. Netizens are aware of the existence of the censorship and the punishment for being caught by it, but they do not know when or how the system works, e.g., which content or words will trigger it. Therefore, most people choose to behave with great caution online and eventually become prisoners/guards of themselves.

A nice explanation video of panopticism:

2020/03/31, theory of justice

John Rawls introduced a theory of justice, which argues for "justice as fairness" to resolve issues of justice. First, in the theory of justice, all justice issues should be considered from the original position, a scenario in which 1) all participants are free and rational, and 2) they know nothing but basic natural and social science knowledge (i.e., the veil of ignorance). The original position ensures that all participants will argue for their own rights through effective approaches, without knowing which social class they belong to. Second, the process of resolving issues should obey two main principles: 1) the "greatest equal liberty principle," under which each participant has the most extensive liberty compatible with a like liberty for others, e.g., a person should not infringe on others' rights when pursuing his or her own liberty; 2.a) the "difference principle," which requires that social inequality work to the advantage of all, e.g., any inequality that benefits rich people should also benefit poor people; 2.b) the "equal opportunity principle," which requires that inequalities be attached to offices and positions open to all under conditions of fair equality of opportunity, e.g., a powerful position is allowed to exist, but the opportunity to obtain it should be equal for all members. The two principles allow inequality to exist but ensure that it benefits all social classes.

A nice explanation video of principles of Rawls's theory of justice: