
The Philosopher’s Zone – Beauty and AI

Intro

In this episode of “The Philosopher’s Zone,” AI researcher Kate Crawford explores the intersection of beauty and AI, discussing the ethical concerns and societal implications of beauty apps that use AI to analyze and enhance facial appearance. She delves into the history of AI’s use in assessing human faces for beauty and criminality, highlighting the subjective nature of data labeling and the potential harm caused by preconceived assumptions and stereotypes. Crawford also examines the broader impact of AI systems on human experiences, such as emotional classification and the shaping of access to resources and opportunities. Ultimately, she emphasizes the need for careful consideration and implementation of AI technologies to avoid perpetuating inequality and to foster more diverse and inclusive ideals of beauty.

Main Takeaways

Ethical Concerns and Subjectivity in Beauty Apps

  • Beauty apps use AI to analyze and enhance facial appearance, but AI is not value-free and may have ethical concerns.
  • Data sets used by beauty apps are often scraped from the internet or labeled by crowd workers who are paid low wages.
  • The labeling process is highly subjective and cultural, lacking scientific consensus.
  • Beauty apps rely on a crowd-sourced idea of what images are seen as beautiful according to the internet.

AI Systems and Data Sets

  • Beauty apps use large-scale systems trained on data sets to classify images as beautiful or not.
  • Users contribute to these systems by uploading their own photos without realizing they are creating data sets.
  • ImageNet, created in 2009 using crowd labeling, became the gold-standard data set for object recognition in AI.
  • ImageNet’s labels contained absolute horrors, including racist, sexist, and derogatory terms attached to people’s personal photos.
  • The cleanup of these data sets is a victory, but we need to ask about the politics of classification and labeling.

Implications of AI Systems

  • Digital epidermalization: applying race or gender to someone’s face without their agreement or awareness.
  • Data sets have pre-baked assumptions built into them, applied to every system they touch.
  • Machine learning attempts to codify highly qualitative, subjective, and relational concepts.
  • Emotional classification in AI systems flattens human complexity and richness.
  • AI systems narrow and desiccate all things that go into being human, leading to epistemological flattening.
  • AI systems produce a worldview and impose a way of seeing, shaping access to resources and opportunities.
  • Data is extracted to classify people further, perceived as objective but actually amplifying social assumptions and stereotypes.
  • AI systems categorize people based on appearance, using outdated and ridiculous categories that can have material consequences.
  • Gender and race categories are fraught and complex, and AI developers cannot capture their complex meanings.
  • The assumptions informing AI applications in contexts such as policing and hiring are the same as those in beauty apps, and we should be concerned about them.

AI and New Beauty Ideals

  • AI beauty apps often have highly gendered and normative systems that cater to specific gender categories.
  • Assumptions about people’s faces and emotions in AI applications can be dangerous when applied in complex and historically unequal social institutions like employment, housing, and policing.
  • Facial and emotion detection systems are being built into public housing blocks in the US without residents being able to opt out.
  • The logics underlying these systems can perpetuate inequality and cause material harm.
  • AI has the potential to bring about new, more diverse and inclusive ideals, but it requires careful consideration and implementation.
  • AI can generate a picture of a person who could look like anything at all, which might lead to the emergence of new beauty ideals.
  • Even so, the normative structures of beauty might encode these new ideals into hierarchies of human value, with sharp consequences for those who fall outside them.
  • Whether the ideal is one look or another is less interesting than examining the underlying machinery and the way it creates these hierarchies of human value.

The Power, Politics, and Planetary Costs of AI

  • AI is neither artificial nor intelligent; it is a set of deeply material technologies with an enormous carbon footprint.
  • The idea that we have a form of intelligence completely separate from our embodied selves takes us back to Cartesian dualism.
  • The classification mechanisms and class systems built into AI can be deeply discriminatory, and some predate us by centuries.
  • Skepticism and questioning who benefits and who may be harmed is important as we use more AI interfaces.
  • Kate Crawford’s book “Atlas of AI” explores the power, politics, and planetary costs of artificial intelligence.
  • It’s important to fight against the phenomenon of enchanted determinism as we continue to use AI interfaces.

Summary

Ethical Concerns and Subjectivity in Beauty Apps

Beauty apps that utilize AI to analyze and enhance facial appearance raise ethical concerns due to the subjective nature of data labeling and the reliance on crowd-sourced ideas of beauty. Data sets used by these apps are often scraped from the internet or labeled by low-wage crowd workers, and the labeling process lacks scientific consensus. The subjective and cultural nature of that process highlights the potential for bias and the need for careful consideration of the societal implications.

AI Systems and Data Sets

Beauty apps rely on large-scale AI systems trained on data sets to classify images as beautiful or not. Users unknowingly contribute to these systems by uploading their own photos, creating data sets that may perpetuate preconceived assumptions and stereotypes. The example of ImageNet, a widely used object recognition data set, reveals the presence of racist, sexist, and derogatory labels connected to personal photos. While efforts have been made to clean up these data sets, the politics of classification and labeling remain important considerations.

Implications of AI Systems

The application of AI systems in beauty apps and other contexts can have far-reaching implications. Digital epidermalization, the application of race or gender to someone’s face without their agreement or awareness, raises concerns about privacy and consent. AI systems, influenced by the assumptions and biases embedded in data sets, attempt to codify highly qualitative and subjective concepts, flattening human complexity and richness. These systems shape access to resources and opportunities, categorize people based on appearance using outdated and ridiculous categories, and amplify social assumptions and stereotypes. The same concerns apply to AI applications in policing and hiring, highlighting the need for critical examination.

AI and New Beauty Ideals

AI beauty apps often perpetuate highly gendered and normative systems that cater to specific gender categories. The assumptions about people’s faces and emotions in these apps can have dangerous implications when applied in complex and historically unequal social institutions such as employment, housing, and policing. The integration of facial and emotion detection systems in public housing blocks without residents’ ability to opt out raises concerns about perpetuating inequality and causing material harm. While AI has the potential to foster more diverse and inclusive ideals of beauty, careful consideration and implementation are necessary to avoid encoding hierarchies of human value.

The Power, Politics, and Planetary Costs of AI

AI is neither simply artificial nor intelligent; it comprises deeply material technologies with significant carbon footprints. The idea of an intelligence separate from our embodied selves reflects Cartesian dualism, raising questions about the implications of that perspective. The classification mechanisms and class systems in AI can perpetuate discrimination that predates us by centuries. Skepticism, and asking who benefits and who may be harmed, is essential as we increasingly interact with AI interfaces. Kate Crawford’s book “Atlas of AI” explores the power dynamics, political implications, and planetary costs associated with artificial intelligence in greater depth. It is crucial to resist the allure of enchanted determinism and critically engage with AI technologies.

Conclusion

The intersection of beauty and AI raises important ethical concerns and societal implications. Beauty apps utilizing AI often rely on subjective data labeling processes and perpetuate preconceived assumptions and stereotypes. The broader impact of AI systems extends beyond beauty apps, shaping access to resources and opportunities and potentially perpetuating inequality. Careful consideration and implementation are necessary to foster more diverse and inclusive ideals of beauty. Moreover, the power dynamics, political implications, and planetary costs of AI demand critical examination and skepticism. By actively engaging with AI technologies, we can strive for a more equitable and inclusive future.