
Project Detail

Institute for Sexual and Gender Health and Wellbeing Research Project

Tools

Figma

Role

UX Researcher

Research Assistant

Duration

July 2024 - Nov 2024

01   Background: Queerness + AI

We conducted a research study examining the potential harms and benefits of AI for sexual and gender minority (SGM) groups. Our goal was to use community-based research and participatory design methods to enable a group of queer teens with sexual health education expertise to design their own GenAI health tools.

02   Goals
  1. Research the potential harms and benefits of AI tools on sexual and gender minority (SGM) communities
  2. Use participatory design methods to incorporate queer teens' needs and concerns regarding AI tools
  3. Create an AI tool that is customized and designed for queer teens (a rough sketch of what this could look like follows this list)
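As one illustration of what "customized" could mean in practice, below is a minimal, hypothetical sketch of a GenAI health chatbot wrapped with an affirming, youth-appropriate system prompt. It assumes an OpenAI-style chat API; the model name, prompt wording, and safety rules are placeholders for what the teens themselves would co-design in the workshops, not the project's actual tool.

```python
from openai import OpenAI  # assumes the openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system prompt: in the real project, queer teens would co-design
# this framing, tone, and the safety rules through participatory workshops.
SYSTEM_PROMPT = (
    "You are a sexual health education assistant for LGBTQ+ teens. "
    "Use inclusive, affirming, non-judgmental language; never assume a user's "
    "gender, pronouns, or orientation; and encourage consulting a trusted "
    "clinician or counselor for medical or crisis situations."
)

def ask_health_question(question: str) -> str:
    """Send a teen's question to the model with the co-designed framing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_health_question("How do I find LGBTQ+-friendly sexual health resources?"))
```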
03   Literature Reviews

I conducted a series of literature reviews highlighting previous research on the harms of AI systems to queer communities.

Facial Recognition Bias

Research @ Stanford University - Dr. Kosinski & Wang

Claim: AI can detect sexual orientation from facial images more accurately than humans

Potential Harms:

  • GenAI accentuates stereotypical biases against queer and non-binary individuals

  • AI datasets significantly underrepresent LGBTQ+ individuals, leading to inaccurate results and biases in facial recognition AI tools

Speech and Language Bias

Perspective, an AI technology developed to detect "toxic" speech, labels queer speech as "toxic".

Potential Harms:

  • Words such as "gay", "lesbian", and "queer", which should be neutral, are scored as significantly "toxic" (a minimal way to reproduce this check is sketched after this list)

  • AI tools pose a threat to the voices and safety of queer individuals, particularly those in the drag community
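To make this concrete, below is a small, hypothetical sketch of how one could query the Perspective API and compare toxicity scores for sentences that differ only in an identity term. The request and response shapes follow Perspective's public documentation; the API key and the example sentences are placeholders, not the ones used in the cited research.

```python
import requests

# Minimal sketch: querying Google's Perspective API for TOXICITY scores.
# "YOUR_API_KEY" and the example sentences below are placeholders.
API_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    "?key=YOUR_API_KEY"
)

def toxicity_score(text: str) -> float:
    """Return Perspective's summary TOXICITY probability for a piece of text."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(API_URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()
    return scores["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Sentences that differ only in an identity term; a neutral word like "gay"
# should not raise the score, yet audits of Perspective found that it did.
for sentence in ["I am a person.", "I am a gay person.", "I am a queer person."]:
    print(f"{sentence!r}: {toxicity_score(sentence):.2f}")
```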


Anti-Trans Use of AI

Giggle, a social media app designed for “females”, intentionally excludes trans women with its use of artificial intelligence.

 

Potential Harms:

  • The app's facial recognition AI failed to properly recognize women of color and transgender women

  • Threatens the inclusion and safety of trans and POC individuals, showing clear biases in AI systems against these communities

04   User Personas

I created scenarios in which queer teens could use GenAI tools to learn more about their queer identity and navigate their health.

05   Research Layout

I assisted in writing in-depth interview guides and designing workshops to engage teens from the Youth Advisory Council (YAC), which is managed by the Teen Health Lab.


Queer AI Workshop #1: Creating Equitable AI Tools with the Youth Advisory Council (YAC)

Participants: 10-20 LGBTQ+ teens from the Youth Advisory Council (YAC)

Content:

  • AI Crash Course

  • Harms & Benefits of AI

  • Discussion 

  • Participatory feedback on designing AI


Interview Guide: Co-Designing Equitable Generative AI Tools

Keywords: Participatory research methods, community-based research, user interviews, user feedback, responsible and inclusive AI

Research Questions

 

  • How can generative AI tools like ChatGPT be ethically integrated into sexual health education to address the specific needs of queer teens?

  • What are the potential risks and benefits of using generative AI in providing sexual health information to sexual and gender minority (SGM) adolescents?

  • How can the preferences and insights of queer teens be incorporated into the design of AI tools to minimize harm and maximize accessibility to accurate sexual health information?

06   Next Steps 
Once we finalize our pilot study, we will begin the research with queer teens from the Youth Advisory Council (YAC). I hope to contribute to the next steps of this exciting project, continuing to support minority youth through inclusive and responsible technology.
Key Takeaways

The growth and democratization of AI offers significant benefits to society. However, there must be equal pressure to highlight the potential harms and biases that come with AI technology. Further research is needed to uncover these biases and to create space for a diverse range of designers, researchers, and developers who will prioritize more inclusive and responsible technology.
