ical challenges go beyond the uses of the outcomes of AI technology. Without appropriate human involvement, individuals may over time refrain from offering cues that might reveal their risk, due to the Hawthorne effect (McCarney et al. 2007), stigma, or other reasons for self-censorship. How can social media sites, then, continue to be platforms of authentic expression and a means that enables disclosure of deep-seated mental health concerns? How can AI tools leverage human feedback in ways that ameliorate these self-censorship challenges?
One way to tackle these challenges could be to thoroughly assess, with different stakeholders, the acceptability of our method and the technologies it enables, thus incorporating human feedback into the design and functioning of the AI systems. This strategy constitutes a promising direction for future research. Collaborations between AI researchers, mental health experts, community moderators, designers, developers, social media companies, and ethicists can also help develop protocols and guidelines that facilitate the use of our work in practical contexts in the future.
In this article, we discussed the role of human involvement in deriving meaningful value from AI techniques and approaches. We highlighted work from several threads of our prior research to describe this agenda, focusing in particular on the domain of mental health.
To conclude, the underlying impetus for investigating these types of problems of societal significance with AI is, of course, the desire to help people improve their (here, mental health) outcomes, whether through early identification of people at risk, better personalization of treatments, or discovery of new treatment strategies. Bridging the gap between insights derived from AI approaches and real-world action will require combining the outcomes of these approaches with human feedback, interventions, and simultaneous human/empirical observations to provide strong validation of benefits. The challenges posed in moving from AI outcomes to intervention on social media platforms are particularly exacerbated in sensitive domains: for example, how to obtain informed consent from very large populations when it comes to mental health assessments, or how to ensure that interventions avoid real-world harm while respecting the privacy of individuals online. It will be a significant challenge to develop new protocols that safely translate insights from observational studies of AI methods and tools to active experimentation involving expert feedback, and then to large-scale deployments involving real people, while simultaneously respecting principles of individual autonomy, minimizing risk of harm, and ensuring that benefits and risks are distributed across all parties who are directly or indirectly, positively or less beneficially, affected by the technology.
References

Amershi, S.; Cakmak, M.; Knox, W. B.; and Kulesza, T. 2014. Power to the People: The Role of Humans in Interactive Machine Learning. AI Magazine 35(4): 105–120. doi.org/10.

Baumel, A.; Baker, J.; Birnbaum, M. L.; Christensen, H.; De Choudhury, M.; Mohr, D. C.; Muench, F.; Schlosser, D.; Titov, N.; and Kane, J. M. 2018. Summary of Key Issues Raised in the Technology for Early Awareness of Addiction and Mental Illness (TEAAM-I) Meeting. Psychiatric Services 69(5): 590–592. doi.org/10.1176/appi.ps.201700270.

Beck, A. T. 1979. Cognitive Therapy of Depression. New York:

Birnbaum, M. L.; Ernala, S. K.; Rizvi, A. F.; De Choudhury, M.; and Kane, J. M. 2017. A Collaborative Approach to Identifying Social Media Markers of Schizophrenia by Employing Machine Learning and Clinical Appraisals. Journal of Medical Internet Research 19(8): e289. doi.org/10.2196/

Bonner, R. L., and Rich, A. 1988. Negative Life Stress, Social Problem-Solving Self-Appraisal, and Hopelessness: Implications for Suicide Research. Cognitive Therapy and Research 12(6): 549–556. doi.org/10.1007/BF01205009.

Boyd, D., and Crawford, K. 2012. Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon. Information, Communication, and Society 15(5): 662–679. doi.org/10.1080/1369118X.2012.678878.

Caliendo, M., and Kopeinig, S. 2008. Some Practical Guidance for the Implementation of Propensity Score Matching. Journal of Economic Surveys 22(1): 31–72. doi.org/10.1111/j.

Chancellor, S.; Kalantidis, Y.; Pater, J. A.; De Choudhury, M.; and Shamma, D. A. 2017. Multimodal Classification of Moderated Online Pro-Eating Disorder Content. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 3213–3226. New York: Association for Computing Machinery. doi.org/10.1145/3025453.3025985.

Chancellor, S.; Lin, Z. J.; Goodman, E.; Zerwas, S.; and De Choudhury, M. 2016. Quantifying and Predicting Mental Illness Severity in Online Pro-Eating Disorder Communities. In Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing, 626–638. New York: Association for Computing Machinery.

De Choudhury, M.; Counts, S.; and Horvitz, E. 2013. Social Media as a Measurement Tool of Depression in Populations. In Proceedings of the Fifth Annual ACM Web Science Conference, 47–56. New York: Association for Computing Machinery. doi.org/10.1145/2464464.2464480.

De Choudhury, M.; Counts, S.; Horvitz, E.; and Hoff, A. 2014. Characterizing and Predicting Postpartum Depression from Facebook Data. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work and Social Computing, 626–638. New York: Association for Computing Machinery.

De Choudhury, M., and De, S. 2014. Mental Health Discourse on Reddit: Self-Disclosure, Social Support, and Anonymity. In Proceedings of the Eighth International Confer-