I have always had a passion for math, especially calculus, both from a theoretical perspective and in its practical applications. I discovered my love for programming while learning Scratch and building games in 8th grade. This combination led me to learn Python and its data libraries, such as pandas, NumPy, and matplotlib, for data extraction, manipulation, and analysis. What started as a casual interest in statistics gradually grew into a deep enthusiasm, leading me to study machine learning and deep learning concepts over the last few years. I worked with Dr. Kamath, supporting him by building case studies and experimenting with different libraries for two of his forthcoming books, XAI: Introduction to Interpretable Machine Learning and Transformers: A Deep Dive.
Taking this further, I wanted to work on projects that blend math, AI, and programming while benefiting society and the community. Assisting the authors on the book projects gave me the platform to learn many algorithms, from traditional machine learning to state-of-the-art transfer learning techniques using transformers. Working with several researchers from academia and industry, I developed PsychBERT, a language model for mental health, and downstream applications for detecting mental health issues in social media. I am currently extending this research to speech-based detection and adding explainability metrics for black-box models to enable further comparisons.
Abstract—Mental health behaviors are now recognized as primary factors contributing to suicide. Social media text is an increasingly important modality for detecting mental behaviors. Currently, there is no taxonomy and no comprehensive dataset that machine learning researchers can employ as a benchmark to evaluate and advance research to address this problem. Fragmented efforts in the community also demonstrate that it remains challenging to recognize text relevant to mental health analysis and to distinguish behaviors such as depression and social anxiety. This paper puts forth a novel mental health language model that addresses these challenges. The paper makes several contributions. First, it proposes a taxonomy and puts forth a comprehensive dataset of social media text for the community. Second, it proposes a two-stage framework, first discriminating text relevant to mental health from non-relevant text and then carrying out multi-class classification for detection of mental health behaviors. Third, it proposes a novel mental health language model, PsychBERT, which is pretrained on a large corpus of biomedical literature on mental health, as well as social media data. Fourth, the proposed framework additionally incorporates components that enhance its explainability. Our evaluation shows that our proposed framework, strongly leveraging PsychBERT, is both effective, outperforming state-of-the-art methods, and interpretable. The taxonomy, dataset, and pretrained PsychBERT model are made publicly available. The pretrained language model is available at