How The Trevor Project assesses LGBTQ youth suicide risk with TensorFlow


Source: blog.tensorflow.org

Posted by Wilson Lee (Machine Learning Engineering Manager at The Trevor Project), Dan Fichter (Head of AI & Engineering at The Trevor Project), Amber Zhang, and Nick Hamatake (Software Engineers at Google)


Introduction

The Trevor Project’s mission is to end suicide among LGBTQ youth. In addition to offering free crisis services through our original phone lifeline (started in 1998), we’ve since expanded to a digital platform, including SMS and web browser-based chat. Unfortunately, there are high-volume times when there are more youth reaching out on the digital platform than there are counselors, and youth have to wait for a counselor to become available. Ideally, youth would be connected with counselors based on their relative risk of attempting suicide, so that those who are at imminent risk of harm would be connected earlier.

As part of the Google AI Impact Challenge, Google.org provided us with a $1.5M grant, Cloud credits, and a Google.org Fellowship, a team of ML, product, and UX specialists who worked full-time pro bono with The Trevor Project for 6 months. The Googlers joined forces with The Trevor Project’s in-house engineering team to apply Natural Language Processing to the crisis contact intake process. As a result, Trevor is now able to connect youth with the help they need faster. And our work together is continuing with the support of a new $1.2M grant as well as a new cohort of Google.org Fellows, who are at The Trevor Project through December helping expand ML solutions.

ML Problem Framing

We framed the problem as a binary text classification problem. The inputs are answers to questions on the intake form that youth complete when they reach out:

  • Have you attempted suicide before? Yes / No
  • Do you have thoughts of suicide? Yes / No
  • How upset are you? [multiple choice]
  • What’s going on? [free text input]

[Video: AI Impact Challenge Product Demo]

The output is a binary classification: whether to place the youth in the standard queue or a priority queue. As counselors become available, they connect with youth from the priority queue before youth from the standard queue.

Data

Once a youth connects with a counselor, the counselor performs a clinical risk assessment and records the result. The risk assessment result can be mapped to whether the youth should have been placed in the standard queue or the priority queue. The full transcript of the (digital) conversation is also logged, as are the answers to the intake questions. Thus, the dataset used for training consisted of a mixture of free-form text, binary / multiple-choice features, and human-provided labels.

Fortunately, there are relatively few youth classified as high-risk compared to standard-risk. This resulted in a significant class imbalance which had to be accounted for during training. Another major challenge was low signal-to-noise ratio in the dataset. Different youth could provide very similar responses on the intake form and then be given opposite classifications by counselors after completing in-depth conversations. Various methods of dealing with these issues are detailed later.
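The post does not say which technique was used to account for the imbalance; one common option in Keras, shown here only as a sketch with synthetic stand-in data, is to weight the rare priority class more heavily in the loss:

```python
import numpy as np
import tensorflow as tf

# Sketch only: labels and features below are synthetic stand-ins for the intake data,
# and per-class loss weighting is just one standard way to handle the imbalance.
labels = np.array([0] * 950 + [1] * 50)              # ~5% "priority queue" examples
features = np.random.normal(size=(1000, 4)).astype("float32")

neg, pos = np.bincount(labels)
class_weight = {0: (neg + pos) / (2.0 * neg),        # slightly down-weight the majority class
                1: (neg + pos) / (2.0 * pos)}        # heavily up-weight the rare class

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(features, labels, epochs=1, class_weight=class_weight, verbose=0)
```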

Because of the extremely sensitive nature of the dataset, special measures were taken to limit its storage, access, and processing. We automatically scrubbed the data, replacing personally identifiable information (PII) such as names and locations with placeholder strings such as “[PERSON_NAME]” or “[LOCATION]”. This means the models were not trained using PII. Access to the scrubbed dataset was limited to the small group of people working on the project, and the data and model were kept strictly within Trevor’s systems and are not accessible to Google.
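As a much-simplified illustration of the scrubbing step (the real pipeline would rely on a proper PII detector rather than a hand-written entity list), the replacement itself can look like this:

```python
import re

# Illustrative only: "detected_entities" stands in for the output of a real
# PII/NER detector; the placeholder strings match the ones described above.
detected_entities = [("Alex", "PERSON_NAME"), ("Ohio", "LOCATION")]

def scrub(text, entities):
    for surface_form, entity_type in entities:
        text = re.sub(re.escape(surface_form), f"[{entity_type}]", text)
    return text

print(scrub("My name is Alex and I live in Ohio.", detected_entities))
# -> My name is [PERSON_NAME] and I live in [LOCATION].
```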

Metrics

For a binary classification task, we would usually optimize for metrics like precision and recall, or derived measures like F1 score or AUC. For crisis contact classification, however, the metric we need to optimize most for is how long a high-risk youth (one who should be classified into the priority queue) has to wait before connecting with a counselor. To estimate this, we built a queue simulation system that can predict average wait times given a historical snapshot of class balance, quantitative flow of contacts over time, number of counselors available, and the precision and recall of the prediction model.
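A heavily simplified sketch of such a simulator is below. The arrival probabilities, handling time, and counselor count are invented placeholders, and the real system replays historical snapshots rather than random draws:

```python
import random

def simulate_wait(minutes=8 * 60, p_arrival=0.5, p_high_risk=0.1,
                  n_counselors=3, handle_minutes=45, recall=0.9, precision=0.4):
    """Average wait (in minutes) for high-risk youth under a classifier described
    only by its precision/recall. All numeric defaults are illustrative placeholders."""
    free_at = [0] * n_counselors                 # minute at which each counselor frees up
    priority, standard, high_risk_waits = [], [], []
    # False-positive rate implied by the base rate, recall, and precision.
    fp_rate = (p_high_risk * recall * (1 - precision)) / (precision * (1 - p_high_risk))
    for t in range(minutes):
        if random.random() < p_arrival:          # a youth reaches out this minute
            is_high = random.random() < p_high_risk
            routed_to_priority = random.random() < (recall if is_high else fp_rate)
            (priority if routed_to_priority else standard).append((t, is_high))
        for i in range(n_counselors):            # free counselors take the priority queue first
            if free_at[i] <= t and (priority or standard):
                arrived_at, is_high = (priority or standard).pop(0)
                free_at[i] = t + handle_minutes
                if is_high:
                    high_risk_waits.append(t - arrived_at)
    return sum(high_risk_waits) / max(len(high_risk_waits), 1)

print(f"avg wait for high-risk youth: {simulate_wait():.1f} minutes")
```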

The simulation was too slow to run during the update step of gradient descent, so we optimized first for proxy metrics such as precision at 80% recall, and precision at 90% recall. We then ran simulations at all points on the precision-recall curve of the resulting model to determine the optimal spot on the curve for minimizing wait time for high-risk youth.
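For reference, a proxy metric like “precision at 80% recall” can be read off a precision-recall curve; the sketch below uses synthetic scores, not Trevor’s validation data:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Synthetic labels and scores stand in for a validation set.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(0.3 * y_true + rng.normal(0.4, 0.25, size=1000), 0.0, 1.0)

precision, recall, _ = precision_recall_curve(y_true, y_score)

def precision_at_recall(target_recall):
    # Best precision achievable while keeping recall at or above the target.
    return precision[recall >= target_recall].max()

print("precision @ 80% recall:", round(precision_at_recall(0.80), 3))
print("precision @ 90% recall:", round(precision_at_recall(0.90), 3))
```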

It was also critical to quantify the fairness of the model with respect to the diverse range of demographic and intersectional groups that reach out to Trevor. For each finalized model, we computed false positive and false negative rates broken out by more than 20 demographic categories, including intersectional groups. We made sure that no demographic group was favored or disfavored by the model more often than under the previous system.
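The per-group check can be as simple as grouping predictions by demographic category and comparing error rates; the toy frame below is purely illustrative (the real report covered more than 20 categories, including intersectional ones):

```python
import pandas as pd

# Toy data: "label" = should have been routed to the priority queue,
# "prediction" = the model's routing decision.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":      [1, 0, 0, 0, 1, 1, 0, 0],
    "prediction": [1, 1, 0, 0, 0, 1, 1, 0],
})

def error_rates(g):
    fp = ((g.prediction == 1) & (g.label == 0)).sum()
    fn = ((g.prediction == 0) & (g.label == 1)).sum()
    return pd.Series({
        "false_positive_rate": fp / max((g.label == 0).sum(), 1),
        "false_negative_rate": fn / max((g.label == 1).sum(), 1),
    })

print(df.groupby("group").apply(error_rates))
```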

Model Selection

We experimented with bi-LSTM and transformer-based models, as they have been shown to provide state-of-the-art results across a broad range of textual tasks. We tried embedding the textual inputs using GloVe, ELMo, and the Universal Sentence Encoder. For transformer-based models, we tried a single-layer transformer network and ALBERT (many transformer layers pre-trained on unlabeled text from the web).

We selected ALBERT for several reasons. It showed the best performance at the high-recall end of the curve where we were most interested. ALBERT allowed us not only to take advantage of massive amounts of pre-training, but also to leverage some of our own unlabeled data to do further pretraining (more on this later). Since ALBERT shares weights between its transformer layers, the model is cheaper to deploy (important for a non-profit organization) and less prone to overfitting (important given the noisiness of our data).

Training

We trained in a three-step process:

  1. Pre-training: ALBERT is already pre-trained with a large amount of data from the web. We simply loaded a pre-trained model using TF Hub.

    Instructions are available here for loading a pre-trained model for text classification; a minimal sketch follows after this list.

  2. Further pre-training: Since ALBERT’s language model is based on generic Web data, we pre-trained it further using our own in-domain, unlabeled data. This included anonymized text from chat transcripts as well as from forum posts on TrevorSpace, The Trevor Project’s safe-space social networking site for LGBTQ youth. Although the unlabeled data is not labeled for suicide risk, it comes from real youth in our target demographics and is therefore linguistically closer to our labeled dataset than ALBERT’s generic web corpora are. We found that this increased model performance significantly.

    Instructions are available here for checkpoint management strategies; a sketch follows after this list.

  3. Fine-tuning: We fine-tuned the model using our hand-labeled training data. We initially used ALBERT just to encode the textual response to “What’s going on?” and used one-hot vectors to encode the responses to the binary and multiple-choice questions. We then tried converting everything to text and using ALBERT to encode everything. Specifically, instead of encoding the Yes / No answer to a question like “Do you have thoughts of suicide?” as a one-hot vector, we prepended something like “[counselor] Do you have thoughts of suicide? [youth] No” to the textual response to “What’s going on?” This yielded significant improvements in performance.

    Instructions are available here for encoding with the BERT tokenizer; a sketch follows after this list.
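The minimal sketches below illustrate the three steps in order. They are built from the public TensorFlow and TF Hub APIs rather than Trevor’s actual code; the TF Hub handles, hyperparameters, and helper names are assumptions.

Step 1: loading a pre-trained ALBERT encoder from TF Hub and attaching a binary classification head (the handles are the public English ALBERT models; check tfhub.dev for current versions):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Public TF Hub handles for English ALBERT (assumed; verify on tfhub.dev).
preprocessor = hub.KerasLayer("https://tfhub.dev/tensorflow/albert_en_preprocess/3")
encoder = hub.KerasLayer("https://tfhub.dev/tensorflow/albert_en_base/3", trainable=True)

text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name="intake_text")
pooled = encoder(preprocessor(text_input))["pooled_output"]   # sentence-level representation
logit = tf.keras.layers.Dense(1, name="priority_logit")(tf.keras.layers.Dropout(0.1)(pooled))

model = tf.keras.Model(text_input, logit)
model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=[tf.keras.metrics.AUC(curve="PR")])
```

Step 2: further pre-training with checkpoint management. The tiny stand-in model and synthetic data keep the sketch runnable; in practice the checkpoint would wrap the ALBERT encoder and its masked-language-model head:

```python
import tensorflow as tf

# Stand-in model, optimizer, and data; only the checkpointing pattern is the point here.
model = tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(4,)),
                             tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam(1e-4)
loss_fn = tf.keras.losses.MeanSquaredError()
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([64, 4]), tf.random.normal([64, 1]))).batch(8)

ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
manager = tf.train.CheckpointManager(ckpt, directory="ckpts/albert_domain", max_to_keep=3)
ckpt.restore(manager.latest_checkpoint)          # resume if a checkpoint already exists

for step, (x, y) in enumerate(dataset):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    if step % 2 == 0:
        manager.save(checkpoint_number=step)     # keep only the most recent checkpoints
```

Step 3: flattening every intake answer into a single string and tokenizing it with the matching ALBERT preprocessor. The role markers mirror the example above; the exact formatting Trevor uses is not specified, so treat this as illustrative:

```python
import tensorflow as tf
import tensorflow_hub as hub

def build_intake_text(answers):
    # Convert multiple-choice answers and the free-text response into one string.
    parts = [f"[counselor] {q} [youth] {a}" for q, a in answers["multiple_choice"]]
    parts.append(f"[youth] {answers['whats_going_on']}")
    return " ".join(parts)

example = {
    "multiple_choice": [
        ("Do you have thoughts of suicide?", "No"),
        ("Have you attempted suicide before?", "Yes"),
    ],
    "whats_going_on": "Things have been really hard at school lately.",
}

preprocessor = hub.KerasLayer("https://tfhub.dev/tensorflow/albert_en_preprocess/3")
encoder_inputs = preprocessor(tf.constant([build_intake_text(example)]))
print(encoder_inputs["input_word_ids"].shape)    # (1, 128) token ids, ready for the encoder
```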

Optimization

We did some coarse parameter selection (learning rate and batch size) using manual trials. We also used Keras Tuner to refine the parameter space further. Because Keras Tuner is model-agnostic, we were able to use a similar tuning script for each of our model classes. For the LSTM-based models, we also used Keras Tuner to decide which kind of embeddings to use.

Normally, we would train with as large a batch size as would fit on a GPU, but in this case we found better performance with fairly small batch sizes (~8 examples). We theorize that this is because the data is so noisy that it tends to regularize itself, and this self-regularization effect is more pronounced in small batches.

Instructions are available here for setting up hyperparameter trials.
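A minimal Keras Tuner sketch is below; the search space, the stand-in model, and the synthetic data are placeholders rather than Trevor’s actual tuning setup:

```python
import numpy as np
import tensorflow as tf
import keras_tuner as kt

def build_model(hp):
    # Coarse learning-rate search; batch size was chosen separately by manual trials.
    lr = hp.Choice("learning_rate", [1e-5, 2e-5, 5e-5, 1e-4])
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(curve="PR", name="pr_auc")])
    return model

tuner = kt.RandomSearch(build_model,
                        objective=kt.Objective("val_pr_auc", direction="max"),
                        max_trials=4, directory="tuner", project_name="risk_model")

# Synthetic stand-ins for the encoded intake features and labels.
x = np.random.normal(size=(256, 10)).astype("float32")
y = np.random.randint(0, 2, size=(256,))
tuner.search(x[:200], y[:200], validation_data=(x[200:], y[200:]),
             batch_size=8, epochs=2, verbose=0)
best_model = tuner.get_best_models(num_models=1)[0]
```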

Conclusion

We trained a text-based model to prioritize at-risk youth seeking crisis services. The model outperformed a baseline classifier that only used responses from several multiple-choice intake questions as features. The NLP model was also shown to have less bias than the baseline model. Some of the highest-impact ingredients of the final model were: 1) using in-domain unlabeled data to further pre-train an off-the-shelf ALBERT model, 2) encoding multiple-choice responses as full text, which is in turn encoded by ALBERT, and 3) tuning hyperparameters using intuition about our specific dataset in addition to standard search methods.

Despite the success of the model, there are some limitations. The responses to the intake questions that produced our dataset were only weakly correlated with the expert risk assessments that served as our training labels, which resulted in a low signal-to-noise ratio in the training data. More non-ML work could be done in the future to elicit higher-signal responses from youth during the intake process.

We’d like to acknowledge all of the teams and individuals who contributed to this project: Google.org and the Google.org Fellows, The Trevor Project’s entire engineering and data science team, as well as many hours of review and input from Trevor’s crisis service and clinical staff.

You can support our work by donating at TheTrevorProject.org/Donate. Your life-saving gift can help us expand our advocacy efforts, train a record number of crisis counselors, and provide all of our crisis services 24/7.
