How to cite this paper
Xingyi, G., & Adnan, H. (2024). Potential cyberbullying detection in social media platforms based on a multi-task learning framework. International Journal of Data and Network Science, 8(1), 25–34.
References
Ahn, J., & Yoon, E. (2020). Between love and hate: The new Korean wave, Japanese female fans, and anti-Korean sentiment in Japan. Journal of Contemporary Eastern Asia, 19(2), 179–196. https://doi.org/10.17477/jcea.2020.19.2.179
Brighi, A., Menin, D., Skrzypiec, G., & Guarini, A. (2019). Young, bullying, and connected: Common pathways to cyberbullying and problematic Internet use in adolescence. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.01467
Castaño-Pulgarín, S. A., Suárez-Betancur, N., Vega, L. M. T., & López, H. M. H. (2021). Internet, social media and online hate speech. Systematic review. Aggression and Violent Behavior, 58, 101608. https://doi.org/10.1016/j.avb.2021.101608
Chadha, K., Steiner, L., Vitak, J., & Ashktorab, Z. (2020). Women’s responses to online harassment. International Journal of Communication, 14(1), 239–257.
Chen, J., Hu, Y., Liu, J., Xiao, Y., & Jiang, H. (2019). Deep short text classification with knowledge powered attention. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6252–6259. https://doi.org/10.1609/aaai.v33i01.33016252
Chen, Y., Zhou, Y., Zhu, S., & Xu, H. (2012). Detecting offensive language in social media to protect adolescent online safety. https://doi.org/10.1109/socialcom-passat.2012.55
Cobbe, J. (2020). Algorithmic censorship by social platforms: power and resistance. Philosophy & Technology, 34(4), 739–766. https://doi.org/10.1007/s13347-020-00429-0
Dadvar, M., Trieschnigg, D., Ordelman, R., & De Jong, F. (2013). Improving cyberbullying detection with user context. In Lecture Notes in Computer Science (pp. 693–696). https://doi.org/10.1007/978-3-642-36973-5_62
Devlin, J., Chang, M., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. https://doi.org/10.18653/v1/n19-1423
Eckert, S., & Metzger-Riftkin, J. (2020). Doxxing, privacy and gendered harassment. The shock and normalization of veillance cultures. M&K Medien & Kommunikationswissenschaft, 68(3), 273–287.
Enke, N., & Borchers, N. S. (2021). Social media influencers in strategic communication: A conceptual framework for strategic social media influencer communication. In Routledge eBooks (pp. 7–23). https://doi.org/10.4324/9781003181286-2
Hartmann, J., Huppertz, J., Schamp, C., & Heitmann, M. (2019). Comparing automated text classification methods. International Journal of Research in Marketing, 36(1), 20–38. https://doi.org/10.1016/j.ijresmar.2018.09.009
Jahan, S., & Oussalah, M. (2023). A systematic review of hate speech automatic detection using natural language processing. Neurocomputing, 546, 126232. https://doi.org/10.1016/j.neucom.2023.126232
Jones, L. M., Mitchell, K. J., & Finkelhor, D. (2013). Online harassment in context: Trends from three youth internet safety surveys (2000, 2005, 2010). Psychology of Violence, 3(1), 53.
Kiritchenko, S., Nejadgholi, I., & Fraser, K. C. (2021). Confronting abusive language online: A survey from the ethical and human rights perspective. Journal of Artificial Intelligence Research, 71, 431–478.
Lindsay, M., Booth, J. M., Messing, J. T., & Thaller, J. (2016). Experiences of online harassment among emerging adults: Emotional reactions and the mediating role of fear. Journal of Interpersonal Violence, 31(19), 3174–3195.
Liu, G., & Guo, J. (2019). Bidirectional LSTM with attention mechanism and convolutional layer for text classification. Neurocomputing, 337, 325–338. https://doi.org/10.1016/j.neucom.2019.01.078
Liu, L., Jiang, H., & He, P. (2019). On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265
Ma, D., Liu, H., & Song, D. (2020). Word Graph Network: Understanding obscure sentences on social media for violation comment detection. In Lecture Notes in Computer Science. https://doi.org/10.1007/978-3-030-60450-9_58
Mikolov, T., Chen, K., & Corrado, G. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781
Mohammad, F. (2018). Is preprocessing of text really worth your time for online comment classification? arXiv preprint arXiv:1806.02908
Muneer, A., & Fati, S. M. (2020). A comparative analysis of machine learning techniques for cyberbullying detection on Twitter. Future Internet, 12(11), 187.
Pan, J., Lei, T., Kim, K., Han, K. J., & Watanabe, S. (2022). SRU++: Pioneering fast recurrence with attention for speech recognition. In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). https://doi.org/10.1109/icassp43922.2022.9746187
Pitsilis, G. K., Ramampiaro, H., & Langseth, H. (2018). Detecting offensive language in tweets using deep learning. arXiv preprint arXiv:1801.04433
Sun, T., Shao, Y., Li, X., Liu, P., Yan, H., Qiu, X., & Huang, X. (2020). Learning sparse sharing architectures for multiple tasks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8936–8943. https://doi.org/10.1609/aaai.v34i05.6424
Valenzuela-García, N., Maldonado-Guzmán, D. J., García-Pérez, A., & Del-Real, C. (2023). Too lucky to be a victim? An exploratory study of online harassment and hate messages faced by social media influencers. European Journal on Criminal Policy and Research, 1–25.
Venkit, P. N., & Wilson, S. (2021). Identification of bias against people with disabilities in sentiment analysis and toxicity detection models. arXiv preprint arXiv:2111.13259
Wulczyn, E., Thain, N., & Dixon, L. (2017). Ex machina: Personal attacks seen at scale. Proceedings of the 26th International Conference on World Wide Web. https://doi.org/10.1145/3038912.3052591
Xiang, G., Fan, B., Wang, L., Hong, J. I., & Rosé, C. P. (2012). Detecting offensive tweets via topical feature discovery over a large scale twitter corpus. https://doi.org/10.1145/2396761.2398556
Yin, D., Xue, Z., Hong, L., Davison, B. D., & Edwards, L. (2009). Detection of harassment on Web 2.0. Proceedings of the Content Analysis in the Web 2.0 Workshop, 1–7.
Young, J. C., & Rusli, A. (2019). Review and visualization of Facebook’s FastText pretrained Word Vector model. https://doi.org/10.1109/icesi.2019.8863015
Zhu, J., Tian, Z., & Kübler, S. (2019). UM-IU@LING at SemEval-2019 Task 6: Identifying offensive tweets using BERT and SVMs. https://doi.org/10.18653/v1/s19-2138