Hello, I’m Priscylla Silva, a Ph.D. student in Explainable Artificial Intelligence (XAI) at the University of São Paulo (Brazil), advised by Prof. Luis Gustavo Nonato. My research focuses on developing tools and techniques that improve AI interpretability, allowing for more transparent and reliable decision-making in a variety of domains.
I spent a year as a visiting scholar at the Visualization and Data Analytics Research Center (VIDA) at New York University (NYU), where I worked under the guidance of Prof. Claudio Silva.
My research journey began with a strong foundation in Artificial Intelligence in Education during my Master’s degree at the Federal University of Campina Grande (Brazil). Under the mentorship of Prof. Joseana Macêdo Fechine and Prof. Evandro de Barros Costa, I developed an Intelligent Tutoring System with automatic feedback designed to support students in introductory computer science (CS1) courses.
In my undergraduate studies at the Federal University of Alagoas (Brazil), I collaborated as a developer on two intelligent tutoring systems focused on Mathematics and Propositional Logic. These experiences cultivated my passion for creating AI systems that not only process data but also actively contribute to human learning and understanding.
PhD in Computer Science and Computational Mathematics, current
University of São Paulo
MSc in Computer Science, 2018
Federal University of Campina Grande
BSc in Computer Science, 2014
Federal University of Alagoas
Machine learning and deep learning models are pivotal in educational contexts, particularly in predicting student success. Despite their widespread application, a significant gap persists in understanding the factors that influence these models' predictions, especially regarding explainability in education. This work addresses this gap by employing nine distinct explanation methods and conducting a comprehensive analysis of the relationship between the agreement among the explanations these methods generate and the predictive model's performance. Applying Spearman's correlation, our findings reveal a very strong correlation between the model's performance and the level of agreement observed among the explanation methods.
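As a rough illustration of the kind of analysis described above (not the paper's actual code or metrics), the sketch below computes Spearman's correlation between placeholder per-model agreement scores and placeholder performance scores; the top-k overlap agreement measure, the data, and all names here are assumptions for the demo.

```python
# Illustrative sketch only: placeholder data, not the paper's implementation.
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr


def mean_pairwise_agreement(explanations: np.ndarray, k: int = 5) -> float:
    """Average top-k feature overlap across all pairs of explanation methods.

    `explanations` has shape (n_methods, n_features): one feature-importance
    vector per explanation method for the same prediction.
    """
    top_sets = [set(np.argsort(-np.abs(e))[:k]) for e in explanations]
    return float(np.mean([len(a & b) / k for a, b in combinations(top_sets, 2)]))


# Placeholder inputs: one agreement score and one accuracy per trained model.
rng = np.random.default_rng(42)
model_accuracies = rng.uniform(0.6, 0.95, size=10)       # hypothetical performances
agreement_scores = [
    mean_pairwise_agreement(rng.normal(size=(9, 20)))     # 9 methods, 20 features
    for _ in model_accuracies
]

# Rank correlation between explanation agreement and model performance.
rho, p_value = spearmanr(agreement_scores, model_accuracies)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```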
This paper describes an approach to help students using a Programming Tutoring System by providing feedback during coding problem-solving activities. The system gives feedback while the student is coding, helping them fix mistakes and take the next steps toward completing the solution, so the student does not need to finish and submit a solution before receiving feedback. The approach uses three feedback resources: videos, text hints, and flowcharts. We conducted an experiment involving 34 students from an introductory programming course. Preliminary results indicated a positive impact on students' learning and also suggested that we can provide valuable feedback to students who have difficulty completing a solution.