THE FUTURE IS HERE

Artificial intelligence and Child Protection?

We live in an era where artificial intelligence (AI) is the talk of the world, with endless questions about how AI can support and/or potentially disrupt the human race. I remember when I was in high school, my dad said: “Don’t worry son, working in fields that deal closely with humans and the human mind won’t cost you your job.” This is one of the reasons I continued my tertiary education in psychology and am now working in child protection. Partly, at least, because it could keep me in a job.

However, recent reading around AI and machine learning has made me worried. Generally, AI could improve our current structure of service delivery by streamlining it into algorithms, speeding up processes and minimizing human error. I have heard about the benefits of implementing AI in medicine, how it could reduce human error and how consumers could benefit from a greater database of information from across the globe (I would refer you to the series of books by Yuval Noah Harari, which was a great inspiration to me).

So how could AI affect child protection?

In my perfect little world, everyone follows the rules set by society and the court, and AI would not be needed in this industry. However, I’m pretty sure that won’t happen. If I were to imagine a world with AI in child protection, I would hope that AI would provide the initial screening of problematic families. Families flagged by the system that show no improvement over the next, let’s say, three months would have their children enter foster care. Going even further, people who have had children removed from them in the past would not be allowed to have any more children in the future. Obviously this is just a dream.

In my opinion, it is still very early days to determine the role of AI in child protection. I won’t comment on how this will affect other areas of community services, as I am no expert. I don’t think AI will affect my job in the next five years, and to be honest, given the amount of sensitive data government agencies hold and the natural complexity of the human mind, AI inclusion might not happen so radically. However, there has been talk about how algorithms can predict children at risk of abuse and/or neglect. Studies have trialled risk assessments generated by AI and then screened by social workers, which appeared to save workers some time — something like the sketch below.
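
To make this concrete, here is a minimal, hypothetical sketch of what such a screening step might look like. The case features, weights and threshold are all my own assumptions, not any agency’s actual model; the point is only that the algorithm prioritises referrals while the final assessment stays with a social worker.

```python
# Minimal, hypothetical triage sketch: invented features, weights and
# threshold, not any real child-protection model.
from dataclasses import dataclass

@dataclass
class Referral:
    family_id: str
    prior_reports: int          # previous notifications about the family
    prior_removals: int         # previous removals of children from the home
    parent_substance_use: bool
    domestic_violence_flag: bool

def risk_score(r: Referral) -> float:
    """Toy weighted score standing in for a trained model's output (0..1)."""
    return (
        0.15 * min(r.prior_reports, 5) / 5
        + 0.35 * min(r.prior_removals, 2) / 2
        + 0.25 * r.parent_substance_use
        + 0.25 * r.domestic_violence_flag
    )

def triage(referrals: list[Referral], threshold: float = 0.5) -> list[Referral]:
    """Flag high-scoring referrals for a social worker to review.
    The algorithm only prioritises; the human still makes the assessment."""
    return [r for r in referrals if risk_score(r) >= threshold]

if __name__ == "__main__":
    queue = [
        Referral("A", prior_reports=4, prior_removals=1,
                 parent_substance_use=True, domestic_violence_flag=False),
        Referral("B", prior_reports=0, prior_removals=0,
                 parent_substance_use=False, domestic_violence_flag=False),
    ]
    for r in triage(queue):
        print(f"Refer family {r.family_id} to a social worker for assessment")
```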

There are currently two areas of concern with the involvement of AI in child protection.

The first is the debate on ethics. To what extent can we expose families’ private, sensitive information to a computer? With the current lack of policies and procedures around data regulation, how would a data breach affect the everyday life of these vulnerable families? Even if we had answers to these questions, how would this play out across different states? Do we need to walk towards utopia to achieve this?

The next area of debate is the issue of accuracy in risk assessment. Our job as child protection workers is essentially to assess risk based on historical data and the current situation. The data we receive is mostly qualitative, and it is subject to human bias and error. How can AI prevent these errors from happening? Furthermore, how do we react to false positives?
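
As a back-of-the-envelope illustration (with invented numbers, not real statistics), even a screening model that looks accurate will flag many families incorrectly when the base rate of substantiated harm is low:

```python
# Back-of-the-envelope sketch with invented numbers, not real statistics.
def screening_outcomes(population: int, base_rate: float,
                       sensitivity: float, specificity: float) -> dict:
    truly_at_risk = population * base_rate
    not_at_risk = population - truly_at_risk
    true_positives = truly_at_risk * sensitivity
    false_positives = not_at_risk * (1 - specificity)
    flagged = true_positives + false_positives
    return {
        "flagged": round(flagged),
        "false_positives": round(false_positives),
        "precision": round(true_positives / flagged, 2) if flagged else 0.0,
    }

# Hypothetical example: 10,000 referrals, 5% truly at risk, a model that
# catches 80% of them and correctly clears 90% of the rest.
print(screening_outcomes(10_000, base_rate=0.05, sensitivity=0.80, specificity=0.90))
# -> about 1,350 families flagged, of which roughly 950 are false positives
#    (precision of around 0.30). Each false positive is a family investigated
#    unnecessarily, which is exactly the reaction we have to plan for.
```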

We talk about how AI can be included, but practically, how can we include AI in child protection work? By installing cameras all around our lives for surveillance purposes? What about schools, hospitals and public transport? There are many questions we have yet to address.

Beyond these two debates, I would also challenge the definition of “safety.” Before working in child protection, my definition of child safety might have been a home completely free from alcohol and other drugs, 100% school attendance, and parents prioritizing the children’s needs above all else. However, this is not realistic in the families we work with. Some families are subject to inter-generational abuse and/or neglect, which means the parents are themselves victims. We are trained to assess risk and also to “sit with risk,” meaning I have learnt to understand that not attending school 100% of the time is actually okay, that a gradual re-entry to school is acceptable, and that reducing cannabis use from daily to monthly is a great achievement. To be honest, this doesn’t always sit well with me, but my definition of “safety” has definitely changed. So if we are not able to agree on a single definition of safety, what are we assessing risk against? And if we lack a definition, what algorithm do we feed into the AI system?

Consider this: research suggests that we begin to form our attachment to our mothers while in utero. How does AI take the mother-baby relationship into account? Other research has shown that this bond is particularly difficult to break, which means that even when a child is removed from a toxic, abusive family environment, they tend to gravitate back to their birth family. What would AI do when children want to return to a family full of abuse and/or neglect? This then becomes a human rights issue.

In conclusion, I believe there are many questions that need to be addressed before we can slowly include AI in the child protection system.
