Ethical challenges and solutions in artificial intelligence: Reflections from Members' Week


Last month, CoramBAAF ran a special Members' Week session on the ethical challenges and practical solutions surrounding artificial intelligence (AI) in social work. The session brought together research insights, practice perspectives from sector experts, and reflections from our own CoramBAAF adoption consultant, Jane Poore. One fascinating aspect of the session was the feedback from members who attended, which revealed a wide range of experiences and levels of confidence in using AI in practice.

At the session we heard what artificial intelligence is and what it is not. AI excels at sorting, summarising, organising and analysing large volumes of information at speed. However, AI cannot replace professional judgement, is not knowledgeable in its own right, and should not be a decision maker. Instead, AI should be understood as a powerful tool that can support practice rather than direct it. We were reminded of the core responsibilities of social work (assessment, planning, intervention, review and evaluation), of the central importance of relationship-based practice, and that core practice principles should not be lost as we take on new technologies. Risks were identified too, including bias in AI systems, over-reliance on technology, and loss of the human voice in social work writing. I went back to the session's chat log to consider some of the themes that came up for the audience. These are explored below.

1. Safety, security and ethics

A strong theme that emerged was the need for a clearer understanding of how to use AI safely and securely in social work. Many in our audience expressed uncertainty about which tools are safe. Questions came up about what the green shield in Microsoft Copilot means, and how systems designed for an agency differ from those that are publicly available. People wanted to understand the data protection issues and where data is stored and retained, but practitioners cannot always know this. Such understanding is essential in social care, given the large amount of personal information that is recorded. There is a clear need for sector-wide guidance and consistency.

A question emerged about independent social workers and how they can access the same secure IT systems and platforms as employees of an agency. There were also questions about the ethical obligation to be transparent with service users when AI is used in relation to them. Consent, particularly around recording and summarising meetings, was highlighted as a key issue.

2. Critical thinking and training

The importance of maintaining human judgement and critical reflection was repeatedly emphasised. Using AI in practice risks over-reliance on tools. An added risk is the high level of affirmation these tools can offer: many people recognise how overly positive AI tools can be, for example praising a "great idea" when it may not be one. This affirmation could lull practitioners into a false sense of security based on feedback from an AI tool.

The value of other human perspectives, for example through group supervision, was highlighted, as was the vital importance of humans remaining the key decision makers in what is written about people. Some attendees raised concerns about how AI may affect new social workers' development as professionals; some felt early-career practitioners should avoid AI until core skills are formed, to protect the development of analytical thinking in practice.

3. Benefits

Despite concerns, participants shared positive experiences of AI improving efficiency and supporting good practice. One example described AI tools supporting neurodivergent staff, a benefit also noted in other studies and surveys. Some shared that the use of AI in their agency had enhanced the quality of assessment and supervision, and that when used safely and skilfully, AI was adding genuine value.

4. Holding on to relationship-based practice

Participants stressed that AI must enable rather than get in the way of relationship-based practice. In the quest for greater efficiency, it is important that the time saved serves the people social care is there to support. We need AI to support more relationship-based practice, not to become a precursor to doing more with less. Human connection and relationships remain vital, and we need to keep them at the heart of practice.

Reflection

The session highlighted both the opportunities and the ethical challenges AI presents. As new technology and its use accelerate, social work must develop policies, training and procedures that support safe, human-centred practice for the benefit of children and families.

CoramBAAF will be hosting an event on 19 May on Exploring Practice: Using Agentic AI to Support Social Work Practice in Fostering. Please join us to find out how AI is being used to support fostering practice in a local authority. Sign up by clicking the link.

Using AI ethically, safely and effectively in support of children and families is at the heart of our work. We will be developing a community of practice around AI in fostering, adoption and kinship care. Please come along to the event above to find out more.