Women’s Aid Ends Use of X

Women’s Aid, a national organisation working in Ireland to prevent and address the impact of domestic violence and abuse, including coercive control, will no longer maintain a presence on the platform X from 8th January 2026.

The organisation has watched the increased levels of unchecked hate, misogyny, racism and anti-LGBTI+ content on the platform with growing unease and concern. The current scandal, in which X’s own AI, Grok, has been used to create and share AI deepfakes, non-consensual intimate imagery and child sexual abuse material, in breach of the platform’s own guidelines and regulations, is a tipping point.

This online violence against women and children – especially girls – often has devastating real-life impacts, and we no longer view it as appropriate to use such a platform to share our work.

This has not been an easy decision. Women’s Aid was an early user of social media and has been on Twitter/X since 2009. We have engaged with and informed our supporters about the prevalence and impact of domestic abuse, promoted our frontline support services to those affected, and pushed for positive social change.

We firmly believe that social media platforms have a crucial role to play in a healthy society, providing town-hall spaces for thoughtful, respectful, constructive and positive dialogue. By leaving, we acknowledge that we are ceding the stage to the malign actors and bots who will continue to overrun the space, creating and spreading disinformation and other harmful content with effective impunity. However, as an organisation working to end violence against women and children, we have weighed the costs of our continued engagement in this space against any benefits, and we find we can no longer tolerate this situation.

While we have reduced leverage on this platform, we call on Governments and Regulators in Ireland and at EU level to act swiftly and decisively to create effective accountability, legislation and regulation, ensuring companies have guardrails that protect truth and prevent harm, so that in future any user can use X, or any other online platform, safely.

ISPCC announces global project to prevent online child sexual exploitation and abuse

The project, spearheaded by Greek non-profit child welfare organisation The Smile of the Child, will be co-created by children and young people to ensure their voices are heard.

ISPCC is honoured to announce its participation in a worldwide project designed to transform how we prevent and respond to online child sexual exploitation and abuse.

Safe Online, a global fund dedicated to eradicating online child sexual exploitation and abuse, is funding the project called “Sandboxing and Standardizing Child Online Redress”.

The COR Sandbox project will establish a first-of-its-kind mechanism to advance child online safety through collaboration across sectors, borders and generations.

The project is led by The Smile of the Child, Greece’s premier child welfare organisation, and ISPCC is a partner alongside The Young and Resilient Research Centre at Western Sydney University, Child Helpline International and the Centre for Digital Policy at University College Dublin.

Sandboxes bring together industry, regulators and customers in a safe space to test innovative products and services without incurring regulatory sanctions; to date they have mainly been used in the finance sector to trial new services. The EU is increasingly encouraging the use of sandboxes in the fields of high technology and artificial intelligence.

Through the participation of youth, platforms, regulators and online safety experts, this first regulatory sandbox for child digital wellbeing will provide consistent, systemic care and redress for children harmed online, based on their rights under the United Nations Convention on the Rights of the Child (UNCRC).

Getting reporting and redress right means that we can keep track of harms and identify systemic risk. Co-designing the reporting and redress process with young people as equitable participants can help us understand what they expect from the reporting process and what remedies are fair for them, putting Article 12 of the UNCRC into action.

The project also benefits from the guidance of renowned digital safety experts, including Project Lead and Scientific Coordinator Ioanna Noula, PhD, an international expert on tech policy and children’s rights; pioneering online safety and youth rights advocate Anne Collier; youth rights and participation expert Amanda Third, PhD, of the Young and Resilient Research Centre; international innovation management consultant Nicky Hickman; IT innovation and startup founder Jez Goldstone; and leading child online wellbeing scholar Tijana Milosevic, PhD.

ISPCC Head of Policy and Public Affairs Fiona Jennings said: “This project is a wonderful example of what we can achieve when we collaborate and listen to children and young people. Having robust online reporting mechanisms in place is a key policy objective for ISPCC and this project will go a long way towards making the online world safer for children and young people to participate in.”

Project lead Ioanna Noula said: “ISPCC’s contribution to a project which seeks to build coherence around the issue of online redress will be a catalyst for real and substantial change in the area of online reporting. Helplines play a key role in flagging illegal and/or harmful content. As the experts in listening and responding to children, ISPCC can provide insight from an Irish context to help spearhead the implementation of the Digital Services Act and support the wellbeing of children online.”

The Role of AI in Identifying and Preventing Nursing Home Abuse: A New Frontier

Nursing home abuse is a pervasive issue that impacts the most vulnerable members of our society. According to recent studies, approximately one in six older adults experience some form of abuse in care settings. This alarming statistic highlights the need for innovative solutions to protect residents and ensure their safety and well-being. Enter artificial intelligence (AI), a technology that has the potential to revolutionize how we identify and prevent nursing home abuse. By harnessing the power of AI, we can create a safer environment for elderly residents, empowering caregivers and families alike.

AI technologies have emerged as essential tools in various sectors, including healthcare. These intelligent systems can analyze vast amounts of data, identify patterns, and even predict potential risks. In the context of nursing homes, AI can be applied to monitor interactions between staff and residents, detect signs of abuse, and facilitate timely interventions. As we delve deeper into the role of AI in addressing nursing home abuse, it becomes clear that we are entering a new frontier in elder care—one that prioritizes safety, transparency, and accountability.

Understanding Nursing Home Abuse

According to the Law Office of Michael D. Waks, nursing home abuse manifests in several forms, including physical, emotional and financial abuse, as well as neglect. Physical abuse involves inflicting harm or pain on a resident, while emotional abuse encompasses verbal assaults, threats, or humiliation. Financial abuse refers to the illegal or improper use of a resident’s funds, and neglect occurs when caregivers fail to meet the basic needs of residents. Recognizing these types of abuse is essential for developing effective prevention and intervention strategies.

The impact of nursing home abuse extends beyond the individual victim. Families endure emotional distress and often face significant financial burdens as they seek justice for their loved ones. In many cases, abuse goes unreported due to fear or lack of awareness. By leveraging AI technologies, we can enhance the detection of abuse and provide the necessary support to those affected, ultimately transforming the landscape of elder care.

The Emergence of AI in Healthcare

AI has already begun to reshape the healthcare industry, offering innovative solutions for improving patient outcomes and streamlining processes. Machine learning, natural language processing, and predictive analytics are just a few AI applications that have gained traction in recent years. These technologies enable healthcare providers to analyze patient data, predict potential health issues, and enhance decision-making.

In elder care, AI’s potential is particularly promising. For instance, AI can monitor residents’ health data in real-time, providing caregivers with valuable insights that inform care decisions. Furthermore, AI systems can streamline administrative tasks, allowing staff to focus more on resident care and less on paperwork. As the technology continues to advance, the integration of AI into nursing homes will become increasingly vital for ensuring the safety and well-being of residents.

How AI Can Identify Nursing Home Abuse

AI technologies play a crucial role in identifying nursing home abuse by monitoring interactions and analyzing data. For example, video surveillance systems equipped with AI can detect unusual behavior patterns, such as a caregiver exhibiting aggressive behavior towards a resident. These systems can alert management and initiate investigations, allowing for timely interventions that can prevent further abuse.

Moreover, AI can analyze communication patterns between residents and staff. By employing sentiment analysis, AI systems can assess the emotional tone of conversations, identifying potential signs of distress or fear among residents. This proactive approach enables nursing homes to address concerns before they escalate, fostering a culture of safety and trust.
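To make the idea concrete, the sketch below shows one simple way such sentiment screening could work. It is a minimal illustration, not any vendor’s actual system: it assumes conversation transcripts are already available as plain text and uses NLTK’s off-the-shelf VADER scorer, with a hypothetical threshold and review step.

```python
# Minimal illustrative sketch: lexicon-based sentiment screening of
# resident-staff conversation transcripts. The threshold and review step
# are hypothetical, not a production design.
from nltk.sentiment import SentimentIntensityAnalyzer  # pip install nltk
# nltk.download('vader_lexicon') is required once before first use.

analyzer = SentimentIntensityAnalyzer()

def flag_distress(transcripts, threshold=-0.5):
    """Return transcript snippets whose overall tone is strongly negative."""
    flagged = []
    for snippet in transcripts:
        scores = analyzer.polarity_scores(snippet)
        # 'compound' ranges from -1 (very negative) to +1 (very positive).
        if scores["compound"] <= threshold:
            flagged.append({"text": snippet, "score": scores["compound"]})
    return flagged

if __name__ == "__main__":
    sample = [
        "Please stop, you're hurting me, I want to call my daughter.",
        "Thanks for helping me out to the garden this morning.",
    ]
    for item in flag_distress(sample):
        print(f"review needed ({item['score']:+.2f}): {item['text']}")
```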

AI’s Preventative Measures Against Abuse

In addition to identifying abuse, AI can implement preventative measures that enhance the safety of nursing home residents. For instance, AI-driven alert systems can notify caregivers when a resident exhibits signs of distress or if a caregiver’s behavior deviates from established norms. These alerts promote accountability and ensure that staff members remain vigilant in their interactions with residents.
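As a rough illustration of what “deviating from established norms” might look like in practice, the sketch below fits an unsupervised anomaly detector to hypothetical per-shift interaction metrics and flags an unusual shift for human review. The features, values and parameters are invented for the example; any real alert system would need far richer data and careful human oversight of every alert.

```python
# Illustrative sketch only: anomaly detection over hypothetical per-shift
# caregiver interaction metrics, not a real monitoring product.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per caregiver shift:
# [visits logged, average visit length (min), call-bell response time (min)]
baseline_shifts = np.array([
    [12, 9.5, 4.0], [11, 10.2, 3.5], [13, 8.8, 5.0],
    [12, 9.0, 4.5], [10, 11.0, 4.2], [12, 9.7, 3.8],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline_shifts)

new_shift = np.array([[3, 2.0, 25.0]])  # far fewer visits, much slower responses
if detector.predict(new_shift)[0] == -1:
    print("Shift flagged for supervisor review")
```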

Furthermore, AI can be instrumental in training and educating nursing home staff. By analyzing data on best practices and common pitfalls, AI systems can provide tailored training programs that improve caregiving techniques. Empowering staff with the knowledge and tools to deliver compassionate care not only reduces the risk of abuse but also enhances the overall quality of life for residents.

Ethical Considerations and Challenges

While the integration of AI in nursing homes offers significant benefits, it also raises ethical concerns that must be addressed. Privacy is a paramount consideration, as residents have the right to feel secure in their living environment. Implementing AI technologies requires careful planning to ensure that residents’ privacy is protected while still allowing for effective monitoring.

Additionally, the cost of implementing AI solutions can be a barrier for many nursing homes, particularly those with limited budgets. To overcome this challenge, collaboration among stakeholders—such as government agencies, technology providers, and care facilities—will be essential. By fostering partnerships and investing in AI technologies, we can create a more sustainable model for enhancing elder care while addressing the pressing issue of nursing home abuse.

Conclusion

The role of AI in identifying and preventing nursing home abuse represents a new frontier in elder care, one that prioritizes safety, transparency, and accountability. By leveraging AI technologies, nursing homes can enhance their ability to detect and address abuse, ultimately fostering a culture of care and respect for residents. As we move forward, it is crucial for stakeholders to embrace these innovations while addressing the ethical considerations and challenges that accompany them. Together, we can create a safer environment for our elderly population, ensuring that they receive the compassionate care they deserve. Through the integration of AI, we can pave the way for a future where nursing home abuse becomes a relic of the past, replaced by a commitment to dignity and respect for all residents.


Instagram launches new tools to stop abuse. #Instagram #SocialMedia

Instagram has announced a new way to protect people from seeing abusive DMs, as well as the ability to prevent someone you’ve blocked from contacting you from a new account. Social media platforms have been under significant pressure to tackle such abuse, and much more still needs to be done to curb this kind of behaviour, but this is a step forward; let’s see what the future holds. See the press release below.

A new feature to filter abusive messages

We understand the impact that abusive content – whether it’s racist, sexist, homophobic, or any other kind of abuse – can have on people. Nobody should have to experience that on Instagram. But combatting abuse is a complex challenge and there isn’t one single step we can take to eliminate it completely. For example, we know that many in our community, particularly people with larger followings, have faced abuse in their DM request inbox from people they don’t follow.

Because DMs are private conversations, we don’t proactively look for hate speech or bullying the same way we do elsewhere on Instagram. That’s why we’re introducing a new tool which, when turned on, will automatically filter DM requests containing offensive words, phrases and emojis, so you never have to see them. This tool focuses on DM requests, because this is where people usually receive abusive messages – unlike your regular DM inbox, where you receive messages from friends. It will work in a similar way to the comment filters we already offer, which allow you to hide offensive comments and choose what terms you don’t want people to use in comments under your posts. You can turn both comment and DM request filters on and off in a new dedicated section of your Privacy Settings called Hidden Words.

We’ve worked with leading anti-discrimination and anti-bullying organisations to develop a predefined list of offensive terms that will be filtered from DM requests when the feature is turned on. We know different words can be hurtful to different people, so you’ll also have the option to create your own custom list of words, phrases or emojis that you don’t want to see in your DM requests. All DM requests that contain these offensive words, phrases, or emojis – whether from your custom list or the predefined list – will be automatically filtered into a separate hidden requests folder. If you choose to open the folder, the message text will be covered so you’re not confronted with offensive language, unless you tap to uncover it. You then have the option to accept the message request, delete it, or report it.
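Instagram has not published the internals of Hidden Words, but the routing behaviour it describes can be illustrated with a simple sketch: a combined predefined-plus-custom term list is checked against each DM request, and matches are diverted to a hidden requests folder rather than the main inbox. The terms, matching rule and data structures below are purely illustrative assumptions, not Instagram’s code.

```python
# Simplified sketch of Hidden-Words-style routing of DM requests.
# The lists, matching rule and folder structures here are illustrative only.
PREDEFINED_TERMS = {"offensive_word"}      # stand-in for the curated list
custom_terms = {"example_slur", "🤬"}       # user-defined words, phrases, emojis

def route_dm_request(message: str, inbox: list, hidden: list) -> None:
    """Place a DM request in the normal inbox or the hidden requests folder."""
    lowered = message.lower()
    blocked = PREDEFINED_TERMS | custom_terms
    if any(term in lowered for term in blocked):
        # The message text would stay covered until the user taps to reveal it.
        hidden.append(message)
    else:
        inbox.append(message)

inbox, hidden = [], []
route_dm_request("hey, loved your last post!", inbox, hidden)
route_dm_request("you are an example_slur", inbox, hidden)
print(len(inbox), "visible,", len(hidden), "hidden")
```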

This new feature is designed to help protect you from potentially offensive or abusive DM requests, while also respecting your privacy. All message filtering will take place on your own device, which means this feature won’t send any message content back to our servers. Using this feature doesn’t share the content of your DM requests with us, unless you report them.

We’ll start rolling out this feature in Ireland and a number of other countries in the coming weeks and will look to expand to more countries over the next few months.

A new way to protect you from unwanted contact 

We’re also making it harder for someone who you’ve already blocked from contacting you again through a new account. With this feature, whenever you decide to block someone on Instagram, you’ll have the option to both block their account and pre-emptively block new accounts that person may create. This will be available globally in the next few weeks.

This is in addition to our harassment policies, which already prohibit people from repeatedly contacting someone who doesn’t want to hear from them. We also don’t allow recidivism, which means if someone’s account is disabled for breaking our rules, we would remove any new accounts they create whenever we become aware of it.

Continuing our work to combat offensive comments

As well as using our proactive detection technology to help catch violating comments, we offer a number of tools to help you control abuse in your comments. If you have a public account, you have the option to only allow comments from people you follow and/or people who follow you.

We’re also starting to hide common misspellings of offensive words from your manual comment filter list, so that even if a word you don’t want to see is accidentally or deliberately spelled wrong, you still won’t see it in your comments.
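Instagram doesn’t say how it matches misspellings, but one common approach is to normalise a comment word, for example by mapping look-alike characters and collapsing repeated letters, before comparing it against the filter list. The sketch below illustrates that general idea with an invented substitution map; it is not Instagram’s actual method.

```python
# Illustrative sketch of matching common misspellings against a filter list:
# normalise look-alike characters and repeated letters, then compare.
import re

LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"})

def normalise(word: str) -> str:
    word = word.lower().translate(LEET_MAP)
    return re.sub(r"(.)\1+", r"\1", word)   # collapse repeated letters

def matches_filter(comment_word: str, filtered_words: set) -> bool:
    return normalise(comment_word) in {normalise(w) for w in filtered_words}

print(matches_filter("id10t", {"idiot"}))     # True
print(matches_filter("loooser", {"loser"}))   # True
print(matches_filter("lovely", {"idiot"}))    # False
```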

We know there’s still more we can do, and we’re committed to continuing our fight against bullying and online abuse. We’ll keep working in partnership with experts, industry organizations, teens, creators, and public figures to understand their experience on Instagram and how we can evolve our policies and products to protect them from online abuse.