30% of office workers have tried to gain unauthorised access to a colleague’s device

IT.ie, a leading Irish IT managed services company, today announces new survey results which reveal the prevalence of insider threats within Irish businesses, with 30% of office workers admitting that they have tried to gain access to a colleague’s device without their permission in the last 12 months.

The research also found that 35% of employees have actively sought out private information about a colleague, client, or customer – such as their age, salary, CV or home address – from a work computer system.

The research of 1,000 office workers based in Ireland was carried out by Censuswide on behalf of IT.ie and SonicWall, a global leader in cybersecurity innovation. It forms part of IT.ie and SonicWall’s ‘Cyberpulse Ireland 2025’ report, which explores the cyberthreat landscape in Ireland based on the sentiment and behaviours of employees.

Alongside the above high-risk behaviours, the survey also showed that a cohort of office workers may see themselves as internet sleuths, possibly influenced by viral online videos featuring content creators who try to outsmart online scammers. Almost two in five (38%) have engaged with a scammer – intentionally or unintentionally – at work in the last 12 months, while one in four have sent an angry response to someone they presumed to be a scammer. Furthermore, 17% say they have either duped, or attempted to dupe, an online scammer.

Concerningly, 17% have engaged with a scammer to try to convince them not to target the business – an approach that still carries significant risk.

Eamon Gallagher, Founder and Managing Director, IT.ie, said: “The report highlights some troubling behaviours among office workers that have the potential to expose businesses to considerable risk. While some employees, particularly from the younger cohort, attempt to challenge, outsmart, or reason with scammers, cybercriminals will often capitalise on their efforts, using manipulation tactics that leave the organisation more vulnerable to compromise.

“This behaviour is similar to that of employees who attempt to gain access to a colleague’s device or seek out their personal information. While such actions may not always stem from malicious intent, they represent a breakdown in boundaries and data responsibility.

“While these findings may raise concerns, they ultimately represent a valuable opportunity for businesses. By investing in employee training and onboarding, strengthening internal policies, implementing more robust access controls, and creating better cybersecurity awareness, organisations can address these vulnerabilities head-on. At the end of the day, a company’s cybersecurity is only as strong as its least prepared employee.”

Stuart Taylor, Senior Director of Regional Sales for Northern Europe, SonicWall, said: “These findings are a stark reminder that the greatest risks often come from within the organisation – not intentionally, but through curiosity, complacency, or even misplaced confidence.

“Every click and every login matters. That’s why businesses need layered security that combines zero-trust principles, strong access controls, and continuous user education. Technology alone isn’t enough; it has to work hand-in-hand with clear policies and a culture of accountability. When employees understand the ‘why’ behind security and organisations enforce the ‘how’ through smart solutions, you close the gaps that attackers are so quick to exploit.”

AI Expert Warns: 5 Things You Should Never Let Artificial Intelligence Do

Artificial intelligence has quickly become a huge part of our daily lives, making things easier across all kinds of industries. From helping us draft emails to suggesting the next thing we should buy, AI is showing up everywhere and transforming how we get things done. Joe Davies, a tech AI expert at fatjoe, tells us more.
“AI can process data faster than we ever could, but when it comes to understanding the complexities of human emotion or making ethical decisions, there’s no replacement for the human brain. Technology enhances us, but it shouldn’t replace us.”
So, while AI is a great tool, it’s not a replacement for the human touch. Here are 5 tasks you should never let AI do:
1. Writing Your Resume or CV
Many turn to AI to help draft resumes or generate professional content. While these tools can suggest catchy phrases and templates, they miss the essential personal touch.
Why You Shouldn’t:
A resume isn’t just a list of jobs; it’s a reflection of your career journey, your growth, and what makes you stand out. AI struggles to incorporate the subtleties that differentiate you, such as a memorable story or a lesson learned on the job. A resume written by AI lacks the authenticity and personalisation that make you a unique candidate.
“AI can help structure your resume, but a true reflection of your journey, accomplishments, and personality can only be done through human personalization.”
2. Making Big Life Decisions (like moving or ending a relationship)
AI can give pros and cons or analyze options, but it can’t factor in your emotional state, intuition, or the things left unsaid.
Why You Shouldn’t: 
Life-changing choices aren’t just logical—they’re deeply emotional and nuanced. Only you can weigh what feels right based on your values, past experiences, and gut instinct.
“Some decisions require heart, not just data. That’s something AI will never truly understand.”
3. Making Critical Financial Decisions
AI tools can provide useful data analysis, but they can’t factor in the personal considerations that shape your financial situation.
Why You Shouldn’t: 
Whether it’s investing or saving, financial decisions are deeply personal. Your risk tolerance, long-term goals, and life circumstances should all influence the choices you make. While AI can process numbers efficiently, it lacks the ability to weigh these personal factors.
“AI may suggest investment strategies, but it can’t account for the personal side of financial decision-making. Only humans can truly align financial decisions with life goals.”
4. Medical Diagnoses
Although AI is increasingly used in healthcare to assist with diagnostics, it should never replace the judgment of a qualified medical professional.
Why You Shouldn’t:
Medical diagnoses require not only technical knowledge but also an understanding of patient history, symptoms, and personal circumstances. AI can provide support by analyzing medical data, but it cannot offer the holistic perspective a doctor or healthcare provider brings.
“AI can assist in diagnosing, but it should always be used as a complement, not a replacement, for medical professionals. Only a human can take into account the full picture of a patient’s health.”
5. Making Ethical and Moral Decisions
AI can help analyze data, but it falls short when it comes to making ethical choices.
Why You Shouldn’t:
Ethical and moral decisions often involve understanding societal impacts and complex human emotions. AI operates on logic, not on the ability to empathise or consider the broader human experience. While AI can offer data-driven insights, the final decision often requires human judgment and values.
“AI is a tool to assist in decision-making, but it shouldn’t be the sole decision-maker when it comes to ethics. Human judgment, empathy, and moral reasoning will always be necessary in these areas.”
AI is undeniably powerful and is reshaping industries at a rapid pace. However, the human element remains irreplaceable in many areas. By understanding where AI excels and where it falters, we can leverage its strengths while still preserving the critical role of human creativity, empathy, and decision-making.
“AI is a remarkable tool for efficiency, but it is crucial that we strike a balance and recognise where AI should assist, not replace, the human element.”