Irish employees thrive with AI, while employers fall behind

Surveying 1,000 people in Ireland, the Deloitte Digital Consumer Trends report shows that over two thirds (67%) of GenAI users say it boosts their productivity at work, but less than one in four (24%) say their employer actively encourages use of the technology.

The research shows that 90% of Irish companies lack a GenAI policy and that while GenAI users are more likely to use the technology for personal reasons (69%), the percentage using it for work tasks is up from 32% in 2023 to 36%.

A total of 48% of respondents have used GenAI, an increase from 33% in 2023. Meanwhile, the percentage of those who are not aware of GenAI is down from 38% in 2023 to 27%.

Of those using GenAI, 10% are using it daily, 28% are doing so weekly and 15% are using it monthly. A total of 46% are using it less than monthly, with 24% of this cohort saying they don’t know how to use it well and 18% saying they are dissatisfied with the answers they receive.

Use of GenAI is highest among younger people at 85% for those aged 18 to 24, followed by 69% for the 25 to 34 age group and 56% for those aged 35 to 44. Usage then drops significantly to 34% for those aged 45 to 54, 22% for those between 55 and 64 and 20% for those aged 65 to 75.

Most people use GenAI for personal reasons (69%), ahead of educational purposes (38%) and professional or work reasons (36%).

Of the 67% of users who believe GenAI makes them more productive at work, 44% say they use the technology for writing and editing emails and for looking up information. A total of 42% use it to generate ideas, followed by creating written content (38%), summarising texts and reports (35%), editing (26%), analysing data (25%) and generating images (20%).

When GenAI users were asked if their employer encouraged them to use the technology at work, just 8% strongly agreed with the statement and 16% agreed.

The survey showed that uncertainty around GenAI and its impact on future workforces continues to be a concern with 60% of users worried that it will reduce the number of jobs available in the future and 46% concerned that it will replace some of their role in the workforce.

While they are concerned about the potential impact of GenAI on their future, a significant proportion of users trust the technology. A total of 28% of users said GenAI responses were unbiased, and 34% agreed that the technology “always produces accurate responses”. This is despite well-documented issues with the reliability of the technology.

The survey also showed that a majority of those who are aware of GenAI would be less inclined to trust AI-generated emails (66%) and AI-delivered customer services (63%).

Meanwhile, ChatGPT remains the most popular GenAI tool among people in Ireland having been used by 49% of GenAI users. This is far ahead of similar products such as Snapchat’s ‘My AI’ (15%), Microsoft Copilot (13%) and Google Gemini (12%). The survey took place prior to the release of DeepSeek’s latest AI model.

Emmanuel Adeleke, Deloitte Ireland’s GenAI Leader, said: “Employees in Ireland are racing ahead of their employers when it comes to GenAI. This means gains are being left on the table by employers and innovation is being stymied. We’re seeing the wide range of benefits GenAI creates for our clients at Deloitte, such as improved efficiency and productivity, but our survey shows that the vast majority of organisations do not have GenAI policies in place and they are not actively promoting its use or leading on its adoption even though their employees are increasingly using it to complete everyday tasks.

“It is vital employers take the lead on the use of GenAI. They need to invest in initiatives and organisational changes that will drive adoption of GenAI tools and identify successful use cases for their organisations. 

“There is a risk in not reacting to the increase in usage, particularly because users are not fully appreciative of the dangers involved as indicated by the level of trust certain users have in GenAI tools, despite well-documented reliability issues. If employers invest properly in GenAI and integrate it correctly, they will uncover the challenges involved and the tremendous potential of this technology.”

He added: “Our survey found that some users are willing to experiment with GenAI, but they are lacking confidence when it comes to knowing how to use it and ultimately find the experience to be unsatisfactory. Organisations can address this through training and support, ensuring employees can use GenAI to meet their needs and transition into more frequent and more confident users. Employers should also consider a tailored approach for GenAI in the workplace that can address the differences in usage among age groups. They can enhance workplace AI tools to boost professional usage, and address age disparities by ensuring that resources and training are accessible to all and building a comprehensive change management strategy to increase the adoption and impact of GenAI tools.”   

Cybersecurity experts show biggest scam threats for 2025

Smarter, faster, and more sophisticated scams are coming. Thanks to AI, scammers are more efficient than ever, stealing money at record rates. Every day, AI tools such as OpenAI’s ChatGPT are used as part of scammers’ arsenal, leading around 13 million people in the UK to lose a combined £1.4bn each year.

Global scam protection leader F-Secure stays one step ahead of cyber criminals, defending people from scams before they happen. F-Secure’s team of cybersecurity experts share the new threats the country will face in 2025:

New regulations for banks, telcos and social media companies who fail to prevent scams

Calvin Gan, Senior Manager, Scam Protection Strategy, says: “Right now lawmakers around the world are targeting telecom providers, banks, and social media companies, saying they should be held responsible when their customers fall victim to fraud. Australian lawmakers are pushing through a bill that will fine companies up to $50 million for failing to protect their customers from scams, and here, in a world first, UK bank refunds for fraud became mandatory after the Payment Systems Regulator (PSR) reduced the maximum compensation from a previous proposal of £415,000 to £85,000, covering more than 99% of claims.

“Passing new laws that empower businesses to beef up protection against scams is a welcome move. Scam fighting is not a top-down only effort but involves everyone from governments to organisations and even individuals. Just like we’ve seen with GDPR in Europe forcing companies to take data privacy more seriously, new legislation like this would create an extra protection mechanism for consumers.

“Still, there’s no 100% guaranteed way to prevent scams from happening in the first place. People need to take precautions daily, especially on scam-prone channels like social media and messaging apps.”

Cheap, easy AI tools will be deployed in sophisticated cyber attacks

Laura Kankaala, Head of Threat Intelligence, says: “Using AI tools for malicious purposes (like generating malicious and manipulative content) has already been evident throughout this past year. As we head into 2025, we are bound to see more sophisticated attacks that leverage everyday AI tools – like ChatGPT, ElevenLabs, or basically any AI tool that is cheap and easy to access online. The reality is that cyber criminals are abusing this readily available technology to fine-tune their scams, and consumers must be better informed, whether that’s by their bank, mobile phone provider or another service provider, or by the cybersecurity industry. We all play a part.

“While AI companies do put restrictions on malicious usage, most of them are not very successful at it. They need to be doing more to stop the use of their platforms for nefarious purposes – it cannot only be left up to legislation to enforce boundaries for what kind of content can be generated. Bottom line, the companies developing these tools should also be held up to a higher moral standard.”

Multi-stage scams will become more prevalent 

Joel Latto, Threat Advisor, says: “Cybercriminals have long relied on social engineering, and multi-stage scams represent some of their most deceptive tactics. These schemes often involve direct interaction with victims, enhancing their believability. For instance, a scammer might call a victim claiming they’ve applied for a loan. When the victim denies it, they are “transferred” to a supposed bank representative—another scammer, probably sat next to them—who proceeds to seek sensitive banking details. Malware further elevates these schemes, rerouting legitimate customer service calls to fraudsters or tricking victims into contacting fake numbers embedded in phishing emails.

“Such scams are effective because victims believe they are speaking with genuine, helpful representatives, which makes them more susceptible under pressure. This is something we’ve seen dramatised through TV series such as Cold Call, which has recently rocketed up the charts on Netflix following its release five years ago – perhaps more popular now because scams are much more commonplace and viewers are much more likely to relate.

“Until now, the scalability of these scams was limited by the human capacity of fraudsters, who could only handle a limited number of interactions in specific languages and time zones. AI is changing this equation. With the rise of sophisticated conversational AI chatbots, scammers can now mimic real human interactions at scale, conducting conversations 24/7 across multiple languages. Coupled with realistic deepfake audio, these new call-based scams blur the line between human and machine interaction, making them far more dangerous than traditional robocalls.

“To counter these evolving threats, defenses must adapt, and mobile phone service providers must act. Blocking call-forwarding malware, detecting suspicious numbers, and developing sophisticated audio analysis tools to spot deepfakes are essential. Equally critical is educating users about the signs of scams and potential red flags. Defensive strategies must evolve as fast as attackers’ capabilities, leveraging AI-driven solutions and strong collaboration between cybersecurity experts, telecom providers, and regulatory bodies.”

High-yield, high-risk: the rise of Bitcoin investment scams on a new playing field

Sarogini Muniyandi, Senior Manager, Scam Protection Engineering, says: “Decentralised Finance (DeFi) is a new blockchain-based financial service that’s been gaining traction and acceptance over the last year. DeFi refers to financial services provided by an algorithm on a blockchain, without a financial services company. It is an alternative approach that largely operates outside the traditional centralized financial infrastructure.

“As DeFi becomes mainstream, scammers will take advantage of anyone interested in Bitcoin investment and other digital assets, especially those that are unfamiliar with the risks of blockchain-based finance. By 2025, DeFi is expected to attract even more users seeking alternatives to traditional finance. The DeFi market provides loans, interest-bearing accounts, and high-yield investments that promise substantial returns, which can entice investors of all experience levels. With the rising popularity of DeFi, the total value locked (TVL) in these projects is projected to grow, making it a prime target for fraudsters who can steal funds on a larger scale.

“DeFi platforms operate on decentralised blockchain networks, allowing users to participate without traditional identification or regulatory oversight. This open environment enables scammers to steal victims’ funds and vanish into thin air, all while remaining anonymous. By manipulating the smart contracts and tools used to automate DeFi functions, fraudsters can siphon off investor funds. Some DeFi platforms offer investors unsustainable, extremely high-yield rates for farming Bitcoin derivatives, only for investors to later discover they can’t withdraw their Bitcoin or that the platform has disappeared with their funds.

“While DeFi offers financial freedom and potential profits, its open, unregulated, and anonymous nature also creates a ripe environment for scams – something every Bitcoin investor needs to be aware of in 2025.”

New research reveals high usage of ChatGPT in Irish workplaces

ChatGPT is a more popular and more widely used generative AI (GenAI) tool in Ireland than in the UK, a recent study of senior decision-makers in the UK & Ireland has revealed.

The study, conducted earlier this year by Coleman Parkes Research Ltd. and commissioned by SAS, surveyed 200 UK & Ireland GenAI strategy and data analytics decision-makers to pulse check major areas of investment and the hurdles organisations are facing around the technology.

It asked questions about organisations’ current plans to deploy GenAI, how the technology is integrated into their strategic planning, and what challenges they are facing. Find out more by reading the report entitled Generative AI Challenges and Potential Unveiled: How to Achieve a Competitive Advantage.

The research found that ChatGPT is by far the most popular GenAI tool in Ireland, with 29% of those who use GenAI in their professional lives saying it is the tool they used most often in the workplace. Meanwhile, other tools, such as DALL-E 2 and Jasper, were only used by 4% of respondents.

ChatGPT was also the most used tool in the UK, with 10% of respondents saying they use it in the workplace. However, Google AI was not far behind, with 8% of respondents selecting it as their preferred tool.

A proprietary/closed-source large language model (LLM) was found to be the most common approach to adopting LLMs in Ireland, with 27% of organisations having already done so. However, this was the least common approach in the UK, where only 11% have adopted it.

Instead, open-source LLMs are the most popular option in the UK, with 33% having already adopted this approach, compared to 24% in Ireland.

The study found that more organisations in Ireland are fully prepared to integrate GenAI (11%) than in the UK (7%), but data privacy is the greatest concern in both regions, with three-quarters of organisations in Ireland ranking it as their top worry.

Meanwhile, the biggest challenge in implementing effective governance and monitoring for GenAI is technological limitations in both the UK & Ireland, with 45% of organisations in Ireland ranking it as their top challenge, compared to 28% in the UK.

Despite this, 58% of organisations in Ireland plan to introduce GenAI over the next three years, with 31% of them aiming to do this within the next year. Adoption is expected across many departments, with marketing, sales, IT and finance the most common – a clear majority (75% and above) are either using or planning to use GenAI in these areas.

Customer engagement and personalisation is seen as the greatest potential benefit of adopting GenAI, with over three-quarters of Irish organisations saying they believe their organisation will see improvements in this area.

However, UK organisations are less enthusiastic about the benefits of GenAI. Just over half (56%) believe that customer engagement and personalisation will be improved, while 50% think it will improve the accuracy of predictive analytics. For Ireland these percentages are 77% and 64% respectively.

Speaking on the findings, Jean De Villiers, Head of Analytics at SAS Ireland, said: “There are promising signs of innovation in Ireland, with more organisations saying they are fully prepared to integrate GenAI than in the UK. Our research shows senior key decision-makers in Ireland recognise the many benefits of GenAI, and are aware of the improvements that it can make to customer engagement, predictive analytics, and competitive edge.

“It would appear to be just a matter of time before more organisations in Ireland implement it, with over half planning to do so in the next three years. First, they must tackle the challenges that they are encountering, such as technological limitations, which are being seen more widely in Ireland than in the UK. We are looking forward to supporting our customers through their journey towards trustworthy AI and GenAI adoption, and assisting them in using our technology to achieve the positive outcomes they seek.”

The SAS study sets out a number of recommendations that organisations should follow to successfully deploy GenAI, including the four steps below:

  • Strategic deployment

  • Comprehensive governance

  • Technological integration

  • Expert guidance

SAS’ global report on GenAI adoption has also been published, which provides further guidance around best practices and strategic insights aimed at empowering businesses to harness the technology’s full potential, along with comparisons across key markets and industry sectors.

Find out more by reading the full global report here.

One year after the launch of ChatGPT, over 300,000 people in Ireland have used AI at work

New research conducted by Deloitte in Ireland shows that a little under 2 in 5 respondents (38%) were not aware of Generative AI. Of those who were aware of the technology, 49% were aware of ChatGPT. Over half (51%) of respondents who have used the technology used it “once or twice to try” or less than monthly, while 6% of respondents use it daily.

Speaking today on the publication of research undertaken with 1,000 Irish respondents on their awareness and use of Generative Artificial Intelligence, Colm McDonnell, Partner, Risk Advisory with Deloitte, said: “As we approach the first anniversary of ChatGPT’s launch, it is interesting that Deloitte research finds that over 300,000 people in Ireland have used the technology for work purposes. By far the most popular use of Generative AI is for personal purposes, while 34% of respondents use it for education. It is clear from these responses that the use of Generative AI will only increase with time and greater adoption. It is imperative that we prepare for increased adoption.”

Deloitte’s research also found that, of those who have used Generative AI, more than one in three (35%) believe it always produces factually accurate responses, and 31% agree that its responses are unbiased.

Colm McDonnell continued: “From our work we believe that Generative AI adoption is still at the early stages. As it is increasingly utilised, we as a society need to balance the requirements for trust and safety with the need to harness the potential of the technology. In our work with clients, we view trustworthy AI through six dimensions – impartial, transparent, accountable, secure, respectful of privacy and reliable. At Deloitte we believe the new regulations on the way from the European Union will be a crucial element in striking the balance between trust, safety, and the potential opportunities.”

Generative AI in the workplace 

Emmanuel Adeleke, Partner, AI & Data, Deloitte, said: “We know from the Deloitte Digital Consumer Trends research that 11% of Irish workers have used Generative AI in the workplace. This is despite the fact that, amongst those respondents who were aware of these tools, 37% believed their employer would not approve of them using Generative AI for work purposes. It is fair to conclude that employers and employees would benefit from clarity around the acceptable and appropriate use of Generative AI. Furthermore, businesses will also have to look at how they engage with their customers, suppliers and regulators on these technologies. Like all transformative changes, a certain lag-time between innovation and response is to be expected, but it is vital that those managing businesses are proactive, open and accurate in all conversations on Generative AI. Deloitte’s research shows that AI is here to stay in the workplace, and that is unlikely to change.”

The impact of Generative AI on the workforce is also an issue that is front of mind for many employees. Among those who were aware of the technology, over 3 in 5 (62%) believed that Generative AI will reduce the number of jobs available in the future, and almost half of respondents (46%) are concerned that Generative AI will replace some of their role in the workforce in the future.

Emmanuel Adeleke continued: “Generative AI presents a wide range of possibilities, such as freeing up time for employees to focus on tasks that matter most to their organisations. Our research shows that some of the workforce are already beginning to experiment to see how they can use it, so it is important that employers and their employees communicate effectively about how and where the technology will be introduced, and what benefits it will bring. At Deloitte we believe that far more open and substantive engagement needs to take place about the implications of this new technology on tasks within the firm, both now and into the future.”

Emmanuel Adeleke concluded: “While there is increasing industry awareness that global AI regulations (e.g. the European Union AI Act) will eventually address ethics concerns, some organisations are hesitant to move beyond ad hoc AI experiments until they have regulatory clarity. Additionally, divergence in international approaches to regulation, while not a new phenomenon, will add complexity to the AI agendas of global institutions. We believe that Ireland can play an important role in this process.”

Artificial Intelligence: Does It Have the Capability to Take Over the World?

Some experts have expressed concerns about the rapid growth and unpredictable nature of AI models. However, Microsoft’s head of AI confirms that the company will stay committed to its efforts in this area. A few years ago, Microsoft invested $1 billion in the artificial intelligence start-up OpenAI, and it has only built on that investment since.

Microsoft’s Point of View on Artificial Intelligence

Microsoft, whose financial resources and computing power were established through Azure, backed the development of GPT-4. This is the most powerful language model that OpenAI has ever created, and you can find it behind ChatGPT – at first sight, just a chatbot.

While some are expressing concerns, Eric Boyd, the corporate vice president of Microsoft AI Platforms, highlighted the huge potential of this technology. According to him, it will enhance human productivity and drive global economic growth. Therefore, he believes it would be wrong to simply ditch this newly developed technology.

Furthermore, Microsoft integrated GPT-4’s capabilities into its Bing search engine. A few months ago, the company also integrated this advanced technology into its virtual digital assistant, Copilot, which will help improve existing software products such as word processors and spreadsheets.

According to Eric Boyd, Microsoft’s focus on AI is not about taking over the world but rather about changing the relationship between humans and computers. More precisely, Microsoft intends to move beyond traditional interfaces and enable more language-based interactions, helping us move on from always relying on keyboards.

Additionally, in response to the concerns about rapid AI development, Boyd acknowledges the expertise of industry analysts and says that Microsoft gives serious consideration to their feedback. However, he states that there is no cause for doubt or worry, as those concerns are far removed from the actual work OpenAI is doing.

Despite all the rumors about AI, Boyd says that the current capabilities of language models like ChatGPT are the future. He argues that the goal is not for AI to take over the world, supporting this claim by pointing to the limited abilities of these models, such as only generating text as output.

Rather, he is more concerned that AI’s broader potential may worsen already-existing social issues. Therefore, he believes it is crucial to know how to use AI safely and responsibly across different models and apps.

Is Artificial Intelligence Indeed a Threat to Humanity?

The role of AI has grown in almost every industry. For example, people nowadays implement AI in healthcare, real estate, business communications, manufacturing, and website building. But the usage of AI goes further, becoming part of our everyday hobbies, such as streaming content online or gambling.

Now, there is rarely a streaming platform or a casino that doesn’t use AI to improve its product in one way or another. For example, in countries like the UK, where gambling is a highly competitive industry, the best UK slot casinos embrace AI to improve their recommendation algorithms and predictive models to stay ahead of their competition.

But as Boyd believes, the main worry regarding AI is the potential harm that could arise if the technology is employed inappropriately or applied to tasks it is not suitable for, such as air traffic management. He also adds that there is a high risk of malicious attacks by hackers embedding malware in AI algorithms.

Because of this, he says there must be limits on how far AI can be part of our lives and on how companies implement it. For example, an organisation should not sell its face recognition software to law enforcement agencies. It would also be best if there were regulatory frameworks and guidelines addressing AI-related concerns, giving people more assurance about their safety.

Not only does Boyd emphasize the importance of regulatory measures and the need to determine where AI is suitable for use, but he also mentions that Microsoft has gained a significant advantage in the competitive landscape of AI breakthroughs, thanks to its leading AI research divisions.

However, other tech giants like Google have also established AI research divisions and are working hard to bring AI products to customers. There are no signs of slowing down within Big Tech: AI shows ever more powerful growth and advancement, raising the need to educate employees and companies on how to work with it and how to implement it.