Shaping AI: What should it do and for whom?

A new NESC report, Artificial Intelligence in Service of Society: Navigating our way Forward, emphasises that Ireland is currently in a critical window of opportunity. As AI becomes increasingly embedded across public services, workplaces and everyday life, now is the time to put the right foundations in place: strengthening skills, governance, infrastructure and public trust so that Ireland can realise the benefits of AI while minimising foreseeable risks and unintended consequences.
The report outlines that the goal should be to proactively shape AI, so that its benefits can be realised responsibly, equitably, and sustainably. Taking a broad socio-technical perspective, the report argues that AI is not merely a technological tool, but a transformation shaped by governance, institutional capacity and societal choices.
According to Dr Siobhán O’Sullivan, Senior Policy Analyst at NESC:
     There is a tendency to treat AI as a purely technical phenomenon—something to be evaluated on the basis of whether it works as designed. But that framing misses the most important questions. AI systems do not operate in a vacuum; they are embedded in organisations, workplaces, public services and communities, and their impacts emerge from that interaction. A socio-technical lens asks not only whether a system functions as intended, but who benefits, under what conditions, and at what cost. That shift in perspective is what moves us from asking what AI can do, to asking what it should do—and for whom.
 
The report sets out five interconnected priorities:
  • Responsible and Strategic Adoption: AI should address clearly defined public and organisational needs and align with workforce skills, data quality and institutional capacity.
  • Trustworthy and Ethical Practice: Systems must be transparent, accountable and subject to meaningful human oversight, translating ethical principles into real-world actions.
  • Anticipatory Governance: With AI evolving rapidly, regulation must be forward-looking, adaptive and based on continuous monitoring rather than reactive fixes.
  • AI Literacy as National Infrastructure: Building widespread understanding of AI is essential for workforce adaptation, democratic oversight and responsible use.
  • Public Legitimacy: Long-term success depends on securing public trust through inclusive engagement and sustained societal dialogue.
A central theme of the report is the requirement for AI systems to be safe, ethical and trustworthy in practice, not only in principle. AI systems are probabilistic and imperfect. Meaningful human control is essential to prevent over-reliance, loss of judgement and accountability gaps. High-level ethical principles must be translated into concrete practices, with individuals and institutions building genuine ethical capability to ensure AI operates safely, fairly and effectively.
Among the report’s most significant findings is its designation of AI literacy as essential national infrastructure. Ireland has a growing ecosystem of AI literacy initiatives, but these remain fragmented, and significant gaps persist in public understanding. NESC calls for development of AI-literate citizens capable of questioning and scrutinising AI systems, and AI-literate senior leaders capable of providing effective organisational oversight—both of which are identified as preconditions for democratic accountability.
On governance, the report notes that the trajectory of AI capability remains genuinely uncertain, and that regulatory approaches must be agile, anticipatory and continuously updated. In line with the National Digital and AI Strategy, NESC argues that continuous monitoring is central—particularly given evidence gaps and the tendency for AI systems to behave differently in complex real-world environments than in controlled settings. Anticipatory governance enables policymakers to detect emerging risks early and respond proactively rather than reactively.
Dr Larry O’Connell, Director of NESC, noted that:
    The trajectory of AI capability is genuinely uncertain. We cannot predict with confidence what the technology will look like in five or ten years, which means governance designed only for today's systems may be inadequate for tomorrow's. Anticipatory governance gives us the tools to prepare for multiple possible futures — through strategic foresight, continuous monitoring and flexible regulatory approaches that can detect emerging risks early and respond proportionately. The goal is not to predict the future but to build the institutional resilience to navigate it, whatever form it takes.
 
To read the report in full please click here.

27% of IT leaders concerned about ability to detect deepfake attacks

Storm Technology, a Littlefish company, today announces survey findings which reveal that 27% of IT leaders are concerned about their ability to detect deepfake attacks over the next 12 months. This concern was more pronounced among respondents in larger enterprises (33%) than in SMBs (23%).

The research – conducted by Censuswide and involving 200 IT decision-makers and leaders across Ireland and the UK (100 in each market) – found that the biggest concerns around AI and security over the next year are data breaches (34%), data protection (33%), and increased risk of adversarial or cyber-attacks (31%). Meanwhile, a quarter (25%) consider shadow AI (use of unsanctioned or unpermitted tools) among their biggest concerns.
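For context on how precisely such percentages are measured, the sketch below computes the approximate 95% margin of error for a sample proportion using the standard normal approximation. This is an illustration only: it assumes simple random sampling, which the source does not state, and the sample sizes (200 overall, 100 per market) are taken from the survey description above.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion,
    using the normal approximation z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# The headline figure: 27% of all 200 respondents
moe_total = margin_of_error(0.27, 200)   # roughly +/- 6.2 percentage points

# The same proportion measured within a single 100-respondent market
moe_market = margin_of_error(0.27, 100)  # roughly +/- 8.7 percentage points
```

On these assumptions, percentages from the full sample carry a margin of error of about six points, and single-market or subgroup figures somewhat more, which is worth bearing in mind when comparing the enterprise and SMB splits.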

This is not entirely surprising: half of respondents (50%) know that people in their organisation are using such tools, and some 55% admitted to using unsanctioned or unpermitted tools themselves. Forty-two per cent of IT leaders also said that company data is not safe to input into these platforms.

Perhaps exacerbating this issue, just 60% of companies have specified which AI tools are sanctioned or permitted.

More broadly, over a fifth (21%) of IT leaders do not have a high degree of trust in AI tools and almost a third (32%) of companies do not have a strategy in place to address any AI risks that arise.

The research showed that 79% of IT leaders in Ireland and the UK agree their organisation needs to focus more on the regulation of AI tools and 28% do not believe their governance around AI tools is adequate. This rose to more than a third (35%) among Irish respondents.

When it comes to AI and data, 24% of IT leaders do not think their business data is ready for AI, with a similar proportion (23%) of the opinion that their data governance policies are not robust enough to support secure AI adoption. This could explain why 78% believe a data readiness project is required to ensure successful AI adoption in their company.

Sean Tickle, Cyber Services Director, Littlefish, said: “AI is rapidly reshaping the enterprise landscape, but the speed of adoption is outpacing the maturity of governance. When nearly a third of organisations lack a strategy to manage AI risk, and over half of IT leaders admit to using unsanctioned tools, it’s clear that shadow AI isn’t just a user issue—it’s a leadership one.

“Deepfake threats, data governance gaps, and a lack of trust in AI platforms are converging into a

Xiaomi’s New Report Reaffirms Its Commitment to Sustainability and Innovation

Xiaomi is pleased to share the release of its seventh annual Environmental, Social, and Governance (ESG) Report, highlighting the company’s ongoing leadership in accessible technology, climate-change mitigation and adaptation, and circular economy practices.

At COP29 in November 2024, Xiaomi unveiled a new strategy for sustainable development, placing greater emphasis on inclusive products, technology equality, and its “Human x Car x Home” ecosystem strategy, designed to deliver a smart, sustainable lifestyle for consumers. 

As part of its commitment to a more inclusive digital experience, Xiaomi enhanced its TalkBack feature, enabling accurate recognition and real-time narration of text in images and giving users with visual impairments a more seamless 'reading' experience.

In terms of climate action, Xiaomi not only sets greenhouse gas (GHG) reduction targets for its own operations but also requires its smartphone supply chain partners to adopt equivalent GHG reduction measures. By 2030, suppliers must reduce annual carbon emissions by at least 5% (based on 2024 levels) and use a minimum of 25% renewable electricity. By 2050, 100% renewable electricity usage is required.

Xiaomi also carries out electronic waste recycling programmes worldwide and plans to recycle a total of 38,000 tons of electronic waste over five years (2022 to 2026); it had achieved 95.94% of this target as of the end of 2024.

Xiaomi remains committed to driving innovation and breakthroughs toward a better future through its ongoing pursuit of sustainable development. For further details, view the full report here.