Children, AI, and Learning: Why the Concern Is Real — and How AI Can Be Used Wisely

Artificial Intelligence (AI) is no longer a futuristic concept limited to science fiction. In Singapore, AI is increasingly embedded in children’s daily lives through educational platforms, language-learning apps, smart devices, and digital homework tools. Schools are also beginning to explore AI-enabled systems for personalised learning and administrative support.

As AI becomes more visible in learning environments, concern among parents and educators has intensified. These concerns are not speculative or reactionary; they are grounded in research, policy discussions, and child-protection frameworks developed by governments and international organisations. In Singapore, agencies such as the Ministry of Education (MOE), the Infocomm Media Development Authority (IMDA), and the Personal Data Protection Commission (PDPC) have emphasised the need for responsible and ethical use of AI, particularly when children are involved (MOE, 2023; PDPC, 2021; IMDA, 2020).

This article examines the key concerns surrounding children’s use of AI and explores how AI, when guided responsibly, can still serve as a valuable learning tool in the Singapore context.

Widespread and Evidence-Based Concerns About Children Using AI

Schools and Governments Are Raising Red Flags

Education systems worldwide have begun introducing safeguards around AI use in classrooms. In Singapore, MOE has taken a cautious and measured approach, emphasising that technology should support—not replace—core teaching and learning processes (MOE, 2023). While AI tools are being explored for adaptive learning and feedback, MOE has highlighted the importance of maintaining academic integrity and critical thinking skills.

UNESCO has similarly warned that unregulated AI in education could weaken students’ analytical abilities and widen inequality between those who have access to quality guidance and those who do not (UNESCO, 2023). These concerns align closely with Singapore’s long-standing emphasis on developing higher-order thinking and character education rather than relying solely on automated systems.

These responses highlight a growing institutional concern: AI can alter how children learn and, without oversight, may prioritise speed and convenience over intellectual growth.

Privacy and Data Safety Are Major Parental Fears

Data privacy represents one of the most serious risks associated with children’s interaction with AI. Many AI-powered applications collect voice recordings, images, or behavioural data in order to function. In Singapore, this raises particular concern under the Personal Data Protection Act (PDPA), which requires organisations to protect personal data and obtain meaningful consent for its collection and use (PDPC, 2021).

IMDA’s Model AI Governance Framework further stresses that AI systems should be transparent, accountable, and designed with safeguards to prevent misuse—especially for vulnerable groups such as children (IMDA, 2020).

UNICEF (2021) has emphasised that children’s biometric data, such as facial images and voice patterns, can become permanent digital identifiers if mishandled. Unlike passwords, biometric markers cannot be changed. This creates long-term risks related to surveillance, profiling, and identity security, particularly when children are unable to fully understand or consent to how their data is used.

For parents in Singapore, where digital services are widely integrated into daily life, these risks underscore the importance of strong regulation and careful selection of AI-based tools used at home and in school.

Cognitive and Emotional Development Risks

Researchers have expressed concern that AI may interfere with key developmental processes. Educational psychologists stress the importance of “productive struggle” — the mental effort required when solving problems independently. According to the Harvard Graduate School of Education, learning occurs most effectively when children grapple with challenges rather than receive immediate answers (Harvard Graduate School of Education, 2023).

If AI tools routinely provide instant solutions, children may bypass this process and weaken their ability to reason, persist, and evaluate information critically. This concern resonates with Singapore’s emphasis on developing self-directed learners who can adapt to complex and changing environments (MOE, 2023).

There are also emerging concerns about emotional attachment to AI systems. Children may anthropomorphise chatbots or digital assistants, mistaking simulated responses for genuine understanding or care. Research suggests that children can develop emotional trust in AI agents without recognising their limitations, which may affect social development and emotional resilience (Blum-Ross & Livingstone, 2022).

International Child-Rights Organisations Respond

UNICEF’s Policy Guidance on AI for Children provides a rights-based framework for evaluating children’s interactions with AI. It emphasises three core principles:

  • protection from harm,
  • provision of beneficial technology, and
  • participation in decisions affecting digital environments (UNICEF, 2021).

This framework aligns with Singapore’s Smart Nation and digital literacy strategies, which stress that technology should enhance wellbeing and inclusion rather than compromise them (IMDA, 2020). It reinforces the idea that children should not be passive users of AI systems but should be empowered with knowledge about how these tools work and how they influence behaviour and choices.

Can AI Still Be a Useful Learning Tool?

Despite legitimate concerns, research also indicates that AI can play a constructive role in education when used thoughtfully and under adult guidance.

AI as a Support, Not a Replacement

The OECD (2021) notes that AI can enhance learning when it is designed to assist rather than replace human instruction. AI systems can provide explanations, targeted practice, and formative feedback that help students understand concepts more deeply.

In Singapore, such tools may complement classroom instruction by supporting revision, language learning, and differentiated pacing, while teachers remain responsible for judgement, values, and social interaction (MOE, 2023).

Supporting Diverse Learners

AI also holds promise for inclusive education. Language learners can benefit from translation and speech-recognition tools, while students with learning differences may use text-to-speech and adaptive pacing technologies to access content more effectively (World Economic Forum, 2023).

In multicultural classrooms such as those in Singapore, these tools can help bridge language barriers and reduce frustration, provided that they are implemented ethically and do not replace meaningful teacher–student interaction.

What Responsible AI Use Looks Like

Responsible use of AI in education requires:

  • Clear boundaries, so AI does not replace core cognitive tasks
  • Adult supervision, to guide use and interpretation
  • Transparency, so children know when they are interacting with AI
  • Balanced learning, including offline activities and peer interaction
  • AI literacy, enabling children to understand that AI systems are designed by humans and can make errors

Teaching children how AI generates responses — and where it can fail — is essential for developing critical digital awareness, which is a core goal of Singapore’s digital literacy efforts (IMDA, 2020; MOE, 2023).

A Child-Centred Framework for AI in Learning

A growing number of researchers and policymakers advocate for a child-centred approach to artificial intelligence in education—one that balances opportunity with protection. This approach shifts away from blanket bans toward structured “guardrails” that maximise learning benefits while reducing developmental risks (UNESCO, 2023; UNICEF, 2021).

Figure 1. The Child-Centred AI Blueprint: Balancing Opportunity and Protection. This framework illustrates how AI can support children’s learning and development while requiring strong safeguards for privacy, emotional wellbeing, and academic integrity. It reflects international guidance emphasising that AI should be used as a supportive tool rather than a substitute for human learning and relationships (UNICEF, 2021; UNESCO, 2023; OECD, 2021).

As illustrated in Figure 1, AI offers educational opportunities such as personalised learning, language support, and improved accessibility for diverse learners. Interactive AI tools can support vocabulary development and comprehension when used as guided learning aids rather than answer generators (OECD, 2021; World Economic Forum, 2023).

At the same time, the framework highlights essential risks that must be addressed. One concern is the so-called “empathy gap”, where AI simulates human interaction without genuine emotional understanding. Children may misinterpret these responses as real social engagement, potentially affecting social development (Blum-Ross & Livingstone, 2022).

Privacy remains central. AI systems often rely on voice recordings, facial images, or behavioural data, which can expose children to long-term data security risks if poorly governed. UNICEF (2021) stresses that children’s biometric data requires stronger protection than adult data because it creates permanent digital identifiers.

The framework also distinguishes between AI as a scaffolding tool and AI as a shortcut. When AI supports thinking—by prompting reflection or explaining concepts—it can enhance learning. When it replaces thinking—by generating full answers—it risks weakening cognitive development and academic integrity (Harvard Graduate School of Education, 2023; UNESCO, 2023).

Together, these elements reinforce a central principle: children should not merely use AI but should be taught to understand it. This includes knowing that AI is created by humans, can make errors, and should not be treated as a source of emotional authority or moral judgement (UNICEF, 2021).

A Balanced Way Forward

Concerns about AI are grounded in credible evidence relating to privacy, development, and educational integrity. Ignoring these risks would be irresponsible. However, rejecting AI entirely may deprive children of tools that could support learning and inclusion.

A balanced approach acknowledges both the dangers and the potential. Guided use, grounded in child-centred policy and ethical oversight, offers the most realistic path forward for Singapore’s education system (MOE, 2023; IMDA, 2020; UNICEF, 2021).

Technology should serve children — not shape their development at the expense of human connection, curiosity, and critical thought. AI must remain a tool under human guidance, not a substitute for learning or relationships. With careful governance and informed adult involvement, AI can support children’s education while safeguarding their wellbeing.


References

Blum-Ross, A., & Livingstone, S. (2022). The trouble with “screen time” rules. London School of Economics.
https://www.lse.ac.uk/media-and-communications/research/parenting-for-a-digital-future/the-trouble-with-screen-time-rules

Harvard Graduate School of Education. (2023). Why kids need to struggle in learning.
https://www.gse.harvard.edu/ideas/usable-knowledge/18/10/why-kids-need-struggle-learning

Infocomm Media Development Authority (IMDA). (2020). Model AI Governance Framework (2nd ed.).
https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/factsheets/2020/model-ai-governance-framework-second-edition

Ministry of Education Singapore (MOE). (2023). EdTech Masterplan 2030: Transforming Education through Technology.
https://www.moe.gov.sg/education-in-sg/educational-technology-journey/edtech-masterplan

Ministry of Education Singapore (MOE). (2023). AI in Education Ethics Framework (Student Learning Space).
https://www.learning.moe.edu.sg/ai-in-sls/responsible-ai/ai-in-education-ethics-framework

OECD. (2021). Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development.
https://www.oecd.org/education/ai-in-education-challenges-and-opportunities.pdf

Personal Data Protection Commission (PDPC). (2021). Advisory Guidelines on the PDPA for Children’s Personal Data.
https://www.pdpc.gov.sg/help-and-resources/2021/02/advisory-guidelines-on-the-pdpa-for-childrens-personal-data

UNESCO. (2023). Guidance for Generative AI in Education and Research.
https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research

UNICEF. (2021). Policy Guidance on AI for Children.
https://www.unicef.org/innocenti/reports/policy-guidance-ai-children

World Economic Forum. (2023). Shaping the Future of Learning: The Role of AI in Inclusive Education.
https://www.weforum.org/agenda/2023/04/ai-education-learning-inclusion/
