
Future-proofed or left behind: Young people’s views on AI, skills and careers

06 Mar 2026

Our latest research examines how students perceive the role of AI in shaping their futures, influencing their education, the skills they need to develop, the opportunities ahead, and the concerns they carry about a rapidly changing world. Here are ten key insights from the study.

1. AI is already embedded in students’ academic lives

AI tools are widely used for drafting essays, summarising material, generating revision questions and clarifying difficult concepts. For many students, AI is simply another study aid. Yet despite this widespread adoption, guidance from schools remains inconsistent. Students report policies ranging from informal encouragement to blanket bans, often with little explanation of the reasoning behind them. This creates an environment where students experiment independently, sometimes unsure where ethical or academic boundaries lie.

2. Students are worried about overreliance

Young people are not blindly enthusiastic about AI, even though many began by using it extensively. Many express concern that over-dependence on AI could weaken their critical thinking, writing fluency and independent problem-solving. Some report consciously limiting their own AI use to avoid becoming “lazy” thinkers. This self-regulation suggests an awareness of cognitive offloading, but it also points to the absence of structured frameworks for guiding healthy usage. Interestingly, many participants also felt a sense of responsibility towards younger peers, who will never have known a world without AI, to ensure they continue to use their own judgement rather than rely too heavily on AI-driven decisions.

My biggest concern about AI is a future where people are overly reliant on generative AI in particular. There seems to be an epidemic of people using generative AI (such as ChatGPT) for minute tasks, removing certain critical thinking and problem solving characteristics in a myriad of people.

3. School responses to AI are inconsistent

Our findings highlight stark variation in how schools respond to AI. Some schools actively explore its use, while others restrict it heavily, and approaches often vary even within the same school. As a result, students’ AI literacy depends heavily on where they study and which teachers they have. Perhaps this explains why the most frequently cited source of learning about AI was social media. Only around 12% of students said they had learnt about AI through their lessons. This inconsistency risks amplifying existing educational inequalities, as students with structured exposure will enter higher education and employment better prepared than those without.

4. Students recognise AI’s limitations

Contrary to common stereotypes, students are highly aware of AI’s flaws, including hallucinations, factual inaccuracies and formulaic writing styles. They do not see AI as infallible; instead, they treat it as a time-saving but imperfect assistant. However, students also recognise that not everyone shares this understanding, meaning some students risk being left behind unless AI literacy becomes more widely taught.

5. Career uncertainty is rising – but no panic yet

Most students do not believe AI will completely eliminate their desired professions. However, they are uncertain about how entry-level roles, internships and early career pathways might change. This uncertainty is especially pronounced among students approaching major decisions about university courses or apprenticeships. Despite these concerns, students are generally optimistic: many believe AI will create new jobs, or that existing roles will evolve and adapt alongside technological change.

I hope it will result in mundane jobs being automated to free more human hours for more fulfilling activities.

6. Students trust industry voices more than schools on AI and work

When considering AI’s impact on employment and how they can best prepare for the future, students place significant trust in professionals working directly with AI technologies in industry. This reflects a perception that industry is moving faster than education and can adapt more quickly. Many students sense that teachers may not always be confident discussing AI and the workplace, or may not have the most up-to-date knowledge of how these technologies are shaping different sectors.

7. Human skills are seen as the real competitive advantage

Across discussions, students consistently emphasise qualities such as empathy, creativity, adaptability, leadership and emotional intelligence as irreplaceable strengths. They see AI as capable of automating routine tasks, but not replicating nuanced human judgement or interpersonal connection. This perspective reframes AI not as a competitor, but as a complement to their future careers. However, many students also say they do not feel their schools or colleges equip them adequately with these skills, and some lack confidence in their own abilities in these areas.

8. Comfort with online sharing shapes how students use AI

A particularly striking insight is the mismatch between how students define personal AI use and what they actually do. Some claim not to use AI personally, yet describe uploading photos or sharing personal details in order to receive tailored advice. This reflects a broader generational norm: many young people are accustomed to sharing aspects of their lives online. As a result, certain forms of personal disclosure do not automatically register as privacy risks. For many students, using AI in personal contexts is simply an extension of their everyday engagement with social media and digital platforms.

9. Being “digital natives” doesn’t mean students are AI literate

A common assumption is that young people naturally understand new technologies because they have grown up with them. However, our research shows that while students are comfortable using digital tools, including AI, many have never received structured instruction on how to evaluate AI outputs or consider the ethical implications of these technologies. Instead, much of their exposure to AI comes from sources such as social media, rather than through lessons, teachers or careers guidance in school. This creates a gap between familiarity and true literacy.

10. Without coordinated action, AI readiness will be uneven

The overarching concern of the report is not that young people are unprepared, but that preparation is uneven. Access to structured AI literacy, ethical guidance, career insight and data education varies significantly. Without coordinated intervention, gaps in readiness may mirror or deepen existing socioeconomic divides.

Read the full report here.
