Introduction

What is AI, what can it do, and what are its limitations?

AI is already built into many of the apps Outwood staff use every day, from Google Search summaries to Gemini in Gmail. Understanding what it is, what it can and can’t do, and how to use it safely will help you get more from these tools in your role.

What is AI?

Artificial intelligence (AI) refers to computer systems that can perform tasks typically thought to require human intelligence, such as understanding language, recognising patterns, making decisions, and generating content.

The type of AI you’ll encounter most often in everyday tools is generative AI. These are systems that create new content (text, images, audio) based on patterns learned from existing material.

Large language models (LLMs) are a specific type of generative AI focused on understanding and generating text. AI assistants like Gemini, ChatGPT, and Claude are all powered by LLMs.

How does it work?

Tech companies prepare a new AI for use through a process called training: the model analyses vast amounts of text, images, and other content to learn patterns and relationships. That accumulated knowledge is what it draws on when you ask it something.

Think of generative AI as a very sophisticated autocomplete (a huge simplification, but a useful one). It predicts the most likely next word or sentence based on everything it learned during training. This is why it can sound fluent and confident while still being factually wrong: it’s generating a plausible-sounding response rather than looking up a verified answer.

[Illustration: an LLM responding to “The quick brown fox jumps…” with “…over the lazy dog”]

What can it do?

Generative AI is genuinely versatile—new use cases are emerging all the time. Here are some examples of what it can do:

  • Refine written content: adjust letters, reports, or communications for tone and audience.
  • Summarise and extract information: get a quick summary of a long document, or identify common themes from a set of student responses.
  • Generate ideas and resources: create discussion questions, practice exercises, or lesson starter activities.
  • Explain and adapt content: explain complex concepts in plain language, or rework material for a different year group or audience.
  • Give feedback: review a piece of writing, compare approaches, or check work against a set of criteria.
  • Work with images and other media: generate images for resources, or describe the content of an uploaded photo or diagram.

Built-in AI vs AI assistants

Built-in AI features are woven into the tools you already use, with AI powering new capabilities behind the scenes. These can appear as a new button, a suggestion, or a wizard. They’re familiar enough to feel like helpful features rather than something fundamentally new, yet some of these capabilities would have been impractical or impossible just a few years ago.

They’re the easiest to work with and tend to produce consistently good results. The trade-off is less flexibility: you’re limited to what the feature provides, with little control over how it works.

Standalone AI assistants like Gemini, ChatGPT, and Claude give you a blank canvas. You can ask them anything and refine your request through conversation, making them more powerful for complex or open-ended tasks. The trade-off is that you get out what you put in: good results require clear, specific instructions, which is where prompting skills matter.

When you can provide instructions directly to an AI, those instructions are called prompts. For practical guidance on writing effective prompts—including structure, techniques, and worked examples—see the prompting guide in the Gemini app section. The same principles apply whichever AI assistant you use.

Limitations and considerations

AI can be a genuinely useful tool, but it’s important to understand where it currently falls short. These limitations are improving over time, but they are real constraints to be aware of today.

AI can make mistakes

When AI confidently presents incorrect or misleading information as factual, it’s said to be hallucinating. That can include made-up statistics, false historical information, invented citations, incorrect calculations, or outdated information presented as current.

It can also fail to push back when you give it incorrect information, and will instead work from the false premise.

Warning

Always check AI-generated content before using or sharing it.

AI doesn’t know everything

Generative AI is only as current as the data it was trained on—it may not be aware of recent events or developments. It also doesn’t know anything that wasn’t in its training data in the first place: internal policies, local knowledge, or anything that was never publicly available—unless you provide that information directly in your prompt.

If you ask it about a real person, a local system, or an internal process without providing context, it may generate a plausible-sounding answer that is entirely made up.

AI can reflect biases

AI can inherit biases from its training data. For example, asking for “an image of a school IT technician” will likely produce an image of a white man, because that demographic is over-represented in the training data. AI can also default to Western or US-centric perspectives when asked about “standard” or “typical” things, from food to classroom behaviour to greetings.

Be particularly mindful when:

  • Generating images of people
  • Creating resources for diverse student populations
  • Asking for “typical” or “standard” examples
  • Seeking advice on cultural or social topics

Review AI output critically and be prepared to adjust or discard anything that doesn’t reflect our values or the communities we serve.

Take care with sensitive data

Most AI services process your data on external servers, outside the Trust’s control. Many “free” tools also retain a record of what you share, and may use it to train their AI.

Important

Do not share personal information, confidential data, or student work with AI tools, unless we have an organisational agreement in place.

When using approved tools like those in the Getting Started guide, sign in with your school Google account to ensure your use falls within Outwood’s agreements.

For questions about using new AI tools or services, contact the IT helpdesk.

Other current limitations

  • Limited context: AI can only hold so much in mind within a single conversation. If you give it a very long document, it may lose track of details from earlier in the text. For long tasks, it’s often better to work in sections.
  • Inconsistency: the same prompt can produce different results on different occasions, so AI isn’t well-suited to tasks that need repeatable outputs, such as generating a standard letter template that must look identical each time.
  • Nuance: AI may miss sarcasm, subtle context, or cultural cues that would be obvious to a human reader.

Next steps

Ready to start using AI tools? The Getting Started guide covers the tools available to Outwood staff and how to access them.