AI in instructional design: Where AI succeeds and where it falls short
AI is changing instructional design, but how are we really using it? Are we just impressed or maybe even bored by it? Before you let AI run your next course, it helps to know what it does well, where it falls short, and why the best instructional designers know it cannot take their place.

With so many new AI tools showing up in training and education over the past three years, one thing has become clear: the people who get the most out of AI are those who are already experts in their field.
At conferences and in talks, we often hear that “AI is the future,” and it’s no surprise that everyone wants to use it. AI-generated content is even being ranked on IMDb, and some believe it could win an Oscar. What once took years and big budgets can now be done in hours, often with similar results.
But is that really the case?
Of course, AI can create, no doubt about that, but can it re-create the same nuanced perfection that evokes emotions? Think about it from a learning perspective. Can AI-generated courses truly replace the trainer in the room, who offers a glance of encouragement to the new employee?
AI can't do everything instantly, and not all models are equally capable. As instructional designers or trainers, we only access a part of the AI ecosystem.
To understand what we mean by that, let’s take a look at the different AI models.
The types of AI models
Think of AI models as types of helpers, each trained to be really good at specific things.
Large Language Models (LLMs)
This is probably the most familiar type. LLMs, including ChatGPT, Claude, and Gemini, are trained on vast collections of text from books, websites, and articles. Their strength is in language: when you ask a question, they provide detailed, well-structured responses that draw on patterns from their training data.
LLMs learned language by consuming massive amounts of text, so they’re good at writing, explaining, summarizing, and holding conversations that follow familiar patterns: this phrase tends to come after that one.
Small Language Models (SLMs)
SLMs are like specialists. While LLMs know a lot about many topics, SLMs are trained on specific subject matter—such as only medical textbooks or only culinary content. Examples include Microsoft Phi-3.5-mini or Google Gemma 3 (4B). Their focus enables faster performance and lower resource consumption.
They don’t know as much overall, but they’re faster, cost less to run, and can even work on a laptop or phone. They’re useful when you need something specific and want to avoid the cost of larger models.
World Models
World Models are designed to understand cause and effect, enabling them to simulate real-world scenarios. They are adept at predicting outcomes and exploring 'what-if' situations. While still emerging, they are promising for applications like virtual training simulations and interactive learning.
Multimodal Models
Most early AI only understood text. Multimodal models can see images, hear audio, and read text simultaneously. Show it a diagram and ask a question about it? No problem. This is increasingly important in eLearning, where content isn't just words on a screen.
Generative AI (the umbrella term)
Generative AI describes any model that creates new content, whether text, images, video, or audio. All the model types discussed—LLMs, SLMs, world models, and multimodal models—can be generative. For example, image generators like Midjourney produce visual content, while LLMs like Claude or ChatGPT generate text-based content.
How is AI empowering instructional designers?
Our CPO, Lars-Petter Windelstad Kjos, dives into how AI can help learning professionals overcome the age-old tension between efficiency and engagement.
Which model is most common among instructional designers?
You probably already know the answer: it’s LLMs.
Most instructional designers—almost 80% according to ATD’s study—use generative models, especially LLMs. They mainly use them for outlining, storyboarding, writing objectives, and creating narration.
The UK Department for Education also found that AI can reduce workload across the sector, letting teachers and instructors focus more on delivery instead of course creation.

*All LLMs are Generative AI, but not all Generative AI models are LLMs.
What are the advantages of using AI as an instructional designer?
For most experienced instructional designers, AI speeds up drafting, repurposing, and summarizing. The designer still decides on learning goals, checks the quality of the narrative, and makes sure the learner’s experience is positive.
Let’s take a look at those advantages:
- Scaling personalization
- The primary objective of any instructional designer is to make learning engaging. AI helps create learner-centric content that fits individual learning paths and preferences. It can analyze past performance, learning pace, and progress reports to determine which lessons suit learners best, whether they learn faster with images or need firsthand experience to retain knowledge.
- First-draft outputs
- Tasks like drafting opening text, summarizing SME notes, rewriting content for different audiences, and generating multiple versions of scenarios or questions are time-consuming but structured. AI can produce first drafts in seconds, freeing designers to focus on subtler decisions about pacing, tone, and flow. This means less time on “word-crunching” and more on learning strategy and user experience.
- Fluency of process
- Someone who knows their audience, goals, and constraints can guide the model, refine its outputs, and spot where things drift from the intended outcome. You can prompt the model to turn rough SME notes, meeting recordings, or bullet-point outlines into clear objectives, module flows, and learner-facing text. This fluency means less time wrestling with language and more time iterating on logic, sequencing, and interaction design.
- Faster iteration and experimentation
- You can generate multiple versions of a scenario, question set, or onboarding flow in minutes rather than hours. This lets you test different angles—tone, complexity, examples—without starting from scratch each time. Faster iteration means you can get feedback from stakeholders earlier and adapt instead of committing to a single version too late.
How is AI limited in instructional design?
AI cannot do or know everything, and it’s not supposed to. LLMs in particular follow patterns and generate answers based only on the data available to them; they cannot be expected to understand context-sensitive design.
AI has several limitations that keep it from replacing human expertise.
- No pedagogical understanding
- AI models can repeat phrases about “Bloom’s taxonomy” or “active learning,” but they do not truly understand how a learning objective maps to transfer, assessment, or scaffolded practice. They mimic patterns rather than reasoning about what learners will do with the knowledge. This means AI-generated structures often need a human to check whether they will help learners perform, not just pass a quiz.
- Lack of context or reasoning
- AI does not know your organization’s culture, stakeholders’ observations, or subtle reasons why a topic is sensitive or strategic. It cannot read hesitation in a meeting, sense frustration in a workshop, or adjust tone based on real-time relationships. Because of this, AI-first content can feel generic, emotionally flat, or misaligned with learners’ real-world constraints.
- “AI-slop” or sameness
- When IDs outsource too much drafting and ideation, materials can become uniform and verbose; some researchers call this “AI-slop”: text that is fluent but superficial, avoiding nuance, controversy, or real-world messiness. That dulls the impact of learning experiences and can erode designers’ creative agency over time.
- Ethical bias and blind spots
- AI inherits biases and gaps in its training data, which can appear in language, examples, and assumed prior knowledge. It also raises data privacy and equity concerns when learner data feeds models without transparency, consent, or safeguards. Instructional designers still bear responsibility for fairness, accessibility, and compliance, even when AI operates behind the interface.
- No real-world accountability
- AI cannot answer nuanced questions like “Should we even train here?” or “Is this really the right priority?” It cannot evaluate whether a performance problem is a training problem, nor negotiate with stakeholders, SMEs, or budgets. Those decisions remain human because accountability cannot be outsourced to a model.

The best immersive solutions keep IDs at the center
AI is a powerful tool in instructional design, but it can’t think like a learning expert, feel like a person, or take responsibility like a designer.
At We Are Learning, we design immersive solutions with the instructional designer at the center, not the AI. For example, we use AI to:
- Transform any document into a Story-ready script
- Auto-animate 3D characters
- Auto-translate into 30+ languages
- Use prompts to convert an image to a video
These capabilities make the design process more fluid and scalable, but they do not replace decisions only an expert ID can make: what to include, what to cut, how to sequence, and how to make the experience feel human and relevant.
Disclaimer: AI was used to assist in the preparation of this article. AI tools were utilized to help smooth sentence structure and gather research material. However, the article itself was written by a human author, with all analysis and conclusions being the result of human expertise.

Barnana Sarkar
Content Marketing Specialist
Barnana brings 5+ years of experience across B2B and B2C, creating thoughtful, performance-driven content that connects and converts. With a diverse professional background in EdTech, FemTech, and FinTech, she believes learning should be fun and that the more engaged you are, the more you retain. When she’s not immersed in content management, she’s strength training for faster freestyle laps in the pool, exploring photography, or curating small independent art shows in Paris!