Boost Your E-Learning with the AI Block in VTS Editor
For training managers, instructional designers, and HR leaders, two questions come up again and again: how do you keep learners engaged over time, and how do you prove the training’s impact on skills? Generative artificial intelligence, built into VTS Editor through this block, provides a simple, operational answer. Without any development work, you can insert real-time responses, personalized feedback, and dialogues that adapt to each learner directly into your scenarios. The tangible results: more relevant learning, shorter production times, and better performance tracking, whether you deploy via SCORM on your LMS or through VTS Perform.
What the AI Block of VTS Editor Enables
This block queries an AI model (authenticated with an API key, for example), then stores the response in a variable (default: _aiResponse), which you can use immediately within the scenario. You can display it, drive the next steps, calculate a score, or adjust paths. The experience remains seamless: everything happens within the same scene, without leaving the module.
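To make the mechanics concrete, here is a minimal Python sketch of the pattern the block automates: send a prompt to a chat model, store the text answer in a variable, then reuse it. The provider, model name, and prompt are illustrative assumptions, not VTS Editor internals; in the editor you configure all of this visually, without writing code.

from openai import OpenAI  # example provider only; the block is not tied to this SDK

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Equivalent of the block's "message (prompt)" setting
completion = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Explain this concept in two sentences for a beginner."}],
)

# Equivalent of the block's output variable (default: _aiResponse)
_aiResponse = completion.choices[0].message.content
print(_aiResponse)  # display it, score it, or branch on it, as you would in the scenario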
Key Settings
- Model and message (prompt)
- Output variable
- Maximum wait time (timeout)
- “Wait for result” option if the next scene depends on the response
If there’s an error or timeout, a second output is triggered: you can display a fallback message, open an alternative resource, or retry the request with a limited number of attempts.
Context Setting and Output Formats
Adjust the context depending on use (one-time hint, tutor for multiple exchanges, route branching). Choose the most useful format:
- Simple text for explanations, rephrasing, or analogies
- Isolated values (variables) when you only need a number or keyword
- JSON to control the scenario cleanly, such as:
{"feedback":"…","note":7,"route":2,"hint_1":"…"}
Memory and Response Formats
Memory Levels
- None: each request is independent (ideal for immediate help).
- Standard: history is preserved within the same scene (handy for a learning coach).
- Custom: a shared identifier passes context across multiple blocks, even in different scenes.
Output Formats
- Text: explanations, rewritten instructions, examples.
- Variables: a single value like globalScore or targetRoute.
- JSON: a clear structure for routing and scoring based on conditions.
Benefits for L&D and HR Teams
You strengthen alignment between learning goals and user experience. Learners receive targeted feedback, at the right time, with the right level of detail (criteria, examples, suggestions for improvement). On the production side, AI speeds up design (contextual examples, pre-translations, varying instructions by skill level) while maintaining editorial control. Explore the tool in detail on the VTS Editor page.
To prove impact, track via your LMS or VTS Perform:
- Completion and dropout rates by chapter and learner population
- Active time spent in scenes where AI is used
- Success/failure, progress, overall and skill-by-skill scores
- Help usage (clicks, number of hints), verbatims, and satisfaction
This data enables A/B testing (prompts, learning paths) and continuous improvement. To see real-life results, explore our client cases.
Real-World Use Cases of the AI Request Block
- Conversational tutor within the module: a “Need help?” button opens a mini form. Learners ask their question, the coach answers in an encouraging tone. Fewer dropouts, more autonomy. See also research on intelligent tutors at Carnegie Mellon – LearnLab.
- Criteria-based feedback after a quiz: you send the question, the answer, and your grading rubric. The structured return (JSON) contains “score,” “feedback,” and “route” (remedial, deep dive, success).
- Instructions tailored to level: versions like “quick,” “step-by-step,” or “with an industry analogy,” based on the learner_level variable.
- Open-ended role play: the AI plays the client, the learner responds freely, and the conversation adjusts. Very useful for assessing posture, empathy, and listening.
- Contextualized examples: generate 2–3 realistic examples by sector and level. Present them, then follow with a short quiz.
- Progressive help without “spoiling”: first click = hint 1, second click = hint 2, and so on. Use JSON to structure multiple levels of help and preserve the challenge (see the sample after this list).
- Free-text answer evaluation: score based on visible criteria (accuracy, clarity, structure), with advice for improvement.
- Adaptive routing: compute a difficulty_level and a next_topic, then direct the learner accordingly.
- Localization and editorial tone: use pre-translations and tone adjustments (more directive, more supportive), reviewed by your experts.
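To ground the progressive-help case above, one hypothetical convention is to request a single structured answer that holds every hint level, then reveal the levels one click at a time with a counter variable; the field names are an example to adapt, not a required format:

{"hint_1":"Re-read the client brief.","hint_2":"Focus on the budget line.","hint_3":"Compare it with last quarter’s figure."}

Because all the help is generated once and revealed gradually, learners never see more of the answer than they have asked for.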
Implementation: Prompts, Scenarios, and Costs
Technical Requirements
- Validated technical framework (API key, network access, proxy if needed)
- Clear internal rules (who manages the key, what data is used, learner notification)
- Prepared variables (e.g., learner_question, _aiResponse, target_route, ai_score, ai_feedback)
How to Write Good Prompts
- Define the role, tone, max length, and expected format
- Provide useful context (learning goal, target level, learner input)
- Ask for a usable format (values or JSON), and offer a sample ideal output
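Applied to a quiz-feedback scenario, a prompt that follows these rules might look like the hypothetical example below; the role, scale, criteria, and JSON field names are placeholders to adapt, not a prescribed VTS Editor format:

“You are a demanding but encouraging corrector. Grade the learner’s answer against the rubric on a scale of 0 to 10, then write at most two sentences of feedback. Reply only with JSON in the form {"score": 0, "feedback": "…", "route": 1}, where route 1 = remediation, 2 = deep dive, 3 = success. Question: [quiz question]. Learner answer: [learner’s answer]. Rubric: [your criteria]. Example of an ideal output: {"score": 8, "feedback": "Clear and accurate; add the safety rule for a complete answer.", "route": 3}.”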
To explore the pedagogical value of feedback, see Vanderbilt University – Center for Teaching. For an overview of AI use in education, visit MIT – Teaching + Learning Lab.
Setting Up in VTS Editor
- Learner input: form (text or numeric)
- Output: message/voice, character emotions/animations
- Logic: conditions, switches, sequences
- Scoring: score, score check, progress tracking, badge
- Help and navigation: resources, clickable areas, decor interactions
- Local traceability: summary of key steps, no sensitive data stored
Need a hand getting started? Check out our training and support offers.
Managing Wait Time and Costs
- Shorten prompts, limit “max tokens,” and ask for brief responses
- Reuse context using custom memory (shared identifier) instead of resending everything
- Enable “wait for result” only if the follow-up depends on the response; otherwise, let the answer arrive in the background
Service Continuity and Fallback Messages
A timeout or unavailability should not block learners. Use the second output to show a clear message, open a backup resource, offer a fixed hint, or restart the request with a limited attempt count.
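If it helps to reason about this pattern outside the editor, here is a minimal Python sketch of a capped retry with a fallback message. It is purely conceptual; in VTS Editor the same logic is wired visually with the block’s second output, a counter variable, and a loop back to the request.

import time

MAX_ATTEMPTS = 3  # illustrative cap on retries


def ask_ai(prompt: str):
    """Placeholder for the AI request; returns the answer text, or None on error or timeout."""
    return None  # stand-in so the sketch runs without a real API call


answer = None
for attempt in range(1, MAX_ATTEMPTS + 1):
    answer = ask_ai("Give the learner a short hint about step 2.")
    if answer is not None:
        break
    time.sleep(1)  # brief pause before retrying

if answer is None:
    # Fallback: a fixed message or a backup resource, so the learner is never blocked
    answer = "The assistant is unavailable right now. Hint: re-read step 2 of the procedure."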
Compliance, Security, and Framework
- Minimize data: no unnecessary personal information in prompts
- Inform learners of AI assistant usage
- Centralize and protect API keys
- Log only what’s strictly necessary, for a short period
Measuring Impact and Scaling Up
With SCORM exports and/or VTS Perform, collect the metrics that matter: help usage, resolution time, number of hints, skill-based scoring, completion and success rates, satisfaction feedback. Test your prompts (level of detail, tone), compare paths (remediation vs. standard), and iterate. For an overview of opportunities and limitations, also read Stanford HAI – How AI could transform education.
Want to see real-world impact? Request a VTS Editor demo.
1-Hour Action Plan
- Create a scene with a form (question) → AI block (text or JSON) → message
- Set a timeout and a fallback route via the second output
- Switch to JSON with a switch block for two to three paths (remediation, deep dive, success)
- Track score/progress and export via SCORM or deploy through VTS Perform
FAQ – VTS Editor AI Block
Does AI replace human evaluation?
No. It accelerates, structures, and personalizes feedback. Final validation remains educational and managerial.
What if the response is off-topic?
Frame the format (JSON), provide a sample good output, limit length, and plan a fallback (second output).
Can I use a different provider than those mentioned?
Yes. The block is based on an API call; choose the appropriate model and follow authentication and format requirements.
Is it compatible with an LMS?
Yes, via SCORM export. VTS Perform additionally provides detailed skill tracking and useful analytics for training managers.