Enhance E-Learning Engagement with the VTS Editor Gaze Block
In immersive modules, attention is captured within seconds. The VTS Editor Gaze Block acts like a spotlight: a character’s gaze directs focus, gives meaning to action, models expected behaviors (listening, confidence, hesitation), and reduces cognitive load by minimizing verbal instructions. This powerful non-verbal cue accelerates understanding and improves retention, as demonstrated by research on gaze cueing and signaling in multimedia learning (Frischen, Bayliss & Tipper, 2007; de Koning et al., 2009).
With VTS Editor, instructional designers, training managers, and HR professionals can control these signals without coding: a few settings are all it takes to sync gazes with dialogue, emotion, sound, and media. The result: clearer storytelling, more natural reactions, exportable as SCORM for tracking in your LMS or in VTS Perform. To explore more about the authoring tool, visit VTS Editor.
VTS Editor Gaze Block Options and Settings
Directing the Gaze: Target Selection
The block allows you to choose the gaze target of a character: the learner (camera), another character, straight ahead (neutral gaze), or a point of interest (POI) in the scenery. This precision enhances turn-taking, draws attention to interactive objects, or expresses subtle emotions. To configure your scenes, also explore our VTS Editor sceneries and their POIs.
- Learner: create direct connection during a greeting, key instruction, praise, or prompt. Face-to-face enhances engagement.
- Character X: anchor attention during a dialogue. It’s clear who is speaking, listening, and reacting.
- Point of Interest X: lead the learner’s eye to an object, screen, or document. Ideal for initiating interactions.
- Look Ahead: signals a pause, a moment of reflection, or a deliberate step back.
Two Settings That Make All the Difference
Delay (in seconds, decimals allowed): Delays the gaze activation to match a keyword, sentence end, sound cue, or media appearance. Short delays (0.2–0.6s) simulate micro-reactions. Longer delays (0.8–1.2s) suggest reflection.
Duration: The length the gaze remains before reverting to the default behavior. User-tested benchmarks: 0.5 to 1.5s for glance signals; 2 to 3s to clearly draw attention to a POI; 4 to 6s when a media item needs to be observed. Too short, and the signal is missed; too long, and it feels artificial.
Fine Synchronization and Chaining
In graph construction, the block can trigger outgoing actions either at the start or at the end of the gaze animation. This allows seamless chaining: play a “pop” sound just after the gaze lands, activate a clickable area once the gaze is set, or leave a second of observation before continuing.
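The delay, duration, and start-vs-end chaining described above can be sketched as a small timing model. This is a minimal illustration in Python, not VTS Editor's internal API (the tool itself is no-code); the class and field names are invented for clarity:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class GazeBlock:
    """Hypothetical model of a gaze cue: target, delay, and hold duration."""
    target: str                 # "learner", "character:<name>", "poi:<name>", or "ahead"
    delay: float = 0.0          # seconds before the gaze starts (decimals allowed)
    duration: float = 1.0       # seconds the gaze is held before reverting
    on_start: List[Callable[[], None]] = field(default_factory=list)
    on_end: List[Callable[[], None]] = field(default_factory=list)

    def timeline(self) -> dict:
        """Absolute times (in seconds) at which chained actions would fire."""
        return {"gaze_starts": self.delay, "gaze_ends": self.delay + self.duration}

    def run(self) -> None:
        """Fire start-of-gaze actions, then end-of-gaze actions, in order."""
        for action in self.on_start:
            action()
        for action in self.on_end:
            action()

# A glance signal: look at the learner after 0.5 s, hold for 1.5 s,
# play a "pop" once the gaze lands, then activate a click zone afterward.
events = []
gaze = GazeBlock(target="learner", delay=0.5, duration=1.5,
                 on_start=[lambda: events.append("play_pop_sound")],
                 on_end=[lambda: events.append("activate_click_zone")])
gaze.run()
```

The point of the sketch is the two anchor times: chaining an action at `gaze_starts` gives the "just as the gaze lands" effect, while anchoring at `gaze_ends` leaves the observation second before continuing.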
Effective Combinations with Other Blocks
- Speak: Combine dialogue and gaze to clarify who is speaking. A slight delay (0.2–0.4s) emulates natural response.
- Emotion: Add a light emotion (Indecision 1, Joy 1, Sadness 1) to the gaze for enhanced non-verbal nuance.
- Character Animation: Add realism with a head nod or slight torso tilt when looking at someone.
- Spatialized Sound: Pull the learner toward a POI with sound, then confirm with a gaze (e.g., phone rings on the right + look at phone).
- Media in Scenery: Gaze at a location, then display image/video. The eye follows the narrative thread.
- Clickable Areas / Scenery Interaction: Direct the gaze first, then make the object clickable to reduce verbosity.
- Force 360 / Freeze 360: In 360° sceneries, point the camera to a key area, allow exploration, then place one or two gazes on POIs to prevent disorientation.
Mistakes to Avoid and Best Practices
Avoid excessive micro-shifts that cause fatigue. Prioritize purposeful transitions that support a clear goal. Ensure visual consistency: looking at an invisible or off-screen object creates disconnect. Check POI visibility before triggering gazes. Calibrate durations carefully: a gaze must be perceivable; prolonged gazes shouldn’t freeze the scene. Finally, synchronize your cues: a gaze that contradicts the emotion or spatial audio hinders rather than helps. These practices align with the multimedia learning signaling principle, linked to better understanding and lower cognitive load (de Koning et al., 2009).
Pedagogy: Guide, Contextualize, and Assess with the VTS Editor Gaze Block
Reduce Cognitive Load and Better Direct Attention
In rich scenography, a gaze is often clearer than a sentence. Use simple guidance: a gaze at a POI, a short message, then a clickable zone. The learner understands the expected action without being overloaded with instructions. Gaze-based guidance builds upon robust “gaze cueing” effects studied in cognitive psychology (Frischen, Bayliss & Tipper, 2007).
Soft Skills: Model Listening, Assertiveness, and Empathy
Example: a managerial interview. Gaze at the speaker while they talk, then gaze at the camera when asking a question; this models active listening and openness. For agreement or disagreement: a steady gaze, a micro-emotion (Joy 1 or Anger 1 depending on context), and minimal gesture. A decision can be implied by quickly alternating between two POIs (two documents), followed by a “look ahead” signaling reflection.
Assess Without Breaking Immersion
After a Quiz or Phrase Choice, use non-verbal feedback. Correct answer: brief learner gaze + Joy 1; partial answer: “look ahead” + Indecision 1; incorrect answer: look at an expert character with Sadness 1 or Anger 1 depending on your training culture. Combine scores and conditions to branch the learning path, offer contextual help or point to a recap. For tracking, use SCORM or VTS Perform.
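The score-conditioned feedback above amounts to a simple mapping from answer quality to a gaze target and a light emotion. Here is a minimal sketch in Python; the function name and score thresholds are invented for illustration and are not fixed by VTS Editor:

```python
def nonverbal_feedback(score: float) -> tuple:
    """Map a normalized answer score in [0, 1] to a (gaze_target, emotion)
    pair, mirroring the branching described above. Thresholds are
    illustrative assumptions, not values imposed by the tool."""
    if score >= 0.8:    # correct answer: direct connection, positive affect
        return ("learner", "Joy 1")
    if score >= 0.4:    # partial answer: neutral distance, mild doubt
        return ("ahead", "Indecision 1")
    # incorrect answer: redirect toward the expert character
    return ("expert_character", "Sadness 1")

# Three answers of decreasing quality get three distinct non-verbal cues:
cues = [nonverbal_feedback(s) for s in (1.0, 0.5, 0.1)]
```

In VTS Editor itself this branching is built visually with Score and condition blocks; the sketch only makes the decision table explicit.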
Accessibility and Multilingual Projects
Complement the gaze with additional cues: subtitles, short messages, subtle icons, light notification sounds. For media, ensure readability (darkened background, good contrast). For visually impaired audiences, sometimes state the intent (“Look at the screen on the left”), then let the gaze guide. For multilingual uses, customize dialogues, instructions, and POIs per language while keeping a consistent gaze logic; use flags to avoid redundant actions.
Gaze Control in VTS Editor: 6 Ready-to-Use Mini Scenarios
Welcome Briefing: Build Connection
Context: safety onboarding, managerial path launch, internal client reception. Goal: create connection and set the scene. Sequence: Speak (welcome message), then a gaze at the learner with a mild Joy 1. With a 0.3s delay and 2.5s duration, the learner feels directly addressed. End with a “look ahead” to signal the transition. Target KPIs: reduced early dropouts, better instruction retention.
Ringing POI: Guide Without Commands
Context: helpdesk, factory, retail. Trigger right-side spatialized sound (0.2s fade-in), gaze to POI phone (0.2s delay, 1.2s duration), then activate interaction on the object. If the learner delays, replay the sound slightly louder and repeat gaze. KPIs: reduced time to first valid click, fewer random clicks.
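The escalating cue loop in this scenario (sound, gaze, then a louder retry if the learner hesitates) can be sketched as follows. This is an illustrative simulation in Python; the function, the volume values, and the `clicked_after_attempt` predicate standing in for the learner's behavior are all invented for the example:

```python
def guide_to_poi(clicked_after_attempt, max_attempts=3, base_volume=0.6):
    """Simulate the cue loop: spatialized sound + gaze at the POI,
    escalating the volume until the learner clicks or attempts run out."""
    log = []
    volume = base_volume
    for attempt in range(1, max_attempts + 1):
        log.append(("spatialized_sound", round(volume, 2)))
        log.append(("gaze", "poi:phone"))
        if clicked_after_attempt(attempt):
            log.append(("interaction", "activated"))
            return log
        volume = min(1.0, volume + 0.2)   # replay slightly louder
    log.append(("hint", "show_text_instruction"))  # fallback after retries
    return log

# A learner who reacts on the second cue:
trace = guide_to_poi(lambda n: n >= 2)
```

The fallback to an explicit text hint after repeated cues keeps the scenario unblocked for learners who miss the non-verbal signal entirely.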
Three-Person Dialogue: Clarify Turn-Taking
Context: performance review with observer, mediation, client meeting. Sequence: Speak (manager) → gaze at employee → Speak (employee) → gaze at manager → Speak (observer) → gaze at learner to prompt input. Short delays (0.2 to 0.4s), durations 1 to 2s. Add occasional head nods for rhythm.
Micro-Feedback After a Choice
Context: sales, compliance, client relations. After a Phrase Choice, condition gaze and emotion based on response score. Good choice: gaze at learner (1s) + Joy 1; average choice: “look ahead” + Indecision 1; bad choice: look at expert + Sadness 1. Follow up with a score check to guide deeper learning or progression.
Product Demo: Eye Contact Then Focus
Context: sales training, technical support, internal marketing. Start with speech, gaze at the learner (2s), then Media in scenery on a product screen, gaze at that POI (3 to 4s) while explaining the value. A light “pop” sound during the switch enhances clarity.
360 Scenery: Map Out Exploration
Context: industrial site, store, emergency room. Use Force 360 to aim toward Zone A (0.8 to 1.2s animation, waiting until the end), keep Freeze 360 on for free navigation, then place one or two gazes on key POIs (1 to 2s each). If needed, briefly regain control to jump to Zone B. To find matching characters, explore VTS Editor characters.
Proven Impact: Engagement and Measurement
Implementation Effort and Educational Benefits
Effort is low: gaze settings can be added in minutes to existing scenes. The impact on instruction clarity, storyline flow, and professional behavior modeling is high. Less text, more efficient visual cues.
Measurement and Real-World Results
Use Score and Verify Score blocks to open/close branches based on performance. Track progress, success rates, and time spent via SCORM or VTS Perform. For example, the Manpower case saw engagement rates rise from 7% to 67% with immersive modules built in VTS Editor.
Beyond case studies, research confirms that social and non-verbal cues—such as gaze—support attention and motivation, boosting learning in multimedia environments (de Koning et al., 2009).
VTS Editor Gaze Block Implementation Checklist
- Clarify intent: guiding, validating, signaling, questioning, soft skill modeling? Note it in the block.
- Select the right target: learner, character, POI, straight ahead, based on the desired effect.
- Set delay and duration: 0.2 to 0.6s for a natural feel; 0.5 to 6s depending on whether you need a subtle cue or sustained observation.
- Synchronize: align with Speak, Emotion, Sound, and Media blocks.
- Test quickly: run internal A/B testing on 2–3 durations to find which reads most clearly.
- Measure: track success, time to action, irrelevant clicks; iterate.
- Capitalize: build a micro-recipe kit (e.g., gaze to POI + sound, post-choice gaze + light emotion).
Apply these best practices early in the design of your VTS Editor scenarios and monitor performance improvements via your LMS or VTS Perform. To explore the tool and its potential, visit the VTS Editor page or request a free trial at Virtual Training Suite.
A gaze is not a decorative “extra”: it’s a lever for attention, understanding, and engagement. Used wisely, it transforms a simple series of screens into a guided and memorable experience—fully aligned with the principles of signaling and attention guiding recognized in research (Frischen, Bayliss & Tipper, 2007).