The first article in this series was meant to give you a basic overview of some of the most important practical UX principles I’ve researched and built. While not exhaustive (I save that for class), it provides a glimpse into some of the VIPs (virtual interaction points) and the constructs that are potentially available to use. Like any well-crafted experience, you need to understand all the basic tenets of the audience using a product.
Planning For The Unknown Is Tricky
When crafting your onboarding experience, there is a lot of technology behind the scenes. Even for the people developing it, crafting and building the right experience with broad customer appeal is challenging. So let’s take a look at how we might go about building an onboarding experience. As a holographic UX designer (immersive experience designer), it’s best to start with what is known.
A VIP can contain multiple start-up sequences. It’s a safe bet that you are going to want to get the most out of your experience by crafting objects that interact with the real world. So we have to apply some computer-vision learning to the VEX (virtual experience). Our goal during this process is to give context and potential meaning (not exact meaning) to what the spatial maps dictate as planes. These can be identified as possible areas where objects can reside, but keep in mind there is always a large margin for error in complex VIPs. This means you need fallbacks in place to turn that margin of error into user success. This will get better over time.
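The classify-with-fallback idea can be sketched in a few lines. This is plain Python with made-up fields and thresholds, not a real spatial-mapping API: we assign a plane a *potential* label only when the classifier is confident enough, and otherwise fall back to "unknown" (keep scanning, or ask the user).

```python
# A minimal sketch: classify detected planes, with a fallback for low confidence.
from dataclasses import dataclass

@dataclass
class Plane:
    id: str
    normal: tuple        # unit surface normal (x, y, z); y up
    height_m: float      # height above the floor, in metres
    confidence: float    # classifier confidence, 0.0 - 1.0

def classify(plane: Plane, min_confidence: float = 0.7) -> str:
    """Give context and *potential* meaning to a plane -- never exact meaning."""
    if plane.confidence < min_confidence:
        return "unknown"           # fallback: keep scanning, or prompt the user
    nx, ny, nz = plane.normal
    if abs(ny) > 0.9:              # roughly horizontal surface
        return "floor" if plane.height_m < 0.3 else "table"
    return "wall"                  # roughly vertical surface

planes = [
    Plane("p1", (0.0, 1.0, 0.0), 0.0, 0.95),   # confident, at floor height
    Plane("p2", (0.0, 1.0, 0.0), 0.75, 0.90),  # confident, at table height
    Plane("p3", (1.0, 0.0, 0.0), 1.2, 0.40),   # low confidence -> fallback
]
labels = [classify(p) for p in planes]
```

The thresholds (0.7 confidence, 0.3 m floor cutoff, 0.9 normal dot-product) are illustrative assumptions; in a real app you would tune them against your device's mapping data.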
Listed below is a concept project we originally submitted to the Unity competition.
Once you have identified these planes, you have the ability to apply classification. So let’s pretend in this scenario we want to create an application that will allow students to dissect a virtual frog. Of course, following the nomenclature of all HoloApps, it’s HoloFrog v1.0 (joke). For this example let’s think about crafting this as a single-user app, but we could easily connect this context to other students to allow them to share in the virtual fun. A virtual quest might be a nice add-on later (scope creep).
- First-Time Students
- Maybe Instructors
Spatial Scene 1A
- The Classroom
Typically I would write some user stories here, but I’m going to jump ahead to the meat. I want the students to be able to place the live frog on the floor and then have it move to the table. Why live? I thought it would be more fun to see the physical moving parts of the frog first and then think about the muscles, etc., that make everything work. Before I get to that, these are some of the items running through my mind.
I know I’m going to first need to understand the space. I also know an onboarding sequence doesn’t have to be boring.
- We need A UI (froggy colors perhaps)
- We need a Frog animation sequence (animators / and more fun)
- We need multiple frog state models & prefabs to work within the scene transitions.
- We need to define our VEXs (I think of these as functional workflows through the scene shifts); we will discuss these later.
- We need to orient & acclimate the actor (in this case a child).
- We need to think about what capabilities we want to use. Let’s break it down into simple chunks of storyboard.
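The "multiple frog state models working within the scene transitions" bullet is really a small state machine. Here is a minimal sketch in plain Python (the state and event names are hypothetical, and a Unity build would express this differently, e.g. with animator states and prefab swaps):

```python
# A minimal sketch of the frog's states and the legal events between scene shifts.
TRANSITIONS = {
    "loading":    {"mapped": "exploring"},    # spatial map complete
    "exploring":  {"placed": "on_floor"},     # student taps a placement spot
    "on_floor":   {"guided": "on_table"},     # frog leads the way to the desk
    "on_table":   {"begin": "dissection"},    # dissection workflow starts
    "dissection": {"reset": "exploring"},     # start over with a new frog
}

def step(state: str, event: str) -> str:
    """Advance the frog's state; ignore events that are not legal here."""
    return TRANSITIONS.get(state, {}).get(event, state)

# Walk the happy path from app launch to the dissection.
state = "loading"
for event in ["mapped", "placed", "guided", "begin"]:
    state = step(state, event)
```

Keeping illegal events as no-ops (rather than errors) is a deliberate choice for an onboarding flow: a stray tap during loading should never break the sequence.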
The frog will be our scene actor and our guide all the way to the table. I would typically sketch this out to capture the key-frame moments. It’s important to note that an application like this would run primarily in a setting where we might not know all the variables.
- We need to locate an empty desk
- We need to locate the floor and any walls
- We need to identify any type of obstructions and remove those from the equation
- We need to provide instructions on app usage (we are going to use voice, as I hate reading lots of text).
- We need a start-up view
- We need to think about how the frog might interact with space (in a fun inquisitive way)
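The first three bullets (find an empty desk, find the floor, remove obstructions from the equation) can be sketched as one filtering pass over the scanned planes. This is a toy illustration with made-up dictionary fields and a made-up 0.5 m² minimum desk size, not a real HoloLens API:

```python
# A minimal sketch: filter scanned planes down to a clear desk and the floor.
def pick_surfaces(planes):
    """planes: list of dicts with 'label', 'area_m2', and 'obstructed' keys."""
    usable = [p for p in planes if not p["obstructed"]]   # remove obstructions
    desks = [p for p in usable
             if p["label"] == "table" and p["area_m2"] >= 0.5]
    floor = next((p for p in usable if p["label"] == "floor"), None)
    # Prefer the largest clear desk as the dissection surface.
    desk = max(desks, key=lambda p: p["area_m2"], default=None)
    return desk, floor

planes = [
    {"label": "table", "area_m2": 0.4, "obstructed": False},  # too small
    {"label": "table", "area_m2": 1.2, "obstructed": True},   # cluttered desk
    {"label": "table", "area_m2": 0.9, "obstructed": False},  # our pick
    {"label": "floor", "area_m2": 6.0, "obstructed": False},
]
desk, floor = pick_surfaces(planes)
```

Note the fallback behaviour: if no clear desk exists, `desk` is `None`, which is exactly the case where the app should fall back to voice instructions asking the student to clear a surface.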
Child First Time User (audience)
After the teacher assists, the child puts the HoloLens on their head. Most likely you will have to open an app (unless you have kiosk mode). Now that the app is open, we display a simple loading-sequence animation. Let’s call upon our 3D animated HoloFrog to play leap-frog across the terrain (a pseudo-Frogger clone). The movement of the frog across the space helps us calculate and determine the area, size, obstructions, etc. (this is our spatial-mapping phase).
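The "leap-frog as spatial mapping" trick boils down to tracking which patches of the room the frog has visited. A toy grid version, assuming 0.5 m cells and invented hop coordinates (real mapping works on mesh data, not a grid):

```python
# A minimal sketch: count the 0.5 m grid cells the frog has landed in as a
# rough proxy for how much of the space has been mapped so far.
def mapped_cells(hops, cell_m=0.5):
    """hops: list of (x, z) floor positions, in metres, the frog has landed on."""
    return {(int(x // cell_m), int(z // cell_m)) for x, z in hops}

hops = [(0.1, 0.1), (0.6, 0.2), (1.2, 0.9), (0.15, 0.12)]  # last hop repeats a cell
coverage = len(mapped_cells(hops))
```

A coverage number like this is also what you would feed the "stats over the experience" idea below: the loading screen can show progress while the mini frogs hop.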
My voiceover comes over the HoloLens.
“Hello, welcome to a safe way to dissect and examine the body of a frog. Let’s get started. Your frog is a curious creature and wants to explore.” (Overlay a tap-event cursor: “tap here.”) Scene note: prompt “here or here” to place your frog (on the floor). Highlight only the spots that are not going to be our dissection space as possible placement locations, and mark the dissection space itself as not possible. We are going to need an animation sequence for our frog (jumping, and more)…
- The spatial mapping has to occur, so it makes sense to make it an inclusive and fun part of the interaction. Having the frog leap from plane to plane as each becomes recognized as a surface might be a great, fun approach. We could also show some stats over the experience (total frogs saved today by HoloFrog) and have all of those frogs as mini frogs hopping across the loading spatial map. The point here is to engage from the moment we first connect with the VEX (that means through the loading/mapping initialization sequence).
- I prefer voice and a point-and-orient cursor method for this example, as it will showcase in a very visual way how we move progressively through the application experience.
- At some point we may need a menu system. I envision this menu system becoming part of the plane that we identified as the working space for the table, depending upon the available space. The reason we would place a menu construct in the environment is to give our students a point and place where interaction can occur in context with the frog dissection. If we place it too far away, the focal point will be out of context. It’s for this reason I would not use a tag-along (system) or HUD.
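The "not too far away" constraint on the menu can be made concrete with a distance check. The comfort range below (roughly 0.85–2 m) is an assumption for illustration, as are the function name and coordinates; this is plain Python, not a HoloLens SDK call:

```python
# A minimal sketch: anchor the menu to the table's edge only if that keeps it
# within a comfortable focal range, instead of using a tag-along or HUD.
import math

def menu_anchor(head, table_edge, min_m=0.85, max_m=2.0):
    """Return the table-edge anchor if it sits in the comfort zone, else None.

    head, table_edge: (x, y, z) positions in metres.
    min_m / max_m: assumed comfortable hologram distances.
    """
    d = math.dist(head, table_edge)
    return table_edge if min_m <= d <= max_m else None

# Table edge roughly 1.3 m from the student: in context, so anchor the menu there.
anchor = menu_anchor((0.0, 1.6, 0.0), (0.0, 0.9, 1.1))
```

Returning `None` for an out-of-range edge is the cue to pick a nearer point on the working plane, keeping the menu in context with the dissection rather than floating in the student’s view.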
Later we will take a look at understanding the possible similarities between existing mobile design patterns and the evolution of these patterns into new levels of system interaction and design.