WorldViz User Forum

WorldViz User Forum (https://forum.worldviz.com/index.php)
-   Vizard (https://forum.worldviz.com/forumdisplay.php?f=17)
-   -   vizcave head tracking curiosity (https://forum.worldviz.com/showthread.php?t=4220)

victorqx 04-27-2012 05:23 AM

vizcave head tracking curiosity
 
Dear Sir/Madam,

For the past couple of weeks I have had the fortune of getting to know Vizard better. I have seen a number of demonstrations of CAVE systems that use Vizard, and I'm in the process of building our own PowerWall, in order to convince other people in my company that this is really useful technology and to persuade them to support my quest for our own CAVE.

I find the examples and forums that I have found here to be very helpful and I'm getting along quite well with building our PowerWall. There is one thing that has me stumped though and that is the head tracking. I do hope that my question hasn't been asked before, but my searches have not been able to turn up an answer.

I have been trying out the example found here: http://docs.worldviz.com/vizard/Vizcave.htm
I simply run this single-display script on my desktop computer. There are many ways to demonstrate the behavior that confuses me, but since you can't be here in person, I will try to make do with a description.

I start the script and use the mouse to rotate so that I am facing the right hand wall (the small wall with the 'Mona Lisa' is just visible on the left hand side of the screen). If I now use the 'w' key to simulate moving my head forward, the effect that I get is confusing to me. It seems like the warping effect that occurs is what I would expect to see if my screen were configured as the left-hand wall of a CAVE.

According to the documentation I should use the caveorigin as a node to move myself around in the virtual environment. But I see confusing behavior as soon as I start rotating around this caveorigin (in this case with the mouse) and then moving forward/backward with the 'w' and 's' keys.

Could you please try out the use-case scenario that I have described and let me know how it works for you? Am I doing something wrong, or is there some setting that I am overlooking? If it would help, I will gladly make a small video demonstrating this behavior if I know where to send it.

Many thanks and kind regards,

Victor

farshizzo 04-27-2012 10:10 AM

In that example, you should use the mouse and arrow keys to move the virtual viewpoint around. Using the 'w' and 's' keys will simulate moving your physical head forward/backward, as if you were actually standing in front of the powerwall. If you were to view the image from the perspective of the simulated physical head location it would not appear warped.

Gladsomebeast 04-27-2012 12:35 PM

As the user's tracked head gets closer to a "powerwall", the image appears to shrink if you are just looking at the desktop view. In my mind, the conceptual idea that justifies the shrinking image is that as you get closer to this "window on the virtual world", your field of view increases. To fit more data on a fixed-size screen, the image has to shrink from a fixed perspective. If you were actually being tracked and moving closer to the screen, the shrinking image would be compensated for by your eyes getting closer to the screen, so the perceived scale of the virtual world would not change.
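To put a number on that intuition, here is a quick back-of-the-envelope calculation in plain Python (the 3 m wall width and viewing distances are made-up figures, not from any real setup):

```python
import math

def horizontal_fov(screen_width, eye_distance):
    """Horizontal field of view (in degrees) subtended by a flat screen
    for an eye centered in front of it."""
    return math.degrees(2 * math.atan((screen_width / 2.0) / eye_distance))

# Hypothetical figures: a 3 m wide wall, viewed from 2 m and then 0.5 m away.
far_fov = horizontal_fov(3.0, 2.0)
near_fov = horizontal_fov(3.0, 0.5)
print(round(far_fov, 1), round(near_fov, 1))  # ~73.7 vs ~143.1 degrees
```

Stepping in from 2 m to 0.5 m nearly doubles the field of view the wall has to cover, so roughly twice as much scene gets squeezed onto the same pixels and everything drawn there shrinks accordingly.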

victorqx 05-01-2012 06:13 AM

Thanks to both farshizzo and Gladsomebeast for answering my question. Some colleagues and I looked at this together, and we all agreed that the example I was trying to describe wasn't very representative.

I'll give a little more background on what we're doing. I have a Kinect here for very simple head tracking, and I'm also using a Space Navigator for navigation. At the moment I have two 42" televisions set up at a 90 degree angle on a table, which I'm using to try out a few things. This week I intend to order six 42" 3D televisions to build a 'mini-CAVE'. The idea is to put these in a 3x2 setup, where one row is pretty much at eye level and the other row is below the first and tilted at a 45 degree angle towards the user.

When playing around with our test setup, a lot of colleagues who came by mentioned that objects seemed to be 'moving away from them' when they approached the televisions. At first I thought this was because of the window analogy, as Gladsomebeast has explained. But people insisted that things were moving away from them, so I tried out some things. When I stopped working with the 'caveorigin' node (from the cave example) and started moving the scene itself around instead, people stopped mentioning that things seemed to be 'moving away from them'. I've been verifying this with a colleague of mine today, and we both agree that there does seem to be a perspective difference between moving the caveorigin node and moving the scene itself, mostly visible when turning at non-90 degree angles. We're still arguing whether this is a mind trick or a real effect.

Unfortunately, I have been unable to create pictures or a movie that demonstrate this effect. The scene where it is most noticeable is one we are working on here that is still classified, so I can't post it.

To roughly sketch what we are looking at: the scene contains a window. We align the window's frame with the televisions in one position and then start moving. When moving with the caveorigin node, the frame of the window moves backward as we approach the screen; when moving the scene itself, the frame stays where it is supposed to stay.

I'll keep working on a convincing set of pictures or video to demonstrate this effect and hope to be able to get back to you.

Kind regards,

Victor

farshizzo 05-01-2012 11:26 AM

It's difficult to say what the problem could be without seeing any code. If I had to guess, I would say the Kinect is not accurate enough to provide head tracking within a CAVE. If the Kinect is not properly calibrated, or if there is a large enough latency, then that might explain the issues you are experiencing.

victorqx 05-01-2012 11:55 PM

What I've noticed is that the Kinect is indeed not an incredibly accurate tracking system. For our purposes it is sufficient at the moment, but in the future we will want a much more accurate system. However, the scenario I described didn't seem to have anything to do with the Kinect, since I didn't change the tracking solution between the scripts where I used either the caveorigin or the scene node.

Some code of our setup:
The screens are set up at an angle of 90 degrees. The Kinect sits directly in the middle, which means that it 'looks' at a 45 degree angle to each screen. Each screen is therefore positioned diagonally in the cave coordinates, like so:

Code:

import viz
import vizcave

#""" Setup CAVE walls
W = 0.94
H = 0.531
D = 0.94
TABLEHEIGHT = 0.79
SQRT2 = 0.7071  # actually 1/sqrt(2): the screens stand at 45 degrees

# Corner coordinates of the two screens (C0 and C4 are unused below)
C0 = -SQRT2*W, H+TABLEHEIGHT, -SQRT2*D
C1 = -SQRT2*W, H+TABLEHEIGHT, -SQRT2*D
C2 =  0,       H+TABLEHEIGHT,  0
C3 =  SQRT2*W, H+TABLEHEIGHT, -SQRT2*D
C4 = -SQRT2*W, TABLEHEIGHT,   -SQRT2*D
C5 = -SQRT2*W, TABLEHEIGHT,   -SQRT2*D
C6 =  0,       TABLEHEIGHT,    0
C7 =  SQRT2*W, TABLEHEIGHT,   -SQRT2*D

viz.go(viz.FULLSCREEN)
viz.mouse.setVisible(viz.OFF)

cave = vizcave.Cave()

FrontWall = vizcave.Wall(upperLeft=C1, upperRight=C2, lowerLeft=C5, lowerRight=C6, name='Front Wall')
cave.addWall(FrontWall, mask=viz.MASTER)

# Shift the right wall's corners to account for the screen bezel
BEZEL = 0.05
C2 = 0+BEZEL,       H+TABLEHEIGHT,  0-BEZEL/2.0
C3 = SQRT2*W+BEZEL, H+TABLEHEIGHT, -SQRT2*D-BEZEL/2.0
C6 = 0+BEZEL,       TABLEHEIGHT,    0-BEZEL/2.0
C7 = SQRT2*W+BEZEL, TABLEHEIGHT,   -SQRT2*D-BEZEL/2.0
RightWall = vizcave.Wall(upperLeft=C2, upperRight=C3, lowerLeft=C6, lowerRight=C7, name='Right Wall')
cave.addWall(RightWall, mask=viz.CLIENT1)
#EO Setup CAVE walls"""
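As a sanity check on the diagonal corner math (plain Python, outside Vizard): the 0.7071 factor is 1/sqrt(2), so the distance between the front wall's upper corners should come out to the physical screen width W:

```python
import math

W = 0.94
D = 0.94
H = 0.531
TABLEHEIGHT = 0.79
SQRT2 = 0.7071  # 1/sqrt(2), from the 45 degree screen rotation

# Upper corners of the front wall, as in the setup code
C1 = (-SQRT2 * W, H + TABLEHEIGHT, -SQRT2 * D)
C2 = (0.0, H + TABLEHEIGHT, 0.0)

front_wall_width = math.dist(C1, C2)
print(round(front_wall_width, 3))  # ~0.94, the physical screen width
```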

In the code below I am using the caveorigin for changing the location of the user within the virtual environment. In the scenario where I change the scene instead, most of the code stays the same, except that I add an extra node (with viz.addGroup()) and make it the parent of all other nodes. By moving/rotating this root node I can move the entire scene.

The Kinect is added to this mix like this:

Code:


final = viz.addGroup()

#""" Use Kinect for headtracking
HEAD = 0
vrpn = viz.addExtension('vrpn7.dle')
marker = vrpn.addTracker('Tracker0@localhost', HEAD)
linkedView = viz.link(marker, final)  # viz.MainView
linkedView.setMask(viz.LINK_POS)   # position only, no orientation
linkedView.postScale([1, 1, -1])   # flip the Kinect's Z axis
linkedView.postTrans([0, 1.5, 0])  # offset for the sensor's mounting position
# Use Kinect for headtracking"""

cave.setTracker(pos=final)
caveorigin = vizcave.CaveView(final)
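The link operators above boil down to a fixed transform on each Kinect sample: keep the position, flip Z, then add the 1.5 m offset. The same math in plain Python (the axis-flip interpretation is my reading of the postScale, and the sample values are made up):

```python
def kinect_to_cave(pos, y_offset=1.5):
    """Mirror the link setup above: position only (viz.LINK_POS), flip
    the Z axis (postScale([1, 1, -1])), then raise by the sensor offset
    (postTrans([0, 1.5, 0]))."""
    x, y, z = pos
    return (x, y + y_offset, -z)

# A head tracked 0.3 m above the sensor and 2 m in front of it:
print(kinect_to_cave((0.0, 0.3, 2.0)))  # -> (0.0, 1.8, -2.0)
```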

And we use the Space navigator like so:

Code:

import vizact
import vizspace

#"""use Space Navigator for movement
def spacemove(e):
    moveTo = vizact.move(e.pos[0]*0.333, e.pos[1]*0.1, e.pos[2]*0.333, e.elapsed*0.5)
    caveorigin.clearActionList()
    caveorigin.addAction(moveTo)
viz.callback(vizspace.TRANSLATE_EVENT, spacemove)

def spacerot(e):
    # Rotate about the vertical axis; flip the sign when the axis is
    # reported pointing down
    angles = caveorigin.getAxisAngle()
    if angles[1] < 0:
        angles[3] = -angles[3]
    caveorigin.setAxisAngle([0, 1, 0, e.ori[1]*e.elapsed + angles[3]])
viz.callback(vizspace.ROTATE_EVENT, spacerot)
#EO use Space Navigator for forward/backward/rotation and keyboard PageUp/PageDown for up/down"""

So, with this setup I can add things to the scene, have the Kinect do head tracking, and use the Space Navigator for movement within the virtual environment. As I mentioned, we seem to observe a difference in experience between using the caveorigin and the root scene node for navigation. It is a somewhat subtle difference, but people kept indicating that they felt objects 'moving away' from them when using the caveorigin, yet thought they remained static (although still 'zooming out') when using the scene node.
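For what it's worth: for pure translation, moving the caveorigin by +t and moving the scene root by -t should give identical eye-relative geometry, which is easy to check in plain Python (all positions here are made-up numbers):

```python
def relative_to_eye(obj_pos, eye_pos):
    """Position of an object as seen from the eye (component-wise difference)."""
    return tuple(o - e for o, e in zip(obj_pos, eye_pos))

obj = (1.0, 1.5, 4.0)  # some object in the scene
eye = (0.0, 1.8, 0.0)  # tracked eye position
t = (0.0, 0.0, 2.0)    # a step forward

# Case 1: move the viewer (caveorigin) forward by +t
move_viewer = relative_to_eye(obj, tuple(e + d for e, d in zip(eye, t)))

# Case 2: keep the viewer, pull the whole scene back by -t
move_scene = relative_to_eye(tuple(o - d for o, d in zip(obj, t)), eye)

print(move_viewer == move_scene)  # -> True
```

Note that this equivalence only holds for translation; rotations pivot about different points (the cave origin versus the scene root's origin), which might be related to the difference we see when turning at non-90 degree angles.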

Again, I'm having a hard time reproducing this in a movie or with camera stills. We'll be getting our 3D screens soon and I'll see if that clarifies things further. Perhaps it has something to do with the fact that our Kinect provides position tracking only and we don't use any rotation for our head tracking?

Thanks again and kind regards,

Victor

Gladsomebeast 05-02-2012 03:50 PM

Hi Victor,

I'm betting we simply need to get the wall definitions and Kinect tracker coordinates working nicely together. Trust vizcave: when things shrink on the walls, this should be compensated for by the user's eyes getting closer to the screen.

Let's set up a third-person view of the scene. We will add a 3D model to represent the user's head position. The cave.drawWalls() function will come in handy to show our wall definitions. Once this is set up, we can test by having the user stand in known physical locations and checking that our virtual third-person view shows the user's representation in the correct position relative to the walls.

I suspect that the problem comes from the offset you apply to the Kinect data. We'll need to figure out the postTrans, and possibly postEuler, so that when a person stands in the middle of the cave, viewtracker.getPosition() returns [0, height of the person, 0].

Here is some code that adds a third-person view to the powerwall example script:


Code:

import viz
import vizcave
import viztracker

#Dimension of PowerWall in meters
WIDTH      = 3.0
HEIGHT      = 3.0
DISTANCE    = 2.0

#Initialize graphics window
viz.go()

#Create single power wall
PowerWall = vizcave.Wall(  upperLeft=(-WIDTH/2.0,HEIGHT,DISTANCE),
                            upperRight=(WIDTH/2.0,HEIGHT,DISTANCE),
                            lowerLeft=(-WIDTH/2.0,0.0,DISTANCE),
                            lowerRight=(WIDTH/2.0,0.0,DISTANCE),
                            name='Power Wall' )

#Create cave object with power wall
cave = vizcave.Cave()
cave.addWall(PowerWall)
cave.drawWalls()

#Create tracker object using the keyboard (WASD keys control the viewpoint, the user's eye location)
#Make the starting location for the user's eye above origin
viewtracker = viztracker.KeyboardPos()
viewtracker.setPosition(0.0,1.8,0)

#Pass the viewpoint tracker into the cave object so it can be automatically updated
cave.setTracker(pos=viewtracker)

#visualize tracker
eyeTrackerRepresentation = viz.add('biohead_eyes.vzf')
viz.link(viewtracker, eyeTrackerRepresentation)


#Create CaveView object for manipulating the entire cave environment
#The caveorigin is a node that can be adjusted to move the entire cave around the virtual environment
caveorigin = vizcave.CaveView(viewtracker)

#XXX: doesn't work together with cave.drawWalls()
#Create another tracker using the keyboard and mouse (arrow keys adjust position, mouse changes orientation)
#origintracker = viztracker.KeyboardMouse6DOF()
#Link the keyboard/mouse so that it moves the cave and user around the virtual environment
#originlink = viz.link(origintracker, caveorigin)


#Add gallery environment model
viz.add('gallery.ive')


BirdEyeWindow = viz.addWindow()
BirdEyeWindow.fov(60)
BirdEyeWindow.setPosition([0,1])
BirdEyeWindow.setSize(.5, .5)
BirdEyeView = viz.addView()
BirdEyeWindow.setView(BirdEyeView)
BirdEyeView.setPosition([0,6,0])
BirdEyeView.setEuler([0,90,0])


#only show eyeTrackerRepresentation in 3rd person view
thirdPersonMask = viz.addNodeMask()
eyeTrackerRepresentation.setMask(thirdPersonMask)
BirdEyeWindow.setCullMask(thirdPersonMask)

for z in range(-2, 2):
        viz.add('ball.wrl', pos=[0, 1, z])


victorqx 05-11-2012 12:13 AM

Hello Gladsomebeast,

One key thing I noticed in your reply is where the caveorigin is meant to be: I now understand it is the center of the CAVE, whereas I had interpreted it as the lower-left corner of the CAVE's front wall. I have recalibrated the Kinect to take this into account and I do believe things are better.
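The recalibration itself is just an origin shift. As a sketch in plain Python (the wall dimensions here are made up, not our real ones):

```python
def change_origin(pos_old, old_origin_in_new_frame):
    """Re-express a tracked position when the assumed cave origin moves,
    e.g. from the front wall's lower-left corner to the cave center."""
    return tuple(p + o for p, o in zip(pos_old, old_origin_in_new_frame))

# Hypothetical cave: a 3 m wide front wall 1.5 m ahead of the center, so
# the wall's lower-left corner sits at (-1.5, 0.0, 1.5) in center coordinates.
head_corner = (1.5, 1.8, -1.5)  # head position measured from that corner
head_center = change_origin(head_corner, (-1.5, 0.0, 1.5))
print(head_center)  # -> (0.0, 1.8, 0.0): standing in the middle of the cave
```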

I'm very busy with gesture control at the moment and still waiting for the 3d screens. As soon as those screens are in, I can really verify whether things are working 'naturally' now. Thanks again for all your help! I'll keep you posted on this thread when new developments happen.

victorqx 05-29-2012 08:31 AM

Just a quick remark from my side: the 3D screens are in and I've been able to play around with a setup of two of them. The whole 'weird' behavior I was talking about is gone! It's pretty cool to see how amazingly well this is working now.

As a side note, this leads me to believe that in 2D mode your eyes don't 'want' to believe that you're looking through a 'window' when looking at the screens.

Anyway, thanks for all your help!

Gladsomebeast 05-29-2012 07:16 PM

Sweet! Good work.

Making the room dark will help with the 3D effect as well. Seeing things behind and beside the screen breaks the depth effect a little.

