vizcave head tracking curiosity
Dear Sir/Madam,
For the past couple of weeks I have had the fortune of getting to know Vizard better. I have seen a number of demonstrations of CAVE systems that use Vizard, and I am in the process of building our own PowerWall. This is partly to convince other people in my company that this is genuinely useful technology, and to persuade them to support my quest for our own CAVE. I find the examples and forums here very helpful, and I'm getting along quite well with building our PowerWall. One thing has me stumped, though, and that is the head tracking. I do hope that my question hasn't been asked before, but my searches have not turned up an answer.

I have been trying out the example found here: http://docs.worldviz.com/vizard/Vizcave.htm I simply run this single-display script on my desktop computer. I have many ways of demonstrating the behavior that confuses me, but without actually being there, I will try to make do with a description. I start the script and use the mouse to rotate so that I am facing the right-hand wall (the small wall with the 'Mona Lisa' is just visible on the left-hand side of the screen). If I now use the 'w' key to simulate moving my head forward, the effect that I get is confusing to me. The warping that occurs is what I would expect to see if my screen were configured as the left-hand wall of a CAVE.

According to the documentation I should use the caveorigin as a node to move myself around in the virtual environment. But I see confusing behavior as soon as I start rotating around this caveorigin (in this case with the mouse) and then moving forward/backward with the 'w' and 's' keys. Could you please try out the use case I have described and let me know how it works for you? Am I doing something wrong, or is there some setting that I am overlooking? If it would help, I will gladly make a small video demonstrating this behavior if I know where to send it.
Many thanks and kind regards, Victor |
In that example, you should use the mouse and arrow keys to move the virtual viewpoint around. Using the 'w' and 's' keys will simulate moving your physical head forward/backward, as if you were actually standing in front of the powerwall. If you were to view the image from the perspective of the simulated physical head location it would not appear warped.
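The distinction comes from the example stacking two trackers: one simulates the physical head inside the cave, the other moves the whole cave through the virtual world. A minimal sketch of that pattern, following the Vizcave documentation page linked above (treat the exact tracker class names and key bindings as assumptions):

```python
import viz
import vizcave
import viztracker

viz.go()

cave = vizcave.Cave()
# ... wall definitions added here with cave.addWall(...) ...

# Simulated physical head: 'w'/'s' move it toward/away from the screen.
# The cave re-derives the off-axis projection from this position.
head_tracker = viztracker.Keyboard6DOF()
cave.setTracker(pos=head_tracker)

# caveorigin moves the entire cave (walls, user and all) through the
# scene; this is what the mouse and arrow keys drive for navigation.
caveorigin = vizcave.CaveView(head_tracker)
walk_tracker = viztracker.KeyboardMouse6DOF()
viz.link(walk_tracker, caveorigin)
```

Moving the head tracker changes the projection (the "warping"), while moving the caveorigin changes where the cave sits in the virtual world without warping the image.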
|
As the user's tracked head gets closer to a "powerwall", the image appears to shrink if you are just looking at the desktop view. In my mind, the conceptual idea that justifies the shrinking image is that as you get closer to this "window on the virtual world", your field of view increases. To fit more data on a fixed-size screen, the image starts to shrink from a fixed perspective. If you were being tracked and moving closer to the screen, the shrinking size of the image is compensated for by you getting closer to the screen, so the perceived scale of the virtual world does not change.
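The window analogy can be put in numbers: a flat wall of width W viewed from distance d subtends a horizontal field of view of 2·atan(W / 2d), which grows quickly as you approach. A small worked example (the 3 m wall width is an invented figure for illustration):

```python
import math

def wall_fov_deg(width_m, distance_m):
    """Horizontal field of view (degrees) subtended by a flat wall of the
    given width when the eye is distance_m in front of its center."""
    return math.degrees(2.0 * math.atan(width_m / (2.0 * distance_m)))

# A 3 m wide powerwall seen from various distances; at 1.5 m the wall
# subtends exactly 90 degrees, since atan(1) is 45 degrees.
for d in (3.0, 1.5, 0.5):
    print(f"{d:.1f} m -> {wall_fov_deg(3.0, d):.1f} deg")
```

More of the virtual world must be squeezed onto the same pixels as the field of view widens, which is exactly the "shrinking" a stationary desktop observer sees.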
|
Thanks to both farshizzo and Gladsomebeast for answering my question. I looked at this with some colleagues of mine, and we all agreed that the example I was trying to get across wasn't very representative.
I'll give a little more background on what we're doing. I have a Kinect here for very simple head tracking, and I'm also using a Space Navigator for navigation. At the moment I have two 42" televisions set up at a 90-degree angle on a table, which I'm using to try out a few things. This week I intend to order six 42" 3D televisions to build a 'mini-CAVE'. The idea is to put these in a 3x2 setup, where one row is pretty much at eye level and the other row is below the first and tilted at a 45-degree angle towards the user.

When playing around with our test setup, a lot of colleagues who came by mentioned that objects seemed to be 'moving away from them' when they approached the televisions. At first I thought this was because of the whole window analogy, as Gladsomebeast has explained. But people insisted that things were moving away from them, so I tried some things out. When I stopped working with the 'caveorigin' node (from the cave example) and instead started moving the scene itself around, people stopped mentioning that things were 'moving away from them'. I verified this with a colleague of mine today, and we both agree that there does seem to be a perspective difference between moving the caveorigin node and moving the scene itself, mostly visible when turning at non-90-degree angles. We're still arguing about whether this is a mind trick or a real effect.

Unfortunately, I have been unable to create pictures or a movie that demonstrate this effect. The scene where it is most noticeable is one we are working on here that is still classified, so I can't post it. Roughly sketched, it is a scene containing a window. We place the window on top of the televisions in one position and then start moving. When moving with the caveorigin node, the frame of the window moves backward as we approach the screen; when moving the scene itself, the frame stays where it is supposed to stay.
I'll keep working on a convincing set of pictures or video to demonstrate this effect and hope to be able to get back to you. Kind regards, Victor |
It's difficult to say what the problem could be without seeing any code. If I had to guess, I would say the Kinect is not accurate enough to provide head tracking within a CAVE. If the Kinect is not properly calibrated, or if there is a large enough latency, then that might explain the issues you are experiencing.
|
What I've noticed is that the Kinect is indeed not an incredibly accurate tracking system. For our purposes it is sufficient at the moment, but in the future we will want a much more accurate system. However, the scenario I was describing didn't seem to have anything to do with the Kinect, since I didn't change the tracking solution between the scripts where I used either the caveorigin or the scene node.
Some code from our setup: the screens are set up at a 90-degree angle, and the Kinect is placed directly in the middle, which means it 'looks' at each screen at a 45-degree angle. In this setup, each screen is defined diagonally in cave coordinates, like so:

Code:
#""" Setup CAVE walls

The Kinect is added to this mix like this:

Code:

And the Space Navigator drives the movement:

Code:
#"""use Space Navigator for movement

Again, I'm having a hard time reproducing this in a movie or with camera stills. We'll be getting our 3D screens soon and I'll see if that improves things. Perhaps it has something to do with the fact that our Kinect does position tracking only, and we don't use any rotation for our head tracking? Thanks again and kind regards, Victor |
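Since the code blocks above are truncated, here is a hedged sketch of what "diagonal" wall definitions for two screens meeting at 90 degrees could look like. All dimensions are invented for illustration, and the corner tuples follow the (upperLeft, upperRight, lowerLeft, lowerRight) order that vizcave.Wall takes:

```python
import math

# All numbers below are illustrative assumptions, not the original setup:
W = 0.93          # visible width of a 42" 16:9 screen, in meters
H = 0.52          # visible height, in meters
BOTTOM = 0.80     # height of the screens' bottom edge (table height)
CORNER_Z = 1.20   # distance from the cave origin to the shared screen edge

S = W / math.sqrt(2.0)   # horizontal x/z span of a wall rotated 45 degrees

def wall_corners(sign):
    """Corners of one wall in cave coordinates (x right, y up, z forward).

    The two screens meet at 90 degrees along a shared vertical edge on the
    z axis; sign=-1 gives the left screen, sign=+1 the right screen.
    Returns (upperLeft, upperRight, lowerLeft, lowerRight), the order
    vizcave.Wall expects.
    """
    near = (sign * S, BOTTOM, CORNER_Z - S)   # edge closest to the user
    shared = (0.0, BOTTOM, CORNER_Z)          # edge the two screens share
    if sign < 0:
        lower_left, lower_right = near, shared
    else:
        lower_left, lower_right = shared, near
    up = lambda p: (p[0], p[1] + H, p[2])
    return up(lower_left), up(lower_right), lower_left, lower_right

# Each tuple set would be handed to vizcave.Wall(...) and cave.addWall(...)
left_wall = wall_corners(-1)
right_wall = wall_corners(+1)
```

The important property is that both walls share the same vertical edge and each spans exactly one screen width; if the corner coordinates don't satisfy that, the off-axis projections of the two screens will not line up.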
Hi Victor,
I'm betting we simply need to get the wall definitions and Kinect tracker coordinates working nicely together. Trust vizcave. When things shrink on the walls, this should be compensated for by the user's eyes getting closer to the screen.

Let's set up a third-person view of the scene. We will add a 3D model to represent the user's head position. The cave.drawWalls() function will come in handy to show our wall definitions. Once this is set up, we test by having the user stand in known physical locations and checking that our virtual third-person view shows the user's representation in the correct position relative to the walls.

I suspect that the problem will come from the offset you apply to the Kinect data. We'll need to figure out the postTrans, and possibly postEuler, so that when people stand in the middle of the cave, the viewtracker.getPosition command returns [0, height of person, 0]. Here is some code that does a third-person view for the powerwall example script: Code:
import viz |
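The script above survives only as its first line, so here is a hedged sketch of a third-person debug view in the spirit described: the wall dimensions, tracker class, and viewpoint position are all illustrative assumptions based on the public Vizcave documentation, not the original code.

```python
import viz
import vizcave
import vizshape
import viztracker

viz.go()

# Illustrative powerwall: 3 m wide, 2 m tall, 2 m in front of the origin
wall = vizcave.Wall(upperLeft=(-1.5, 2.5, 2.0), upperRight=(1.5, 2.5, 2.0),
                    lowerLeft=(-1.5, 0.5, 2.0), lowerRight=(1.5, 0.5, 2.0),
                    name='Front Wall')
cave = vizcave.Cave()
cave.addWall(wall)
cave.drawWalls()                      # render the wall outline in the scene

# Stand-in head tracker (swap in the Kinect tracker in practice)
head = viztracker.KeyboardMouse6DOF()
cave.setTracker(pos=head)

# Red sphere marking the tracked head in the third-person view
marker = vizshape.addSphere(radius=0.1)
marker.color(viz.RED)
viz.link(head, marker)

# Watch the cave from outside instead of standing inside it
viz.MainView.setPosition([0, 2, -6])
```

With this running, have someone stand at known spots (center of the cave, touching a screen edge) and confirm the red sphere lands in the matching spot relative to the drawn walls.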
Hello Gladsomebeast,
One key thing I noticed in your reply is where the caveorigin is meant to be. I now understand that it is meant to be the center of the CAVE; I had interpreted it as the lower-left corner of the CAVE's front wall. I have recalibrated the Kinect to take this into account, and I do believe things are better. I'm very busy with gesture control at the moment and still waiting for the 3D screens. As soon as those screens are in, I can really verify whether things are working 'naturally' now. Thanks again for all your help! I'll keep you posted on this thread when new developments happen. |
Just a quick remark from my side. The 3D screens are in and I've been able to play around with a setup of two of them. The whole 'weird' behavior I was talking about is gone! It's pretty cool to see how amazingly well this is working now.
As a side note, this leads me to believe that in 2D mode, your eyes don't 'want' to believe that you're looking through a 'window' when looking at the screens. Anyway, thanks for all your help! |
Sweet! Good work.
Making the room dark will help with the 3D effect as well. Seeing things behind the screens and to the side of them breaks the depth effect a little. |