WorldViz User Forum  

#1  08-04-2008, 08:51 AM
michaelrepucci

vizcave and quad-buffered stereo

I'm running a single Power Wall setup with a head tracker, and I'm having difficulty setting up stereo for this environment. I'm using an NVIDIA Quadro FX 3700 and have set Vizard to use quad-buffered stereo.

The key pieces of my code are:
Code:
import viz
import vizcave

#setup tracker
vrpn = viz.add('vrpn7.dle')
head = vrpn.addTracker('Tracker0@hiball')
#head = viztracker.add() #simulated tracker

#setup cave ('wall' is a vizcave.Wall built from the screen corner coordinates)
cave = vizcave.Cave()
cave.addWall(wall,viz.MASTER)

#setup viewpoint
cave.setTracker(pos=head,ori=head)
view = vizcave.CaveView(head)
link = viz.link(view,viz.MainView)
link.setDstFlag(viz.LINK_POS_RAW)

#start application
viz.go(viz.QUAD_BUFFER)
#viz.go() #non-stereo setup
If I run this without stereo, the application starts with the viewpoint looking in the right direction. But when I start the application with quad-buffered stereo, the viewpoint is skewed off to a funny part of the scene. I tried to debug this using a simulated keyboard tracker (see the commented line above), and the same thing happens: without stereo the viewpoint looks correct; with stereo the viewpoint is skewed in an odd direction.

I suppose that I can correct for this skewness by using setPosition and setEuler on the view, but I fundamentally don't understand why stereo should cause the viewpoint to be grossly different. Shouldn't it be approximately the same view, with slightly shifted frustums for the left and right eyes?

#2  08-04-2008, 04:06 PM
farshizzo (WorldViz Team Member)

You do not need to link the cave view object to the main viewpoint; that is handled for you automatically. So you can remove the following two lines of code:
Code:
link = viz.link(view,viz.MainView)
link.setDstFlag(viz.LINK_POS_RAW)
When you specify both position and orientation for the cave tracker, that puts the cave in stereo mode, which means it will only display correctly when stereo is enabled.

Does the frustum look skewed when you view the screen from the point of view of the tracker? The whole point of vizcave is that it skews the frustum so that everything appears physically correct from the point of view of the 3D tracker.
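For example, using the head tracker from your code, the difference comes down to what you pass to setTracker:
Code:
#Stereo cave: position and orientation together let vizcave offset
#the left/right eye frustums from the tracked head pose
cave.setTracker(pos=head,ori=head)

#Head-tracked non-stereo cave: position only, a single frustum
#cave.setTracker(pos=head)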

#3  08-06-2008, 09:46 AM
michaelrepucci

Whoops. Sorry, those lines were left over from an immersive setup I had made earlier. They weren't actually called in my current vizcave code.

Things looked "skewed" in the sense that I would expect the default position for the view to be at x=y=z=0 in world coordinates, with the "camera" looking down the positive z axis. That way, if I drew an on-the-fly object centered at [0,0,1], it would appear directly in front of me. If I set up as above and start without stereo, the scene looks as I expect. (Though given what you said, I don't understand what it really means to specify both position and orientation for the cave tracker but then render without stereo. Is the frustum adjusted as if the viewer were a cyclops, or is the cave setup ignored?) But in stereo the view is directed away from that object.

Oh, I just figured out that how I define my single cave wall changes this view, but in a way that isn't at all clear to me. So if I use

Code:
upperLeft = [-1, 1, 1]
upperRight = [1, 1, 1]
lowerLeft = [-1, -1, 1]
lowerRight = [1, -1, 1]
wall = vizcave.Wall('wall',upperLeft,upperRight,lowerLeft,lowerRight)
then I get what I expect. But if I use coordinates that describe the actual physical offsets of my screen from the zero point of my tracker device, the view I get seems to correspond (correct me if I'm wrong) to sitting at the zero point of my device and looking at the center (or maybe upper-left) of the screen. When and how do I apply a transformation to place this zero point somewhere else? Or should I just use my screen height and width without offsets?

Here's what I've come up with so far, though I'm not sure it's correct. Let me know what you think.

Code:
upperLeft = [-screenXOffset-screenWidth, screenYOffset+screenHeight, screenZOffset]
upperRight = [-screenXOffset, screenYOffset+screenHeight, screenZOffset]
lowerLeft = [-screenXOffset-screenWidth, screenYOffset, screenZOffset]
lowerRight = [-screenXOffset, screenYOffset, screenZOffset]

...

view = vizcave.CaveView(head)
view.setPosition([screenWidth/2+screenXOffset,0,-screenZOffset])
If I understand correctly, this moves my zero point directly below the center of the screen. It seems to work okay (though I'm not sure it's perfect; I only tested it quickly), but is it the right way to do this? Or is it possible to change the zero point of the sensor (VRPN) at some point earlier in the setup? Or do I specify my wall differently?

Thanks again for your help!

#4  08-07-2008, 11:38 AM
farshizzo (WorldViz Team Member)

The corner positions of the walls need to be in the same coordinate frame as the tracker you pass to the cave.setTracker() command. You do not need to apply any offsets to the data. If you pass an orientation tracker, then vizcave will use the position and orientation of the tracker to compute the positions of the left/right eyes for the stereo frustums. If you are not in stereo mode, you will not see the effect of the modified left/right eye frustums.
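As a rough sketch (the corner values below are made-up measurements in meters, assuming the tracker origin is on the floor about 1.5 m in front of the screen; substitute your own):
Code:
#Screen corners measured directly in the tracker's coordinate frame (example values, meters)
upperLeft  = [-1.0, 2.0, 1.5]
upperRight = [ 1.0, 2.0, 1.5]
lowerLeft  = [-1.0, 0.8, 1.5]
lowerRight = [ 1.0, 0.8, 1.5]
wall = vizcave.Wall('wall',upperLeft,upperRight,lowerLeft,lowerRight)

cave = vizcave.Cave()
cave.addWall(wall,viz.MASTER)

#Pass the tracker data as-is; no manual offsets on the view are needed
cave.setTracker(pos=head,ori=head)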

#5  10-18-2011, 02:54 PM
AySz88

Hi! I'm working with michaelrepucci's code.

I'm curious: what is the correct way to give cave.setTracker() a tracker that tracks a location on the head that isn't right between the eyes? In our case, the tracker produces both position and orientation data from a sensor mounted near the top of the head on a helmet. If I understand correctly, I need to produce another tracker that transforms that sensor's position to the location between the eyes, and then give that tracker to cave.setTracker().

Right now, it uses the filter plug-in (filter.position()) to do this. But my understanding is that this will cause the position data to be translated by some constant vector in world space, instead of a vector that rotates with the helmet.

How do I apply the translation correctly?

[edit] I should mention that, for our application, just using the sensor location doesn't quite give us good enough results.

Last edited by AySz88; 10-18-2011 at 03:02 PM.

#6  10-25-2011, 05:11 PM
AySz88

Solution: I've noticed that any linkable can be given to <cave>.setTracker(). So my current strategy is to link my tracker to an empty group node and call <link>.preTrans(), then give the empty node to <cave>.setTracker().

Perhaps this should be documented? The filter extension documentation doesn't make it clear that "offset" is a fixed positional offset in the world coordinate frame. Maybe also add something along the lines of "If you need more complex transformations, link an empty group to the tracker(s), apply the appropriate transformations to the link, and then use the group as the new tracker".
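Roughly, what I mean is something like this (the offset vector is just an illustrative placeholder; the real sensor-to-eye offset has to be measured):
Code:
#Empty group node that will stand in for the "between the eyes" tracker
eyeNode = viz.addGroup()

#Link the raw helmet sensor to the empty node and apply the sensor-to-eye
#offset in the sensor's local frame so it rotates with the helmet
headLink = viz.link(head, eyeNode)
headLink.preTrans([0.0, -0.10, 0.05])  #placeholder offset in meters

#Use the empty node as the cave tracker
cave.setTracker(pos=eyeNode, ori=eyeNode)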

#7  10-25-2011, 05:22 PM
farshizzo (WorldViz Team Member)

You can avoid using an empty group node by passing the link object as the tracker. For example:
Code:
#Create tracker containing the raw sensor data
raw_tracker = createMyRawTracker()

#Create link that offsets raw data to actual center of eye
eye_tracker = viz.link(raw_tracker,viz.NullLinkable)
eye_tracker.preTrans([x,y,z])

#Use link as cave tracker
cave.setTracker(pos=eye_tracker,ori=eye_tracker)

#8  11-29-2011, 12:15 PM
AySz88

Confirmed, thanks!

Links are much more powerful than I anticipated: one can add, subtract, etc. sensors by using pre/postMultiplyLinkable along with the appropriate transformations.

FYI, I had a tough time realizing that "pre" multiplication means "before", which is "to the right" rather than "to the left" in OpenGL's conventional notation. It was obscured by the link operators documentation, which seems to use left-to-right, row-major matrix multiplication (DirectX-style notation) instead of right-to-left, column-major multiplication, which is OpenGL's convention. Vizard's other documentation pages use the OpenGL convention.
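To spell out my understanding (the offset values below are placeholders):
Code:
offset = [0.0, -0.10, 0.05]  #placeholder sensor-to-eye offset, meters

eye_tracker = viz.link(raw_tracker, viz.NullLinkable)

#preTrans applies the offset "before" the sensor's transform, i.e. in the
#sensor's local frame, so it rotates with the helmet.
#In OpenGL column-vector notation: M_link = M_sensor * T(offset)
eye_tracker.preTrans(offset)

#postTrans applies the offset "after" the sensor's transform, i.e. a fixed
#translation in the parent/world frame: M_link = T(offset) * M_sensor
#eye_tracker.postTrans(offset)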