WorldViz User Forum  

  #1  
04-11-2016, 12:32 PM
Vaquero
Member
 
Join Date: Nov 2015
Posts: 62
Augmented Reality: virtual world not overlapping real world correctly

I'm having the problem that the virtual world and its objects don't overlap with the real world. When I move my head, the virtual environment moves with it to some degree. Why is that? There's a video link below.
I briefly described the issues I'm having with AR and the tracking in another thread, but decided they may deserve their own. I searched the forum for drift and how it could be related to the InertiaCube, so I tested the head tracking with a rigid body composed only of optical markers. There was quite a lot of jitter, but surprisingly the tilting world happened too! So the inertia sensor can't be the sole cause.

Here's the recording: https://youtu.be/aU3ea96-3I0
It was really difficult holding the webcam inside the see-through HMD, so I apologize for the bad quality.

Another issue is the scaling. With no scaling applied, the ground visually appears 0.8 lower than it should be. That's the distance I have to lift the real object in order for the virtual one to be at my feet. It's hard to see in the video and much more prominent when actually looking through the HMD with both eyes. When applying a scale of 0.5, the view looks much more stable and the virtual object approximately overlaps with the real one, but the coordinates get halved, which I don't want. Also, moving up and down seems to move the floor in the opposite direction.
In both cases, the closer I get to the ground, the better things seem to match.

Here's my code:
Code:
import viz
import vizconnect
import vizshape
import vizact

import nvis 
nvis.nvisorST()

# This is the variable I try to find the correct values for
scaleFactor = [0.5, 0.5,0.5] # seems more correct in the view, but coordinates are halved
#scaleFactor = [1, 1, 1]

# Use VRPN to connect to phasespace.
vrpn = viz.add('vrpn7.dle')
VRPN_SOURCE = 'Tracker0@localhost'
headSensor = 0
rigidSensor = 5

headPosTracker = vrpn.addTracker(VRPN_SOURCE,headSensor)
headPosTracker.swapPos([1,2,-3])
headPosTracker.waitForConnection(timeout=0.5)

# add iSense tracker (inertia tracker on HMD)
isense = viz.add('intersense.dle')
headOriTracker = isense.addTracker()
vizact.onkeydown('r',headOriTracker.reset)

# Merge head trackers to a 6DOF tracker and link to main view
headTracker = viz.mergeLinkable(headPosTracker, headOriTracker)
viewLink = viz.link(headTracker,viz.MainView)
# convert input to vizard's coordinate system
viewLink.swapPos([-1, 2, -3])
# the tracked LED is not where the eyes are, so adjust it?
#viewLink.preTrans([0,-0.08,-0.05]) # preTrans?
#viewLink.postTrans([0,-0.08,-0.05]) # or postTrans?
# Now this is where the scale comes into play
viewLink.postScale(scaleFactor) # scale the tracker coordinates before applying them to the camera (viewpoint)

# Add text for displaying head tracker coordinates
headText = viz.addText('head:',parent=viz.SCREEN)
headText.fontSize(60)
headText.color(viz.RED)
headText.setPosition(0,0.95)

rigidTracker = vrpn.addTracker(VRPN_SOURCE,rigidSensor)
# Add a axis model to the tracker for visualization
rigidAxis = vizshape.addAxes(length=0.1)
rigidAxisLink = viz.link(rigidTracker,rigidAxis)
# convert input to vizard's coordinate system
rigidAxisLink.swapPos([-1, 2, 3])
rigidAxisLink.swapQuat([-1,2,3,-4])
rigidAxisLink.postScale(scaleFactor) # scale the tracker coordinates before applying them to the rigid body link

# display the coordinates at the tracker
rigidText = viz.addText3D('tracker',pos=[0.02,0.02,0],scale=[0.05,0.05,0.05],parent=rigidAxis)
rigidText.color(viz.YELLOW)

# updates the coordinates in the text elements
def updateCoordinateDisplays():
	#first update the coordinates for the headTracker
	headX = '%.3f' % viewLink.getPosition()[0]
	headY = '%.3f' % viewLink.getPosition()[1]
	headZ = '%.3f' % viewLink.getPosition()[2]
	sep = ","
	seq = (headX, headY, headZ)
	coords = sep.join(seq)
	headText.message('head:' + coords)

	#update 3D text for rigid tracker
	rigidX = '%.3f' % rigidAxisLink.getPosition()[0]
	rigidY = '%.3f' % rigidAxisLink.getPosition()[1]
	rigidZ = '%.3f' % rigidAxisLink.getPosition()[2]
	seq = (rigidX, rigidY, rigidZ)
	coords = sep.join(seq)
	rigidText.message(coords)
	
viz.mouse.setVisible(True)

# Add a grid for reference
vizshape.addGrid(size=(5,4), step=0.1, boldStep=0.5, axis=vizshape.AXIS_Y, lighting=False)
# show XYZ axes geometry
originAxes = vizshape.addAxes(length=0.5)

# Update coordinate display every frame
vizact.onupdate(0,updateCoordinateDisplays)

# Just to make sure vertical sync is ON
viz.vsync(1)
# Helps reduce latency
viz.setOption('viz.glFinish',1)
# 8x multisample antialiasing
viz.setMultiSample(8)

# Display world in HMD
viz.window.setFullscreenMonitor([4,3])
viz.go(viz.FULLSCREEN|viz.STEREO)
Since the marker position is probably not the position I want the view to originate from, should I use preTrans or postTrans on the link? I don't quite understand the difference. And where should the view actually originate from? Eye position? Could this be a cause for the drift?
  #2  
04-12-2016, 08:17 AM
Jeff
WorldViz Team Member
 
Join Date: Aug 2008
Posts: 2,471
In your code, two coordinate swaps are applied to the position data: one on the head tracker and one on the view link. Do all virtual movements (left, right, up, down) match the physical ones? If so, then that's probably fine, but it should only be necessary to apply a coordinate swap once.
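To see why the double swap matters, here's a pure-Python sketch (no Vizard required) of what a swapPos-style axis remap does, assuming Vizard's convention that each entry selects a source axis (1 = x, 2 = y, 3 = z) and a negative sign negates that component. Applying the tracker-level swap [1,2,-3] and then the link-level swap [-1,2,-3] on top produces a different mapping than either swap alone:

```python
def swap_pos(pos, spec):
    # Axis remap in the style of Vizard's swapPos (assumed convention):
    # each entry of spec picks a source axis (1=x, 2=y, 3=z) and a
    # negative sign negates that component.
    return [(1 if s > 0 else -1) * pos[abs(s) - 1] for s in spec]

p = [1.0, 2.0, 3.0]                  # raw tracker position
once = swap_pos(p, [1, 2, -3])       # swap applied on the tracker
twice = swap_pos(once, [-1, 2, -3])  # swap applied again on the link
print(once)   # z negated once
print(twice)  # x negated and z negated twice -- not the intended mapping
```

So if the tracker already converts to Vizard's coordinate system, the second swap on the link silently changes the handedness again.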

You may need to account for a unit conversion between Phasespace and Vizard, but applying an additional scale factor should not be necessary. I would recommend testing with a simple script: just add an axes model linked to the head tracker and print out the data every frame. Verify that works properly on its own before incorporating the tracking data into this script. Is the Phasespace origin calibrated to the ground plane?

To account for the offset between the user's eyes and the position tracker's mounting point, apply a preTrans operation to the view link. The values you are using indicate the position tracker is mounted in front of the user's eyes. Is that the case?

Code:
viewLink.preTrans([0,-0.08,-0.05])
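For anyone wondering about the difference between the two operators, here's a hedged sketch in plain Python (no Vizard needed): preTrans applies the offset before the link's transform, i.e. in the sensor's local frame, so it rotates with the head; postTrans applies it afterwards, in fixed world coordinates. With a hypothetical 90° yaw the two clearly diverge (the head position and rotation below are made-up example values; the offset is the one from this thread):

```python
import math

def yaw_matrix(deg):
    # Rotation about the vertical (Y) axis, for column vectors [x, y, z].
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rotate(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

head_pos = [0.0, 1.7, 0.0]      # tracked LED position (example values)
offset   = [0.0, -0.08, -0.05]  # LED-to-eye offset from the thread
R = yaw_matrix(90)              # hypothetical head orientation

# preTrans-style: the offset rotates with the head before translation,
# so the eye point stays behind/below the LED no matter where you look
pre = [p + o for p, o in zip(head_pos, rotate(R, offset))]

# postTrans-style: the offset is added in fixed world coordinates,
# so it points the same way regardless of head orientation
post = [p + o for p, o in zip(head_pos, offset)]
print(pre, post)
```

For an eye offset on a head tracker you almost always want the behavior that follows the head's rotation, which is why Jeff suggests preTrans here.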
  #3  
04-12-2016, 09:14 AM
Vaquero
Member
 
Join Date: Nov 2015
Posts: 62
Thanks for pointing out the double swap, Jeff! I overlooked that. Positional and quaternion swaps are all set to work correctly now.

It turns out that most of the issues I see are related to the FOV. The culprit is these lines:
Code:
import nvis 
nvis.nvisorST()
We do use the NVisor ST50, but Vizard's nvisorST() initializes settings for the NVisor ST60, which has a vertical FOV of 40°, whereas the ST50 has a vertical FOV of 32°. The resolution seems to be the same at 1280x1024 per display. That 40°/32° difference is a ratio of 1.25.
Setting the FOV accordingly makes things much better, but not perfect. There's still quite a lot of rotation around the Y-axis when turning the head, but much less up-and-down movement. It fixes the scale of the perceived world, so the scale doesn't seem to be an issue anymore.

The ST50 has a vertical FOV of 32° as mentioned, a horizontal FOV of 40°, and a diagonal (binocular) FOV of 50°. So I changed some lines to this:
Code:
viz.fov(32, 1.25)
viz.window.setFullscreenMonitor([4,3])
viz.go(viz.FULLSCREEN | viz.STEREO_HORZ | viz.HMD)
But as I mentioned, it's still not good enough, because the world still rotates with the view to some degree, and the FOV is close but still seems a bit off.
I also turned up the sensitivity of the InertiaCube4 in hopes of reducing latency. Are there recommended settings for use in VR with Vizard?
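One hedged guess about the FOV still seeming "a bit off": if the second argument of viz.fov is the frustum's width/height aspect ratio, as in most perspective-projection APIs, then the geometrically exact value is the ratio of the half-angle tangents, not the ratio of the FOV angles in degrees. For 40° x 32° the two differ slightly:

```python
import math

h_fov, v_fov = 40.0, 32.0  # ST50 horizontal / vertical FOV in degrees

deg_ratio = h_fov / v_fov  # 1.25, the value used above
tan_ratio = math.tan(math.radians(h_fov / 2)) / math.tan(math.radians(v_fov / 2))
print(deg_ratio, round(tan_ratio, 4))  # tangent ratio is roughly 1.27
```

If that interpretation of the aspect argument holds, trying viz.fov(32, 1.2693) instead of 1.25 might be worth a quick test.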

The LED that tracks the HMD is 5 cm in front of the eyes and 8 cm up (on top of the HMD). Would it be better to put it somewhere else? Or should I put two LEDs, one on each side of the inertia cube, and see if I can create this optical heading I read about in vizconnect?

Last edited by Vaquero; 04-12-2016 at 09:18 AM.
  #4  
04-12-2016, 09:50 AM
Vaquero
Member
 
Join Date: Nov 2015
Posts: 62
The actual setting that reduced some of the movement of the world is this:
Code:
isense = viz.add('intersense.dle')
headOriTracker = isense.addTracker()
headOriTracker.setPrediction(50)
What's recommended for the other tracker settings, like AccelSensitivity, Sensitivity, Compass, CompassCompensation, etc.?
  #5  
04-12-2016, 10:28 PM
Jeff
WorldViz Team Member
 
Join Date: Aug 2008
Posts: 2,471
It sounds like drift could be an issue here. First try Intersense's calibration procedures for reducing drift. If you don't have ISDemo installed on your computer yet, go to the InertiaCube downloads page and download the InertiaCube Product CD; ISDemo can be found in its programs folder. Instructions for the calibration procedures are in their documentation. Keep the IC4 mounted on the ST50 for the calibration, since they are used together. You will need to perform both the "Fixed Metal Calibration" and the "Magnetic Environment" calibration.

If you still have drift issues after calibrating, you can use vizconnect's optical heading plug-in with two position sensors mounted on either side of the HMD to correct it.
  #6  
04-13-2016, 03:37 PM
Vaquero
Member
 
Join Date: Nov 2015
Posts: 62
Thanks a lot again, Jeff. I thought the calibration had already been done by someone from tech, but after recalibrating it's already better. Sometimes, though, the yaw rotates even when the HMD with the IC4 lies flat on the ground. It's slow, but noticeable.

The other adjustment that seems to bring things closer to the real world is the viewLink.preTrans values. If I find out which values work best for me, and roughly where the resulting view origin sits relative to the head, I'll share it here.
  #7  
04-13-2016, 03:43 PM
Vaquero
Member
 
Join Date: Nov 2015
Posts: 62
One more question: for the optical heading, must the markers be in line with the center line, left and right of the IC4? That's not possible with the way the IC4 is mounted on the device: it sits on top of the back, where the surface is curved, so I would only be able to place markers to the left and right beside the front of the inertia cube.
  #8  
04-15-2016, 02:14 AM
Jeff
WorldViz Team Member
 
Join Date: Aug 2008
Posts: 2,471
It's fine if the position markers are not in line with the inertia cube. The orientation in terms of pitch and roll does need to be aligned, though. You can apply a mounting offset in vizconnect if the orientation sensor is not aligned with the user's head.

Tags
augmented reality, drift, fov, intersense, tracking


