WorldViz User Forum  

04-11-2016, 12:32 PM
Vaquero
Member
 
Join Date: Nov 2015
Posts: 62
Augmented Reality: virtual world not overlapping real world correctly

I'm having the problem that the virtual world and its objects don't overlap with the real world. When I move my head, the virtual environment moves with it to some degree. Why is that? There's a video link below.
I briefly described the issues I'm having with AR and the tracking in another thread, but decided it may deserve a thread of its own. I searched the forum for drift and how it could be related to the InertiaCube, so I tested the head tracking with a rigid body composed only of optical markers. There was quite a lot of jitter, but surprisingly the tilting world happened too! So the inertial sensor can't be the sole cause.

Here's the recording: https://youtu.be/aU3ea96-3I0
It was really difficult to hold the webcam inside the see-through HMD, so I apologize for the bad quality.

Another issue is the scaling. With no scaling applied, the ground visually appears about 0.8 m lower than it should be; that's the distance I have to lift the real object for the virtual one to sit at my feet. It's hard to see in the video and much more prominent when actually looking through the HMD with both eyes. When I apply a scale of 0.5, the view looks much more stable and the virtual object approximately overlaps with the real one, but the coordinates get halved, which I don't want. Also, moving up and down seems to move the floor in the opposite direction.
In both cases, the closer I get to the ground, the better things seem to match.
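To make the halving concrete, here's a stripped-down sketch of the effect (the real tracker is replaced by a plain group node just for illustration; my actual script is below):
Code:
import viz
import vizact

viz.go()

# stand-in for the VRPN head tracker, placed at a made-up marker position (meters)
fakeTracker = viz.addGroup()
fakeTracker.setPosition([1.0, 1.6, 2.0])

viewLink = viz.link(fakeTracker, viz.MainView)
viewLink.postScale([0.5, 0.5, 0.5])

def printHead():
	# prints roughly [0.5, 0.8, 1.0], i.e. half of the marker position,
	# which is exactly what my on-screen head text shows in the full script
	print(viewLink.getPosition())
vizact.onupdate(0, printHead)
So as far as I can tell, the halving itself comes straight from the postScale on the view link.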

Here's my code:
Code:
import viz
import vizconnect
import vizshape
import vizact

import nvis 
nvis.nvisorST()

# This is the variable I try to find the correct values for
scaleFactor = [0.5, 0.5, 0.5] # seems more correct in the view, but coordinates are halved
#scaleFactor = [1, 1, 1]

# Use VRPN to connect to phasespace.
vrpn = viz.add('vrpn7.dle')
VRPN_SOURCE = 'Tracker0@localhost'
headSensor = 0
rigidSensor = 5

headPosTracker = vrpn.addTracker(VRPN_SOURCE,headSensor)
# swap axes on the raw VRPN position (note: viewLink.swapPos below swaps again,
# so the net mapping for the head ends up as [-x, y, z], the same as for the rigid tracker)
headPosTracker.swapPos([1,2,-3])
headPosTracker.waitForConnection(timeout=0.5)

# add iSense tracker (inertia tracker on HMD)
isense = viz.add('intersense.dle')
headOriTracker = isense.addTracker()
vizact.onkeydown('r',headOriTracker.reset)

# Merge head trackers to a 6DOF tracker and link to main view
headTracker = viz.mergeLinkable(headPosTracker, headOriTracker)
viewLink = viz.link(headTracker,viz.MainView)
# convert input to vizard's coordinate system
viewLink.swapPos([-1, 2, -3])
# the tracked LED is not where the eyes are, so adjust it?
#viewLink.preTrans([0,-0.08,-0.05]) # preTrans?
#viewLink.postTrans([0,-0.08,-0.05]) # or postTrans?
# Now this is where the scale comes into play
viewLink.postScale(scaleFactor) # scale the tracker coordinates before applying them to the camera (viewpoint)

# Add text for displaying head tracker coordinates
headText = viz.addText('head:',parent=viz.SCREEN)
headText.fontSize(60)
headText.color(viz.RED)
headText.setPosition(0,0.95)

rigidTracker = vrpn.addTracker(VRPN_SOURCE,rigidSensor)
# Add an axes model to the tracker for visualization
rigidAxis = vizshape.addAxes(length=0.1)
rigidAxisLink = viz.link(rigidTracker,rigidAxis)
# convert input to vizard's coordinate system
rigidAxisLink.swapPos([-1, 2, 3])
rigidAxisLink.swapQuat([-1,2,3,-4])
rigidAxisLink.postScale(scaleFactor) # scale the tracker coordinates before applying them to the rigid body link

# display the coordinates at the tracker
rigidText = viz.addText3D('tracker',pos=[0.02,0.02,0],scale=[0.05,0.05,0.05],parent=rigidAxis)
rigidText.color(viz.YELLOW)

# updates the coordinates in the text elements
def updateCoordinateDisplays():
	# first update the coordinates for the headTracker
	headX = '%.3f' % viewLink.getPosition()[0]
	headY = '%.3f' % viewLink.getPosition()[1]
	headZ = '%.3f' % viewLink.getPosition()[2]
	sep = ","
	seq = (headX, headY, headZ)
	coords = sep.join(seq)
	headText.message('head:' + coords)

	#update 3D text for rigid tracker
	rigidX = '%.3f' % rigidAxisLink.getPosition()[0]
	rigidY = '%.3f' % rigidAxisLink.getPosition()[1]
	rigidZ = '%.3f' % rigidAxisLink.getPosition()[2]
	seq = (rigidX, rigidY, rigidZ)
	coords = sep.join(seq)
	rigidText.message(coords)
	
viz.mouse.setVisible(True)

# Add a grid for reference
vizshape.addGrid(size=(5,4), step=0.1, boldStep=0.5, axis=vizshape.AXIS_Y, lighting=False)
# show XYZ axes geometry
originAxes = vizshape.addAxes(length=0.5)

# Update coordinate display every frame
vizact.onupdate(0,updateCoordinateDisplays)

# Just to make sure vertical sync is ON
viz.vsync(1)
# Helps reduce latency
viz.setOption('viz.glFinish',1)
# 8x antialiasing
viz.setMultiSample(8)

# Display world in HMD
viz.window.setFullscreenMonitor([4,3])
viz.go(viz.FULLSCREEN|viz.STEREO)
Since the marker position is probably not the position I want the view to originate from, should I use preTrans or postTrans on the link? I don't quite understand the difference. And where should the view actually originate from, the eye position? Could this be a cause of the drift?
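For what it's worth, here's how I currently understand the difference between the two, as a stripped-down sketch (the stand-in tracker and the offset values are just guesses for illustration); please correct me if I have it backwards:
Code:
import viz
import vizshape

viz.go()

# stand-in for the merged 6DOF head tracker
head = viz.addGroup()

# rough marker-to-eye offset (guessed values, same order of magnitude as the
# ones commented out in my script above)
eyeOffset = [0, -0.08, -0.05]

# a small sphere just to visualize where the link puts things
ball = vizshape.addSphere(radius=0.03)
link = viz.link(head, ball)

# preTrans: the offset is applied in the tracker's local frame, so it rotates
# with the head -- which I think is what a marker-to-eye offset needs
link.preTrans(eyeOffset)

# postTrans: the offset would instead be a fixed shift in world coordinates,
# independent of where the head is pointing
#link.postTrans(eyeOffset)

# turning the stand-in tracker: with preTrans the sphere's offset swings around
# with the rotation; with postTrans it would just stay shifted by the same amount
head.setEuler([90, 0, 0])
If preTrans is the right one here, then my remaining question is mainly what the offset from the tracked LED to the eyes should be.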
 

Tags: augmented reality, drift, fov, intersense, tracking
