  #8  
04-18-2011, 09:03 AM
Zhi
Quote:
Originally Posted by JvdBosch
The setScale indeed rescales the copy of the texture, to be re-applied to the node and blended with the original texture.

I don't get your second question... Could you clarify it?
My understanding of the node.texblend() method is that it blends two texture layers on the same entity. In my case, that entity is the ground. If the ground were a single entity, this might work. However, the grass texture I have only covers a 2 m x 2 m area (if you stretch it over a larger area, it does not look real), so to get a larger grass field I have to repeat the ground patch (in my case, more than 2000 repeats). Although the texture itself is seamless (it is amazingly made), it contains both high and low spatial-frequency components. The high spatial-frequency component makes the repeated ground show a noticeable linear perspective cue. That is a non-issue for most VR applications, such as games, but it is a problem for some spatial-perception experiments, because a real grass field does not have such a strong linear perspective cue.

The trick I used here is to superimpose a huge texture (the 100 m x 100 m patch) that contains only the low-frequency component. This substantially reduces the noticeable linear perspective cue. See the attached screenshots for a comparison. To do this, I used the node.alpha() method. My question is whether the node.texblend() method can accomplish the same thing.
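To make the setup concrete, here is a rough sketch of the alpha approach as I have it now (the file names, the 50x tiling factor, and the 0.5 alpha value are placeholders, and the wrap()/texmat() calls are written from memory, so treat it as an illustration rather than exact working code):

Code:
import viz
viz.go()

# Tiled high-frequency grass: repeat a 2 m x 2 m texture across a 100 m x 100 m quad
grass = viz.addTexture('grass_2m.jpg')        # placeholder file name
grass.wrap(viz.WRAP_S, viz.REPEAT)
grass.wrap(viz.WRAP_T, viz.REPEAT)

ground = viz.addTexQuad()
ground.setScale([100, 100, 1])
ground.setEuler([0, 90, 0])                   # pitch the quad so it lies flat
ground.texture(grass)

tile = viz.Matrix()
tile.setScale(50, 50, 1)                      # 50 repeats of the 2 m tile over 100 m
ground.texmat(tile)

# Huge low-spatial-frequency overlay blended on top with node.alpha()
lowfreq = viz.addTexture('grass_lowfreq_100m.jpg')   # placeholder file name
overlay = viz.addTexQuad()
overlay.setScale([100, 100, 1])
overlay.setEuler([0, 90, 0])
overlay.setPosition([0, 0.01, 0])             # tiny lift to avoid z-fighting with the ground quad
overlay.texture(lowfreq)
overlay.alpha(0.5)                            # blend strength of the low-frequency layer

What I am really asking is whether the second quad is needed at all, or whether texblend() could put the low-frequency texture on a second texture unit of the same ground node and blend it there.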
Attached Thumbnails
linearperspective.JPG
less linear perspective.JPG
