#1
Multi-Pass Rendering
Vizard's Core Features page lists multi-pass and multi-stage rendering under Technical Features. I am working on augmented reality, and multi-pass rendering is essential for us to get realistic output.
Using the <viz>.addRenderTexture() command I am able to get the screen output, and by using shaders I can even process that texture. But my requirement is to process the processed textures:

Input_Texture ---> Pass 1 ---> Output 1
Input_Texture ---> Pass 2 ---> Output 2
...
Input_Texture ---> Pass n ---> Output n
{Output 1, Output 2, ..., Output n} ---> Final Pass ---> Final Output

Please help me with a solution. If I am conceptually wrong then please correct me. Thanks!
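To make the structure concrete, here is a rough sketch of what we are trying to build (untested; only the skeleton, with the per-pass processing and the final combining shader left as comments):
Code:
import viz
viz.go()
viz.add('piazza.osgb')

# One render texture and one render node per pass (two passes shown here)
out1 = viz.addRenderTexture()
rn1 = viz.addRenderNode()
rn1.attachTexture(out1)      # Pass 1: scene rendered into out1
# ...Pass 1's processing would go somewhere here...

out2 = viz.addRenderTexture()
rn2 = viz.addRenderNode()
rn2.attachTexture(out2)      # Pass 2: scene rendered into out2
# ...Pass 2's processing would go somewhere here...

# Final pass: a quad that receives all the pass outputs on separate
# texture units; a combining shader would be applied to this quad
finalQuad = viz.addTexQuad()
finalQuad.setPosition(0,1.8,3)
finalQuad.setScale(4,3)
finalQuad.texture(out1,unit=0)
finalQuad.texture(out2,unit=1)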
#2
To elaborate on my question, here is some code:
Code:
import viz
viz.go()
viz.add('piazza.osgb')

# Add a render texture
img = viz.addRenderTexture()

# Use render nodes to render the scene to the texture
rn1 = viz.addRenderNode()
rn1.setRenderTexture(img)
rn1.setEuler(180,0,0)

rn2 = viz.addRenderNode()
rn2.setRenderTexture(img)
rn2.setEuler(180,0,0)

# Quads that display the processed texture on screen
quad1 = viz.addTexQuad()
quad1.setScale(3,2)
quad1.setPosition(-2,2,6)

quad2 = viz.addTexQuad()
quad2.setScale(3,2)
quad2.setPosition(2,2,6)

# Sepia fragment shader
fragSepia = """
uniform sampler2D InputTex;
void main()
{
    vec4 color = texture2D(InputTex,gl_TexCoord[0].st);
    gl_FragColor.r = dot(color.rgb, vec3(.393, .769, .189));
    gl_FragColor.g = dot(color.rgb, vec3(.349, .686, .168));
    gl_FragColor.b = dot(color.rgb, vec3(.272, .534, .131));
    gl_FragColor.a = color.a;
}
"""

# Grayscale fragment shader
fragGray = """
uniform sampler2D InputTex;
void main()
{
    vec4 color = texture2D(InputTex,gl_TexCoord[0].st);
    float lum = 0.222*color.r + 0.707*color.g + 0.071*color.b;
    gl_FragColor = vec4(lum,lum,lum,color.a);
}
"""

shader = viz.addShader(frag=fragSepia)
quad1.apply(viz.addUniformInt('InputTex',0))
quad1.apply(shader)
quad1.texture(img)

shader2 = viz.addShader(frag=fragGray)
quad2.apply(viz.addUniformInt('InputTex',0))
quad2.apply(shader2)
quad2.texture(img)

# Set near and far clip planes
viz.clip(0.1,200)

The left quad shows the sepia effect and the right quad shows the grayscale effect. I want to produce a third effect, i.e. 70% sepia and 30% gray. I am willing to use multi-pass rendering; all intermediate work should happen off screen, and the main window should show only the final output. Thanks!
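For reference, here is a single-shader shortcut for the 70/30 mix, computed on a third quad from the same img texture (just a sketch; it sidesteps the multi-pass question rather than answering it, and the quad position is arbitrary):
Code:
# Third quad whose shader computes both effects from the same input
# texture and mixes them 70% sepia / 30% gray
fragMix = """
uniform sampler2D InputTex;
void main()
{
    vec4 color = texture2D(InputTex,gl_TexCoord[0].st);
    vec3 sepia = vec3(dot(color.rgb, vec3(.393, .769, .189)),
                      dot(color.rgb, vec3(.349, .686, .168)),
                      dot(color.rgb, vec3(.272, .534, .131)));
    float lum = 0.222*color.r + 0.707*color.g + 0.071*color.b;
    vec3 gray = vec3(lum);
    gl_FragColor = vec4(mix(gray, sepia, 0.7), color.a);
}
"""
quad3 = viz.addTexQuad()
quad3.setScale(3,2)
quad3.setPosition(0,4,6)
quad3.apply(viz.addUniformInt('InputTex',0))
quad3.apply(viz.addShader(frag=fragMix))
quad3.texture(img)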
#3
Just to be clear, are you wanting to apply the shader to the entire screen, or just to a single object within the world?
#4
I am planning to apply the shader to the entire screen, just like VFX artists do.
In our augmented reality project, the CG quality is far superior to the image generated by the HD webcam, so the CG and the camera image do not blend well together. Our VFX artist suggested that we blur the CG and add some post-render effects to it, separately enhance the quality of the webcam output, and then merge the two.
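The kind of merge step we have in mind would look roughly like this (a sketch only; the texture and uniform names are hypothetical, and we assume the processed CG is rendered into a texture with an alpha channel):
Code:
import viz
viz.go()

# Hypothetical render textures holding the enhanced webcam image and the
# processed CG (in practice these would be filled by earlier passes)
webcamImg = viz.addRenderTexture()
cgImg = viz.addRenderTexture()

mergeFrag = """
uniform sampler2D webcamTex;
uniform sampler2D cgTex;
void main()
{
    vec4 cam = texture2D(webcamTex, gl_TexCoord[0].st);
    vec4 cg  = texture2D(cgTex, gl_TexCoord[0].st);
    // Composite the CG over the camera image using the CG alpha
    gl_FragColor = vec4(mix(cam.rgb, cg.rgb, cg.a), 1.0);
}
"""

quad = viz.addTexQuad()
quad.setPosition(0,1.8,3)
quad.setScale(4,3)
quad.texture(webcamImg, unit=0)
quad.texture(cgImg, unit=1)
quad.apply(viz.addShader(frag=mergeFrag))
quad.apply(viz.addUniformInt('webcamTex',0))
quad.apply(viz.addUniformInt('cgTex',1))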
#5
Hello all! We developed this further on our own and found one solution. We are sharing the code with you.
Code:
import viz
viz.go()

# Add models to two different scenes
piazza = viz.addChild('piazza.osgb',viz.WORLD,2)
pit = viz.addChild('pit.osgb',viz.WORLD,3)

# Create a render node and render texture for Scene 2
video1 = viz.addRenderTexture()
cam1 = viz.addRenderNode(size=(1280,720))
cam1.setScene(2)
cam1.setFov(60.0,1280/720.0,0.1,1000.0)
cam1.attachTexture(video1)
cam1.setRenderLimit(viz.RENDER_LIMIT_FRAME)

# Create a render node and render texture for Scene 3
video2 = viz.addRenderTexture()
cam2 = viz.addRenderNode(size=(1280,720))
cam2.setScene(3)
cam2.setFov(60.0,1280/720.0,0.1,1000.0)
cam2.attachTexture(video2)
cam2.setRenderLimit(viz.RENDER_LIMIT_FRAME)

# Add a quad to the main scene, scaled and positioned to cover the whole screen
quad = viz.addTexQuad(viz.WORLD,1)
quad.setPosition(0,1.8,4.1)
quad.setScale(4,3)

# Apply the outputs of Scene 2 and Scene 3 as textures
quad.texture(video1,unit=0)
quad.texture(video2,unit=1)

viz.scene(1)  # Set Scene 1 as the main scene

# Apply post-process shaders to the render nodes

#------------- Post-process effect on Scene 2 --------------------#
fragCode = """
uniform sampler2D InputTex;
void main()
{
    vec4 color = texture2D(InputTex,gl_TexCoord[0].st);
    gl_FragColor.r = dot(color.rgb, vec3(.393, .769, .189));
    gl_FragColor.g = dot(color.rgb, vec3(.349, .686, .168));
    gl_FragColor.b = dot(color.rgb, vec3(.272, .534, .131));
    gl_FragColor.a = color.a;
}
"""
SepiaEffectShader = viz.addShader(frag=fragCode)
ui_inputTex = viz.addUniformInt('InputTex',0)
cam1.apply(SepiaEffectShader)
cam1.apply(ui_inputTex)

#------------- Post-process effect on Scene 3 --------------------#
fragCode = """
uniform sampler2D InputTex;
void main()
{
    vec4 color = texture2D(InputTex,gl_TexCoord[0].st);
    gl_FragColor = vec4(1.0-color.r,1.0-color.g,1.0-color.b,1);
}
"""
InvertColorShader = viz.addShader(frag=fragCode)
ui_inputTex = viz.addUniformInt('InputTex',0)
cam2.apply(InvertColorShader)
cam2.apply(ui_inputTex)

# Apply the final blend on the quad
#------------------- Final texture blend effect -----------------------------#
VertCode = """
void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
"""
FragCode = """
uniform sampler2D tex1;
uniform sampler2D tex2;
void main(void)
{
    vec4 texture1 = texture2D(tex1,gl_TexCoord[0].st);
    vec4 texture2 = texture2D(tex2,gl_TexCoord[0].st);
    gl_FragColor = texture1 * texture2;
}
"""
BlendShader = viz.addShader(vert=VertCode,frag=FragCode)
ui_tex1 = viz.addUniformInt('tex1',0)
ui_tex2 = viz.addUniformInt('tex2',1)
quad.apply(BlendShader)
quad.apply(ui_tex1)
quad.apply(ui_tex2)

# Disable mouse navigation since the quad is 2D and the objective is
# augmented reality, not virtual reality
viz.mouse(viz.OFF)

We are getting the desired result. Our question is: is there a more appropriate or more optimized method to achieve the same thing?
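As a side note, the final blend can also be a weighted mix instead of a multiply. A sketch, reusing VertCode and quad from the code above (the 0.7/0.3 weights are just example values):
Code:
# Weighted mix of the two processed textures instead of multiplying them
MixFragCode = """
uniform sampler2D tex1;
uniform sampler2D tex2;
void main(void)
{
    vec4 texture1 = texture2D(tex1,gl_TexCoord[0].st);
    vec4 texture2 = texture2D(tex2,gl_TexCoord[0].st);
    gl_FragColor = 0.7*texture1 + 0.3*texture2;
}
"""
MixShader = viz.addShader(vert=VertCode,frag=MixFragCode)
quad.apply(MixShader)
quad.apply(viz.addUniformInt('tex1',0))
quad.apply(viz.addUniformInt('tex2',1))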
#6
We are sticking to the same logic as above and have written a program using it. The program works fine as long as no shader is applied. The program is below:
Code:
import viz
viz.go()

# Add webcam
video = viz.add('VideoCamera.dle')
cam = video.addWebcam()

# Add a quad to Scene 1
quad1 = viz.addTexQuad(viz.WORLD,1)
quad1.setScale(2,1.5)
quad1.setPosition(0,1.8,2)

# Link the quad to MainView so it stays fixed in front of the viewer
link = viz.link(viz.MainView,quad1)
link.preTrans([0,0,2])

# Add the piazza environment to Scene 2
piazza = viz.addChild('piazza.osgb',viz.WORLD,2)

# Add a quad to Scene 2 and texture it with the webcam
quad2 = viz.addTexQuad(viz.WORLD,2)
quad2.setScale(2,1.5)
quad2.setPosition(0,2,6)
quad2.texture(cam)

# Add a render texture for Scene 2 and set it as the texture on quad1
CamScene = viz.addRenderTexture()
quad1.texture(CamScene)

# Add a render node that renders Scene 2 into the texture
rn = viz.addRenderNode(size=(1280,720))
rn.setScene(2)
rn.setFov(60.0,1280/720.0,0.1,1000.0)
rn.attachTexture(CamScene)
rn.setRenderLimit(viz.RENDER_LIMIT_FRAME)

viz.scene(1)

The center quad shows the webcam image. But when we add the following shader code to the program above,
Code:
InvFragCode = """
uniform sampler2D InputTex;
void main()
{
    vec4 color = texture2D(InputTex,gl_TexCoord[0].st);
    gl_FragColor = vec4(1.0-color.r,1.0-color.g,1.0-color.b,1);
}
"""
InvShader = viz.addShader(frag=InvFragCode)
rn.apply(InvShader)
rn.apply(viz.addUniformInt('InputTex',0))

the output is not what we expect, and we don't know why this is happening. Please help us solve this issue. Thanks!
#7
Your code is applying the shader to all objects in the scene. It seems like you should be applying the shader to the quad that displays the render texture instead:
Code:
quad1.apply(InvShader)
quad1.apply(viz.addUniformInt('InputTex',0))

Alternatively, here is a simpler example that uses the vizfx.postprocess module to apply the inverse color effect to the entire window:
Code:
import viz
viz.go()

# Add webcam
video = viz.add('VideoCamera.dle')
cam = video.addWebcam()

# Add piazza environment
piazza = viz.addChild('piazza.osgb')

# Add a quad textured with the webcam
quad2 = viz.addTexQuad()
quad2.setScale(2,1.5)
quad2.setPosition(0,2,6)
quad2.texture(cam)

# Apply an inverse color post-process effect to the window
import vizfx.postprocess
from vizfx.postprocess.color import InvertColorEffect
vizfx.postprocess.addEffect(InvertColorEffect())
#8
A render node is a node, so my assumption was that we could apply a shader to it. Please correct me if I am wrong. We haven't applied InvShader to quad1 because we were planning to apply a different shader to quad1. So our intended process was:
Render Scene 2 with the render node --> apply InvShader on the render node and use its output as a texture on quad1 --> apply SomeShader to quad1.

As far as we know, vizfx.postprocess can only be applied to a window; we don't know how to apply it to individual scenes. Please let us know if there is a way. Thanks!
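One workaround we are considering is to insert an intermediate scene for the first shader stage, roughly like this (an untested sketch pieced together from our earlier code; the scene numbers, quad placement, and the second shader are placeholders):
Code:
import viz
viz.go()

# The environment lives in Scene 2
piazza = viz.addChild('piazza.osgb',viz.WORLD,2)

# Stage 1: render Scene 2 into a texture
tex1 = viz.addRenderTexture()
rn1 = viz.addRenderNode(size=(1280,720))
rn1.setScene(2)
rn1.setFov(60.0,1280/720.0,0.1,1000.0)
rn1.attachTexture(tex1)
rn1.setRenderLimit(viz.RENDER_LIMIT_FRAME)

# Stage 2: display tex1 on a quad in an intermediate Scene 3 with InvShader
# applied to the quad (not to the render node), then render Scene 3 into tex2
InvFragCode = """
uniform sampler2D InputTex;
void main()
{
    vec4 color = texture2D(InputTex,gl_TexCoord[0].st);
    gl_FragColor = vec4(1.0-color.r,1.0-color.g,1.0-color.b,1);
}
"""
midQuad = viz.addTexQuad(viz.WORLD,3)
midQuad.setScale(2,1.5)
midQuad.setPosition(0,1.8,1)   # placeholder values, meant to fill the Scene 3 view
midQuad.texture(tex1)
midQuad.apply(viz.addShader(frag=InvFragCode))
midQuad.apply(viz.addUniformInt('InputTex',0))

tex2 = viz.addRenderTexture()
rn2 = viz.addRenderNode(size=(1280,720))
rn2.setScene(3)
rn2.setFov(60.0,1280/720.0,0.1,1000.0)
rn2.attachTexture(tex2)
rn2.setRenderLimit(viz.RENDER_LIMIT_FRAME)

# Stage 3: show tex2 on quad1 in the main scene; the second shader would go here
quad1 = viz.addTexQuad(viz.WORLD,1)
quad1.setScale(2,1.5)
quad1.setPosition(0,1.8,2)
quad1.texture(tex2)
# quad1.apply(SomeShader)  # second shader stage (placeholder)

viz.scene(1)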