View Full Version : Best practice to generate a video dataset with Vizard?


ZyZhu
04-05-2022, 06:55 AM
Hi friends!

I have been using Vizard to run behavioral studies on human participants, and now I am trying to use the same visual displays to train a deep learning artificial neural network. However, to do that, I need a way to export thousands of video clips (each 1-2 seconds long). Ideally, I would need access to the underlying RGB value of each pixel in each frame. I'm wondering if there is any way to do this in Vizard? Do you have any suggestions about the best practice for doing it? I would appreciate any feedback. Thank you so much!

apenngrace
09-22-2022, 10:31 AM
What is the source of each frame of your videos? Is it from a camera, or is it something you are generating?

Do you need actual video clips, or just a series of images saved to disk? Is this real-time, or can you take as long as you want per frame? Is this a one-off thing?

If you just need to test, try Pillow (https://python-pillow.org/) for accessing the images and individual pixels. If you want to accelerate performance with the GPU, I'd be thinking about pyOpenCL (https://documen.tician.de/pyopencl/) or PyCUDA (https://documen.tician.de/pycuda/), probably.
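To show what per-pixel access with Pillow looks like, here's a minimal sketch. It builds a tiny image in memory instead of loading a saved frame; in your case you'd call `Image.open("your_frame.png")` on whatever files you dump from Vizard (the filename is a placeholder, not anything Vizard produces by default).

```python
from PIL import Image

# Stand-in for a captured frame: a 4x2 solid-red RGB image.
img = Image.new("RGB", (4, 2), color=(255, 0, 0))
img.putpixel((0, 0), (0, 128, 255))  # modify one pixel

# Per-pixel access: getpixel returns an (R, G, B) tuple.
print(img.getpixel((0, 0)))   # (0, 128, 255)
print(img.getpixel((1, 0)))   # (255, 0, 0)

# Bulk access: raw bytes in row-major order, 3 bytes (R, G, B) per pixel.
raw = img.tobytes()
print(len(raw))               # 4 * 2 * 3 = 24
```

For feeding a neural network, the raw byte buffer (or `numpy.asarray(img)`, if you have NumPy around) is usually more convenient than pixel-by-pixel reads.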

If you have a bunch of images on disk, or just image buffers in memory, I'd be thinking about using ffmpeg (https://ffmpeg.org/) to compress that into a video file of some sort.
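For the ffmpeg step, a typical command looks like the one below. The filename pattern, frame rate, and output name are assumptions; adjust them to match however you number your dumped frames.

```shell
# Assemble numbered frames (frame_0001.png, frame_0002.png, ...) into a
# 1-2 second H.264 clip at 60 fps. yuv420p keeps the file playable in
# most players.
ffmpeg -framerate 60 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p clip.mp4
```

If you need lossless pixels for training rather than a viewable clip, you may be better off skipping video compression entirely and keeping the per-frame PNGs.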